Parametric Root Finding for Supporting Proving and Discovering Geometric Inequalities in GeoGebra

Zoltán Kovács
The Private University College of Education of the Diocese of Linz, Linz, Austria
Bolyai Institute, University of Szeged, Szeged, Hungary

Automated Deduction in Geometry (ADG 2021), EPTCS 352
DOI: 10.4204/EPTCS.352.19
arXiv: 2201.00545 (https://arxiv.org/pdf/2201.00545v1.pdf)
We introduced the package/subsystem GeoGebra Discovery for GeoGebra, which supports the automated proving or discovering of elementary geometry inequalities. In this case study, for inequality exploration problems related to the isosceles and right triangle subclasses, we demonstrate how our general real quantifier elimination (RQE) approach can be replaced by a parametric root finding (PRF) algorithm. General RQE requires the full cell decomposition of a high-dimensional space, while the new method avoids this expensive computation and can lead to practical speedups. To obtain a solution for a 1D exploration problem, we compute a Gröbner basis for the discriminant variety of the 1-dimensional parametric system and solve finitely many nonlinear real arithmetic (NRA) satisfiability (SAT) problems. We illustrate the required computations by examples. Since Gröbner basis algorithms are available in Giac (the free computer algebra system underlying GeoGebra) and efficient, freely available NRA-SAT solvers (SMT-RAT, Tarski, Z3, etc.) can be linked to GeoGebra, we hope that the method can easily be added to the existing reasoning tool set for educational purposes.

Introduction

As we reported in our earlier paper [10] and recent works [3], the dynamic geometry system GeoGebra [8,9] supports an automated reasoning toolset (ART).
In particular, a GeoGebra user may try to prove or explore a relation between geometric quantities defined by a standard (planar) Euclidean construction. For instance, we may wish to prove that in each non-degenerate triangle, the ratio of the sum of the medians and the perimeter cannot exceed 1. To phrase it differently, we may want to explore general elementary geometric inequalities related to Euclidean plane geometry constructions. Recently, based on general real quantifier elimination (RQE) [5], the realgeom [15] tool has supported this automated exploration in the background. However, due to the high complexity of the general RQE/CAD algorithms [7] and the large number of variables in the input RQE formula, some well-known and elementary results [1] are inaccessible with our approach and implemented tools. In this case study we want to replace the solution method based on general-purpose RQE with a new approach in which we may avoid the full cylindrical decomposition of a high-dimensional space, at least for a particular problem class. We hope that speedups can be obtained with the new method. We will consider only two simple subclasses of elementary geometry inequality exploration challenges:

• problems related to isosceles triangles (IT), and
• problems related to right triangles (RT).

The reasons are as follows. Proving or disproving IT/RT conjectures of inequality type via algebraic methods leads, after the algebraic formulation, to a nonlinear real arithmetic satisfiability (NRA-SAT) problem, that is, to an existentially closed formula whose validity must be decided. In contrast, for the IT/RT exploration problems we are concerned with in this paper, the typical associated first-order input formula contains one free variable, and the semialgebraic system is (generically) one-dimensional: the quantities which we want to compare do not have a fixed ratio in a triangle; the ratio can vary from triangle to triangle within the investigated class IT (or RT).
Still, in some sense, the translated algebraic problems are very close to NRA-SAT problems: for each fixed parameter value m = m0, the system has finitely many (possibly zero) real solutions. Thus we can avoid a generic real quantifier elimination process for determining the range of the parameter where a real solution exists, which would construct a full CAD of the r-dimensional space, where r is the number of variables in the input formula. Instead, knowing the discriminant variety (DV), which characterizes the "critical/wrong" points W and can be determined by (hopefully cheaper) Gröbner basis computations, we can be sure that the system behaves well (it has constantly many solutions) in the open cells of the 1D decomposition of R\W determined by the DV polynomials. Therefore, by sampling the open cells and adding the finitely many zero-dimensional cells in W to the sample, we can reduce the exploration problem to finitely many NRA-SAT (in fact, nonparametric real root counting (RRC)) problems. We call this method the parametric root finding (PRF) method. For the details of parametric real root counting and the role of the (minimal) discriminant variety, the reader is referred to [11,12].

Examples

Example 1: A problem from IT

In GeoGebra, for each construction and the related exploration problem, the translated semialgebraic problem is based on the planar coordinates of the geometric points involved in the construction of the objects (vertices, midpoints of triangle sides, intersection points of lines, etc.). However, in this first example we intentionally postpone the introduction of coordinates, to reduce the number of variables in the input formula and to make the related computational steps as simple as possible. We note that the number of indeterminates in the automatically derived input formulae based on coordinates is much higher, typically between 4 and 10.
However, the number of parameters in these problems, no matter whether they are derived by coordinatization or are coordinate-free versions, is constant, namely 1.

Assume that an isosceles triangle with vertices A, B, C and side lengths a = b, b, c is given (see Figure 1, left). Without loss of generality we can assume that c = 1. We want to investigate (explore) the range of the ratio m given by

m = (AB² + BC² + CA²) / (AB·BC + AB·CA + BC·CA) = (1 + 2b²) / (b² + 2b).  (1)

The related standard input formula for RQE would look like

∃b (m > 0 ∧ 2b − 1 > 0 ∧ (1 + 2b²) = m(b² + 2b)),  (2)

and the quantifier-free output is 1 ≤ m < 2. We now show that the same output formula can be computed with a different, new method. As a first step we compute the univariate polynomials for the discriminant variety DV:

O_crit = (2 + m)(m − 1), O_in = m(−6 + 5m), O_inf = 2 − m.  (3)

To obtain the univariate polynomials in O_crit, O_in, O_inf, we follow [13, Theorem 2], but for an update see [14]. Let f = (1 + 2b²) − m(b² + 2b), g1 = 2b − 1, g2 = m. For O_crit we need the determinant of the partial Jacobian J(f) (w.r.t. b):

J(f) = ∂((1 + 2b²) − m(b² + 2b))/∂b = b(4 − 2m) − 2m.  (4)

Then we compute a Gröbner basis of the elimination ideal ⟨f, J(f), t·g1·g2 − 1⟩ ∩ Q[m] and get m² + m − 2. In a similar way, for O_in, that is, for the inequalities, we compute a Gröbner basis of the ideal (⟨f, g1·g2 − u, ut − 1⟩ ∩ Q[m, u]) specialized at u = 0, and we get the factors m and 5m − 6. The remaining univariate polynomial(s) for O_inf are also computed via Gröbner bases, but besides elimination and specialization we need homogenizations as well. We skip the details here. The real positive roots of the polynomials in DV, in increasing order, are d2 = 1, d4 = 6/5, d6 = 2. That is, W = {−2, 0, 1, 6/5, 2}. We choose sample points from the open cells d1 = (0, 1), d3 = (1, 6/5), d5 = (6/5, 2), d7 = (2, ∞) and add them to the positive roots. With NRA-SAT (RRC) we obtain that for the cells d2, d3, d4, d5 there is at least one real solution of the above semialgebraic system when m is replaced by the sample in the cell. Therefore we conclude that

m = 1 ∨ 1 < m < 6/5 ∨ m = 6/5 ∨ 6/5 < m < 2 ≡ 1 ≤ m < 2

is the solution of the exploration problem.
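The elimination step and the cell-by-cell root check above can be reproduced with a few lines of computer algebra. The following sketch uses SymPy rather than the Giac/Mathematica tooling mentioned in the paper, and the concrete sample points are my own choice (the paper's sample list is not preserved in this text):

```python
from sympy import symbols, groebner, reduced, solve, Rational

b, m, t = symbols('b m t')

# Example 1 data: f = 0 is the ratio equation, g1 > 0 and g2 > 0 are side conditions
f = (1 + 2*b**2) - m*(b**2 + 2*b)
Jf = f.diff(b)              # the partial Jacobian J(f) w.r.t. the bound variable b
g1 = 2*b - 1
g2 = m

# O_crit: Groebner basis of <f, J(f), t*g1*g2 - 1>, eliminating t and b
# (the extra generator t*g1*g2 - 1 saturates away the zero sets of g1 and g2)
G = groebner([f, Jf, t*g1*g2 - 1], t, b, m, order='lex')
# the paper reports m^2 + m - 2 = (m + 2)(m - 1); it must reduce to 0 modulo G
_, rem = reduced(m**2 + m - 2, list(G.exprs), t, b, m, order='lex')
assert rem == 0

def has_real_solution(m0):
    """Is there a real b with f(b, m0) = 0 and 2b - 1 > 0?"""
    return any(rt.is_real and (2*rt - 1) > 0 for rt in solve(f.subs(m, m0), b))

# one sample per cell: (0,1), {1}, (1,6/5), {6/5}, (6/5,2), (2,oo)
samples = [Rational(1, 2), 1, Rational(11, 10), Rational(6, 5), Rational(8, 5), 3]
print([has_real_solution(m0) for m0 in samples])
```

The pattern of True cells ({1}, (1, 6/5), {6/5}, (6/5, 2)) reassembles exactly to 1 ≤ m < 2, the quantifier-free output stated above.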
See Figure 1, right.

Example 2: A problem from RT

Our second worked example considers a non-degenerate right triangle with vertices A, B, C and hypotenuse c = AB, where the exploration task is to investigate the ratio of the sum of the medians ma and mb and the perimeter p = a + b + c (see Figure 2). In the first-order input formula we still have the parameter m, but instead of a, b, c, ma, mb we will see the (bound) variables v13, v15, v16, v17. These variables were generated by GeoGebra Discovery in a mechanical way: the input first-order formula was automatically derived, based on the coordinates of the triangle vertices and the midpoints of the triangle sides. Some indexed vj's, where indexing starts from 1, are missing, because they were (linearly) eliminated in a preprocessing step or were set to special values without loss of generality:

∃v13, v15, v16, v17 (m > 0 ∧ v13 > 0 ∧ v15 > 0 ∧ v16 > 0 ∧ v17 > 0 ∧  (6)
  v13 + v16 − m(1 + v15 + v17) = 0 ∧ 15 + 4v13² − 16v16² = 0 ∧
  3 − 4v16² + v17² = 0 ∧ 4 + v15² − 4v16² = 0).

For the reader's convenience, we note that the variables v15 and v17 correspond to the triangle side lengths b and c, and the variables v13 and v16 to the triangle medians ma and mb. The quantifier-free output is

√((5/2)(3 − 2√2)) ≤ m < 3/4.

This time the univariate polynomials in DV look like

O_crit = (25 − 60m² + 4m⁴)(32805 − 523422m² + 388800m⁴ + 1377792m⁶ + 737280m⁸ + 131072m¹⁰),  (7)
O_in = m(−3 + 4m)(−1 + 4m)(1 + 4m)(3 + 4m)(−3 + 4m²)(3 + 4m²)(50625 − 324000m² + 633600m⁴ + 368640m⁶ + 65536m⁸),  (8)
O_inf = (−3 + 4m)(−1 + 4m)(1 + 4m)(3 + 4m),  (9)

and therefore we also have to work with higher-order algebraic numbers in our sample:

d2 = 1/4, d4 = r(10,3), d6 = r(10,4), d8 = √((5/2)(3 − 2√2)),  (10)
d10 = 3/4, d12 = √3/2, d14 = √((5/2)(3 + 2√2)),

where r(10,3) and r(10,4) refer to the third and fourth positive roots of the degree-10 polynomial in O_crit.
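The endpoints of this output can be cross-checked numerically. The sketch below is an independent check (not part of the paper's pipeline): it assumes the normalization that one leg equals 1, uses the standard median-length formulas, and scans m = (ma + mb)/p over right triangles:

```python
from math import sqrt

def ratio(b):
    """m = (m_a + m_b) / (a + b + c) for the right triangle with legs a = 1 and b."""
    c = sqrt(1 + b * b)                          # hypotenuse
    m_a = sqrt(2 * b * b + 2 * c * c - 1) / 2    # median to side a (standard formula)
    m_b = sqrt(2 + 2 * c * c - b * b) / 2        # median to side b
    return (m_a + m_b) / (1 + b + c)

low = sqrt(2.5 * (3 - 2 * sqrt(2)))              # sqrt((5/2)(3 - 2*sqrt(2))) ~ 0.6549
vals = [ratio(0.001 * k) for k in range(1, 20001)]   # b ranging over (0, 20]

assert all(low - 1e-9 <= v < 0.75 for v in vals)
assert abs(ratio(1.0) - low) < 1e-12   # the minimum is attained at the isosceles right triangle
```

Both endpoints match the quantifier-free output: the lower bound is attained (at b = 1), and 3/4 is approached but never reached as the triangle degenerates.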
This time only two consecutive SAT problems (with samples from d8 and d9) evaluate to True, and the final formula obtained is

m = √((5/2)(3 − 2√2)) ∨ √((5/2)(3 − 2√2)) < m < 3/4.  (11)

Discussion/Conclusion

Variants of the general parametric real root finding and real root counting algorithms are implemented and available in Maple [4,12], but to the best of our knowledge they are not directly available in free computer algebra systems. Also, our prototype implementation for the 1D explorations is in Mathematica [16]. Therefore, in our future work we will collect statistics of the computation times required for the Gröbner basis computations and for solving the NRA-SAT problems related to the harder exploration problems in IT/RT. As a preliminary example, in Table 4 we give the number of cells reported by QEPCAD B, version 1.72 [2]. The cells were constructed when solving a 1-dimensional, six-variable exploration problem related to the ratio of the circumradius and inradius (m = R/r) in right triangles, via CAD-based RQE. Here the generated output formula is m ≥ √2 + 1, and the DV polynomials generated by the PRF method are relatively simple:

DV = {m, 2m − 1, 2m + 1, m² − 2m − 1, m² + 2m − 1}.  (12)

We see that the construction of more than 10⁵ cells may be replaced by seven 1D cells in this example. If we gain practical speedups, then we intend to adapt and re-implement the PRF method and solve the computational subproblems with Giac and the freely available NRA-SAT solvers. Thus the emerging tool may be used in a broader educational context. Note that the question about the possible ratio

m = (AB² + BC² + CA²) / (AB·BC + AB·CA + BC·CA),  (13)

which is discussed in Example 1, could also have been asked about a non-degenerate general triangle as an exploration problem.
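For the general-triangle variant the answer is not derived here, but a quick numeric experiment (my own illustration, not GeoGebra Discovery output) suggests the same range [1, 2): the lower bound is the classical inequality a² + b² + c² ≥ ab + bc + ca, and the upper bound follows from the triangle inequality (a² < a(b + c), summed cyclically).

```python
import random

def ratio(a, b, c):
    """The quantity of Eq. (13): (a^2 + b^2 + c^2) / (ab + bc + ca)."""
    return (a * a + b * b + c * c) / (a * b + b * c + c * a)

random.seed(0)
in_range = True
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 1.0) for _ in range(3))
    if a < b + c and b < a + c and c < a + b:   # keep only genuine triangles
        in_range = in_range and (1.0 - 1e-12 <= ratio(a, b, c) < 2.0)

assert in_range
print(ratio(1, 1, 1))   # the equilateral triangle attains the lower bound: 1.0
```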
We observed that most of the exploration problems for general triangles lead to semialgebraic problems which still have infinitely many solutions for a fixed m = m0 (after setting one of the triangle side lengths to 1). However, if we consider another variable in the input formula, say one of the triangle sides b, as an additional parameter besides m, that is, if we fix b = b0 and m = m0, then the number of solutions of the semialgebraic system is again finite. This opens the way to applying the same idea and methods to some more difficult geometric exploration problems without a full CAD. However, after investigating the open cells of the 2D parameter space, we would also have to cover all the infinitely many parameter pairs where the bivariate polynomials in the DV vanish. This can be done by adding the bivariate polynomials one by one to the original semialgebraic system and performing a new parametric real root counting for each new system. An additional problem may occur because some resulting, again 1-dimensional, semialgebraic system may have solutions with multiplicity > 1 for almost all parameter values; in this case the DV-based method cannot be applied directly (cf. [13, Theorem 2]). Still, we think the above general idea may be elaborated in the future for handling geometric exploration problems for arbitrary triangles as well.

Acknowledgements

The first author was partially supported by the grant PID2020-113192GB-I00 (Mathematical Visualization: Foundations, Algorithms and Applications) from the Spanish MICINN. The second author was supported by the EU-funded Hungarian grant EFOP-3.6.1-16-2016-00008.
Figure 1: An exploration problem for a non-degenerate isosceles triangle with GeoGebra Discovery, and the (m, b)-space.

Figure 2: An exploration problem for a non-degenerate right triangle with GeoGebra Discovery.

References

[1] Bottema, O., Djordjevic, R. Z., Janic, R. R., Mitrinovic, D. S., Vasic, P. M., Geometric Inequalities, Wolters-Noordhoff Publishing, Groningen, 1969.
[2] Brown, C. W., QEPCAD B - a program for computing with semi-algebraic sets using CADs, ACM SIGSAM Bulletin 37(4), 97-108, December 2003. doi:10.1145/968708.968710
[3] Brown, C. W., Kovács, Z., Vajda, R., Supporting proving and discovering geometric inequalities in GeoGebra by using Tarski, 1-12, extended abstract submitted to ADG 2021, 2021.
[4] Chen, C., Maza, M. M., Solving Parametric Polynomial Systems by RealComprehensiveTriangularize, Proceedings of ICMS 2014 - 4th International Congress, Seoul, South Korea, Lecture Notes in Computer Science, volume 8592, 504-511, 2014. doi:10.1007/978-3-662-44199-2_76
[5] Collins, G. E., Quantifier Elimination for Real Closed Fields by Cylindrical Algebraic Decomposition, in: Caviness, Johnson (eds.): Quantifier elimination and cylindrical algebraic decomposition, 85-121, Springer, 1998. doi:10.1007/978-3-7091-9459-1_4
[6] Collins, G. E., Hong, H., Partial Cylindrical Algebraic Decomposition for Quantifier Elimination, Journal of Symbolic Computation 12, 299-328, 1991. doi:10.1016/S0747-7171(08)80152-6
[7] Davenport, J., Heintz, J., Real quantifier elimination is doubly exponential, Journal of Symbolic Computation 5, 29-35, 1988. doi:10.1016/S0747-7171(88)80004-X
[8] Hohenwarter, M., Ein Softwaresystem für dynamische Geometrie und Algebra der Ebene, Master thesis, Paris Lodron University, Salzburg, 2002.
[9] International GeoGebra Institute, GeoGebra, https://www.geogebra.org/, 2021.
[10] Kovács, Z., Vajda, R., GeoGebra and the realgeom Reasoning Tool, CEUR Workshop Proceedings Vol. 2752, PAAR+SC-Square 2020 Workshop, Paris, France, 204-219, 2020.
[11] Lazard, D., Rouillier, F., Solving parametric polynomial systems, Journal of Symbolic Computation 42(6), 636-667, 2007. doi:10.1016/j.jsc.2007.01.007
[12] Liang, S., Gerhard, J., Jeffrey, D. J., Moroz, G., A package for solving parametric polynomial systems, ACM Communications in Computer Algebra 169(43), 2009. doi:10.1145/1823931.1823933
[13] Moroz, G., Complexity of the Resolution of Parametric Systems of Polynomial Equations and Inequations, Proceedings of the 2006 International Symposium on Symbolic and Algebraic Computation, 246-253, 2006. doi:10.1145/1145768.1145810
[14] Moroz, G., Properness defects of projection and minimal discriminant variety, Journal of Symbolic Computation 46(10), 1139-1157, 2011. doi:10.1016/j.jsc.2011.05.013
[15] Vajda, R., Kovács, Z., realgeom, a GitHub project, github.com/kovzol/realgeom, 2018.
[16] Wolfram Research, Inc., Mathematica, Version 12.2, Champaign, IL, 2021.
Pair-Density-Wave Order and Paired Fractional Quantum Hall Fluids

Luiz H. Santos
Department of Physics, Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, Illinois 61801-3080, USA
Department of Physics, Emory University, 400 Dowman Drive, Atlanta, GA 30322, USA

Yuxuan Wang
Department of Physics, Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, Illinois 61801-3080, USA
Department of Physics, University of Florida, 2001 Museum Rd, Gainesville, FL 32611

Eduardo Fradkin
Department of Physics, Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, Illinois 61801-3080, USA

DOI: 10.1103/PhysRevX.9.021047
arXiv: 1811.08897 (https://arxiv.org/pdf/1811.08897v3.pdf)

The properties of the isotropic incompressible ν = 5/2 fractional quantum Hall (FQH) state are described by a paired state of composite fermions in zero (effective) magnetic field, with a uniform px + ipy pairing order parameter, which is a non-Abelian topological phase with chiral Majorana and charge modes at the boundary. Recent experiments suggest the existence of a proximate nematic phase at ν = 5/2. This finding motivates us to consider an inhomogeneous paired state, a px + ipy pair-density-wave (PDW), whose melting could be the origin of the observed liquid-crystalline phases. This state can be viewed as an array of domain and anti-domain walls of the px + ipy order parameter. We show that the nodes of the PDW order parameter, the locations of the domain walls (and anti-domain walls) where the order parameter changes sign, support a pair of symmetry-protected counter-propagating Majorana modes. The coupling behavior of the domain-wall Majorana modes crucially depends on the interplay of the Fermi energy EF and the PDW pairing energy Epdw. The analysis of this interplay yields a rich set of topological states: (1) In the weak coupling regime (EF > Epdw), the hybridization of domain walls leads to a Majorana Fermi surface (MFS), which is protected by inversion and particle-hole symmetries. (2) As the MFS shrinks towards degenerate Dirac points, lattice effects render it unstable towards an Abelian striped phase with two co-propagating Majorana modes at the boundary. (3) A uniform component of the order parameter, which breaks inversion symmetry, gaps the MFS and causes the system to enter a non-Abelian FQH state supporting a chiral Majorana edge state. (4) In the strong coupling regime, EF < Epdw, the bulk fermionic spectrum becomes gapped; this is a trivial phase with no chiral Majorana edge states, which is in the universality class of an Abelian Halperin paired state. The pair-density-wave state in paired FQH systems provides a fertile setting to study Abelian and non-Abelian FQH phases, as well as transitions thereof, tuned by the strength of the pairing liquid-crystalline order.
(Dated: November 30, 2018)

I. INTRODUCTION

Fractional Quantum Hall (FQH) states are the quintessential example of topological electronic systems. While the majority of the FQH plateaus are observed near filling fractions ν = p/q with odd denominators [1], even-denominator FQH states [2,3] provide a fertile arena to study exotic non-Abelian statistics [4,5], as well as the interplay between symmetry breaking and topological orders. In addition to FQH states, a host of symmetry-breaking states have also been observed in two-dimensional electron gases (2DEGs) in a magnetic field in various Landau levels (LL). These states, generally known as electronic liquid crystal phases [6,7], break spatial symmetries to various degrees. Examples of such states are crystals (Wigner crystals [1] and bubble phases [8]), stripe phases [7,9,10], and electronic nematic states [7,11].
While crystal phases break translation and rotational invariance (down to the point-group symmetry of the underlying lattice), stripe (or smectic) phases break translation invariance along one direction (and concomitantly rotation symmetry), and nematic phases only break rotational invariance and are spatially uniform [12]. Most of the stripe and nematic phases that have so far been seen in experiment are compressible and do not exhibit the (integer or fractional) quantum Hall effect, although they occur in close proximity to such incompressible states. Compressible nematic phases exhibit strong transport anisotropies, which is how they are detected experimentally. In addition, stripe phases also exhibit strong pinning and non-linear transport at low bias. Compressible electronic nematic order was first observed at filling fractions in N ≥ 2 LLs such as ν = 9/2, 11/2, etc. [13-15]. Evidence for stripe-to-nematic order in the N = 2 LL in a compressible regime has also been seen quite recently [16]. On the other hand, in the N = 1 LL, FQH states are observed [2] at ν = 5/2, presumably paired states of the Moore-Read type [4]. Remarkably, experimental results in the N = 1 LL also show the existence of nematic order, originally in samples where rotation symmetry is explicitly broken by an in-plane magnetic field [17-22]. More recently, a spontaneously formed nematic phase has been reported in GaAs/AlGaAs samples under hydrostatic pressure [23]. (See also Refs. [24] and [25].) The mechanism behind this spontaneous nematicity remains an open problem; it has been speculated to be a Pomeranchuk instability of the composite fermions, as indicated by a recent numerical calculation [26]. In all of these experiments the nematic phase is compressible and arises after the gap of the 5/2 FQH state vanishes. Magnetoresistance measurements show that the isotropic 5/2 FQH state collapses at a hydrostatic pressure Pc ≈ 7.8 kbar.
This is followed by the onset of a compressible nematic state, detected as a strong and temperature-dependent longitudinal transport anisotropy at higher pressures. This nematic phase persists up to a critical value of 10 kbar, where the 2DEG appears to become a Fermi liquid. Moreover, experiments also discovered in the N = 1 LL a large nematic susceptibility (with a strong temperature dependence) in the FQH state at ν = 7/3 [27]. This experimental finding suggests that in the N = 1 Landau level, nematic and/or stripe order may also occur in proximity to and/or in coexistence with a FQH topological state. The experimental observation of (presumably) paired FQH states in close proximity to nematic (and possibly stripe) phases suggests that all these phases may have a common physical origin, and that these orders may actually be intertwined rather than simply competing with each other. This scenario is strongly reminiscent of the current situation in cuprate superconductors and other strongly correlated oxides, where superconducting orders are intertwined, rather than competing, with stripe or nematic phases [28,29]. The prototype of an intertwined superconducting state is a pair-density-wave (PDW) [30]. The PDW is a paired state that spontaneously breaks translation invariance. Its order parameter is closely related to that of the Larkin-Ovchinnikov state (but occurring without a Zeeman effect). A system of electrons in a half-filled Landau level (the N = 1 LL in the case of the 5/2 FQH state) is equivalent to a system of composite fermions [31,32] coupled to a Chern-Simons gauge field, in which two flux quanta have been attached to each electron [33]. The composite fermions are coupled to both the external magnetic field and the dynamical Chern-Simons gauge field. In a half-filled Landau level, the composite fermions experience, on average, an effective zero net magnetic field.
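The zero-net-field statement is simple flux-counting arithmetic. A minimal sketch (my own illustration with hypothetical variable names, not code from the paper): attaching two flux quanta per electron cancels the external field exactly at half filling, since the electron density is n = νB/φ0 and the attached flux density is 2φ0·n.

```python
def effective_field(B, nu, flux_attached=2):
    """Mean-field effective magnetic field seen by composite fermions.

    With electron density n = nu * B / phi0, attaching `flux_attached`
    flux quanta per electron subtracts flux_attached * phi0 * n of flux
    density; phi0 cancels, leaving B_eff = B * (1 - flux_attached * nu).
    """
    return B * (1 - flux_attached * nu)

print(effective_field(B=10.0, nu=0.5))   # at half filling the net field vanishes: 0.0
```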
The resulting mean-field state forms a Fermi surface (FS) of composite fermions [34]. In this representation, the topological incompressible isotropic FQH state at ν = 5/2 arises from a pairing instability of the composite-fermion FS, resulting in a chiral paired state. In other words, the paired FQH state can be viewed as a superconductor with px + ipy pairing coupled to a dynamical Chern-Simons gauge field (at Chern-Simons level 2). The aim of this paper is to construct an intertwined-orders scenario for a 2DEG proximate to a paired Moore-Read state [4] near the ν = 5/2 filling fraction. (Other paired FQH states have been proposed [35-39].) The state that we propose is a stripe state that locally has a px + ipy form while, at the same time, breaking translation invariance along one direction. We will call the resulting intertwined state a px + ipy pair-density-wave state (with local px + ipy pairing instead of the d-wave local pairing of the PDW state of the cuprate high-Tc superconductors). Such a state may also occur as an inhomogeneous version of a topological px + ipy superconductor. To this end we first present a theory of the px + ipy PDW state, which is an interesting superconducting state in its own right, and later examine the resulting FQH state by considering the effects of coupling this PDW state to the dynamical Chern-Simons gauge field. The resulting state has the remarkable property of hosting neutral fermionic excitations that are either gapless or gapped with nontrivial band topology. At the same time, it is still incompressible in the charge channel and has a precisely defined plateau in the Hall conductivity. In other words, it is a state in which the paired Hall state has long-range stripe order. Theories of Laughlin FQH states that coexist with nematic order have been discussed in the literature [40-46].
Moreover, a rich set of possible phases (including a nematic state) may be accessed from such a p_x + ip_y PDW by quantum or thermal melting transitions, at which some of the spatial symmetries are progressively restored. The p_x + ip_y PDW FQH state can be viewed as an array of stripes of Moore-Read states in which the p_x + ip_y pair field changes sign from one stripe to the next, just as in the (now conventional) PDW superconductor. This unidirectional state breaks translation invariance along one direction and also breaks rotations by 90°. Since locally it is equivalent to a Moore-Read state, this state also breaks the particle-hole symmetry of the Landau level. The p_x + ip_y PDW FQH state can arise either by spontaneous breaking of translation (and rotation) symmetry, or by the explicit breaking of rotation symmetry by a tilted magnetic field or by in-plane strain, as in the very recent experiments by Hossain and coworkers [47]. We should note that the p_x + ip_y PDW FQH state is not equivalent to the particle-hole symmetric Pfaffian state proposed by Wan and Yang [48]. While both states break translation (and rotation) symmetry, the p_x + ip_y PDW FQH state breaks the Landau level particle-hole symmetry while the Wan-Yang state does not. This distinction leads to profound differences in their spectra and properties. This work is organized as follows. In Section II we set up the proposed p_x + ip_y PDW state and present a summary of the main results, both as a possible superconducting state and as an inhomogeneous paired FQH state. In Section III we present a theory of the p_x + ip_y paired state. Here we present the solution of the Bogoliubov-de Gennes (BdG) equations for this PDW state and discuss in detail the properties of its fermionic spectrum. In Section IV we study the coexistence of the PDW order and the uniform pairing order. In Section V we use this construction to infer the properties of the p_x + ip_y PDW FQH state.
Section VI is devoted to the experimental implications of this PDW state and to conclusions. Theoretical details are presented in the Appendix.

II. THE p_x + ip_y PAIR DENSITY WAVE: SETUP AND RESULTS

In this section we present a summary of the p_x + ip_y PDW state. The pairing order parameter of the uniform p_x + ip_y state has the form ∆(p) = ∆(p_x + ip_y) (with ∆ = constant). Its effective BdG Hamiltonian is

H = Σ_p (p²/2m − µ) ψ†_p ψ_p + [∆(p) ψ_{−p} ψ_p + H.c.],

where m is the composite fermion effective mass and µ is the chemical potential [49]. In the "weak-pairing phase" of Ref. [49], where µ > 0, this system is a chiral topological superconductor in which all bulk fermionic excitations are gapped and there is a chiral Majorana edge state propagating along the boundary separating the topological p-wave state and the vacuum. The p_x + ip_y PDW state that we propose here is a version of this state with a spatially modulated order parameter of the form ∆(r) ∼ ∆_pdw f(Q · r), where f is a periodic function with period λ = 2π/Q, such that the nodes of f correspond to domain walls (DWs) and anti-domain walls (ADWs), where the order parameter is suppressed, thus allowing for the existence of low energy modes localized on these nodes. Here, for simplicity, we consider only unidirectional order. In the language of superconductors, our theory is analogous to the PDW state conjectured for the cuprates, whose order parameter has wave vector Q = (Q, 0) and which locally has a d-wave SC order parameter [30,[50][51][52][53][54]. The main difference is that the PDW state that we consider here has, instead, local p_x + ip_y pairing order. Although at the level of the Landau-Ginzburg theory the d-PDW and the p_x + ip_y PDW are virtually identical, their fermionic spectra are drastically different, as are their topological properties.
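As an aside, the C = 1 topology of the uniform weak-pairing state invoked above can be checked numerically. The sketch below is ours, not from the paper; the lattice regularization and the parameters t0, mu, delta are illustrative assumptions. It computes the Chern number of a lattice p_x + ip_y BdG Hamiltonian via the Fukui-Hatsugai-Suzuki link method:

```python
import numpy as np

def bdg(kx, ky, t0=1.0, mu=-1.0, delta=0.5):
    # 2x2 lattice BdG Hamiltonian with p_x + i p_y pairing
    eps = -2.0 * t0 * (np.cos(kx) + np.cos(ky)) - mu
    d = delta * (np.sin(kx) + 1j * np.sin(ky))
    return np.array([[eps, d], [np.conjugate(d), -eps]])

def chern_number(n=40):
    # Fukui-Hatsugai-Suzuki lattice Chern number of the lower BdG band
    ks = 2 * np.pi * np.arange(n) / n
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(bdg(kx, ky))
            u[i, j] = v[:, 0]  # lower quasiparticle band
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Berry phase around one plaquette from the four link variables
            l1 = np.vdot(u[i, j], u[(i + 1) % n, j])
            l2 = np.vdot(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            l3 = np.vdot(u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n])
            l4 = np.vdot(u[i, (j + 1) % n], u[i, j])
            total += np.angle(l1 * l2 * l3 * l4)
    return int(round(total / (2 * np.pi)))
```

In this weak-pairing parameter regime the computed Chern number has |C| = 1; the sign depends on the pairing chirality and band-labeling conventions.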
Before moving forward with our analysis of this problem, we stress important differences between the low energy fermion states we shall encounter in this work, which are associated with the spatial modulation of the PDW order parameter, and those discussed by Read and Green [49]. As discussed in Ref. [49], the edge state of the p_x + ip_y paired state is a chiral Majorana fermion theory. The existence of this chiral branch is of topological origin, since the edge represents a Chern number changing transition from C = 1 (in the bulk of the paired state) to C = 0 (in vacuum). This change in the Chern number is also tied to the change in the sign of the chemical potential in the BdG Hamiltonian, for the region with µ > 0 is topological (C = 1) and that with µ < 0 is trivial (C = 0) and, as such, identified with the vacuum state. In our analysis of the bulk properties of the PDW state, we shall always be in the regime where µ > 0 (and constant) throughout the system, and consider the effects of a change in the overall sign of the p_x + ip_y order parameter. In this striped system, regions where the order parameter is non-zero (regardless of whether it is positive or negative) have the same Chern number C = 1. In spite of that, we shall demonstrate that the nodes of the order parameter still support gapless modes. Instead of a single chiral Majorana branch, as at the edge of the system discussed in Ref. [49], a node of the PDW order parameter supports two non-chiral Majorana branches. Below we show that the Lagrangian of the effective low energy theory at each isolated domain wall is

L_d.w. = i ψ_R (∂_t − v ∂_y) ψ_R + i ψ_L (∂_t + v ∂_y) ψ_L,   (2.1)

where ψ_{L/R} represent left/right moving massless Majorana fermions.
This pair of neutral fermion modes (whose spectrum is identical to that of the one-dimensional critical quantum Ising model) owes its existence to a combination of mirror and chiral (class BDI) symmetries inherent in the Larkin-Ovchinnikov order parameter, as well as to the p_x + ip_y character of the order parameter. In fact, the chiral p-wave nature of the order parameter plays a crucial role in the stability of the fermion zero modes on the nodes of the order parameter: an earlier analysis [55], similar in spirit to ours but in the rather different context of finite momentum s-wave superfluids produced by imbalanced cold Fermi gases, found Caroli-de Gennes-Matricon midgap states supported at an isolated node of the s-wave order parameter, in contrast to the Majorana zero modes of the p_x + ip_y PDW state. We further show that the coupling between the domain wall counter-propagating Majorana modes leads to a highly nontrivial fermionic spectrum. In general, the (Majorana) fermionic excitations remain gapless. Their energy bands cross at the Fermi level, leading to a twofold degenerate "Majorana Fermi surface". The Majorana Fermi surface is of topological origin, and the band crossing is protected by a combination of particle-hole symmetry and inversion symmetry [56]. Again, the inversion symmetry here crucially relies on both the p-wave character of the local pairing and the Larkin-Ovchinnikov form of the order parameter. For PDW states in general, one expects a gapless fermionic spectrum, as a weak PDW order parameter opens gaps only at selected points in k-space. In those cases the excitations form a "Bogoliubov Fermi surface (pocket)", which is closely tied to the original normal state Fermi surface. Along the Bogoliubov Fermi surface the quasiparticles alternate from being more electron-like to more hole-like.
Here we stress that the Majorana Fermi surface is distinct from the original normal state Fermi surface, and its quasiparticles satisfy the Majorana condition γ†(−k) = γ(k) everywhere. Moreover, in particular ranges of the PDW order parameter, the fermionic spectrum becomes gapped. Interestingly, the topology of these gapped phases is distinct from that of a uniform p_x + ip_y state with Chern number C = 1. Instead, we have found phases with both C = 2 and C = 0, even though locally the pairing is identical to p_x + ip_y pairing. The bulk regions where ∆(r) is non-zero (which is everywhere except on isolated one dimensional lines extended in the y direction) have the same Chern number and the same Hall response, irrespective of the overall sign of the order parameter. Consequently, the system is a quantum Hall insulator with respect to the charge modes (albeit with a spatially dependent charge gap) while supporting low energy excitations in the form of gapless neutral fermions supported along the domain walls. Thus, while Majorana fermions may tunnel as soft excitations on the PDW domain walls, electron tunneling is suppressed everywhere in the bulk (including along the domain walls) due to the charge gap. The resulting state is an exotic heat conductor but an electric insulator. Our detailed investigation of the properties of fermionic excitations of the p_x + ip_y PDW state finds that this system represents a symmetry protected topological phase whose remarkably rich properties are summarized as follows:

1. Each isolated DW supports a pair of massless Majorana fermions, as shown in Fig. 1(a), which are protected by the unitary symmetry U = M_y S, where M_y is the mirror symmetry along the direction of the domain wall and S is a chiral symmetry (in class BDI).
In the presence of a uniform component ∆_u of the p_x + ip_y-wave order parameter that preserves the U symmetry, the massless Majorana fermions cannot be gapped out for |∆_u| < |∆_pdw|, whereas no massless Majorana fermions exist at DWs for |∆_u| > |∆_pdw|, representing the phase adiabatically connected to the uniform p_x + ip_y-wave state [49].

2. For ∆_pdw < v_F, where v_F is the Fermi velocity of the composite Fermi liquid, in general there exists a two-fold degenerate Majorana Fermi surface (made out of Majorana fermions), protected by the particle-hole symmetry and the inversion symmetry of the PDW state. As stated above, this state supports gapless neutral excitations but is an electric insulator. This state is one of the main findings of the present work, and we illustrate this phase in Fig. 1(b). As ∆_pdw varies, this Majorana Fermi surface shrinks and expands periodically, and when the Majorana Fermi surface shrinks to zero size, the fermionic spectrum becomes gapped. We found that this gapped state has Chern number C = 2 even though the local pairing is of p_x + ip_y form. This can be understood as the result of a Chern-number-one contribution from the bulk p_x + ip_y pairing order in addition to a Chern-number-one contribution from the domain walls. The corresponding quantum Hall state has Abelian topological order, as the vortices of the pairing order do not host Majorana zero modes. The edge conformal field theory (CFT) consists of a charge mode and two Majorana fermions, with a total chiral central charge c = 2. This phase is illustrated in Fig. 1(c).

3. For PDW states with ∆_pdw > v_F, the fermionic spectrum is gapped (see Fig. 1(d)). From the fermionic point of view, this gapped phase is topologically trivial, with C = 0, as it does not support chiral edge Majorana fermions.
In the QH setting, we identify this phase with the striped Halperin Abelian quantum Hall state, in which electrons form tightly bound charge-2e bosons that condense in a striped Laughlin state.

4. The bulk spectrum changes in the presence of a uniform component ∆_u of the p_x + ip_y pairing order. For ∆_pdw < v_F, the Majorana FS becomes gapped by an infinitesimal ∆_u, while for ∆_pdw > v_F the trivial gapped phase survives until a critical value of ∆_u. We have found that the gapped phase with ∆_u has Chern number C = 1, i.e., it is in the same phase as the uniform Moore-Read p_x + ip_y state. This phase is represented in Fig. 1(e). So, interestingly, the neutral FS in Fig. 1(b) represents a quantum critical "phase" that separates distinct neutral fermion edge states.

Based on our detailed analysis in the remainder of the paper, all the phases mentioned above have been placed in a schematic mean-field phase diagram, shown in Fig. 10.

III. FERMIONIC SPECTRUM OF THE p_x + ip_y PAIR DENSITY WAVE

The quantum Hall state with a half-filled Landau level can be viewed as the paired state of the composite fermions coupled to both a dynamical gauge field and the external electromagnetic field. In this section, we analyze the spectrum of the fermionic sector described by the mean-field pairing of composite fermions. We postpone a full description of the quantum Hall state with gauge fields and charge modes to Section V. The analysis in this section also serves as a self-contained mean-field theory for the p_x + ip_y PDW superconductor, which could potentially be relevant for, e.g., Sr_2RuO_4 [57] or superfluid ³He [58]. To our knowledge this theory has not been presented before in the literature.

A. BdG description of the p_x + ip_y PDW state

Before turning to a PDW state, we consider a generic two-dimensional state with p_x + ip_y local pairing symmetry.
We begin with the Bogoliubov-de Gennes (BdG) Hamiltonian in the continuum,

H(r) = [ ε(k)            ½{k_−, ∆(r)}  ]
       [ ½{k_+, ∆*(r)}   −ε(k)         ],   (3.1)

where k = (k_x, k_y) = (−i∂_x, −i∂_y) and k_± = k_x ± ik_y = −i∂_± (we set ħ = 1). For now let us take the simplest Galilean invariant continuum dispersion,

ε(k) = k²/2m − µ.   (3.2)

We will later discuss lattice effects on the BdG Hamiltonian. Here, the anti-commutator {k_−, ∆(r)} ≡ k_− ∆(r) + ∆(r) k_− is taken to symmetrize the r dependence and the p dependence, a standard procedure to treat a non-uniform order parameter ∆(r). Throughout this work, we consider the case with a normal-state FS, i.e., µ > 0, which, in the case of a uniform order parameter ∆, corresponds to the "weak-pairing regime", describing a topological paired state with chiral Majorana fermion edge states [49]. Notice that the name "strong-pairing regime" has been used by Read and Green [49] for cases with µ < 0. Even though we will consider cases with a large pairing order |∆|, these should not be confused with the "strong-pairing regime" in the sense of Read and Green. The BdG Hamiltonian of Eq. (3.1) possesses a particle-hole symmetry,

σ_1 H σ_1 = −H*,   (3.3)

which relates positive and negative energy states: if Ψ_E(r) = ⟨r|Ψ_E⟩ is an eigenmode of H with energy E, then σ_1 Ψ*_E is an eigenmode with energy −E. Of these states, a particularly interesting eigenstate is the zero mode (ZM), with E = 0. It satisfies σ_1 Ψ*_0 = ±Ψ_0, so that it can be expressed as Ψ_0(x) = e^{−iπ/4} ψ(x) (1, ±i)ᵀ, with ψ(x) ∈ ℝ. For a PDW the order parameter varies along the x axis, ∆(x), and we will work in the gauge where it is a real function of x. With the ansatz that the zero modes are translation invariant along the y direction (k_y = 0), the equation for the potential zero modes reads

[−∂²_x/(2m) − µ] ψ(x) ± ½{∂_x, ∆(x)} ψ(x) = 0.   (3.4)

It should be emphasized that these states are zero modes of the BdG Hamiltonian and, as a result, they obey the Majorana condition.
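On the half line x > 0, where ∆(x) is constant, the zero-mode condition can be verified symbolically. The following sympy sketch (ours, not from the paper) checks that ψ(x) = e^{−qx} cos(κx), with q = m∆_pdw and κ² = 2mµ − q², satisfies Eq. (3.4) with the '+' sign for the DW profile ∆ = −∆_pdw:

```python
import sympy as sp

x, q, kappa, m = sp.symbols('x q kappa m', positive=True)
mu = (kappa**2 + q**2) / (2 * m)   # from kappa = sqrt(2*m*mu - q**2)
Delta = -q / m                      # Delta(x) = -Delta_pdw on x > 0, with q = m*Delta_pdw

psi = sp.exp(-q * x) * sp.cos(kappa * x)   # candidate even-parity zero mode

# Eq. (3.4) with the '+' sign: [-psi''/(2m) - mu*psi] + (Delta*psi' + (Delta*psi)')/2
residual = (-sp.diff(psi, x, 2) / (2 * m) - mu * psi
            + (Delta * sp.diff(psi, x) + sp.diff(Delta * psi, x)) / 2)
residual = sp.simplify(sp.expand(residual))
```

The residual vanishes identically, confirming that the decaying oscillatory profile is an exact zero mode on each half line.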
However, we will see in Sec. III B that here these are not isolated states in the spectrum, but are actually part of a branch of massless Majorana fermions propagating along the domain wall. Thus they should not be confused with their formal cousins, the isolated zero modes at the endpoints of one-dimensional p-wave superconductors [59], or at the cores of vortices of 2D chiral superconductors [60]. The latter type of zero modes are associated with the non-abelian statistics of these defects, whereas the massless Majorana fermions we find here are bound states of domain walls, and are not associated with non-abelian statistics. For these reasons, and to avoid confusion, we will not refer to the zero modes of the BdG Hamiltonian for domain walls as "Majorana zero modes."

B. Domain wall bound states

A PDW state is characterized by pairing order parameters ∆_{±Q} (and their higher harmonics such as ∆_{±3Q}, ∆_{±5Q}, ...) with nonzero momentum ±Q, which couple to fermions via

H_pdw = Σ_{k,±,n=odd} ∆_{±nQ} f_n(k) c†(k ± nQ/2) c†(−k ± nQ/2) + h.c.,   (3.5)

where c†(k) is a spinless fermion creation operator at momentum k, and f_n(k) is the PDW form factor, an odd function enforced by fermionic statistics. At the level of mean field theory, the PDW order parameters ∆_{±Q} satisfy

|∆_Q| = |∆_{−Q}|,   (3.6)

and this relation holds similarly for all higher harmonics. Then the real space form of the order parameter is

∆(r) = Σ_{n=odd} |∆_{nQ}| e^{iθ} cos(nQ · r + φ),   (3.7)

and the phases θ and φ can both be set to zero after a gauge transformation and a spatial translation. As we shall see later, this defining property of the PDW leads to important symmetries that protect a gapless fermionic spectrum. However, fluctuations about the mean field state do not obey these constraints. As a result, the full PDW order parameter has, in its simplest form, two complex order parameters, ∆_{±Q} [28,50,61].
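The distinction encoded in Eq. (3.6) between an LO-type PDW and a spiral (Fulde-Ferrell-type) configuration can be illustrated with a toy Fourier decomposition (our sketch; the grid size and mode number are arbitrary):

```python
import numpy as np

N = 240
j = np.arange(N)
q_idx = 5                      # integer mode number; Q = 2*pi*q_idx/N
Q = 2 * np.pi * q_idx / N

lo = np.cos(Q * j)             # LO / PDW standing wave: |Delta_Q| = |Delta_-Q|
ff = np.exp(1j * Q * j)        # Fulde-Ferrell spiral: only Delta_Q nonzero

c_lo = np.fft.fft(lo) / N      # Fourier amplitudes
c_ff = np.fft.fft(ff) / N
```

The standing wave carries equal weight 1/2 at ±Q, while the spiral has all of its weight at +Q, which is exactly the difference that makes the M_y T (and hence M_y S) symmetry discussed below available for the PDW but not for the Fulde-Ferrell state.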
This complexity of the order parameter manifold has important consequences for the pathways to the quantum and/or thermal melting of this state. In real space, a PDW state can be viewed as a periodic arrangement of domains of pairing order with alternating signs of the order parameter. Across each domain wall the pairing gap ∆ changes sign, and it vanishes at the domain wall location. Thus, we expect the low-energy fermionic states to be concentrated in the close vicinity of the domain walls. For simplicity we will only consider the domain wall states with lowest energy. The interplay between higher-energy domain wall states can be similarly analyzed and does not lead to any qualitative differences, as we shall see later. Moreover, it turns out that, for an isolated domain wall, the lowest energy states have interesting topological properties. It is convenient to consider a simple picture of a PDW whose p_x + ip_y order parameter has constant magnitude but alternating signs. In this simple case, the midgap states with non-zero energies are pushed away from E = 0 and we can study the properties of Majorana zero modes more clearly. We begin our analysis with a single isolated domain wall (DW), or anti-domain wall (ADW), and use the result as a starting point to couple the bound states of a DW-ADW array. It should be noted that the zero modes that we will find below arise as bound states of the BdG one-particle Hamiltonian, much in the same way as Majorana zero modes at the end-point of a p-wave superconductor [59] (or in the cores of a half-vortex of a p_x + ip_y superconductor [60]). As we noted above, their physics is very different. We begin with a DW configuration at x = 0, given by

∆_isol(r) = −∆_pdw sgn(x),   (3.8)

where ∆_pdw > 0. For convenience, we define a quantity with units of momentum,

q = m∆_pdw.   (3.9)

The solutions to Eq.
(3.4), for the DW profile of Eq. (3.8) (and its ADW counterpart, Eq. (3.16) below), yield a pair of normalizable zero-energy solutions with k_y = 0 localized at x = 0, with even and odd parity, given by (for more details see Appendix A)

⟨r|Ψ_e⟩ = (N_e/√L) e^{−q|x|} cos(κx) u_1,
⟨r|Ψ_o⟩ = (N_o/√L) e^{−q|x|} sin(κx) u_1,   (3.10)

where

u_1 = (1, i)ᵀ/√2,   κ = (k_F² − q²)^{1/2},   (3.11)

k_F ≡ (2mµ)^{1/2} is the Fermi momentum, and the normalization constants N_e, N_o are given by

N_e = [2q(κ² + q²)/(κ² + 2q²)]^{1/2},   N_o = [2q(κ² + q²)/κ²]^{1/2},   (3.12)

where L is the system length along the y direction. For q ≪ k_F we have N_o ≈ N_e, but in general they are different. Notice that the above expression (3.10) applies to both q < k_F and q > k_F: in particular, for q > k_F the coefficient κ is imaginary, and the functions cos(κx) and sin(κx) in (3.10) become cosh(|κ|x) and −i sinh(|κ|x), which are non-oscillatory. One can easily verify that the wave functions are still normalizable, thanks to the e^{−q|x|} factor, with the same normalization factors N_{e,o}. (Note that N_o becomes imaginary, and ⟨r|Ψ_o⟩ remains real.) However, as we will see, the different forms of the wave packets for q < k_F and q > k_F generally lead to very different couplings between the domain wall modes. The dispersion relation of the modes propagating along the y axis can be obtained from degenerate perturbation theory by computing the 2 × 2 perturbation matrix V(k_y)_{p,p′} = ⟨Ψ_p| δH(k_y) |Ψ_{p′}⟩, for p, p′ = e, o, with

δH(k_y) = H(k_y) − H(k_y = 0) = k_y ∆_isol(x) σ_y + (k²_y/2m) σ_z.   (3.13)

A direct calculation shows that the eigenstates are a pair of counter-propagating modes,

⟨r|Ψ_{R,L}(k_y)⟩ = e^{ik_y y} (⟨r|Ψ_e⟩ ∓ ⟨r|Ψ_o⟩)/√2,   (3.14)

with linear dispersion

E_{R,L} = ±v_y k_y,   v_y = q²/[m(2q² + κ²)^{1/2}].   (3.15)

Notice that the quadratic dependence on the momentum disappears because ⟨u_1|σ_z|u_1⟩ = 0. For an ADW configuration with

∆_isol(r) = ∆_pdw sgn(x),   (3.16)

the counter-propagating edge states can be straightforwardly obtained by the same procedure.
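The normalization constants and the domain-wall velocity can be cross-checked by direct numerical integration. The sketch below is ours; it uses our reading of Eqs. (3.12) and (3.15), with the square roots restored, and illustrative parameter values:

```python
import numpy as np

m, Dpdw, kF = 1.0, 0.6, 1.0
q = m * Dpdw
kappa = np.sqrt(kF**2 - q**2)

# normalization constants of Eq. (3.12)
Ne = np.sqrt(2 * q * (kappa**2 + q**2) / (kappa**2 + 2 * q**2))
No = np.sqrt(2 * q * (kappa**2 + q**2) / kappa**2)

x = np.linspace(-40.0, 40.0, 200001)
dx = x[1] - x[0]
psi_e = Ne * np.exp(-q * np.abs(x)) * np.cos(kappa * x)
psi_o = No * np.exp(-q * np.abs(x)) * np.sin(kappa * x)

norm_e = np.sum(psi_e**2) * dx   # should be ~1
norm_o = np.sum(psi_o**2) * dx   # should be ~1

# off-diagonal matrix element of Delta_isol(x) * sigma_y per unit k_y,
# using sigma_y u1 = u1 for u1 = (1, i)/sqrt(2)
Delta_isol = -Dpdw * np.sign(x)
M = np.sum(psi_e * Delta_isol * psi_o) * dx

v_pred = q**2 / (m * np.sqrt(2 * q**2 + kappa**2))   # Eq. (3.15) as read here
```

The quadrature reproduces unit normalization and |M| = v_y to high accuracy, supporting the reconstructed forms of N_{e,o} and v_y.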
Since a DW and an ADW transform into each other under the gauge transformation ∆ → −∆, much of the above result for a DW should also hold for an ADW. The only difference is that the spinor part u of the wave functions in Eq. (3.10) is replaced with

u_2 = (1, −i)ᵀ/√2.   (3.17)

1. Symmetry-protected stability of the domain wall counter-propagating modes

The existence of two gapless modes at the domain wall may seem surprising at first sight. After all, a domain wall separates regions with p_x + ip_y pairing and −(p_x + ip_y) pairing, and the two regions have the same Chern number. Thus, without additional symmetry, the domain wall states would generally be gapped. To establish the stability of the domain wall modes, it is convenient to "fold" the system along a single domain wall and treat the domain wall as the edge of the folded system. The symmetry that is pertinent to the stability of the edge modes involves a spinless time-reversal operation T = K (K is the complex conjugation operator). For a p_x + ip_y state, both the (spinless) time-reversal symmetry T and the mirror symmetries M_{x,y} are broken, but one can define a composite symmetry M_{x,y}T that remains intact. Together with the particle-hole symmetry C = τ_x K that comes with the BdG Hamiltonian, our (folded) system has an M_y S symmetry, where S = CT = τ_x is known as a chiral operation [62]. The system satisfies

(M_y S) H (M_y S)⁻¹ = −H.   (3.18)

For the mirror invariant value k_y = 0, the composite symmetry reduces to a chiral symmetry S, and the 1d subsystem belongs to the BDI class [62]. According to the classification table, the BDI class in one dimension has a ℤ classification characterized by an integer winding number ν. We find that the folded system has ν = 2, and this corresponds to the two zero modes at the edge at k_y = 0. One can show that a term ∼ ∆σ_y added to the Hamiltonian of Eq. (3.1) would gap out these two modes, but such a term is prohibited by the M_y S symmetry.
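The two protected k_y = 0 modes per wall can also be seen by brute-force diagonalization of a discretized version of the k_y = 0 BdG problem. The sketch below is ours (the discretization scheme and parameters are illustrative): it places one DW and one ADW on a periodic chain and finds four near-zero eigenvalues, two per wall, consistent with the ν = 2 winding argument:

```python
import numpy as np

def pdw_chain(N=400, a=0.2, m=1.0, mu=0.5, dpdw=0.6):
    """Discretized k_y = 0 BdG Hamiltonian (3.1) with Delta(x) = -dpdw on one
    half of a periodic chain and +dpdw on the other (one DW plus one ADW)."""
    x = a * np.arange(N)
    Delta = np.where(x < a * N / 2, -dpdw, dpdw)
    t = 1.0 / (2.0 * m * a**2)
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        n1 = (n + 1) % N
        # kinetic energy: particle block and (negated) hole block
        H[n, n] += 2 * t - mu
        H[N + n, N + n] -= 2 * t - mu
        H[n, n1] -= t; H[n1, n] -= t
        H[N + n, N + n1] += t; H[N + n1, N + n] += t
        # symmetrized pairing (1/2){ -i d/dx, Delta(x) } on the bond (n, n+1)
        d = -1j * (Delta[n] + Delta[n1]) / (4 * a)
        H[n, N + n1] += d
        H[n1, N + n] += -d
        H[N + n1, n] += np.conjugate(d)
        H[N + n, n1] += np.conjugate(-d)
    return H

E = np.linalg.eigvalsh(pdw_chain())
absE = np.sort(np.abs(E))
```

Four eigenvalues sit at zero up to an exponentially small inter-wall splitting, separated from the remaining states by the bulk pairing gap; same-wall modes share the same chirality eigenvalue and therefore cannot split each other.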
We note that the chiral symmetry stems from the defining symmetry of the PDW state. In general, nonuniform superconducting states consist of finite-momentum pairing order parameters ∆_Q and ∆_{−Q}, which are related by inversion. A Fulde-Ferrell state, which does not oscillate and has a single complex order parameter, has ∆_Q ≠ 0 and ∆_{−Q} = 0; more generally, one can consider a state with |∆_Q| ≠ |∆_{−Q}|. Such an SC order parameter has, in real space, a "spiral" pattern in its phase rather than an oscillatory pattern. In these cases the M_y T symmetry is absent, and so is the M_y S symmetry, and there are no such gapless domain wall modes. It is crucial that for a PDW state, similar to a Larkin-Ovchinnikov (LO) state, |∆_Q| = |∆_{−Q}|, so that the M_y S symmetry is intact.

C. FS from domain wall coupling

So far we have considered the case of completely isolated DWs. At finite values of the PDW wavelength, though, hybridization between DWs inevitably occurs, and is responsible for making the DW excitations regain their 2D character. In this Subsection, we will consider a PDW state with DW (and ADW) bound states and derive the dispersion of the (hybridized) bulk states. Due to the exponential decay of the domain wall state wave function in Eq. (3.10), we expect the effective hopping matrix element between DWs separated by a distance d to scale as e^{−qd}; for a nearest neighbor DW and anti-DW separated by λ/2 ≡ π/Q, the coupling is of order e^{−πq/Q}. Then if Q < q, we can employ a tight-binding approximation in which the nearest neighbor hopping gives the dominant contribution. For the rest of this work, we will mainly focus on the regime

Q < q < k_F,   (3.19)

where the first inequality enables us to use a tight-binding approximation, and the second inequality ensures that the local pairing gap ∆(r) is smaller than the Fermi energy µ, a reasonable assumption in the spirit of weak coupling theory. As we discussed in Sec.
III B, in this regime the wave functions in (3.10) are oscillatory functions enveloped by a symmetric exponential decay. We have set the PDW wavevector Q < k_F; this is needed in order for the normal state FS to be reconstructed in a meaningful way. As we proceed, we will discuss the other regimes of the length scales, with q < Q and q > k_F, as well. Consider the PDW state obtained as a periodic sequence of DWs and ADWs,

∆(x) = ∆_pdw [1 + Σ_{ℓ=1,2} Σ_{n∈ℤ} (−1)^ℓ sgn(x − x_n^{(ℓ)})],   (3.20)

where DWs are located at x_n^{(1)} = nλ and ADWs are located at x_n^{(2)} = (n + 1/2)λ. The order parameter of (3.20), and consequently the BdG Hamiltonian of the state, are then periodic under shifts of x by integer multiples of λ. Other than this translational symmetry, the PDW configuration (3.20) also entails an inversion symmetry of the BdG Hamiltonian (3.1), with inversion centers at x_n^{(ℓ)}. Indeed, under such an inversion, both k_± and ∆(r) change sign, rendering their anticommutator, and hence H(r), invariant. For the domain wall modes, from Eq. (3.14) we see that left movers and right movers transform into each other under inversion. It is straightforward to see that this inversion symmetry simply derives from Eq. (3.6), the defining property of a PDW state. The system also has a "half-translation" symmetry: under a translation by λ/2 the order parameter changes sign, which can be undone by the gauge transformation ∆ → −∆. For the domain wall modes, left and right movers retain their chirality under the half translation. We will use these symmetries to establish relations between the hopping matrices. Let us consider a variational state

|Ψ_{k_x,k_y}⟩ = Σ_{ℓ=1,2} Σ_{µ=L,R} c_{ℓ,µ} Σ_{n∈ℤ} (e^{ik_x nλ}/√N) |Ψ_{ℓ,µ,n}(k_y)⟩,   (3.21)

where |Ψ_{ℓ=1}⟩ ∝ u_1 = (1, i)ᵀ/√2 and |Ψ_{ℓ=2}⟩ ∝ u_2 = (1, −i)ᵀ/√2. The coefficients c_{ℓ,µ} are variational parameters; the dependence on the momentum k_y enters via the dispersive modes |Ψ_{L,R}⟩ along each of the DWs and ADWs, and the dependence on the crystal momentum k_x ∈ (−Q/2, Q/2) enforces that the state of Eq. (3.21) satisfies the Bloch theorem.
The steps leading to the energy of this variational tight-binding state are lengthy but straightforward [63], and are presented in Appendix C. Minimization of the energy of the state,

E_{k_x,k_y}[{c_{ℓ,µ}}] = ⟨Ψ_{k_x,k_y}| H |Ψ_{k_x,k_y}⟩,

leads to an eigenvalue problem for the effective Hamiltonian

H(k_x, k_y) =
[  v_y k_y            0                   t′ + t e^{−ik_x}    t − t̃ e^{−ik_x}  ]
[  0                  −v_y k_y            −t + t̃ e^{−ik_x}   t′ + t e^{−ik_x}  ]
[  t′ + t e^{ik_x}    −t + t̃ e^{ik_x}    v_y k_y             0                 ]
[  t − t̃ e^{ik_x}     t′ + t e^{ik_x}    0                   −v_y k_y          ],   (3.24)

where for convenience we have redefined k_xλ → k_x, so that k_x ∈ (−π, π). This effective Hamiltonian is expressed in the basis of states {|Ψ_{1,R}⟩, |Ψ_{1,L}⟩, |Ψ_{2,R}⟩, |Ψ_{2,L}⟩} (momentum dependence omitted), where the indices 1 (2) denote DW (ADW) degrees of freedom. The diagonal blocks, proportional to v_y k_y σ_3, represent the kinetic energies of the right- and left-moving modes on DWs and ADWs, respectively, while the off-diagonal blocks represent the coupling between an adjacent DW-ADW pair. The constants t, t̃, t′ can be understood intuitively as "hopping amplitudes" between neighboring domain wall modes, which we illustrate in Fig. 2. Specifically, t describes the coupling between neighboring modes with the same chirality. Importantly, all these couplings are the same, as follows from the inversion symmetry and the half-translation symmetry. t′ describes the coupling between the right-mover at a DW and the left-mover at the ADW to its right. By the half-translation symmetry or the inversion symmetry, t′ also describes the coupling between the right-mover at an ADW and the left-mover at the DW to its right. On the other hand, t̃ describes the coupling of a left-mover with a right-mover to its left. Notice that there is no symmetry requirement relating t′ and t̃.
In Appendix C we evaluate t, t̃, and t′, with the results

t = −(κ/4m) e^{−qλ/2} [2N_e N_o cos(κλ/2) + (N_e² − N_o²) sin(κλ/2)],
t̃ = −(κ/4m) e^{−qλ/2} [2N_e N_o cos(κλ/2) − (N_e² − N_o²) sin(κλ/2)],
t′ = −(κ/4m) e^{−qλ/2} (N_e² + N_o²) sin(κλ/2).   (3.25)

We note that so far our analysis and Eqs. (3.24), (3.25) apply to both q < k_F and q > k_F. In particular, it is easy to verify that for q > k_F, the couplings t, t̃, and t′ are still real. As promised, we will focus on q < k_F for now. In this regime, we find that out of the four bands [64] given by Eq. (3.24), two cross each other at zero energy, as illustrated in Fig. 3, which shows the spectrum of (3.24) for t = 0.5, t′ = 0.4, t̃ = 0.6 at k_y = 0. The zero-energy band crossing results in a (two-fold degenerate) FS, whose contour is given by the vanishing of the determinant,

det[H(k_x, k_y)] = [(v_y k_y)² − 4(t² + t̃t′) cos²(k_x/2) + (t + t̃)²]² = 0.   (3.26)

It is easy to verify that this equation does have solutions for q < k_F. Importantly, this degenerate FS belongs to energy bands of Majorana modes, and by construction quasiparticles near it satisfy the Majorana condition γ†(k) = γ(−k). For this reason, we term it a "Majorana FS". To verify Eq. (3.26), we numerically solved the lattice version of the BdG Hamiltonian (3.1). For the normal state we used the dispersion

ε(k) = −t_0 (cos k_x + cos k_y) − µ,   (3.27)

and for the off-diagonal element of the BdG Hamiltonian we used

½{(sin k_x − i sin k_y), ∆(x)}.   (3.28)

We set the parameters to t_0 = 1, µ = −1.25, ∆_pdw = 0.82, and Q = π/6. The match between the computed spectral function ρ(k, E) and the FS analytically given by (3.26) is good, as shown in Fig. 4. The match becomes even better if we take µ → −2: in this case the relevant dispersion becomes parabolic and approaches the continuum limit. As q ≡ m∆_pdw varies, the relative amplitudes of t, t̃, and t′ vary periodically, and the two-fold degenerate FS expands and shrinks.
Note that when

(k_F² − q²)^{1/2} ≡ κ = nQ,  n ∈ ℤ,   (3.29)

from Eq. (3.25) we have t′ = 0 and t = t̃. Plugging these into Eq. (3.26), we see that the two-fold degenerate FS shrinks to two Dirac points, both at k = 0. However, we will see in the next Subsection that the existence of two overlapping Dirac points, i.e., the four-fold degeneracy at k = 0, is a non-universal property of the continuum theory, and that in generic cases at κ = nQ the fermionic spectrum is actually gapped. To that end, we will first need to understand whether and why the band crossing at the FS for generic values of t, t̃, and t′ is robust. Before we move on, let us briefly discuss the fermionic spectrum for q ≪ Q. So far we have worked in the regime where we only need to consider the nearest-neighbor coupling between the domain wall Majorana modes. For q ≪ Q, the domain wall states are no longer well-defined, as their localization length becomes longer than the PDW wavelength. In this case the domain wall Majorana modes are not a good starting point for analytical calculations. It turns out that this regime admits a simple description in k space. We note that, due to Brillouin zone folding, the typical energy scale for the relevant bands in the folded BZ is given by E_F^Q ∼ v_F Q. In this regime we have k_F ∆_pdw ≪ E_F^Q, with k_F ∆_pdw being the size of the p-wave gap on the FS, which indicates that the PDW can be treated perturbatively in k space. Indeed, numerically we found that the FS resembles that of the composite fermions, except in the regions with k_x = ±Q/2, ±3Q/2, ..., which get gapped and perturbatively reconstructed by the PDW order. Importantly, in this case the FS is made out of Bogoliubov quasiparticles d(k) = u_k c(k) + v_k c†(−k + Q), which are in general not Majorana quasiparticles, i.e., u_k ≠ v_k. For this reason we call it a "Bogoliubov FS", to distinguish it from the Majorana FS we obtained previously.
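The κ = nQ statement follows from the common factor sin(κλ/2) in (3.25), which vanishes when κλ/2 = nπ. A quick numerical check (ours, using our reading of Eqs. (3.25) and (3.35); parameter values are illustrative) confirms t′ → 0, t → t̃, and a vanishing Pfaffian at k = 0:

```python
import numpy as np

def hoppings(q, kappa, Q, m=1.0):
    """Effective domain-wall couplings t, t-tilde, t-prime of Eq. (3.25),
    as reconstructed here."""
    lam = 2 * np.pi / Q
    Ne = np.sqrt(2 * q * (kappa**2 + q**2) / (kappa**2 + 2 * q**2))
    No = np.sqrt(2 * q * (kappa**2 + q**2) / kappa**2)
    pre = -(kappa / (4 * m)) * np.exp(-q * lam / 2)
    s, c = np.sin(kappa * lam / 2), np.cos(kappa * lam / 2)
    t = pre * (2 * Ne * No * c + (Ne**2 - No**2) * s)
    tt = pre * (2 * Ne * No * c - (Ne**2 - No**2) * s)   # t-tilde
    tp = pre * (Ne**2 + No**2) * s                        # t-prime
    return t, tt, tp

# at kappa = n*Q the sine factor vanishes, so t' -> 0 and t -> t-tilde
Q, n, q = 0.3, 2, 0.5
t, tt, tp = hoppings(q=q, kappa=n * Q, Q=Q)

# Pfaffian (3.35) at k = (0, 0): (t + t~)^2 - 4 (t^2 + t~ t')
pf0 = (t + tt)**2 - 4 * (t**2 + tt * tp)
```

The Pfaffian vanishing at the origin is precisely the collapse of the Majorana FS to the doubly degenerate Dirac point of the continuum theory.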
As one increases ∆_pdw, the Bogoliubov FS gets progressively gapped and crosses over to the one obtained previously in Fig. 4.

Symmetry-protected stability of the Majorana FS

As we emphasized, two bands cross at the FS given by (3.26). It is then a natural question whether this band crossing is robust against perturbations, or whether it is accidental due to the particular BdG Hamiltonian (3.1) we are using.

[Figure caption: For parameter values such that κ = nQ, which correspond to t′ = 0 and t = t̃, the FS shrinks to a doubly degenerate Dirac point at (k_x, k_y) = (0, 0). We note, however, that this Dirac point is accidental, in the sense that it is a property of the continuum approximation of the band structure in which the original FS (in the absence of a PDW order parameter) is circular. Our numerical calculation indeed shows that this Dirac point is gapped once lattice effects become non-negligible.]

Here we show that the gapless nature of the FS is protected by symmetry. In particular, the defining inversion symmetry of the PDW state, |∆_Q| = |∆_{−Q}|, again plays a crucial role. In the literature, band crossings in k space that form sub-manifolds with co-dimension 2 and 3 have been intensively discussed. In two spatial dimensions the band crossings are known as Dirac points, while in three spatial dimensions, these are Weyl points (with co-dimension 3), Dirac points (with co-dimension 3), and nodal lines (with co-dimension 2). The band crossing we obtained has co-dimension 1, which corresponds to "nodal FS's". The stability of the nodal FS is less well known, but has also been recently analyzed [56,65-69]. A particularly systematic analysis has been done in Ref. [67]. For our purposes, we will closely follow the analysis in Ref. [56]. We focus on the particle-hole symmetry and the inversion symmetry previously identified for the BdG Hamiltonian (3.1) for the PDW state. With regard to the effective Hamiltonian Eq.
(3.24), the particle-hole symmetry that relates positive and negative energy states of the Hamiltonian Eq. (3.24) is expressed through a unitary operator C:

C H(−k)^T C† = −H(k), C = σ_0 ⊗ τ_3 = [[σ_0, 0], [0, −σ_0]]. (3.30)

The inversion symmetry is expressed as

I(−k_x) H(−k) I(−k_x)† = H(k), I(k_x) = [[σ_1, 0], [0, e^(−ik_x) σ_1]], (3.31)

where the action of σ_1 is to switch left- and right-moving modes, and the momentum dependence e^(−ik_x) on the ADW degrees of freedom reflects the fact that the center of inversion is taken with respect to a DW. Both C and I relate k with −k, and it is useful to consider their composite, which relates H(k) with itself. We define another unitary operator

U_CI(k_x) ≡ C I(k_x) = [[σ_1, 0], [0, −e^(−ik_x) σ_1]], (3.32)

which, importantly, is symmetric. It then follows that for any given k,

U_CI(k_x) H(k) U_CI†(k_x) = −H^T(k). (3.34)

At the location of the FS, det(H(k)) = Pf(H(k)) = 0. Importantly, since H is Hermitian, one can check that the Pfaffian Pf(H(k)) is always real. If two points of the BZ, k_1 and k_2, satisfy Pf(H(k_1)) Pf(H(k_2)) < 0, then there is a FS separating k_1 and k_2 at which the Pfaffian changes sign. Symmetry-preserving perturbations can move the location of the FS in k space, but they cannot gap the spectrum unless the FS shrinks to zero size. Specifically, for our tight-binding Hamiltonian (3.24) one obtains

Pf(H(k)) = (v_y k_y)² + (t + t̃)² − 4(t′² + t t̃) cos²(k_x/2). (3.35)

The condition Pf(H(k)) = 0 indeed matches the location of the FS given by (3.26). The FS is stable in the presence of small perturbations that preserve the two symmetries simultaneously.

D. Gapped states from domain wall coupling

1. Gapped phase near κ = nQ

We continue to focus on the regime Q < q < k_F. Our argument on grounds of inversion symmetry in Sec. III A establishes the stability of the doubly degenerate FS. However, it does not ensure the stability of the double Dirac points obtained from (3.26) and (3.25) at κ = nQ, which in turn are obtained from the continuum BdG Hamiltonian Eq. (3.1).
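The Pfaffian sign-change criterion invoked above can be checked numerically. A hedged sketch (same illustrative hoppings as before, v_y = 1): Pf of Eq. (3.35) has opposite signs at k = 0 and at the zone boundary along k_y = 0, so a nodal FS must separate them, and bisection locates it.

```python
import numpy as np

# Illustrative check of the Pfaffian criterion, Eq. (3.35).
t, tp, tt, v_y = 0.5, 0.4, 0.6, 1.0

def pf(kx, ky=0.0):
    return (v_y * ky)**2 + (t + tt)**2 - 4.0 * (tp**2 + t * tt) * np.cos(kx / 2)**2

assert pf(0.0) < 0.0 < pf(np.pi)   # opposite signs at k = 0 and the zone edge

# Bisect for the sign change; the root reproduces the FS of Eq. (3.26).
a, b = 0.0, np.pi
for _ in range(60):
    m = 0.5 * (a + b)
    a, b = (m, b) if pf(m) < 0.0 else (a, m)
kx_root = 0.5 * (a + b)
print(kx_root)   # kx of the nodal FS at ky = 0
```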
Here we show that for a BdG Hamiltonian with a generic lattice dispersion and a p-wave form factor of the local pairing, the Dirac spectra at κ = nQ of the continuum model are replaced by gapped fermionic spectra. Moreover, remarkably, the gapped system has a trivial band topology, even though the local pairing symmetry is p_x + ip_y with µ > 0. It is instructive to first understand the origin of the double Dirac points at κ = nQ in the continuum model. At these points, from Eq. (3.25), the same-chirality hopping amplitude t′ vanishes. As a result, a left mover couples only to its adjacent right movers, and vice versa. The domain wall modes decompose into two separate chains of coupled wires, each of them alternating between left- and right-movers. We illustrate this situation in Fig. 8, where the solid arrowed lines denote t̃ and the dashed arrowed lines denote t. From Eq. (3.25) we see that at κ = nQ we have t = t̃. With t = t̃, each of the two chains gives rise to a Dirac point at k = 0, in a mechanism similar to the Dirac cone "reconstruction" at the surface of a topological insulator via hybridization of chiral modes localized at oppositely oriented ferromagnetic domain walls [70]. However, recall that after a careful analysis we have concluded there is no symmetry that relates t and t̃. The fact that we obtained t = t̃ in the continuum model at κ = nQ is merely an accident. For a generic dispersion with an almost circular normal state FS, we expect from (3.25) that when t′ = 0,

t̃ − t ∝ N_e² − N_o². (3.36)

Following an analogy with the well-known Su-Schrieffer-Heeger model for polyacetylene [71], this asymmetrical coupling pattern gaps out the fermionic spectrum. The spectral gap is proportional to |N_o² − N_e²|. This spectral gap is rather small; in particular, for q ≪ k_F we have from (3.38) that N_e ≈ N_o. For q more comparable to k_F this spectral gap increases.
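The Su-Schrieffer-Heeger analogy above can be made concrete with a minimal sketch (not the full domain-wall Hamiltonian; the hopping values are illustrative, mimicking a small N_e, N_o asymmetry): a chain with alternating hoppings t and t̃ has Bloch bands ±|t + t̃ e^(ik)|, whose gap closes only when t = t̃.

```python
import numpy as np

# Minimal SSH-type sketch of the t' = 0 limit: alternating hoppings t and tt.
t, tt = 0.48, 0.52                     # slightly unequal, like N_e != N_o
k = np.linspace(-np.pi, np.pi, 2001)
E = np.abs(t + tt * np.exp(1j * k))    # the two bands sit at +/- E(k)
gap = 2.0 * E.min()
print(gap, 2.0 * abs(t - tt))          # spectral gap equals 2|t - tt|
```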
Naturally, in the vicinity of the would-be Dirac point values, i.e., near κ = nQ, the spectral gap persists, and for larger q the range of q with a gapped spectrum is larger. Indeed, we numerically solved the lattice version of the problem with Eqs. (3.27) and (3.28). With t_0 = 1 and µ = 1.9, the normal-state FS is nearly isotropic. Yet we see that when the FS shrinks, the spectrum becomes gapped instead of forming Dirac points. We show the gapped Dirac dispersion in this situation in Fig. 7(a). We have also verified that as ∆_pdw increases, the Dirac gap becomes larger. The band topology of this gapped phase can be obtained by inspecting the edge modes. From Fig. 8, it is straightforward to see that the coupling pattern between the domain wall modes (not including the leftmost and rightmost modes, which are edge modes) leaves two unpaired chiral domain wall modes at the two ends. On the other hand, owing to the local p_x + ip_y pairing symmetry, there would be a chiral mode (shown in yellow in Fig. 8) at each physical edge of the system. For t̃ > t, i.e., when the hopping represented by the solid arrowed lines is stronger, one can check that the unpaired domain wall mode and the would-be edge mode are of opposite chirality, and they gap each other out. The resulting state does not host any gapless edge modes, and is thus topologically trivial, with Chern number C = 0. On the other hand, if t > t̃, the unpaired domain wall mode and the edge mode are of the same chirality; in this case at each edge there would be two chiral modes propagating in the same direction, with C = 2. A similar situation has been found in a p-wave SC in the presence of a vortex lattice [72]. It is also instructive to understand how the competition between t and t̃ changes the Chern number by 2, by considering the following reasoning.
Let t′ = 0, t = τ + δ, and t̃ = τ − δ; then the effective Hamiltonian, after an appropriate unitary transformation, is of the form H = B(k) · Γ, where B(k) = (v_y k_y, −2τ sin(k_x/2), 2δ cos(k_x/2)) and Γ = (Γ_1, Γ_2, Γ_3) are anti-commuting matrices with Γ_i² = 1. Then at δ = 0 (t = t̃) we see the two Dirac points at k = 0, which become massive for δ ≠ 0. The Chern number measures the winding of the spinor B(k) as k is varied. Importantly, the sign of δ controls the orientation of the spinor along the third axis (i.e., the direction Γ_3). Reversing the sign of δ reverses the orientation of the spinor and changes the Chern number by ∆C = 2 × 1 = 2, where the factor of 2 accounts for the number of Dirac cones. Notice that for both the C = 0 and C = 2 pairing states there are no Majorana zero modes bound at vortex cores. In particular, for the C = 2 state there are two would-be zero modes near a vortex that generally gap each other out. In our PDW setup one of these would-be zero modes comes from the vortex core and the other from a domain wall mode circulating the vortex, as can be seen through an analysis similar to what we did for the edge modes. In terms of the quantum Hall physics, as we will see, the absence of the vortex Majorana modes indicates that these states have Abelian topological order. For our square lattice model, from counting the number of edge modes with open boundary conditions we found C = 2 at the κ = nQ points in the quasi-continuum limit. We show in Fig. 7(b) such a situation, with n = 8 and an open boundary condition in the x direction. As can be seen, there are two propagating modes of each chirality. In Appendix D we compute the lattice corrections to t, t̃, and t′ in Eq. (3.25) for our square lattice model, and show that indeed t > t̃ at t′ = 0. We have not done the calculation for other lattices, and from symmetry constraints alone both C = 0 and C = 2 phases are possible.
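The spinor-winding argument above can be illustrated numerically. A hedged sketch (illustrative parameters; the cone is linearized near k = 0 as B ≈ (v_y k_y, −τ k_x, 2δ) with velocities rescaled to 1): integrating the skyrmion density of one cone over a large disk gives Berry flux ≈ ±1/2 depending on sign(δ), so flipping δ changes each cone's contribution by 1, and with two cones ∆C ≈ 2.

```python
import numpy as np

def cone_flux(delta, L=200.0, n=200001):
    """Berry flux (in units of 2*pi) of one linearized cone B ~ (ky, -kx, 2*delta):
    (1/4pi) times the integral of the skyrmion density over |k| < L, done radially."""
    m = 2.0 * delta
    r = np.linspace(0.0, L, n)
    # skyrmion density m/(r^2+m^2)^(3/2), times 2*pi*r Jacobian, divided by 4*pi
    dens = 0.5 * m * r / (r**2 + m**2)**1.5
    return np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(r))  # trapezoid rule

c_up, c_dn = cone_flux(+0.3), cone_flux(-0.3)
print(c_up, c_dn)              # approximately +1/2 and -1/2 per cone
print(2.0 * (c_up - c_dn))     # two cones: total Chern number change, approx 2
```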
Quite remarkably, with a p_x + ip_y local pairing symmetry, the PDW state realizes the band topology of a d + id superconductor, even though their symmetry properties are very different.

2. Gapped phase for q > k_F

Now we consider the hybridization of the bound states with q > k_F = √(2mµ) located at the nodes of the PDW order parameter. Here we show that the bulk spectrum of the 2D array is gapped, and it is topologically trivial. For q > k_F, as we mentioned, both (3.10) and (3.25) continue to hold. The only difference is that now κ, N_o, and sin(κx) are imaginary. It is convenient to express (3.25) in terms of real variables:

t = −(κ̄/4m) e^(−qλ/2) [2 N̄_e N̄_o cosh(κ̄λ/2) − (N̄_e² + N̄_o²) sinh(κ̄λ/2)],
t̃ = −(κ̄/4m) e^(−qλ/2) [2 N̄_e N̄_o cosh(κ̄λ/2) + (N̄_e² + N̄_o²) sinh(κ̄λ/2)],
t′ = (κ̄/4m) e^(−qλ/2) (N̄_e² − N̄_o²) sinh(κ̄λ/2), (3.37)

where κ̄ ≡ |κ| = √(q² − k_F²), and

N̄_e = 2q(q² − κ̄²)/(2q² − κ̄²), N̄_o = 2q(q² − κ̄²)/κ̄². (3.38)

With these, we notice that the Pfaffian of the spectrum (3.26),

Pf[H(k_x, k_y)] = (v_y k_y)² − 4(t′² + t t̃) cos²(k_x/2) + (t + t̃)² > 0,

always holds for q > k_F. The proof is elementary: the Pfaffian is greater than (t − t̃)² − 4t′² ∝ N̄_o² N̄_e² > 0. This indicates that the fermionic spectrum is gapped for q > k_F. The size of the gap is of the same order as the t's. We note that the gap in the q > k_F regime is typically larger than the gap for q < k_F, since the latter is given by lattice corrections (see Appendix D) and vanishes in the continuum limit. The topology of this state can be obtained by a similar analysis at q ≫ k_F. Since for smaller q > k_F the gap does not close, the topology does not change. For q ≫ k_F, q ≈ κ̄, and N̄_e ≈ N̄_o. Thus from (3.37) we have t′ → 0. Then, as we discussed previously in Sec. III D 1 and as shown in Fig. 8, the Chern number of this state again depends on the relative amplitudes of t and t̃. Here, from Eq.
(3.37), we have unambiguously t̃ > t, and therefore the gapped state has C = 0; i.e., the band topology of the gapped state at q > k_F is trivial. It is worth comparing the trivial pairing state we obtained with the "strong pairing phase" considered in Ref. [49]. As we cautioned, the "strong pairing" there refers to a situation in which the "normal state" does not have a FS (µ < 0). Both our state and the strong pairing phase are topologically trivial. In our case, however, we note that we have always set µ > 0, so it may seem puzzling at first why our state is trivial. Here the trivial topology is obtained by invoking additional domain wall states, which by themselves couple into a 2D system that neutralizes the total Chern number.

IV. COEXISTENCE OF PDW ORDER AND UNIFORM PAIRING ORDER

In this section we focus on the fermionic spectrum in the presence of a coexisting PDW order parameter and a uniform p_x + ip_y pairing order parameter. We will refer to this state as the p_x + ip_y striped pairing state. We determine the fermionic spectrum in the regime where the paired state has a p_x + ip_y PDW state coexisting with a uniform component of the p_x + ip_y pairing order. In general, we find that the fermionic spectrum is gapped. In particular, for Q < q < k_F, the Majorana FS is gapped as the inversion symmetry is broken. We analyze the band topology of the gapped phases and present a phase diagram.

A. Gapping of the Majorana FS

We assume that the order parameter of the uniform component has the same phase as the overall phase of the order parameter of the PDW state. The order parameter in real space is of the form

∆(x) = ∆_u + ∆_pdw [ 1 + Σ_{ℓ=1,2} Σ_{n∈Z} (−1)^ℓ sgn(x − x_n^(ℓ)) ]. (4.1)

Crucially, we see that the inversion symmetries centered at the DWs and ADWs at x = x_n^(ℓ) are now broken by the uniform component ∆_u. A direct consequence is that the Majorana FS for q < k_F, protected by the particle-hole symmetry C and inversion symmetry I (Sec.
III A), gets gapped. Indeed, numerical calculations on Eq. (3.27) with both ∆_u and ∆_pdw confirm that the fermionic spectrum is gapped. Instead of a detailed evaluation of the hopping matrices in a tight-binding Hamiltonian, as we did for (3.24), one can understand the gap opening in an intuitive way. In Appendix B we show that the two zero-mode solutions obtained in Sec. III B persist so long as |∆_u| < |∆_pdw|. With ∆_u, the domains and anti-domains become "imbalanced", with order parameters alternating between ±∆_pdw + ∆_u, and we assume |∆_u| < |∆_pdw|. As a direct result, the wave packets of the propagating modes bound to a DW at x = 0 also become asymmetric. Following a procedure similar to that leading to Eq. (3.10) and Eq. (3.14),

⟨r|Ψ_R⟩ ∝ exp[ik_y y − iκ_+ x] exp[−q_+ x] for x > 0; exp[ik_y y − iκ_− x] exp[q_− x] for x < 0,
⟨r|Ψ_L⟩ ∝ exp[ik_y y + iκ_+ x] exp[−q_+ x] for x > 0; exp[ik_y y + iκ_− x] exp[q_− x] for x < 0, (4.2)

where

q_± ≡ m(∆_pdw ± ∆_u), κ_± ≡ √(k_F² − q_±²). (4.3)

Importantly, the wave packets of both the left- and right-moving modes extend farther into the domain where the order parameter has the smaller magnitude. Indeed, this is expected, since the local pairing order gaps out the local density of states and dictates the exponential decay of the wave packet. Intuitively, the coupling between domain wall states is stronger in regions with greater overlap of their wave functions. Analogous to the hopping amplitudes depicted in Fig. 2, one can define six hopping matrices t_±, t̃_±, and t′_±, where ± distinguishes domains with stronger or weaker local pairing order. Similar to Eq. (3.25), we have t_±, t̃_±, t′_± ∝ exp(−q_± λ/2). In the tight-binding limit, we then have t_− ≫ t_+, t̃_− ≫ t̃_+, and t′_− ≫ t′_+. In this limit, the system is "quadrumerized," with each quadrumer composed of the left- and right-moving modes at a DW-ADW pair. We illustrate this in Fig. 9.
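The exponential hierarchy behind the quadrumerization can be illustrated with a short numeric sketch (the parameter values are illustrative, not from the paper; only the quoted proportionality t_± ∝ exp(−q_± λ/2) with q_± from Eq. (4.3) is assumed):

```python
import numpy as np

# Illustrative numbers: check the hierarchy t_- >> t_+ implied by
# t_pm ~ exp(-q_pm * lam / 2) with q_pm = m * (d_pdw +/- d_u), Eq. (4.3).
m, lam = 1.0, 20.0                 # mass and PDW period lambda = 2*pi/Q
d_pdw, d_u = 0.5, 0.2              # |d_u| < |d_pdw|, as assumed in the text
q_p, q_m = m * (d_pdw + d_u), m * (d_pdw - d_u)
t_p, t_m = np.exp(-q_p * lam / 2), np.exp(-q_m * lam / 2)
print(t_m / t_p)                   # = exp(m * d_u * lam): bonds "quadrumerize"
```

Because the ratio grows exponentially in ∆_u times the PDW period, even a modest uniform component pushes the system deep into the quadrumer (tight-binding) limit.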
The quadrumerization develops in the regions with a smaller pairing order and hence a greater overlap between wave packets. Each quadrumer consists of two left movers and two right movers, and the hybridization of their wave functions leads to a gap. It turns out that such a coexistence state has nontrivial band topology, manifested by the presence of chiral edge states. In the quadrumer (tight-binding) limit, we consider a finite system (see Fig. 9). Depending on the termination of the finite system, near each physical edge there is either one unpaired chiral mode (left edge in Fig. 9) or three would-be chiral modes (one edge mode and two nearby domain wall modes, right edge in Fig. 9) with a net chirality. Either way, one gapless chiral mode survives at each physical edge. The existence of the stable gapless modes near the edges indicates that this coexistence state is topological and has a Chern number C = 1. It belongs to the same universality class as the weak-coupling regime of Ref. [49].

FIG. 10. Schematic pairing phase diagram for the fermionic states as a function of the PDW order parameter and a coexisting uniform p-wave order. When the uniform component ∆_u = 0, the hybridization of the bulk domain walls in general gives rise to a FS for ∆_pdw < v_F. For ∆_pdw < Q/m (or q < Q), the FS results from a perturbative reconstruction of the normal state FS; the fermionic excitations are Bogoliubov quasiparticles. For Q/m < ∆_pdw < v_F, the FS is made of Majorana modes from the domain walls. We use the terms "Bogoliubov FS" and "Majorana FS" to distinguish them. Near specific values of ∆_pdw such that κ = nQ, (weak) lattice effects gap out the fermionic spectrum with a Chern number C = 2 (although C = 0 states may also be possible depending on lattice details). Above the critical pairing strength ∆_pdw > v_F, the system enters a topologically trivial gapped state (C = 0). This state survives a finite amount of the uniform component ∆_u.
The neutral FS becomes gapped for any ∆_u ≠ 0, when the system enters the topological pairing phase (C = 1) whose edge states contain a chiral Majorana mode. For ∆_u ≫ ∆_pdw the system approaches a uniform p-wave state.

We end this section by placing all the phases mentioned above in a phase diagram in terms of the PDW order parameter ∆_pdw and a possible coexisting uniform p_x + ip_y-wave order parameter ∆_u. We summarize the results in Fig. 10. We have carefully analyzed the gapped phases in a pure PDW state, both for q < k_F (or equivalently ∆_pdw < v_F) and for q > k_F (or equivalently ∆_pdw > v_F). Due to the spectral gap, these states are stable in the presence of a small ∆_u, which induces a "competing mass" that leads to a C = 1 phase. One naturally expects that the extent of these phases in the ∆_u direction is proportional to their spectral gaps. Therefore, in a semi-continuous limit where the lattice corrections are small, the C = 2 phase with q < k_F occupies a much smaller region at ∆_u ≠ 0 than the C = 0 phase with q > k_F does. Both of these phase transitions involve a change in Chern number by 1, and we have numerically verified that the phase transition occurs with a gap closing through a Dirac point at the phase boundaries.

V. THE p_x + ip_y PDW FRACTIONAL QUANTUM HALL STATES

Our study so far has addressed the properties of the fermion spectrum in a paired state and, as such, can be viewed as a description of a striped superconductor with a chiral p-wave order parameter. We now turn to the implications of our results for the FQH physics of this state, keeping in mind that a paired FQH state is not a superconductor, but in fact a charge insulator in an applied magnetic field. In order to make contact with the physics of the paired quantum Hall states, we reintroduce both charge and neutral modes on an equal footing, and recall that they are coupled to a dynamical Chern-Simons gauge field.
The neutral fermion modes we studied in the previous sections, which originate from a change in sign of the p_x + ip_y order parameter, are akin to zero energy Andreev bound states in a Josephson junction where the difference in the phase of the order parameter is π. Just as an external magnetic flux alters the phase difference and gives rise to a spatially oscillating current passing through a Josephson junction [73], one might worry that the same would happen in this case due to the Chern-Simons and external magnetic fields. The situation, however, is greatly simplified (at least in the mean field description assumed here) due to the complete screening of the external magnetic flux by the Chern-Simons flux attached to the particles, which implies that the total effective magnetic field experienced by the composite fermions is zero and, thus, the gauge fields do not alter the character of the Andreev bound states. The discussion above can be made more concrete by recalling that at filling fraction ν = 1/2 of this N = 1 LL, upon performing a standard mapping to composite fermions coupled to a fluctuating Chern-Simons gauge field a_µ, with µ = 0, 1, 2, the effective Lagrangian of the system is

L = ψ† (iD_0 + µ) ψ + (1/2m) |Dψ|² + ∆ ψψ + ∆* ψ†ψ† + (1/4π)(1/2) ε^{µνλ} a_µ ∂_ν a_λ, (5.1)

where D_µ = ∂_µ + i(A_µ + a_µ) and, on average,

∇ × (a + A) = 0. (5.2)

This condition defines the mean field state and enforces that the electronic density is ρ = 1/2 everywhere in the bulk of the system. Had the total flux ∇ × (a + A) been non-zero in the region across the domain wall (which would have implied a local variation of the magnetic field, the charge density, or both), then the associated Josephson effect would have depended on the gauge invariant phase difference across the junction (i.e., the domain wall), which carries a contribution from the gauge fields. However, in the mean field state characterized by Eq.
(5.2) the phase difference π associated with the order parameter ∆ fully specifies the properties of the low energy states bound at the domain walls. To simplify notation, in the Lagrangian of Eq. (5.1) the p_x + ip_y symmetry structure of the pairing has been included in the pair field ∆.

A. Spectra of p_x + ip_y PDW FQH states

The bulk Chern-Simons term in Eq. (5.1) encodes the property that the system is a charge insulator in the bulk, with a gapless chiral bosonic mode at the boundary of the system describing the charged excitations. The neutral fermion excitations of the system, either in the bulk or at the boundary, on the other hand, are described by the fermionic sector with the PDW order parameter. Thus, the neutral fermionic spectrum of the striped paired FQH states is the one we obtained for the p_x + ip_y PDW state in Sec. III, while the charged bosonic sector is described by the Chern-Simons action. The striped paired FQH system then has gapless neutral excitations supported at domain walls in the bulk of the system, while remaining a bulk charge insulator with gapless charge modes on the edge, as illustrated in Fig. 11 (showing only two domains). The analysis of Sec. III, combined with the charge sector discussed above, shows that there are four phases of the striped paired FQH state, which are summarized by Fig. 1 and the pairing phase diagram of Fig. 10. In the absence of a uniform p_x + ip_y-wave component ∆_u, when the Fermi energy is large compared to the pairing term of the PDW order (k_F > q, or equivalently v_F > ∆_pdw) and the system supports domain walls in the bulk, the zero modes in each domain wall hybridize with their neighbors, giving rise to a 2D FS of charge-neutral Bogoliubov quasiparticles, a Majorana FS, as represented in Fig. 1(c). Quite remarkably, these neutral Majorana excitations are formed while the charged degrees of freedom remain gapped (which implies that tunneling of electrons in the bulk is suppressed by the charge gap).
This neutral FS implies that the system has an anisotropic unquantized bulk thermal conductivity and a heat capacity that scales linearly with temperature T, while its charge transport is gapped, with a sharp plateau at σ_xy = 1/2. This exotic "critical phase" is one of the central findings of this work. A different paired stripe FQH state at ν = 5/2 was proposed by Wan and Yang [48], which is a state with alternating domains of Pfaffian and anti-Pfaffian states. Similar to our results, they found a state with gapped charge modes but gapless neutral modes at each domain wall. However, the domain wall between the Pfaffian and the anti-Pfaffian state has a more intricate structure than in the case of the p_x + ip_y PDW state we propose here, leading to a more complex set of domain-wall modes. Moreover, the analysis of Ref. [48] neglects the coupling (and tunneling) between the neighboring domain wall modes which, as we showed here, plays an important role in the physics of the state. Thus, it is an open question whether these couplings will induce a bulk gap or not. In contrast, the gapless state obtained here survives the coupling between the domain wall modes, as it is protected by symmetry. Furthermore, as indicated in the phase diagram of Fig. 10, for k_F > q, centered around each would-be Dirac point at √(k_F² − q²) = nQ, n ∈ Z, there exists a gapped phase with Chern number C = 2, with two co-propagating neutral modes near the boundary (in addition to the charge mode). This C = 2 region represents an Abelian FQH state, as the vortices do not support Majorana zero modes. The edge CFT is composed of two chiral Majorana fermions and one charge mode, with a chiral central charge c = 2. We are not aware of any previous discussions of this exotic FQH state. The neutral FS is unstable towards gapped phases with distinct topological properties.
The first type of instability happens in the weak pairing regime (k_F > q) and is triggered by a non-zero uniform component of the p_x + ip_y-wave order parameter, ∆_u ≠ 0. In this case, the neutral FS becomes topologically gapped with a Chern number C = 1, and the system is in the same universality class as the non-Abelian Pfaffian state. The transition between this state and the aforementioned C = 2 state is of Dirac type. Just like in the Pfaffian state, in the bulk there exist non-Abelian anyons with electric charge e/4, and the edge is described by a U(1)_2 × Ising/Z_2 CFT with a chiral central charge c = 3/2. The factor of Z_2 accounts for the gauge symmetry associated with representing the electron operator as a product of a Majorana fermion of the Ising sector and a charge-one vertex operator of the U(1)_2 sector. Another instability of the neutral FS occurs at ∆_u = 0 when the pairing potential is stronger than the Fermi energy, q > k_F. This transition is associated with a qualitative change in the character of the DW zero mode states, as discussed in Sec. III B, which causes the right- and left-moving modes to display an asymmetric decay near the domains and, consequently, gaps both the bulk and the edge modes. This pairing phase is characterized by a Chern number C = 0. The disappearance of the neutral fermion modes from the low energy spectrum indicates a transition from a non-Abelian state to an Abelian state [74], the latter in the universality class of the Halperin paired state [75], where electrons form tightly bound charge-2 pairs that condense in an Abelian state with σ_xy = 1/2. Note that, unlike in Ref. [49], where the transition from the Pfaffian to the Abelian state only occurs at chemical potential µ = 0 for a spatially uniform order parameter, for the PDW state considered here the critical phase occurs for positive µ and for a finite range of the parameter ∆_pdw.
The strength of the PDW order parameter behaves as a new "knob" that tunes the system through the transition between different topological orders. As we explained, this striking stability of the neutral FS stems from the symmetries possessed by the PDW state, which restrict the coupling of the Majorana modes both within each domain wall and between domain walls.

B. Phase structure near ν = 5/2

We end with a qualitative discussion of the place of the p_x + ip_y-PDW FQH state in a global phase diagram of quantum Hall states. Much as in the case of other liquid-crystalline quantum Hall states [7], the p_x + ip_y-PDW FQH state can melt, either quantum mechanically or thermally, in a number of different ways, similar to the melting phase diagram conjectured for the PDW superconductor in Ref. [76], by a generalization of the well-known theories of 2D classical melting [77][78][79]. In the case of the PDW superconductor (including a p_x + ip_y-PDW state), the different pathways are also determined by the proliferation of the panoply of its topological excitations. The p_x + ip_y PDW, just as its d-wave cousin, has three types of topological excitations: quantized vortices, half-vortices bound to single dislocations, and double dislocations [76]. The proliferation of quantized and/or half-vortices destroys the paired state and leads to two possible compressible unquantized states: either a charge stripe state or a compressible nematic phase. On the other hand, the proliferation of double dislocations leads to a uniform incompressible state best described as a quartet FQH condensate. The quartet FQH condensate is an analog of the charge-4e superconductor [76], where four (rather than two) fermions form a bound state and condense. Strong arguments have been presented [80] that a quartet condensate (as well as a charge-4e topological superconductor) has Abelian topological order.
A detailed analysis of the properties of the quartet FQH state, however, is beyond the scope of the present work. The properties of the different resulting phases depend on features specific to the physics of the FQH states. In addition to the condensates (paired or not), FQH fluids have a dynamical emergent gauge field, the Chern-Simons gauge field. One consequence of the emergent Chern-Simons gauge field being dynamical is that the vortices of the condensate (i.e., the fundamental quasiparticles of the FQH state) have finite energy, instead of the logarithmically divergent energy of a vortex of a neutral superfluid. On the other hand, the effective interaction between the vortices may be attractive (as in a type I superconductor) or repulsive (as in a type II superconductor). In addition, FQH vortices carry fractional charge and, hence, also interact with each other through the Coulomb interaction. The interplay between these different interactions was analyzed in the context of uniform paired FQH states by Parameswaran and coworkers [81,82], who predicted a complex phase diagram with different liquid-crystal phases depending on whether the FQH fluid is in a type I or type II regime. Much of the analysis summarized above can be extended, with some caveats, to the case of the p_x + ip_y-PDW FQH state. One important difference vis-à-vis the PDW superconductors is that in a 2D system such as the 2DEG, in the absence of an underlying lattice, the dislocations of the associated charge order cost a finite amount of free energy. As such, they proliferate at any finite temperature, thus restoring translation invariance and resulting in a nematic phase at all non-zero temperatures [83]. This problem was considered before in the context of high temperature superconductors in Ref. [84]. However, in the presence of strong enough anisotropy, e.g.
induced by uniaxial strain or by a tilted magnetic field, a phase transition can be triggered to a state with unidirectional order, which can be a p_x + ip_y PDW FQH state or a charge stripe state (the latter case was found in the DMRG numerical work of Zhu and coworkers [85]). Both of these stripe states thermally melt by proliferating dislocations, whose interactions are logarithmic in an anisotropic system [83]. The precise interplay between these (and other) phases depends on details of the length scales that govern quantum Hall fluids. It is widely believed (for good reasons!) that in the lowest Landau level all length scales are approximately of the same order of magnitude as the magnetic length. In Landau levels N ≥ 1 and higher, other scales may come into play. This fact is evinced by the recent experiments near ν = 5/2 which find an interplay between a (presumably uniform) paired state and a compressible nematic phase [23], and between a compressible nematic phase and a stripe phase (albeit in the N = 2 Landau level) [16]. These additional length scales may affect the structure of the vortices and of the other topological excitations, and therefore the nature of the state obtained for fields and/or densities away from the precise value of the filling fraction ν = 5/2, but still inside the plateau of the incompressible state. More specifically, the FQH state has a fluctuating gauge field, with a Chern-Simons term and a (subdominant) Maxwell term, which introduces a screening length into the problem that will affect the structure of the vortices, "type I" or "type II". This problem was considered before in the context of relativistic field theory [86] and, more relevant to our analysis, in the context of paired FQH states [81,82], although these works did not consider the interplay with a possible p_x + ip_y paired state.
For example, if a "type II" regime becomes accessible, the vortex states may exhibit intertwined orders analogous to those that arise in high Tc superconductors [87,88]. In this case, a p_x + ip_y PDW phase may arise in the vortex "halos" of the uniform paired state, and could be stabilized close to ν = 5/2. The upshot of this analysis is that a complex phase diagram may yet be uncovered, beyond what has been seen in recent experiments.

VI. DISCUSSION AND CONCLUSION

In this paper we have studied the properties of a 2D pair-density wave state with a p_x + ip_y chiral order parameter that varies periodically along one direction, and have shown that this physical system can support an exotic bulk symmetry-protected (gapless or gapped) fermionic spectrum. The bulk gapless phase results from the hybridization of pairs of counter-propagating Majorana fermion states localized near the nodes of the order parameter. The stability of the Majorana states near the domain walls is a consequence of a combination of inversion and chiral symmetries associated with the unidirectional PDW order parameter. In the weak coupling regime (in the BCS sense), characterized by v_F > ∆_pdw, the zero modes are localized within a distance q⁻¹, where q = m∆_pdw. We have shown that the hybridization of these domain wall modes gives rise to a Majorana FS that is protected by both particle-hole and inversion symmetries, and that the robustness of the FS can be captured by the properties of a Pfaffian. Our findings have been supported both by an effective theory valid in the regime q ≫ Q, in which the low energy modes on adjacent domain walls hybridize weakly, and by numerical calculations in the regime where the domain walls couple strongly to many neighboring domain walls. The FS obtained in the v_F > ∆_pdw regime is generically unstable to perturbations that break inversion symmetry.
In particular, a small uniform component of the order parameter breaks the inversion symmetry that maps ∆_pdw → −∆_pdw around a domain wall and destroys the FS, giving rise to a gapped spectrum of neutral fermionic excitations. Moreover, we have shown that this gapped phase is topological, as it supports a chiral Majorana branch at the boundary of the system, with the same topological properties as the uniform p_x + ip_y paired state. Our analysis has also shown the existence of special points characterized by the condition √(k_F² − q²) = nQ (n ∈ ℤ), for which the FS shrinks to a Dirac point at (k_x, k_y) = (0, 0). This Dirac point is a consequence of the continuum approximation of the band structure and generically becomes gapped by distortions of the Majorana wavefunctions due to lattice effects; in that case the system, interestingly, has a fermionic spectrum with Chern number C = 2, and thus supports two edge Majorana modes. On the other hand, in the strong coupling limit ∆_pdw > v_F, we found the resulting fermionic spectrum to be trivial. These findings are summarized in the phase diagram of Fig. 10. Viewed as a striped superconductor, our theory shows the existence of zero energy extended Majorana states in the bulk of a PDW phase with a chiral p-wave order parameter. In this case, all the excitations of the system are neutral Majorana modes. We applied this theory to the paired FQH state at filling ν = 5/2, in which the composite fermions pair into a state with a spatially dependent order parameter. In fact, recent numerical work [26] has shown that, as a function of the 2DEG layer thickness, the effective interactions experienced by composite fermions in N ≥ 1 Landau levels can give rise to a Pomeranchuk instability, which could provide a mechanism for the formation of a nematic FQH state, in line with recent experimental findings
[23]. In our description of the striped FQH state at ν = 5/2, the charge modes remain gapped in the bulk and give rise to a chiral bosonic density mode at the boundary, which is a conformal field theory with central charge c = 1. The PDW order parameter changes only the properties of the neutral fermionic sector. From the discussion above, in the weak coupling regime the neutral particles develop a gapless FS protected by symmetry, while the bulk remains gapped to charge excitations. Consequently, while the tunneling of neutral (Majorana) quasiparticles is facilitated by the absence of an energy gap in the bulk, the tunneling of electrons is suppressed by the charge gap. Moreover, a non-zero uniform component gaps the neutral fermionic spectrum and the system develops a chiral Majorana branch; we then identify this phase as a striped Moore-Read state. At the points where √(k_F² − q²) = nQ, the edge CFT includes two Majorana branches, and the topological order becomes Abelian. On the other hand, when the pairing effects become sufficiently strong, the system becomes gapped (even in the absence of a uniform component) and enters a phase without a neutral Majorana edge state; this phase is then identified with the striped (Abelian) Halperin paired state. We close with a discussion of the possible relation between the p_x + ip_y PDW FQH state and the very recent experiments of Hossain and coworkers [47], whose results were posted on the arXiv after this work was completed. This experiment considers a 2DEG in an AlAs heterostructure which has two elliptical electron pockets oriented at 90 degrees to each other. Each pocket has very anisotropic effective masses, with a ratio of 5:1. Under a very weak unidirectional strain field, the Landau level of one or the other pocket is emptied and the system has a strong electronic anisotropy.
Importantly, in these systems, at the fields at which the experiments are done, the Zeeman energy is larger than the Landau gap, as is the energy splitting due to the applied strain. Remarkably, the experiments of Ref. [47] find a clear plateau in the N = 1 Landau level at ν = 3/2, the equivalent of the much-studied ν = 5/2 plateau of the 2DEG in GaAs-AlAs heterostructures. However, these authors also found a remarkable transport anisotropy inside the plateau regime, by which, below some well-defined temperature, the longitudinal resistance R_xx (along the (100) direction) rises sharply to a value comparable to R_xy, while the resistance R_yy (along the (010) direction) decreases sharply. This nematic behavior is reminiscent of the earlier findings of Xia and coworkers [27] near filling fraction ν = 7/3 in the N = 1 Landau level of the 2DEG in GaAs-AlAs heterostructures. While it is tempting to interpret these experimental results as evidence for the existence of the p_x + ip_y PDW FQH state, they also raise a puzzle, since the magnitude of the longitudinal resistance seems incompatible with this state, which has a bulk charge gap. We should note that this experiment cannot distinguish a nematic state (which is uniform) from any stripe state (which breaks translational symmetry), paired or not. There are several possible ways to understand this behavior. One is that, for a sample with the form of a QH bar, the strain does not force the system into a single oriented domain; there may instead be two orthogonally oriented domains in the bar geometry. In this scenario, the longitudinal transport is carried only by the charge edge mode and is drastically anisotropic. Other scenarios are also possible, such as the one suggested by the analysis of Parameswaran and coworkers [82]: perhaps the paired state is in the "type I" regime, which leads to a form of Coulomb-frustrated phase separation.
However, in this latter scenario, it is hard to understand why R_xy has a sharp plateau. At any rate, if the state found in these experiments is a p_x + ip_y PDW FQH state, it should exhibit bulk thermal conduction, as predicted by our analysis. In summary, we have presented a new scenario characterized by a 2D chiral topological phase intertwined with striped order, in which low energy neutral fermionic degrees of freedom are supported at the nodes of the PDW order parameter. Our findings have implications both for the understanding of nematic paired FQH states at filling ν = 5/2 and for nematic (or striped) superconductors. Note: After this work was completed, we became aware of a preprint by Barkman and coworkers [89], who considered a time-reversal invariant p-wave superconductor consisting of alternating domains with p_x ± ip_y pairing. The physics of that state is very different from that of the time-reversal-breaking p_x + ip_y PDW superconductor that we present in this paper.

ACKNOWLEDGMENTS

We thank Daniel Agterberg, Steven Kivelson, Ganpathy Murthy, Mansour Shayegan, and Ajit Srivastava for discussions. EF is particularly grateful to S. Kivelson for numerous discussions (and the suggestion for the interpretation of the anisotropic transport in the context of the 7/3 state). This work was supported in part by the Gordon and Betty Moore Foundation EPiQS Initiative through Grant No. GBMF4305 at the University of Illinois (LHS and YW) and the National Science Foundation grant No. DMR-1725401 at the University of Illinois (EF). LHS and YW performed part of this work at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.

where u_ℓ are the eigenspinors of σ_y, the zero mode equation (A4) reads

(−∂_x²/2m − µ) ψ(x) − ∆_pdw sgn(x) ∂_x ψ(x) − ∆_pdw δ(x) ψ(x) = 0.  (A7)
We first consider x < 0 and plug the ansatz ψ_L ∼ e^{iP_L x} into (A7), leading to P_L²/2m − µ + i∆_pdw P_L = 0, whose solutions are

P_L = −i(q ± κ̃),  q = m∆_pdw > 0,  κ̃ = √(q² − 2mµ) > 0.  (A8)

Then, as long as µ > 0, the solution is normalizable in the x < 0 half space, and the solution for x < 0 reads (writing κ̃ ≡ iκ)

ψ_L(x) = e^{qx} [A cos(κx) + B sin(κx)],  x < 0.  (A9)

2. Majorana fermions for q < k_F

For k_F² ≡ 2mµ > q², there are two orthonormal zero energy solutions for a given ℓ = 1, 2:

Ψ_{ℓ,e}(x) = N_e (e^{−q|x|}/√L) cos(κx) u_ℓ,  N_e = √[2q(κ² + q²)/(κ² + 2q²)],  (A12a)

and

Ψ_{ℓ,o}(x) = N_o (e^{−q|x|}/√L) sin(κx) u_ℓ,  N_o = √[2q(κ² + q²)/κ²].  (A12b)

3. Majorana fermions for q > k_F

For q > k_F, the expressions in (A12a) and (A12b) are still correct, but it is convenient to re-express them in terms of real parameters:

Ψ_{ℓ,e}(x) = Ñ_e (e^{−q|x|}/√L) cosh(κ̃x) u_ℓ,  Ñ_e = √[2q(q² − κ̃²)/(2q² − κ̃²)],  (A13a)

and

Ψ_{ℓ,o}(x) = Ñ_o (e^{−q|x|}/√L) sinh(κ̃x) u_ℓ,  Ñ_o = √[2q(q² − κ̃²)/κ̃²],  (A13b)

with ∆_u > 0 and ∆_pdw > 0. We now show that the zero energy solutions are stable as long as ∆_u < ∆_pdw. To see that, we note that for x < 0 the order parameter is ∆_pdw + ∆_u ≡ ∆_L, and for x > 0 we have −∆_pdw + ∆_u ≡ −∆_R, where ∆_{L/R} > 0. Defining q_L = m∆_L, q_R = m∆_R, κ_L = √(2mµ − q_L²) and κ_R = √(2mµ − q_R²), the zero mode solutions have the form

x < 0: ψ(x) = e^{q_L x} [A cos(κ_L x) + B sin(κ_L x)]/√L,
x > 0: ψ(x) = e^{−q_R x} [A cos(κ_R x) + C sin(κ_R x)]/√L,  (B2)

and satisfy the condition

−(1/2m) lim_{ε→0} [ψ′(+ε) − ψ′(−ε)] − ∆_pdw ψ(0) = 0,  (B3)

which implies

κ_R C = κ_L B.  (B4)

We then identify two orthogonal solutions ψ_1(x) and ψ_2(x), given by

ψ_1(x) = A [Θ(−x) e^{q_L x} cos(κ_L x) + Θ(x) e^{−q_R x} cos(κ_R x)],  (B5)
ψ_2(x) = B [Θ(−x) e^{q_L x} sin(κ_L x) + (κ_L/κ_R) Θ(x) e^{−q_R x} sin(κ_R x)],  (B6)

where A and B are normalization constants that can be readily determined. Notice that, in the limit ∆_u → 0, the solutions above reduce to the even and odd parity solutions obtained before.
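As a quick numerical sanity check (not part of the original derivation), the normalization constants N_e and N_o in (A12a) and (A12b) can be verified by integrating the bound-state profiles directly; the values of q and κ below are arbitrary sample parameters with κ real (i.e., q < k_F):

```python
import math

def norm_integral(profile, q, kappa, x_max=40.0, n=200000):
    """Midpoint-rule integral of the squared bound-state profile over (-x_max, x_max)."""
    h = 2.0 * x_max / n
    total = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * h
        total += (math.exp(-q * abs(x)) * profile(kappa * x)) ** 2
    return total * h

q, kappa = 0.7, 1.3   # arbitrary sample values with kappa = sqrt(2*m*mu - q^2) real

Ne = math.sqrt(2*q*(kappa**2 + q**2) / (kappa**2 + 2*q**2))
No = math.sqrt(2*q*(kappa**2 + q**2) / kappa**2)

# Both normalized profiles should integrate to 1 (the 1/sqrt(L) factor
# normalizes the trivial y direction and is dropped here).
print(Ne**2 * norm_integral(math.cos, q, kappa))   # ~1.0
print(No**2 * norm_integral(math.sin, q, kappa))   # ~1.0
```

The same check, with cos/sin replaced by cosh/sinh and κ̃ imaginary, applies to the q > k_F expressions (A13a) and (A13b).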
where the subscript ℓ = 1 denotes DW modes and ℓ = 2 ADW modes. Recall that |Ψ_{ℓ=1}⟩ ∝ |u_1⟩ = (1, i)^T/√2 and |Ψ_{ℓ=2}⟩ ∝ |u_2⟩ = (1, −i)^T/√2. The coefficients c_{ℓ,µ} are variational parameters, which are determined by minimizing the energy E_k = ⟨Ψ_{k_x,k_y}|H|Ψ_{k_x,k_y}⟩/⟨Ψ_{k_x,k_y}|Ψ_{k_x,k_y}⟩, which yields the secular equation

det[O⁻¹(k) H(k) − E_k I_{4×4}] = 0,  (C2)

where

O(k)_{ℓ₁µ₁,ℓ₂µ₂} = Σ_n e^{ik_x n} ⟨Ψ_{ℓ₁µ₁,−n}(k_y)|Ψ_{ℓ₂µ₂,0}(k_y)⟩,  (C3)

H(k)_{ℓ₁µ₁,ℓ₂µ₂} = Σ_n e^{ik_x n} ⟨Ψ_{ℓ₁µ₁,−n}(k_y)| E_{µ₂}(k_y) + V_total(x) − v₀^{(ℓ₂)}(x) |Ψ_{ℓ₂µ₂,0}(k_y)⟩,  (C4)

V_total(x) = ½{−i∂_x, ∆_total(x)} σ_x,  v₀^{(ℓ₂)}(x) = ½{−i∂_x, ∆₀^{(ℓ₂)}(x)} σ_x.  (C5)

We first notice that, because DWs and ADWs are respectively proportional to the orthonormal spinors |u_1⟩ and |u_2⟩, the overlap matrix O is the identity matrix to leading order: O = I_{4×4} + O(e^{−qλ}). For the diagonal matrix elements we have

H_{1R,1R}(k_x, k_y) = Σ_{n_x} e^{ik_x λ n_x} ⟨Ψ_{1R}(x + n_x λ)| ⟨u_1| E_R(k_y) + V_total(x) − v^{(1)}(x) |u_1⟩ |Ψ_{1R}(x)⟩
= E_R(k_y) Σ_{n_x} e^{ik_x λ n_x} ⟨Ψ_{1R}(x + n_x λ)|φ_{1R}(x)⟩
= E_R(k_y) [1 + O(e^{−qλ})] ≈ E_R(k_y),  (C8)

where, in passing from the first to the second line, we used ⟨u_1| V_total(x) − v^{(1)}(x) |u_1⟩ = 0, due to V_total(x) − v^{(1)}(x)

We can show, by the same reasoning as before, that, to leading order, the following matrix elements vanish:

H_{1R/L,1L/R} = 0,  H_{2R/L,2L/R} = 0.

We are then left with the non-zero off-diagonal matrix elements of (C7), H_{2µ₂,1µ₁}(k), with µ₁, µ₂ = R/L. To leading order,

H_{2µ₂,1µ₁}(k_x, k_y) = R_{µ₂,µ₁} e^{ik_x λ} + S_{µ₂,µ₁} + O(e^{−qλ}),

where

R_{µ₂,µ₁} = −∆_pdw ∫_{−∞}^{−λ/2} dx [Ψ*_{2µ₂}(x + λ/2) ∂_x Ψ_{1µ₁}(x) − ∂_x Ψ*_{2µ₂}(x + λ/2) Ψ_{1µ₁}(x)],  (C12a)

S_{µ₂,µ₁} = ∆_pdw ∫_{λ/2}^{∞} dx [Ψ*_{2µ₂}(x − λ/2) ∂_x Ψ_{1µ₁}(x) − ∂_x Ψ*_{2µ₂}(x − λ/2) Ψ_{1µ₁}(x)].  (C12b)

Evaluation of the integrals Eq.
(C12) gives the effective Hamiltonian

H(k_x, k_y) =
[ v_y k_y           0                  t + t′ e^{−ik_x}    t̃ − t e^{−ik_x} ]
[ 0                 −v_y k_y           −t̃ + t e^{−ik_x}    t + t′ e^{−ik_x} ]
[ t + t′ e^{ik_x}    −t̃ + t e^{ik_x}     v_y k_y            0               ]
[ t̃ − t e^{ik_x}     t + t′ e^{ik_x}     0                  −v_y k_y        ],  (C13)

where the parameters t, t̃ and t′ are given by Eq. (3.25).

Appendix D: Lattice corrections to the hopping matrices t, t̃, and t′

In this appendix we compute the leading order corrections to Eq. (3.25) from an underlying square lattice. We focus on the quasi-continuous limit, where the Fermi wavelength λ_F ≡ 2π/k_F is much larger than the lattice constant a. The wave functions of the domain wall modes can be obtained by solving the lattice version of (3.4); using an exponential-function ansatz, the even- and odd-parity wave functions still satisfy Eq. (3.10), only the expressions for q, κ, and N_{o,e} differ from their continuum versions. By a simple analysis, these lattice corrections are of O[(k_F a)²]. We recall that the hopping amplitude t′, for example, was obtained from the integral

t′ = −∆_pdw ∫_{λ/2}^∞ dx { [∂_x Ψ*_L(r)] Ψ_R(r − λ/2) − Ψ*_L(r) [∂_x Ψ_R(r − λ/2)] }.  (D1)

For a lattice system, one first needs to replace ∂_x with its lattice version i sin(k_x); doing so introduces corrections of O[(k_F a)²]. Besides, one should replace the integral by a summation over the lattice sites. The leading correction from this replacement can be obtained from the Euler-Maclaurin formula

∫_{λ/2}^∞ f(x) dx = a [ f(λ/2 + a/2) + f(λ/2 + 3a/2) + ⋯ ] + (a²/2) f′(λ/2 + a/2) + O(a³).  (D2)

Then, including the leading-order Euler-Maclaurin correction, t′ is found to be

t′ = −(κ/4m) e^{−qλ/2} (N_e² + N_o²) sin(κλ/2) − (q²/2m) e^{−qλ/2} [N_e² (2δ(0) + q) + 2N_e N_o κ] a².  (D3)

Regularizing δ(0) = 1/a, we see that the leading correction to t′ is of O(k_F a) (q ≲ k_F), given by the δ-function term. We do not need to keep all other O[(k_F a)²] terms.
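The midpoint-rule expansion used in Eq. (D2) can be sanity-checked numerically. The sketch below is my own illustration, using an arbitrary test function f(x) = e^{−x} rather than the paper's wavefunctions; it confirms that the bare midpoint sum approaches the tail integral with an error of order a², so halving the step roughly quarters the error:

```python
import math

def midpoint_tail(f, b, a, x_max=40.0):
    """Midpoint-rule approximation a*[f(b + a/2) + f(b + 3a/2) + ...]
    of the tail integral of f from b to infinity (truncated at x_max)."""
    total, x = 0.0, b + a / 2
    while x < x_max:
        total += f(x)
        x += a
    return a * total

f = lambda x: math.exp(-x)
exact = math.exp(-1.0)          # integral of e^{-x} from 1 to infinity

err_a  = abs(midpoint_tail(f, 1.0, 0.10) - exact)
err_a2 = abs(midpoint_tail(f, 1.0, 0.05) - exact)

# Halving the step size reduces the error by ~4x (second order in a),
# consistent with the O(a^2) boundary correction term in Eq. (D2).
print(err_a / err_a2)           # ~4
```

The residual O(a²) piece is exactly what the derivative correction term in (D2) accounts for.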
Including the lattice corrections for all couplings, we have

t = −(κ/4m) e^{−qλ/2} [2N_e N_o cos(κλ/2) + (N_e² − N_o²) sin(κλ/2)] − (q² a/m) e^{−qλ/2} N_e²,

t̃ = −(κ/4m) e^{−qλ/2} [2N_e N_o cos(κλ/2) − (N_e² − N_o²) sin(κλ/2)] − (q² a/m) e^{−qλ/2} N_e²,

t′ = −(κ/4m) e^{−qλ/2} (N_e² + N_o²) sin(κλ/2) − (q² a/m) e^{−qλ/2} N_e².  (D4)

In the main text we are interested in the case t′ = 0. It is straightforward to verify that in this case sin(κλ/2) < 0 and t̃ > t. From the criterion given in the main text, the Chern number of this phase is C = 2.

(3.20) flips sign, but this is identical to the original state after a gauge transformation ∆ → −∆.

FIG. 2. The couplings t, t̃, and t′ between neighboring domain wall modes.

FIG. 3. The energy bands of the tight-binding Hamiltonian (3.24) for t = 0.5, t′ = 0.4, t̃ = 0.6, at k_y = 0.

FIG. 4. FS from coupling the domain wall modes. Left panel: FS obtained from Eq. (3.26), where the hopping parameters in Eq. (3.25) were computed for lattice parameters t_0 = 1, µ = −1.25, ∆_pdw = 0.82 and Q = π/6. Right panel: simulated fermionic spectral function ρ(k, E = 0) from a lattice nearest-neighbor hopping model subject to a PDW order parameter.

FIG. 5. Simulated fermionic spectral function ρ(k, E = 0) from a lattice nearest-neighbor hopping model subject to a PDW order parameter. For the normal state we used the dispersion ε(k) = −t_0(cos k_x + cos k_y) − µ, and we took the local PDW coupling with wave vector Q as ±∆_pdw(sin k_x + i sin k_y) c†_k c†_{−k} + h.c. We set the parameters as t_0 = 1, µ = −1.25, ∆_pdw = 0.052, and Q = π/6. In the left panel we plot the spectral function in the original Brillouin zone. In the right panel, we plot the spectral function in the folded Brillouin zone (the region between the dashed lines in the left panel) with a new lattice constant a_Q = a_0 × 2π/Q. Compared with Fig. 4, the FS here can be viewed as a small perturbation of the original (circular) FS formed by composite fermions.

FIG. 6.
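To make the hybridized band structure concrete, the sketch below assembles the 4×4 Bloch Hamiltonian of Eq. (C13) for the sample couplings quoted in the Fig. 3 caption (v_y = 1 is my arbitrary choice, not a value from the paper) and checks two properties used in the text: the matrix is Hermitian, and its determinant is non-negative everywhere because the bands come in ±E pairs, so the Majorana FS shows up as double zeros of det H(k) rather than as sign changes:

```python
import cmath

def H(kx, ky, t, tt, tp, vy=1.0):
    """4x4 Bloch Hamiltonian of the hybridized domain-wall modes, Eq. (C13).
    t, tt, tp stand for t, t-tilde, t-prime."""
    em, ep = cmath.exp(-1j * kx), cmath.exp(1j * kx)
    return [
        [vy * ky,      0,            t + tp * em,  tt - t * em],
        [0,            -vy * ky,     -tt + t * em, t + tp * em],
        [t + tp * ep,  -tt + t * ep, vy * ky,      0],
        [tt - t * ep,  t + tp * ep,  0,            -vy * ky],
    ]

def det4(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    d = 1 + 0j
    for i in range(4):
        p = max(range(i, 4), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-14:
            return 0j
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, 4):
            f = m[r][i] / m[i][i]
            m[r] = [m[r][c] - f * m[i][c] for c in range(4)]
    return d

t, tt, tp = 0.5, 0.6, 0.4          # sample values from the Fig. 3 caption

# Hermiticity at a generic k point.
h = H(0.3, 0.7, t, tt, tp)
hermitian = all(abs(h[i][j] - h[j][i].conjugate()) < 1e-12
                for i in range(4) for j in range(4))
print(hermitian)

# det H(k) = (E1*E2)^2 >= 0 on a coarse k grid: the spectrum is symmetric
# under E -> -E, so zero-energy states are double zeros of the determinant.
N = 25
dets = [det4(H(2*cmath.pi*i/N, 2*cmath.pi*j/N - cmath.pi, t, tt, tp)).real
        for i in range(N) for j in range(N)]
print(min(dets) > -1e-9)
```

This is why the text resorts to the Pfaffian of an antisymmetrized form of H(k), whose sign change does track the FS.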
Contour plots of the Pfaffian Eq. (3.35), for t = 0.4, t′ = 0.5, t̃ = 0.4, showing the FS at Pf(H̃(k)) = 0, which separates the regions where the Pfaffian changes sign.

The matrix U_CI can be diagonalized as U_CI = QΛQ^T, with Λ diagonal and Q unitary. Then, as shown in Ref. [56], Eq. (3.33) can be used to define an antisymmetric H̃(k) = Ω(k) H(k) Ω†(k), with H̃(k) = −H̃^T(k), where Ω(k) = √(Λ†) Q† is unitary. The antisymmetric nature of H̃(k) allows us to express the determinant at any given k in terms of the Pfaffian as det(H(k)) = det(H̃(k)) = Pf(H̃(k))².

FIG. 7. The gapped Dirac spectrum from a numerical calculation near κ = 8Q. We have used the lattice model in Eqs. (3.27) and (3.28), with t = 1 and µ = 1.9. The chemical potential is very close to the band bottom, so the system is quasi-continuous, k_F ≪ 2π/a. Panel (a) is obtained with periodic boundary conditions in the x direction, while panel (b) is obtained with open boundary conditions. From counting the number of localized edge modes in (b), such a phase has Chern number C = 2.

FIG. 8. Illustration of the gapped fermionic spectrum with Chern number C = 0 or C = 2, depending on whether t̃ (solid double lines) or t (dashed double lines) is larger.

FIG. 9. Illustration of the gapped fermionic spectrum in the coexistence phase of PDW and uniform pairing, with Chern number C = 1.

FIG. 10. Schematic pairing phase diagram for the fermionic states as a function of the PDW order parameter and a coexisting uniform p-wave order. When the uniform component ∆_u = 0, the hybridization of the bulk domain walls in general gives rise to a FS for ∆_pdw < v_F. For ∆_pdw < Q/m (or q < Q), the FS is a perturbative reconstruction of the normal state FS, and the fermionic excitations are Bogoliubov quasiparticles. For Q/m < ∆_pdw < v_F, the FS is made of Majorana modes from the domain walls. We use the terms "Bogoliubov FS" and "Majorana FS" to distinguish them.
Near specific values of ∆_pdw such that κ = nQ, (weak) lattice effects gap out the fermionic spectrum with a Chern number C = 2 (although C = 0 states may also be possible, depending on lattice details). Above the critical pairing strength ∆_pdw > v_F, the system enters a topologically trivial gapped state (C = 0). This state survives a finite amount of the uniform component ∆_u. The neutral FS becomes gapped for any ∆_u ≠ 0, when the system enters the topological pairing phase (C = 1), whose edge states contain a chiral Majorana mode. For ∆_u ≫ ∆_pdw the system approaches a uniform p-wave state.

FIG. 11. Charge chiral mode at the boundary, and Majorana modes both at the edge and at the domain walls in the bulk of the system.

paired FQH state (in units where e = ℏ = c = 1) reads

Applying the same analysis to the x > 0 region leads to

ψ_R(x) = e^{−qx} [A cos(κx) + C sin(κx)],  x > 0.  (A10)

Notice that continuity of ψ(x) at x = 0 fixes the same coefficient A in (A9) and (A10), and that the asymptotic form of ψ_{L/R} for large values of |x| guarantees that the solution is normalizable. The boundary condition at x = 0 is dealt with by integrating (A7) in an infinitesimal interval (−ε, +ε) around x = 0 and invoking continuity of ψ(x) at x = 0, which yields

−(1/2m) lim_{ε→0} [ψ′(+ε) − ψ′(−ε)] − ∆_pdw ψ(0) = 0.  (A11)

Appendix C: Derivation of the tight-binding effective Hamiltonian Eq. (3.24)

Let us consider a tight-binding variational state built from the domain wall modes |Ψ_{ℓ,µ,n}(k_y)⟩, with

∆_total(x) = ∆_pdw + Σ_n ∆^{(1)}_n(x) + Σ_n ∆^{(2)}_n(x)
           = ∆_pdw − Σ_n ∆_pdw sgn(x − nλ) + Σ_n ∆_pdw sgn(x − (n + 1/2)λ).  (C6)

being proportional to the Pauli matrix σ_x. A similar calculation leads to the expressions for the diagonal matrix elements of Eq. (C7):

H_{1R/L,1R/L} = ±E_R(k_y),  H_{2R/L,2R/L} = ±E_R(k_y).  (C9)

2. Off-diagonal Matrix Elements of (C7)

Contents
I. Introduction
II. The p_x + ip_y pair density wave: setup and results
III. Fermionic spectrum of the p_x + ip_y pair density wave
A. BdG description of the p_x + ip_y PDW state
B. Domain wall bound states
1. Symmetry-protected stability of the domain wall counter-propagating modes
C. FS from domain wall coupling
1. Symmetry-protected stability of the Majorana FS
D. Gapped states from domain wall coupling
1. Gapped phase near κ = nQ
2. Gapped phase for q > k_F
IV. Coexistence of PDW order and uniform pairing order
A. Gapping of the Majorana FS
B. Mean-field pairing phase diagram
V. The p_x + ip_y PDW fractional quantum Hall states
A. Spectra of p_x + ip_y PDW FQH states
B. Phase structure near ν = 5/2
VI. Discussion and conclusion
Acknowledgments
A. Majorana fermions at the nodes of the pair-density wave state
1. Zero Modes
2. Majorana fermions for q < k_F
3. Majorana fermions for q > k_F
B. Stability of the Zero modes in the presence of a small uniform p-wave component
C. Derivation of the tight-binding effective Hamiltonian Eq. (3.24)
1. Diagonal Matrix Elements of (C7)
2. Off-diagonal Matrix Elements of (C7)
D. Lattice corrections to the hopping matrices t, t̃, and t′

* These authors contributed equally to the work.

arXiv:1811.08897v3 [cond-mat.str-el] 29 Nov 2018

I. INTRODUCTION

FIG. 1. Illustration of the different quantum Hall states from the stripe pairing order. The blue and green strips denote regions with positive and negative local pairing order parameter. The black arrows denote chiral Majorana modes, while the red arrows denote chiral bosonic charge modes. Panel (a): gapless modes on domain walls of the pair density wave (PDW) and on physical edges, in the limit of vanishingly small localization length and negligible couplings between these modes. Panel (b): for PDW order parameter ∆_pdw < v_F, the domain wall modes in general form a Majorana FS, while there exists an energy gap for charge excitations. Panel (c): for particular values (see Sec. III D 1) of the PDW order parameter ∆_pdw < v_F, the Majorana FS shrinks to zero size and the fermionic sector gets gapped.
In our model this phase has C = 2 and Abelian topological order. Panel (d): for ∆_pdw > v_F, the fermionic sector becomes trivially gapped, and the resulting quantum Hall state is Abelian. Panel (e): the neutral FS at ∆_pdw < v_F becomes gapped by a uniform p_x + ip_y pairing component with a nontrivial topology. The resulting quantum Hall state has non-Abelian topological order, just as the Pfaffian state.

(3.21), where the subscript ℓ = 1 denotes DW modes and ℓ = 2 ADW modes. Minimizing the energy

E_{k_x,k_y} = ⟨Ψ_{k_x,k_y}|H|Ψ_{k_x,k_y}⟩ / ⟨Ψ_{k_x,k_y}|Ψ_{k_x,k_y}⟩,  (3.22)

with respect to the variational parameters {c_{ℓ,µ}}, yields the secular equation

det[H_{k_x,k_y} − E_{k_x,k_y} I_{4×4}] = 0.  (3.23)

Then, according to Eq. (C2), the band structure is obtained by diagonalizing the Hermitian matrix

H(k_x, k_y) =
[ H_{1R,1R}  H_{1R,1L}  H_{1R,2R}  H_{1R,2L} ]
[ H_{1L,1R}  H_{1L,1L}  H_{1L,2R}  H_{1L,2L} ]
[ H_{2R,1R}  H_{2R,1L}  H_{2R,2R}  H_{2R,2L} ]
[ H_{2L,1R}  H_{2L,1L}  H_{2L,2R}  H_{2L,2L} ].  (C7)

1. Diagonal Matrix Elements of (C7)

We have for H_{1R,1R}:

Appendix A: Majorana fermions at the nodes of the pair-density wave state

We provide here details of the Majorana fermion states located at the nodes of the PDW order parameter. For that we begin with the BdG Hamiltonian of the paired state (A1), where ε(p) = p²/2m, p = (p_x, p_y) = (−i∂_x, −i∂_y) and p_± = p_x ± ip_y = −i∂_± (we set ℏ = 1). The anticommutators appearing in the BdG Hamiltonian can be expressed as

The system is defined on the plane with x ∈ (−∞, ∞) and y ∈ (−L/2, L/2). The BdG Hamiltonian (A1) possesses a particle-hole symmetry (redundancy), σ_1 H σ_1 = −H*, which relates positive and negative energy states: if Ψ_E is an eigenmode of H with energy E, then σ_1 Ψ*_E is an eigenmode with energy −E.
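The Pfaffian identity invoked in the main text, det(H(k)) = Pf(H̃(k))², is easy to check directly in the 4×4 case, where an antisymmetric matrix has the closed-form Pfaffian Pf(A) = A₀₁A₂₃ − A₀₂A₁₃ + A₀₃A₁₂. The matrix below is an arbitrary antisymmetric sample of my choosing, not one derived from the model:

```python
def pf4(a):
    """Closed-form Pfaffian of a 4x4 antisymmetric matrix."""
    return a[0][1]*a[2][3] - a[0][2]*a[1][3] + a[0][3]*a[1][2]

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] *
               det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

# arbitrary antisymmetric sample matrix (a[i][j] = -a[j][i])
a = [[ 0.0,  1.2, -0.7,  0.3],
     [-1.2,  0.0,  0.5, -2.0],
     [ 0.7, -0.5,  0.0,  0.9],
     [-0.3,  2.0, -0.9,  0.0]]

print(det(a), pf4(a)**2)   # the two numbers agree: det = Pf^2
```

Because the Pfaffian is a single polynomial in the matrix entries, its sign change across the Majorana FS is well defined, which is what makes it a sharper diagnostic than the (always non-negative) determinant.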
For an order parameter ∆(x) ∈ ℝ, the Hamiltonian (A1) is translation invariant along the y direction, so that the momentum eigenmodes Ψ_{E,k_y}(x, y) = e^{ik_y y} φ_{E,k_y}(x) satisfy (A2).

1. Zero Modes

We now consider an order parameter that changes sign at x = 0, and we look for Majorana fermions supported along this domain wall. Particle-hole symmetry of (A1) implies that Majorana modes are described by (where ℓ denotes the zero mode chirality) and, according to (A3), the zero mode equation, after setting k_y = 0 in (A2), simplifies to

We now consider the sharp domain wall configuration and solve for the zero modes with positive/negative chiralities (the ± label in the order parameter is intended to show that it is correlated with the zero mode chirality). With the ansatz

Appendix B: Stability of the Zero modes in the presence of a small uniform p-wave component

In the presence of a uniform component ∆_u of the order parameter, the domain wall is described by

References

[1] R. E. Prange and S. M. Girvin, The Quantum Hall Effect (Springer-Verlag, Heidelberg, 1987).

[2] R. Willett, J. P. Eisenstein, H. L. Störmer, D. C. Tsui, A. C. Gossard, and J. H. English, "Observation of an even-denominator quantum number in the fractional quantum Hall effect," Phys. Rev. Lett. 59, 1776-1779 (1987).

[3] W. Pan, J. S. Xia, V. Shvarts, D. E. Adams, H. L. Störmer, D. C. Tsui, L. N. Pfeiffer, K. W. Baldwin, and K. W. West, "Exact Quantization of the Even-Denominator Fractional Quantum Hall State at ν = 5/2
10.1103/PhysRevLett.59.1776Phys. Rev. Lett. 83Landau Level Filling Factor," Phys. Rev. Lett. 83, 3530- 3533 (1999). Non-Abelions in the Fractional Quantum Hall Effect. G Moore, N Read, 10.1016/0550-3213(91)90407-ONucl. Phys. B. 360362G. Moore and N. Read, "Non-Abelions in the Fractional Quantum Hall Effect," Nucl. Phys. B 360, 362 (1991). 2n quasihole states realize 2 n−1 -dimensional spinor braiding statistics in paired quantum Hall states. C Nayak, F Wilczek, 10.1016/0550-3213(96)00430-0Nucl. Phys. B. 479529C. Nayak and F. Wilczek, "2n quasihole states real- ize 2 n−1 -dimensional spinor braiding statistics in paired quantum Hall states," Nucl. Phys. B 479, 529 (1996). Electronic liquid-crystal phases of a doped Mott insulator. A Steven, Eduardo Kivelson, Victor J Fradkin, Emery, 10.1038/31177Nature. 393Steven A. Kivelson, Eduardo Fradkin, and Victor J. Emery, "Electronic liquid-crystal phases of a doped Mott insulator," Nature 393, 550-53 (1998). Liquid-crystal phases of quantum Hall systems. Eduardo Fradkin, Steven A Kivelson, 10.1103/PhysRevB.59.8065Phys. Rev. B. 598065Eduardo Fradkin and Steven A. Kivelson, "Liquid-crystal phases of quantum Hall systems," Phys. Rev. B 59, 8065 (1999). Ground state of a two-dimensional electron liquid in a weak magnetic field. M M Fogler, A A Koulakov, B Shklovskii, 10.1103/PhysRevB.54.1853Phys. Rev. B. 541853M.M. Fogler, A. A. Koulakov, and B. Shklovskii, "Ground state of a two-dimensional electron liquid in a weak magnetic field," Phys. Rev. B 54, 1853 (1996). Exact results for interacting electrons in high Landau levels. R Moessner, J T Chalker, 10.1103/PhysRevB.54.5006Phys. Rev. B. 545006R. Moessner and J. T. Chalker, "Exact results for inter- acting electrons in high Landau levels," Phys. Rev. B 54, 5006 (1996). Charge Density Wave in Two-Dimensional Electron Liquid in Weak Magnetic Field. A A Koulakov, M M Fogler, B I Shklovskii, 10.1103/PhysRevLett.76.499Phys. Rev. Lett. 76499A. A. Koulakov, M. M. Fogler, and B. I. 
Shklovskii, "Charge Density Wave in Two-Dimensional Electron Liq- uid in Weak Magnetic Field," Phys. Rev. Lett. 76, 499 (1996). Nematic phase of the two-dimensional electron gas in a magnetic field. Eduardo Fradkin, Steven A Kivelson, Efstratios Manousakis, Kwangsik Nho, 10.1103/PhysRevLett.84.1982Phys. Rev. Lett. 841982Eduardo Fradkin, Steven A. Kivelson, Efstratios Manousakis, and Kwangsik Nho, "Nematic phase of the two-dimensional electron gas in a magnetic field," Phys. Rev. Lett. 84, 1982 (2000). Nematic Fermi Fluids in Condensed Matter Physics. E Fradkin, S A Kivelson, M J Lawler, J P Eisenstein, A P Mackenzie, 10.1146/annurev-conmatphys-070909-103925Annu. Rev. Condens. Matter Phys. 11E. Fradkin, S. A. Kivelson, M. J. Lawler, J. P. Eisen- stein, and A. P. Mackenzie, "Nematic Fermi Fluids in Condensed Matter Physics," Annu. Rev. Condens. Mat- ter Phys. 1, 7.1 (2010). Evidence for an Anisotropic State of Two-Dimensional Electrons in High Landau Levels. M P Lilly, K B Cooper, J P Eisenstein, L N Pfeiffer, K W West, 10.1103/PhysRevLett.82.394Phys. Rev. Lett. 82M. P. Lilly, K. B. Cooper, J. P. Eisenstein, L. N. Pfeif- fer, and K. W. West, "Evidence for an Anisotropic State of Two-Dimensional Electrons in High Landau Levels," Phys. Rev. Lett. 82, 394-97 (1999). Strongly Anisotropic Transport in Higher Two-Dimensional Landau Levels. R R Du, D C Tsui, H L Störmer, L N Pfeiffer, K W Baldwin, K W West, 10.1016/S0038-1098(98)00578-XSolid State Comm. 109389R. R. Du, D. C. Tsui, H. L. Störmer, L. N. Pfeiffer, K. W. Baldwin, and K. W. West, "Strongly Anisotropic Trans- port in Higher Two-Dimensional Landau Levels," Solid State Comm. 109, 389 (1999). Competing quantum Hall phases in the second Landau level in the low-density limit. W Pan, A Serafin, J S Xia, L Yin, N S Sullivan, K W Baldwin, K W West, L N Pfeiffer, D C Tsui, 10.1103/PhysRevB.89.241302Phys. Rev. B. 89241302W. Pan, A. Serafin, J. S. Xia, L. Yin, N. S. Sullivan, K. W. Baldwin, K. W. West, L. N. 
Pfeiffer, and D. C. Tsui, "Competing quantum Hall phases in the second Landau level in the low-density limit," Phys. Rev. B 89, 241302 (2014).

[16] Q. Qian, J. Nakamura, S. Fallahi, G. C. Gardner, and M. J. Manfra, "Possible nematic to smectic phase transition in a two-dimensional electron gas at half-filling," Nature Communications 8, 1536 (2017).

[17] W. Pan, R. R. Du, H. L. Störmer, D. C. Tsui, L. N. Pfeiffer, K. W. Baldwin, and K. W. West, "Strongly Anisotropic Electronic Transport at Landau Level Filling Factor ν = 9/2 and ν = 5/2 Under Tilted Magnetic Field," Phys. Rev. Lett. 83, 820 (1999).

[18] M. P. Lilly, K. B. Cooper, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, "Anisotropic states of two-dimensional electron systems in high Landau levels: effect of an in-plane magnetic field," Phys. Rev. Lett. 83, 824-827 (1999).

[19] B. Friess, V. Umansky, L. Tiemann, K. von Klitzing, and J. H. Smet, "Probing the microscopic structure of stripe phase at filling factor 5/2," Phys. Rev. Lett. 113, 076803 (2014).
X Shi, W Pan, K W Baldwin, K W West, L N Pfeiffer, D C Tsui, 10.1103/PhysRevB.91.125308Phys. Rev. B. 91125308X. Shi, W. Pan, K. W. Baldwin, K. W. West, L. N. Pfeiffer, and D. C. Tsui, "Impact of the modulation dop- ing layer on the ν = 5/2 anisotropy," Phys. Rev. B 91, 125308 (2015). Tilt-Induced Anisotropic to Isotropic Phase Transition at ν = 5/2. Jing Xia, Vaclav Cvicek, J P Eisenstein, L N Pfeiffer, K W West, 10.1103/PhysRevLett.105.176807Phys. Rev. Lett. 105176807Jing Xia, Vaclav Cvicek, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, "Tilt-Induced Anisotropic to Isotropic Phase Transition at ν = 5/2," Phys. Rev. Lett. 105, 176807 (2010). Evidence for a ν = 5/2 fractional quantum Hall nematic state in parallel magnetic fields. Yang Liu, S Hasdemir, M Shayegan, L N Pfeiffer, K W West, K W Baldwin, 10.1103/PhysRevB.88.035307Phys. Rev. B. 8835307Yang Liu, S. Hasdemir, M. Shayegan, L. N. Pfeiffer, K. W. West, and K. W. Baldwin, "Evidence for a ν = 5/2 fractional quantum Hall nematic state in par- allel magnetic fields," Phys. Rev. B 88, 035307 (2013). Observation of a transition from a topologically ordered to a spontaneously broken symmetry phase. N Samkharadze, K A Schreiber, G C Gardner, M J Manfra, E Fradkin, G A Csáthy, 10.1038/NPHYS3523Nat. Phys. 12191N. Samkharadze, K. A. Schreiber, G. C. Gardner, M. J. Manfra, E. Fradkin, and G. A. Csáthy, "Observation of a transition from a topologically ordered to a spon- taneously broken symmetry phase," Nat. Phys. 12, 191 (2016). Onset of quantum criticality in the topological-to-nematic transition in a two-dimensional electron gas at filling factor ν = 5/2. K A Schreiber, N Samkharadze, G C Gardner, Rudro R Biswas, M J Manfra, G A Csáthy, 10.1103/PhysRevB.96.041107Phys. Rev. B. 9641107K. A. Schreiber, N. Samkharadze, G. C. Gardner, Rudro R. Biswas, M. J. Manfra, and G. A. 
Csáthy, "On- set of quantum criticality in the topological-to-nematic transition in a two-dimensional electron gas at filling fac- tor ν = 5/2," Phys. Rev. B 96, 041107 (2017). Electron-Electron Interactions and the Paired-to-Nematic Quantum Phase Transition in the Second Landau Level. K A Schreiber, N Samkharadze, G C Gardner, Y Lyanda-Geller, M J Manfra, L N Pfeiffer, K W West, G A Csáthy, 10.1038/s41467-018-04879-1Nature Communications. 92400K. A. Schreiber, N. Samkharadze, G. C. Gardner, Y. Lyanda-Geller, M. J. Manfra, L. N. Pfeiffer, , K. W. West, and G. A. Csáthy, "Electron-Electron Interactions and the Paired-to-Nematic Quantum Phase Transition in the Second Landau Level," Nature Communications 9, 2400 (2018). Pomeranchuk instability of composite fermi liquids. Kyungmin Lee, Junping Shao, Eun-Ah Kim, F D M Haldane, Edward H Rezayi, 10.1103/PhysRevLett.121.147601Phys. Rev. Lett. 121147601Kyungmin Lee, Junping Shao, Eun-Ah Kim, F. D. M. Haldane, and Edward H. Rezayi, "Pomeranchuk insta- bility of composite fermi liquids," Phys. Rev. Lett. 121, 147601 (2018). Evidence for a fractionally quantized Hall state with anisotropic longitudinal transport. Jing Xia, J P Eisenstein, L N Pfeiffer, K W West, 10.1038/nphys2118Nature Phys. 7845Jing Xia, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, "Evidence for a fractionally quantized Hall state with anisotropic longitudinal transport," Nature Phys. 7, 845 (2011). Striped superconductors: how spin, charge and superconducting orders intertwine in the cuprates. Erez Berg, Eduardo Fradkin, Steven A Kivelson, John M Tranquada, 10.1088/1367-2630/11/11/115004New J. Phys. 11115004Erez Berg, Eduardo Fradkin, Steven A. Kivelson, and John M. Tranquada, "Striped superconductors: how spin, charge and superconducting orders intertwine in the cuprates," New J. Phys. 11, 115004 (2009). Colloquium: Theory of intertwined orders in high temperature superconductors. 
Eduardo Fradkin, Steven A Kivelson, John M Tranquada, 10.1103/RevModPhys.87.457Rev. Mod. Phys. 87Eduardo Fradkin, Steven A. Kivelson, and John M. Tranquada, "Colloquium: Theory of intertwined orders in high temperature superconductors," Rev. Mod. Phys. 87, 457-482 (2015). Dynamical Layer Decoupling in a Stripe-Ordered High-Tc Superconductor. E Berg, E Fradkin, E.-A Kim, S A Kivelson, V Oganesyan, J M Tranquada, S C Zhang, 10.1103/PhysRevLett.99.127003Phys. Rev. Lett. 99127003E. Berg, E. Fradkin, E.-A. Kim, S. A. Kivelson, V. Oganesyan, J. M. Tranquada, and S. C. Zhang, "Dy- namical Layer Decoupling in a Stripe-Ordered High-Tc Superconductor," Phys. Rev. Lett. 99, 127003 (2007). Magnetic Flux, Angular Momentum, and Statistics. F Wilczek, 10.1103/PhysRevLett.48.1144Phys. Rev. Lett. 481144F. Wilczek, "Magnetic Flux, Angular Momentum, and Statistics," Phys. Rev. Lett. 48, 1144 (1982). Composite-fermion approach for the fractional quantum Hall effect. J K Jain, 10.1103/PhysRevLett.63.199Phys. Rev. Lett. 63199J. K. Jain, "Composite-fermion approach for the frac- tional quantum Hall effect," Phys. Rev. Lett. 63, 199 (1989). Fractional quantum Hall effect and Chern-Simons gauge theories. Ana López, Eduardo Fradkin, 10.1103/PhysRevB.44.5246Phys. Rev. B. 445246Ana López and Eduardo Fradkin, "Fractional quantum Hall effect and Chern-Simons gauge theories," Phys. Rev. B 44, 5246 (1991). Theory of the half-filled Landau level. B I Halperin, P A Lee, N Read, 10.1103/PhysRevB.47.7312Phys. Rev. B. 477312B. I. Halperin, P. A. Lee, and N. Read, "Theory of the half-filled Landau level," Phys. Rev. B 47, 7312 (1993). Particle-Hole Symmetry and the Pfaffian State. Michael Levin, Bertrand I Halperin, Bernd Rosenow, 10.1103/PhysRevLett.99.236806Phys. Rev. Lett. 99236806Michael Levin, Bertrand I. Halperin, and Bernd Rosenow, "Particle-Hole Symmetry and the Pfaffian State," Phys. Rev. Lett. 99, 236806 (2007). Particle-Hole Symmetry and the ν = 5 2 Quantum Hall State. 
Sung-Sik Lee, Shinsei Ryu, Chetan Nayak, Matthew P A Fisher, 10.1103/PhysRevLett.99.236807Phys. Rev. Lett. 99236807Sung-Sik Lee, Shinsei Ryu, Chetan Nayak, and Matthew P. A. Fisher, "Particle-Hole Symmetry and the ν = 5 2 Quantum Hall State," Phys. Rev. Lett. 99, 236807 (2007). Is the Composite Fermion a Dirac Particle?. Thanh Dam, Son, 10.1103/PhysRevX.5.031027Phys. Rev. X. 531027Dam Thanh Son, "Is the Composite Fermion a Dirac Particle?" Phys. Rev. X 5, 031027 (2015). Topological order from disorder and the quantized Hall thermal metal: Possible applications to the ν = 5/2 state. Chong Wang, Ashvin Vishwanath, Bertrand I Halperin, 10.1103/PhysRevB.98.045112Phys. Rev. B. 9845112Chong Wang, Ashvin Vishwanath, and Bertrand I. Halperin, "Topological order from disorder and the quan- tized Hall thermal metal: Possible applications to the ν = 5/2 state," Phys. Rev. B 98, 045112 (2018). Theory of Disorder-Induced Half-Integer Thermal Hall Conductance. David F Mross, Yuval Oreg, Ady Stern, Gilad Margalit, Moty Heiblum, 10.1103/PhysRevLett.121.026801Phys. Rev. Lett. 12126801David F. Mross, Yuval Oreg, Ady Stern, Gilad Margalit, and Moty Heiblum, "Theory of Disorder-Induced Half- Integer Thermal Hall Conductance," Phys. Rev. Lett. 121, 026801 (2018). Isotropic to anisotropic transition in a fractional quantum Hall state. Michael Mulligan, Chetan Nayak, Shamit Kachru, Phys. Rev. B. 8285102Michael Mulligan, Chetan Nayak, and Shamit Kachru, "Isotropic to anisotropic transition in a fractional quan- tum Hall state," Phys. Rev. B 82, 085102 (2010). Effective field theory of fractional quantized Hall nematics. Michael Mulligan, Chetan Nayak, Shamit Kachru, Phys. Rev. B. 84195124Michael Mulligan, Chetan Nayak, and Shamit Kachru, "Effective field theory of fractional quantized Hall nemat- ics," Phys. Rev. B 84, 195124 (2011). Field theory of the quantum Hall nematic transition. 
Joseph Maciejko, Ben Hsu, Steven A Kivelson, Ye Ju Park, Shivaji L Sondhi, 10.1103/PhysRevB.88.125137Phys. Rev. B. 88125137Joseph Maciejko, Ben Hsu, Steven A. Kivelson, Ye Ju Park, and Shivaji L. Sondhi, "Field theory of the quan- tum Hall nematic transition," Phys. Rev. B 88, 125137 (2013). Theory of Nematic Fractional Quantum Hall States. Yizhi You, Gil Young Cho, Eduardo Fradkin, 10.1103/PhysRevX.4.041050Phys. Rev. X. 441050Yizhi You, Gil Young Cho, and Eduardo Fradkin, "The- ory of Nematic Fractional Quantum Hall States," Phys. Rev. X 4, 041050 (2014). Fractional quantum Hall systems near nematicity: Bimetric theory, composite fermions, and Dirac brackets. Andrey Dung Xuan Nguyen, Dam Thanh Gromov, Son, 10.1103/PhysRevB.97.195103Phys. Rev. B. 97195103Dung Xuan Nguyen, Andrey Gromov, and Dam Thanh Son, "Fractional quantum Hall systems near nematicity: Bimetric theory, composite fermions, and Dirac brack- ets," Phys. Rev. B 97, 195103 (2018). Broken rotation symmetry in the fractional quantum Hall system. K Musaelian, Robert Joynt, 10.1088/0953-8984/8/8/002J. Phys.: Condens. Matter. 8K. Musaelian and Robert Joynt, "Broken rotation sym- metry in the fractional quantum Hall system," J. Phys.: Condens. Matter 8, L105-10 (1996). Spatially ordered fractional quantum Hall states. L Balents, Europhysics Letters). 33291EPLL. Balents, "Spatially ordered fractional quantum Hall states," EPL (Europhysics Letters) 33, 291 (1996). Unconventional anisotropic even-denominator fractional quantum Hall state in a system with mass anisotropy. Md, Meng K Shafayat Hossain, Y J Ma, L N Chung, K W Pfeiffer, K W West, M Baldwin, Shayegan, arXiv:1811.07094unpublishedMd. Shafayat Hossain, Meng K. Ma, Y. J. Chung, L. N. Pfeiffer, K. W. West, K. W. Baldwin, and M. Shayegan, "Unconventional anisotropic even-denominator frac- tional quantum Hall state in a system with mass anisotropy," (2018), unpublished, arXiv:1811.07094. Striped quantum Hall state in a half-filled Landau level. 
Xin Wan, Kun Yang, 10.1103/PhysRevB.93.201303Phys. Rev. B. 93201303Xin Wan and Kun Yang, "Striped quantum Hall state in a half-filled Landau level," Phys. Rev. B 93, 201303(R) (2016). Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect. N Read, D Green, 10.1103/PhysRevB.61.10267Phys. Rev. B. 6110267N. Read and D. Green, "Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect," Phys. Rev. B 61, 10267 (2000). Dislocations and vortices in pair-density-wave superconductors. D F Agterberg, H Tsunetsugu, 10.1038/nphys999Nature Phys. 4639D. F. Agterberg and H. Tsunetsugu, "Dislocations and vortices in pair-density-wave superconductors," Nature Phys. 4, 639 (2008). Stripe States with Spatially Oscillating d -Wave Superconductivity in the Two-Dimensional t − t − J Model. A Himeda, T Kato, M Ogata, 10.1103/PhysRevLett.88.117001Phys. Rev. Lett. 88117001A. Himeda, T. Kato, and M. Ogata, "Stripe States with Spatially Oscillating d -Wave Superconductivity in the Two-Dimensional t − t − J Model," Phys. Rev. Lett. 88, 117001 (2002). Unidirectional d-wave superconducting domains in the two-dimensional t − J model. Marcin Raczkowski, Manuela Capello, Didier Poilblanc, Raymond Frésard, Andrzej M Oleś, 10.1103/PhysRevB.76.140505Phys. Rev. B. 76140505Marcin Raczkowski, Manuela Capello, Didier Poilblanc, Raymond Frésard, and Andrzej M. Oleś, "Unidirectional d-wave superconducting domains in the two-dimensional t − J model," Phys. Rev. B 76, 140505 (2007). Coexistence of charge-density-wave and pair-density-wave orders in underdoped cuprates. Yuxuan Wang, Daniel F Agterberg, Andrey Chubukov, 10.1103/PhysRevLett.114.197001Phys. Rev. Lett. 114197001Yuxuan Wang, Daniel F. Agterberg, and Andrey Chubukov, "Coexistence of charge-density-wave and pair-density-wave orders in underdoped cuprates," Phys. Rev. Lett. 114, 197001 (2015). 
Amperean pairing and the pseudogap phase of cuprate superconductors. Patrick A Lee, 10.1103/PhysRevX.4.031017Phys. Rev. X. 431017Patrick A. Lee, "Amperean pairing and the pseudogap phase of cuprate superconductors," Phys. Rev. X 4, 031017 (2014). Fluctuations and phase transitions in Larkin-Ovchinnikov liquid-crystal states of a populationimbalanced resonant Fermi gas. Leo Radzihovsky, 10.1103/PhysRevA.84.023611Phys. Rev. A. 8423611Leo Radzihovsky, "Fluctuations and phase transitions in Larkin-Ovchinnikov liquid-crystal states of a population- imbalanced resonant Fermi gas," Phys. Rev. A 84, 023611 (2011). Bogoliubov Fermi Surfaces in Superconductors with Broken Time-Reversal Symmetry. D F Agterberg, P M R Brydon, C Timm, 10.1103/PhysRevLett.118.127001Phys. Rev. Lett. 118127001D. F. Agterberg, P. M. R. Brydon, and C. Timm, "Bo- goliubov Fermi Surfaces in Superconductors with Broken Time-Reversal Symmetry," Phys. Rev. Lett. 118, 127001 (2017). Vertical line nodes in the superconducting gap structure of sr2ruo4. E Hassinger, P Bourgeois-Hope, H Taniguchi, S René De Cotret, G Grissonnanche, M S Anwar, Y Maeno, N Doiron-Leyraud, Louis Taillefer, 10.1103/PhysRevX.7.011032Phys. Rev. X. 711032E. Hassinger, P. Bourgeois-Hope, H. Taniguchi, S. René de Cotret, G. Grissonnanche, M. S. Anwar, Y. Maeno, N. Doiron-Leyraud, and Louis Taillefer, "Ver- tical line nodes in the superconducting gap structure of sr2ruo4," Phys. Rev. X 7, 011032 (2017). Bi-state superfluid 3 he weak links and the stability of josephson π states. A Marchenkov, R W Simmonds, S Backhaus, A Loshak, J C Davis, R E Packard, 10.1103/PhysRevLett.83.3860Phys. Rev. Lett. 83A. Marchenkov, R. W. Simmonds, S. Backhaus, A. Loshak, J. C. Davis, and R. E. Packard, "Bi-state superfluid 3 he weak links and the stability of josephson π states," Phys. Rev. Lett. 83, 3860-3863 (1999). Unpaired Majorana fermions in quantum wires. Alexei Yu, Kitaev, Physics-Uspekhi. 171131Suppl.Alexei Yu. 
Kitaev, "Unpaired Majorana fermions in quantum wires," Physics-Uspekhi (Suppl.) 171, 131 (2001). Non-Abelian Statistics of Half-Quantum Vortices in p-Wave Superconductors. D A Ivanov, 10.1103/PhysRevLett.86.268Phys. Rev. Lett. 86268D. A. Ivanov, "Non-Abelian Statistics of Half-Quantum Vortices in p-Wave Superconductors," Phys. Rev. Lett. 86, 268 (2001). Theory of the Striped Superconductor. Erez Berg, Eduardo Fradkin, Steven A Kivelson, 10.1103/PhysRevB.79.064515Phys. Rev. B. 7964515Erez Berg, Eduardo Fradkin, and Steven A. Kivelson, "Theory of the Striped Superconductor," Phys. Rev. B 79, 064515 (2009). Topological insulators and superconductors: tenfold way and dimensional hierarchy. S Ryu, A P Schnyder, A Furusaki, A W W Ludwig, 10.1088/1367-2630/12/6/065010New Journal of Physics. 1265010S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Lud- wig, "Topological insulators and superconductors: ten- fold way and dimensional hierarchy," New Journal of Physics 12, 065010 (2010). J M Ziman, 10.1017/CBO9781139644075Principles of the Theory of Solids. Cambridge University Press2nd ed.J. M. Ziman, Principles of the Theory of Solids, 2nd ed. (Cambridge University Press, 1979). For the rest of the work, we use "band" and "band topology" to refer to those for the BdG Hamiltonian. For the rest of the work, we use "band" and "band topol- ogy" to refer to those for the BdG Hamiltonian. Topological blount's theorem of oddparity superconductors. Shingo Kobayashi, Ken Shiozaki, Yukio Tanaka, Masatoshi Sato, 10.1103/PhysRevB.90.024516Phys. Rev. B. 9024516Shingo Kobayashi, Ken Shiozaki, Yukio Tanaka, and Masatoshi Sato, "Topological blount's theorem of odd- parity superconductors," Phys. Rev. B 90, 024516 (2014). Unified Theory of P T and CP Invariant Topological Metals and Nodal Superconductors. Y X Zhao, Andreas P Schnyder, Z D Wang, 10.1103/PhysRevLett.116.156402Phys. Rev. Lett. 116156402Y. X. Zhao, Andreas P. Schnyder, and Z. D. 
Wang, "Unified Theory of P T and CP Invariant Topological Metals and Nodal Superconductors," Phys. Rev. Lett. 116, 156402 (2016). Robust doubly charged nodal lines and nodal surfaces in centrosymmetric systems. Tomáš Bzdušek, Manfred Sigrist, 10.1103/PhysRevB.96.155105Phys. Rev. B. 96155105Tomáš Bzdušek and Manfred Sigrist, "Robust doubly charged nodal lines and nodal surfaces in centrosymmet- ric systems," Phys. Rev. B 96, 155105 (2017). Weyl nodal surfaces. Oguz Türker, Sergej Moroz, 10.1103/PhysRevB.97.075120Phys. Rev. B. 9775120Oguz Türker and Sergej Moroz, "Weyl nodal surfaces," Phys. Rev. B 97, 075120 (2018). Bogoliubov Fermi surfaces: General theory, magnetic order, and topology. P M R Brydon, D F Agterberg, H Menke, C Timm, arXiv:1806.03773unpublishedP. M. R. Brydon, D. F. Agterberg, H. Menke, and C. Timm, "Bogoliubov Fermi surfaces: General theory, magnetic order, and topology," (2018), unpublished, arXiv:1806.03773. Explicit derivation of duality between a free dirac cone and quantum electrodynamics in (2 + 1) dimensions. David F Mross, Jason Alicea, Olexei I Motrunich, 10.1103/PhysRevLett.117.016802Phys. Rev. Lett. 11716802David F. Mross, Jason Alicea, and Olexei I. Motrunich, "Explicit derivation of duality between a free dirac cone and quantum electrodynamics in (2 + 1) dimensions," Phys. Rev. Lett. 117, 016802 (2016). Solitons in Polyacetylene. W P Su, J R Schrieffer, A J Heeger, 10.1103/PhysRevLett.42.1698Phys. Rev. Lett. 421698W. P. Su, J. R. Schrieffer, and A. J. Heeger, "Solitons in Polyacetylene," Phys. Rev. Lett. 42, 1698 (1979). Majorana bands, berry curvature, and thermal hall conductivity in the vortex state of a chiral p-wave superconductor. James M Murray, Oskar Vafek, 10.1103/PhysRevB.92.134520Phys. Rev. B. 92134520James M. Murray and Oskar Vafek, "Majorana bands, berry curvature, and thermal hall conductivity in the vortex state of a chiral p-wave superconductor," Phys. Rev. B 92, 134520 (2015). 
Michael Tinkham, Introduction to Superconductivity. New York, USAMcGraw-Hill, IncMichael Tinkham, Introduction to Superconductivity (McGraw-Hill, Inc., New York, USA, 1996). Pairing in Luttinger Liquids and Quantum Hall States. Charles L Kane, Ady Stern, Bertrand I Halperin, 10.1103/PhysRevX.7.031009Phys. Rev. X. 731009Charles L. Kane, Ady Stern, and Bertrand I. Halperin, "Pairing in Luttinger Liquids and Quantum Hall States," Phys. Rev. X 7, 031009 (2017). Theory of the Quantized Hall Conductance. B I Halperin, 10.5169/seals-115362Helv. Phys. Acta. 5675B. I. Halperin, "Theory of the Quantized Hall Conduc- tance," Helv. Phys. Acta 56, 75 (1983). Charge 4e superconductivity from pair density wave order in certain high temperature superconductors. Erez Berg, Eduardo Fradkin, Steven A Kivelson, Nat. Phys. 5Erez Berg, Eduardo Fradkin, and Steven A. Kivelson, "Charge 4e superconductivity from pair density wave or- der in certain high temperature superconductors," Nat. Phys. 5, 830-33 (2009). Order, metastability and phase transitions in two-dimensional systems. J M Kosterlitz, D J Thouless, J. Phys. C: Solid State Phys. 61181J. M. Kosterlitz and D. J. Thouless, "Order, metastabil- ity and phase transitions in two-dimensional systems," J. Phys. C: Solid State Phys. 6, 1181 (1973). Dislocation-mediated melting in two dimensions. D R Nelson, B I Halperin, Phys. Rev. B. 192457D. R. Nelson and B. I. Halperin, "Dislocation-mediated melting in two dimensions," Phys. Rev. B 19, 2457 (1979). Melting and the vector Coulomb gas in two dimensions. A P Young, 10.1103/PhysRevB.19.1855Phys. Rev. B. 19A. P. Young, "Melting and the vector Coulomb gas in two dimensions," Phys. Rev. B 19, 1855-1866 (1979). Braiding statistics and classification of two-dimensional charge-2m superconductors. Chenjie Wang, 10.1103/PhysRevB.94.085130Phys. Rev. B. 9485130Chenjie Wang, "Braiding statistics and classification of two-dimensional charge-2m superconductors," Phys. Rev. B 94, 085130 (2016). 
Weakly Coupled Pfaffian as a Type I Quantum Hall Liquid. S A Parameswaran, S A Kivelson, S L Sondhi, B Z Spivak, 10.1103/PhysRevLett.106.236801Phys. Rev. Lett. 106236801S. A. Parameswaran, S. A. Kivelson, S. L. Sondhi, and B. Z. Spivak, "Weakly Coupled Pfaffian as a Type I Quantum Hall Liquid," Phys. Rev. Lett. 106, 236801 (2011). Typology for quantum Hall liquids. S A Parameswaran, S A Kivelson, E H Rezayi, S H Simon, S L Sondhi, B Z Spivak, 10.1103/PhysRevB.85.241307Phys. Rev. B. 85241307S. A. Parameswaran, S. A. Kivelson, E. H. Rezayi, S. H. Simon, S. L. Sondhi, and B. Z. Spivak, "Typology for quantum Hall liquids," Phys. Rev. B 85, 241307 (2012). Smectic, Cholesteric, and Rayleigh-Benard order in two dimensions. John Toner, David R Nelson, 10.1103/PhysRevB.23.316Phys. Rev. B. 23316John Toner and David R. Nelson, "Smectic, Cholesteric, and Rayleigh-Benard order in two dimensions," Phys. Rev. B 23, 316 (1980). Role of nematic fluctuations in the thermal melting of pair-density-wave phases in two-dimensional superconductors. G Daniel, Eduardo Barci, Fradkin, 10.1103/PhysRevB.83.100509Phys. Rev. B. 83100509Daniel G. Barci and Eduardo Fradkin, "Role of nematic fluctuations in the thermal melting of pair-density-wave phases in two-dimensional superconductors," Phys. Rev. B 83, 100509 (2011). Anisotropy-driven transition from the Moore-Read state to quantum Hall stripes. Zheng Zhu, D N Inti Sodemann, Liang Sheng, Fu, 10.1103/PhysRevB.95.201116Phys. Rev. B. RZheng Zhu, Inti Sodemann, D. N. Sheng, and Liang Fu, "Anisotropy-driven transition from the Moore-Read state to quantum Hall stripes," Phys. Rev. B , 201116(R) (2017). Charged vortices in an abelian Higgs model with Chern-Simons term. K Samir, Avinash Paul, Khare, 10.1016/0370-2693(86)91028-2Physics Letters B. 174Samir K. Paul and Avinash Khare, "Charged vortices in an abelian Higgs model with Chern-Simons term," Physics Letters B 174, 420 -422 (1986). 
Magnetic-field Induced Pair Density Wave State in the Cuprate Vortex Halo. S D Edkins, A Kostin, K Fujita, A P Mackenzie, H Eisaki, S.-I Uchida, S Sachdev, M J Lawler, E.-A Kim, J C Davis, M H Hamidian, arXiv:1802.04673unpublishedS. D. Edkins, A. Kostin, K. Fujita, A. P. Macken- zie, H. Eisaki, S.-I. Uchida, S. Sachdev, M. J. Lawler, E.-A. Kim, J. C. Séamus Davis, and M. H. Hamid- ian, "Magnetic-field Induced Pair Density Wave State in the Cuprate Vortex Halo," (2018), unpublished, arXiv:1802.04673. Pair density waves in superconducting vortex halos. Yuxuan Wang, Stephen D Edkins, Mohammad H Hamidian, J C Davis, Eduardo Fradkin, Steven A Kivelson, 10.1103/PhysRevB.97.174510Phys. Rev. B. 97174510Yuxuan Wang, Stephen D. Edkins, Mohammad H. Hamidian, J. C. Séamus Davis, Eduardo Fradkin, and Steven A. Kivelson, "Pair density waves in superconduct- ing vortex halos," Phys. Rev. B 97, 174510 (2018). Anti-chiral and nematicity-wave superconductivity. Mats Barkman, Alexander A Zyuzin, Egor Babaev, arXiv:1811.10594Mats Barkman, Alexander A. Zyuzin, and Egor Babaev, "Anti-chiral and nematicity-wave superconductivity," (2018), unpublished, arXiv:1811.10594.
[]
title: Aperiodic tilings with one prototile and low complexity atlas matching rules
author: D. Fletcher
abstract: We give a constructive method that can decrease the number of prototiles needed to tile a space. We achieve this by exchanging edge to edge matching rules for a small atlas of permitted patches. This method is illustrated with Wang tiles, and we apply our method to present via these rules a single prototile that can only tile R^3 aperiodically, and a pair of square tiles that can only tile R^2 aperiodically.
doi: 10.1007/s00454-010-9278-8
pdfurls: https://arxiv.org/pdf/1003.4909v1.pdf
corpusid: 46496646
arxivid: 1003.4909
pdfsha: d83169ef4e2737d01441da4ebc75e962591d4a2c
Aperiodic tilings with one prototile and low complexity atlas matching rules

D. Fletcher
March 26, 2010
arXiv:1003.4909v1 [math.CO]

1 Introduction

The field of aperiodic tilings was created by Berger's discovery [7] of a set of 20426 square tiles which were strongly aperiodic, in the sense that they could only tile the plane in a non-repeating global structure. Unsurprisingly there has been some interest in how far this number could be decreased. The number of tiles was reduced over time, to 6 square tiles by Robinson [10] in 1971, and, relaxing to non-square tiles, to 2 tiles by Penrose [11] two years later. This led naturally to serious consideration of the possible existence of a single tile. While a simple example has not been forthcoming, there has been progress if we relax the requirement that the monotile be completely defined by its shape alone. In [9] Socolar studied a more general problem, 'k-isohedral' monotiles, which had the monotile as a limiting case. Relaxing conditions on edge-colouring, non-connected tiles or space-filling provided positive results, but not in the limit. In 1996 Gummelt [1] considered tiles that were allowed to overlap, and produced a decorated tile which could force strong aperiodicity. This paper extends work by Goodman-Strauss on 'atlas matching rules'.
In [2] Goodman-Strauss describes how, by requiring that a tiling be covered by a suitable finite atlas of permitted bounded configurations, a domino can serve as a monotile. Sadly the atlas requires patches of extremely large radius. This paper describes a method of altering matching rules from coloured tiles to atlas matching rules with very small patches. Furthermore, if two of the tiles have the same shape, the number of prototiles needed is decreased. We use this method to construct a pair of square tiles which tile R^2 aperiodically, and a single cubic tile that tiles R^3 aperiodically.

2 The atlas matching rule construction

We shall describe the basic definitions we will be using in this paper, drawing many of them from [6]. For clarity we will limit the spaces we are tiling to R^n for some n ∈ N. With minor alterations the method will work in any homogeneous space (for example hyperbolic space H^n).

Let P be a finite set of compact subsets of R^n, each the closure of its interior. Denote these subsets as prototiles. Let G be a group of isometries of R^n which includes all translations of R^n. The groups we will be using most in this paper are the group of translations G_Tr and the group of all isometries G_I. For a given set of prototiles P and a group of isometries G, define a tile t as the image f(P), for some f ∈ G, P ∈ P. A patch for P is a set of tiles with pairwise disjoint interiors, and the support of a patch is the union of its tiles. A tiling with prototiles P is a patch with support R^n. We shall refer to the support of a prototile P as supp(P).

We now want to introduce the notion of 'decorating' a prototile, and hence all tiles produced from it. Construct a function g : ∪_{P ∈ P} P → C, where C is a set containing a distinguished element, 0 say, and possibly other elements. A point x in the prototile P is c-coloured if g(x) = c. We will refer to points that are 0-coloured as uncoloured points.
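As a concrete instance of such a colouring g, here is a minimal sketch in the style of a Wang tile; the unit-square prototile and the colour labels are invented for illustration and are not taken from the paper:

```python
# Sketch of a colouring g on the unit-square prototile [0,1]^2.
# Colours are arbitrary labels; 0 is the distinguished "uncoloured" value.
def g(point):
    x, y = point
    on_left, on_right = x == 0.0, x == 1.0
    on_bottom, on_top = y == 0.0, y == 1.0
    # Interior points touch no edge, corners touch two edges;
    # both are left uncoloured here so each edge interior gets one colour.
    if sum([on_left, on_right, on_bottom, on_top]) != 1:
        return 0
    if on_left:
        return "red"
    if on_right:
        return "green"
    if on_bottom:
        return "blue"
    return "yellow"  # on_top

print(g((0.5, 0.5)))  # interior point -> 0 (uncoloured)
print(g((0.0, 0.5)))  # left edge     -> "red"
```

Under this g, two translated copies of the square placed side by side satisfy the identical facet rule of the next definition exactly when the touching edge interiors carry the same colour.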
Extend g to points of any given tile t = f(P) by g_t(x) = g(f^{-1}(x)) for each x ∈ t.

Definition 1. A coloured tiling (T, g) satisfies the identical facet (matching) rule if for all tiles t_1, t_2 (where t_1 ≠ t_2), g_{t_1}(x) = g_{t_2}(x) for all x ∈ t_1 ∩ t_2.

This covers cases where two tiles 'match' if they have the same colour on the interior of their shared boundary (for example Wang tiles). We will be using a slightly more general version of this rule in the rest of this paper, which allows tiles to match under wider conditions, as follows.

Definition 2. A facet (matching) rule is a function r : C × C → {0, 1} such that r(x, y) = r(y, x) and r(0, 0) = 1. A coloured tiling (T, g) satisfies the facet (matching) rule r if for all tiles t_1, t_2 (where t_1 ≠ t_2), r(g_{t_1}(x), g_{t_2}(x)) = 1 for all x ∈ t_1 ∩ t_2.

We describe below a way of translating from this style of matching rule to the following matching rule.

Definition 3. A tiling T satisfies an atlas (matching) rule U if there exists an atlas of patches U ∈ U such that for every tile t ∈ T there exists a patch n(t) about t (with t in the strict interior of n(t)) such that n(t) is a translation of some U ∈ U. Furthermore U must have a finite number of elements, and any patch in U must be compact. A prototile set P satisfies the atlas (matching) rule U if all tilings with prototiles P satisfy the atlas matching rule U.

In this paper we will be using patches defined by the '1-corona' about a tile t. The 1-corona of a tile t is the set of tiles touching t (see [2]).

Definition 4. A tiling T is a (P, G, g, r)-tiling if it has a prototile set P with allowable isometries G and colouring g, and satisfies the facet rule r. A tiling T is a (X, G, U)-tiling if it has a prototile set X with allowable isometries G, and satisfies the atlas matching rule U.

Two tilings are MLD (mutually locally derivable) if one is obtained from the other in a unique way by local rules, and vice versa.

Theorem 1. A (P, G_Tr, g, r)-tiling T is MLD to a (X, G_I, U)-tiling for some 1-corona atlas rule U and a prototile set X with |X| ≤ |P|.

Construction 1. Take P and partition it into a set of equivalence classes P = P^1 ∪ . . . ∪ P^m, s ∈ {1, . . . , m}, where P_i ∼ P_j iff supp(P_i) = supp(P_j) up to the action of an element of G_Tr. For each P^s, let G_s be the largest subgroup of G_I/G_Tr such that for all f ∈ G_s and all P ∈ P^s, f(P) ∼ P. Enumerate the elements of P^s as P^s_1, . . . , P^s_r ∈ P^s. Choose the smallest k you can so as to construct an injective function e_s : P^s → {(P^s_i, g_s) | 1 ≤ i ≤ k, g_s ∈ G_s}. Define X^s = {P^s_1, . . . , P^s_k}. We now have a construction taking prototiles P^s_i ∈ P^s to ordered pairs of a prototile from X^s and an automorphism of that prototile. Observe that X^s is a subset of P^s.

Proof of Theorem 1. Define a new prototile set X = X^1 ∪ . . . ∪ X^m, where each X^s is as just defined. Let the set of allowable functions from the prototiles into R^n be G_I, instead of G_Tr. Take the set of allowable 1-coronas in the (P, G_Tr, g, r)-tiling T, and replace every tile originating from a translation of a prototile P^s_i ∈ P^s with g_s(P^s_j), where P^s_j and g_s originate from e_s(P^s_i) = (P^s_j, g_s). This gives a set of 1-corona patches of X; use this set as the atlas rule U for X. T has facet rules, which are intrinsic to the set of allowable first coronas (since the set of allowable first coronas lists which boundaries are allowed to meet each other). Since our definition of X and its atlas corresponds to the first coronas of tiles in T, with P^s_i replaced by g_s(P^s_j), any tiling by X is MLD to a tiling from P. Since |X^s| ≤ |P^s|, we have |X| ≤ |P|.

Corollary. Take a prototile set P and partition it into a set of equivalence classes P = P^1 ∪ . . . ∪ P^m, s ∈ {1, . . . , m}, as in the previous construction.
If there exists P^s such that |X^s| < |P^s|, then there exists a prototile set (with atlas rules) which tiles R^n with fewer prototiles than P.

Proof. We know that |X^s| < |P^s|, thus |X| < |P|.

Remark. This method of construction produces a prototile set with cardinality ∑_{s=1}^{m} ⌈ |P^s| / |G_s| ⌉.

Remark. P is strongly aperiodic iff X is strongly aperiodic (see [2] for the definition). This is because every tiling in X is MLD to a tiling in P, and strong aperiodicity is preserved under MLD equivalence.

3 Motivating examples and further improvements

Example 1. For a simple illustration of the method, let us consider a tiling of the plane by 13 Wang tiles (unit squares with matching rules defined by matching coloured edges) as given in [3, 4]. Label the Wang tiles as {Q_1, . . . , Q_13}. We can apply the above construction to get a function from {Q_j | 1 ≤ j ≤ 13} to {(P_i, r) | 1 ≤ i ≤ 2, r ∈ D_4}, where D_4 is the group of symmetries of the square. For example, enumerate the symmetries of the square as {r_1, r_2, . . . , r_8}. Then such a function could send {Q_j | 1 ≤ j ≤ 8} to r_j(P_1), and the remaining tiles {Q_j | 9 ≤ j ≤ 13} to r_{j-8}(P_2). The result is shown in Figure 1, for a small patch of the tiling. As is common with Wang tiles, the colouring of the Q_j is represented as actual colours superimposed onto the tile. We represent the change of prototile set from {Q_j} to {P_1, P_2} by adding a colouring to P_1 and P_2 which looks like their alphabetical symbols. This colouring has uncoloured points on the exterior of the tile (and thus no effect on the matching rules), but admits a free action by the symmetry group of a square.

Example 2. Consider Kari's Wang cube prototiles W [5]. This is a set of 21 unit cube prototiles, with facet matching rules, that only tile R^3 aperiodically. Choose a unit cube prototile A which has an asymmetric label; i.e., for any two distinct isometries of the cube f, g ∈ O_h, f(A) ≠ g(A).
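The count of 48 cube isometries used in the next step can be checked directly: the isometries of the cube are exactly the 3×3 signed permutation matrices. A quick illustrative sketch (not code from the paper):

```python
from itertools import permutations, product

# Enumerate O_h as signed permutation matrices: permute the three axes
# (3! choices) and independently flip the sign of each image axis (2^3 choices).
def cube_isometries():
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            m = [[0, 0, 0] for _ in range(3)]
            for row, (col, s) in enumerate(zip(perm, signs)):
                m[row][col] = s
            mats.append(tuple(tuple(r) for r in m))
    return mats

isoms = cube_isometries()
print(len(isoms))  # 48 = 6 * 8

# Since 21 <= 48, each of Kari's 21 Wang cube prototiles can be assigned
# a distinct isometry i_k of a single asymmetric prototile A.
chosen = isoms[:21]
print(len(set(chosen)))  # 21 distinct isometries
```

This is the counting fact behind the construction: 21 prototiles with one shape class and |G_s| = 48 give ⌈21/48⌉ = 1 uncoloured prototile.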
Since the set of isometries of the cube has cardinality 48, we can choose 21 unique isometries i_k ∈ O_h. We use the method of Construction 1 to replace P_k with i_k(A). Thus we have an aperiodic protoset with one prototile which is MLD to Kari's Wang cubes. Note that we have lost the property of matching rules being determined on faces, and replaced them with a set of legal one-corona patches (which cannot be rotated or reflected, of course). We have also had to broaden the set of allowable mappings of the prototiles into the tiled space, from translations to translations together with rotations and reflections.

Remark. This algorithm can be further improved by partitioning P into equivalence classes based on which prototiles have the same support up to isometry, not just translation. Let T be a (P, G_Tr, g, r)-tiling as in Construction 1. If there is a prototile P_i ∈ P whose support is a non-trivial isometry of another prototile P_j (where i ≠ j), then the resulting (X, G_I, U)-tiling may have fewer prototiles than one originating from Construction 1.

Construction 2. Partition P = ∪ P^s, s ∈ {1, . . . , p}, where P_i ∼ P_j iff supp(P_i) = supp(P_j) up to the action of an element of G_I. Further partition P^s = ∪ P^s_t, t ∈ {1, . . . , q}, where P^s_a ∼ P^s_b iff supp(P^s_a) = supp(P^s_b) up to the action of an element of G_Tr. This two-stage partitioning gives us a collection of equivalence classes (∪_{s,t} P^s_t) as per the first construction. Additionally we know that there exist isometries in G_I from elements of P^s_i to elements of P^s_j. Take the P^s_t with the largest cardinality and denote it P^s_T. From the definition of P^s there exists an isometry α_{P_i P_j} such that α_{P_i P_j}(supp(P_i)) = supp(P_j). Furthermore we know that a given isometry can only take elements from one set P^s_i to P^s_T (by the definition of equivalence class). Thus we can replace any prototile in P^s_t with a unique isometry of a prototile in P^s_T, since |P^s_t| ≤ |P^s_T|. By applying the previous construction to P^s_T, we can get a minimal uncoloured prototile set X^s that can be used to translate prototiles in P^s_T, and hence P^s, to atlas rules. While this is sufficient to define the tiling, it has the problem that any picture of the tiling needs to include information about the isometries used for each tile. Thus we replace t_1 with a tile x with an uncoloured boundary, but with a coloured interior which is not preserved under any non-identity element of D_3.

Figure 1: The top picture shows a tiling with prototiles Q_1, . . . , Q_13 with facet matching rules and translation as the isometry group. The bottom picture uses a two-element prototile set, with rotations, reflections and translations as the isometry group.

Figure 2: New and old prototile set.

Acknowledgements. Thanks are due to Chaim Goodman-Strauss, Joshua E. S. Socolar and Edmund Harriss for helpful conversations on this paper. My supervisor John Hunton has assisted considerably with improving the readability of this paper. We also thank the University of Leicester and EPSRC for a doctoral fellowship. The results of this article will form part of the author's PhD thesis.

References

[1] P. Gummelt, 'Penrose tilings as coverings of congruent decagons', Geometriae Dedicata 62(1), 1996. doi:10.1007/BF00239998
[2] C. Goodman-Strauss, 'Open Questions in Tilings', January 10, 2000. http://comp.uark.edu/ strauss/papers/survey.pdf
[3] K. Culik II, 'An aperiodic set of 13 Wang tiles', Discrete Mathematics 160(1-3), 1996, pp. 245-251.
[4] J. Kari, 'A small aperiodic set of Wang tiles', Discrete Mathematics 160(1-3), 1996, pp. 259-264.
[5] J. Kari, K. Culik, 'An aperiodic set of Wang cubes', Journal of Universal Computer Science 1(10), 1995, pp. 675-686.
[6] M. Barge, B. Diamond, J. Hunton, L. Sadun, 'Cohomology of Substitution Tiling Spaces', Ergodic Theory and Dynamical Systems. doi:10.1017/S0143385709000777
[7] R. Berger, 'The undecidability of the domino problem', Memoirs Amer. Math. Soc. 66, 1966.
[8] H. Wang, 'Proving theorems by pattern recognition II', Bell System Tech. Journal 40(1), 1961, p. 141.
[9] J. E. S. Socolar, 'More ways to tile with only one shape polygon', The Mathematical Intelligencer 29(2), 2007. doi:10.1007/BF02986203
[10] R. M. Robinson, 'Undecidability and Nonperiodicity for Tilings of the Plane', Inventiones Mathematicae 12(3), 1971, pp. 177-209.
[11] R. Penrose, 'Role of aesthetics in pure and applied research', Bulletin of the Institute of Mathematics and its Applications 10, 1974.
Direct S-matrix calculation for diffractive structures and metasurfaces

Alexey A. Shcherbakov* (Moscow Institute of Physics and Technology, Dolgoprudniy, Russia; *[email protected]), Yury V. Stebunov (Moscow Institute of Physics and Technology, Dolgoprudniy, Russia; GrapheneTek, Skolkovo Innovation Center, Russia), Denis F. Baidin (Moscow Institute of Physics and Technology, Dolgoprudniy, Russia), Thomas Kämpfe (University of Lyon, Laboratory Hubert Curien, Saint-Etienne, France), Yves Jourlin (University of Lyon, Laboratory Hubert Curien, Saint-Etienne, France)

arXiv:1712.08361 [physics.optics]. doi:10.1103/physreve.97.063301

Abstract. The paper presents a derivation of analytical components of S-matrices for arbitrary planar diffractive structures and metasurfaces in the Fourier domain. The attained general formulas for S-matrix components can be applied within formulations in both the Cartesian and a curvilinear metric. A numerical method based on these results can benefit from all previous improvements of the Fourier-domain methods. In addition, we provide expressions for S-matrix calculation in the case of periodically corrugated layers of 2D materials, which are valid for arbitrary corrugation depth-to-period ratios. As an example, the derived equations are used to simulate resonant grating excitation of graphene plasmons and the impact of a silica interlayer on the corresponding reflection curves.
April 26, 2018

Introduction

Periodic optical structures ranging from conventional one-dimensional diffraction gratings to metasurfaces attract a lot of attention due to the wide optimization possibilities in the design of optical response functions. The wavelength-scale nature of the patterns of most complex diffractive structures and metasurfaces requires an effort to rigorously solve Maxwell's equations. Among the various methods capable of this task, the Fourier-space methods [1] are among the most popular and important ones.
They are widely used owing to their relatively simple formulation, their versatility in the structures that can be modelled, and an output in the form of S-matrices. The latter property means that these methods can be used directly in the simulation of measurable quantities. Not only are S-matrices physically important entities, they also bring stability to numerical calculations [2,3,4,5]. Fourier methods, however, conventionally operate with T-matrices, and additional effort is needed to ensure stability of the calculations, e.g., as proposed in [6,3]. In this paper we develop a way to overcome these complications and demonstrate how the S-matrix components of a slice of an arbitrary diffractive structure can be derived analytically in Fourier space. Our derivation is based on the integral-equation solution of Maxwell's equations, although the distribution formalism given in [7] can be used to get the same result. In addition to the derivation of S-matrices of bulk-material gratings, it is shown that expressions for corrugated layers of 2D materials can be attained in a similar way, which provides the possibility to efficiently simulate complex metasurfaces. This widens the applicability of the previous models of [8,9], where the electrodynamic response of a graphene sheet placed on top of a corrugated substrate was simulated by means of Rayleigh-type methods. Our direct S-matrix approach does not rely on the Rayleigh hypothesis and hence is free of the convergence issues which Rayleigh-type methods face [10,11].

Volume Integral Equation

Consider a planar structure, which can be either a metasurface or a diffractive optical element, in Cartesian coordinates X_α, α = 1, 2, 3, with unit vectors ê_α, such that the axis X_3 is orthogonal to the structure plane. The structure is supposed to be doubly periodic along two non-collinear directions in the plane X_1X_2. Denote the unit vectors in the directions of the periods as p̂_{1,2}, and let the periods be Λ_{1,2}.
Reciprocal lattice vectors then read

K_1 = (2π/Λ_1) (p̂_2 × ê_3) / [p̂_1 · (p̂_2 × ê_3)],  K_2 = (2π/Λ_2) (ê_3 × p̂_1) / [p̂_1 · (p̂_2 × ê_3)].  (1)

The paper refers to linear phenomena only. Thus, we consider Maxwell's equations for time-harmonic fields and sources with the implicit time-dependence exponential factor exp(−iωt), which will be omitted further:

∇ × E = −M + iωμH,  ∇ × H = J − iωεE.  (2)

The magnetic source term M is essential for a curvilinear coordinate formulation, as will be discussed below. Eqs. (2) yield Helmholtz equations provided that the dielectric permittivity and the magnetic permeability are constants. We will refer to these quantities as basis ones and denote them ε_b, μ_b. The Helmholtz equations then read

∇ × ∇ × E − k_b² E = iωμ_b J − ∇ × M,  ∇ × ∇ × H − k_b² H = iωε_b M + ∇ × J,  (3)

where k_b = ω√(ε_b μ_b) is the wavenumber of the homogeneous basis space. Well-known solutions of Eqs. (3) in the form of volume integral equations rely on the free-space tensor electric and mixed Green's functions [12], G^e and G^m respectively:

E(r) = E^ext(r) + iωμ_b ∫ d³r′ G^e(r − r′) J(r′) − k_b ∫ d³r′ G^m(r − r′) M(r′),
H(r) = H^ext(r) + iωε_b ∫ d³r′ G^e(r − r′) M(r′) + k_b ∫ d³r′ G^m(r − r′) J(r′).  (4)

In these equations we single out "external" field amplitudes E^ext, H^ext, which are supposed to be known and to be produced by sources lying outside the region under consideration. In turn, the sources present in the right-hand sides of Eqs. (4) will further on be associated with local medium inhomogeneities. Aiming to deal with planar structures, we utilize a decomposition of the Green's functions in the plane-wave basis.
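As a quick sanity check, the reciprocal-lattice vectors of Eq. (1) can be evaluated numerically; the sketch below (function and variable names are ours, not from the paper) verifies the defining property K_i · Λ_j p̂_j = 2π δ_ij:

```python
import numpy as np

def reciprocal_vectors(p1, p2, L1, L2):
    """In-plane reciprocal vectors K1, K2 for lattice periods L1*p1, L2*p2 (Eq. (1))."""
    e3 = np.array([0.0, 0.0, 1.0])
    denom = np.dot(p1, np.cross(p2, e3))
    K1 = (2 * np.pi / L1) * np.cross(p2, e3) / denom
    K2 = (2 * np.pi / L2) * np.cross(e3, p1) / denom
    return K1, K2

# Oblique lattice example: periods along x and at 60 degrees to it
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3), 0.0])
K1, K2 = reciprocal_vectors(p1, p2, 1.5, 2.0)
# defining property: K_i . (L_j * p_j) = 2*pi*delta_ij
```

The triple-product denominator is common to both vectors, which is why a single oblique-lattice example exercises the whole formula.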
Given a plane-wave wavevector k± = (k_1, k_2, ±k_3)^T, with the sign ± distinguishing waves propagating upwards and downwards relative to the axis X_3, whose components satisfy the dispersion equation k_1² + k_2² + k_3² = k_b², supplemented with the condition ℜk_3 + ℑk_3 > 0 [13], the unit vectors of the TE and TM polarized waves can be taken as

ê^{e±} = (k± × ê_3)/|k± × ê_3|,  ê^{h±} = [(k± × ê_3) × k±]/|(k± × ê_3) × k±|,  (5)

respectively. With this definition the Green's functions write explicitly

G^e_{αβ}(ρ − ρ′, x_3 − x_3′) = (i/4π²) ∫ d²κ { (i/k_b²) δ(x_3 − x_3′) δ_{α3} δ_{β3} + [ê^{eσ}_α ê^{eσ}_β + ê^{hσ}_α ê^{hσ}_β] exp(ik_3|x_3 − x_3′|)/(2k_3) } exp[iκ(ρ − ρ′)],  (6)

G^m_{αβ}(ρ − ρ′, x_3 − x_3′) = −(1/4π²) ∫ d²κ ξ_{αγβ} k^σ_γ exp(ik_3|x_3 − x_3′|)/(2k_3) exp[iκ(ρ − ρ′)].  (7)

Here ρ = (x_1, x_2)^T, κ = (k_1, k_2)^T, δ_{αβ} and ξ_{αβγ} are the Kronecker and Levi-Civita symbols respectively, and σ = sign(x_3 − x_3′). Expressions similar to the first equation for the electric Green's function can be found, e.g., in [14,15,16], and the derivation of the second is analogous. For consistency the derivation is briefly reviewed in the Appendix. In order to proceed to the analysis of periodic structures, first fix the "zero harmonic" wavevector k^(0) = (κ^(0), √(k_b² − (κ^(0))²))^T, so that fields and sources are subject to the Floquet-Bloch condition

V(ρ + m_1Λ_1p̂_1 + m_2Λ_2p̂_2, z) = V(ρ, z) exp[iκ^(0)(m_1Λ_1p̂_1 + m_2Λ_2p̂_2)].  (8)

The vector V can be substituted by any of E, H, J, M. Under this condition one can apply the Poisson summation formula in Eqs. (4) with the explicit functions of Eqs. (6), (7) to arrive at equations which depend on the Fourier components of the sources:

S_m(x_3) = (|K_1 × K_2|/4π²) ∫_P d²ρ′ S(ρ′, x_3) exp(−iκ_m ρ′),  (9)

where S stands for either J or M, the integration is performed over one structure period, and the plane-harmonic wavevector κ_m = κ^(0) + m_1K_1 + m_2K_2 depends on the two-dimensional index m = (m_1, m_2) ∈ Z².
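Eq. (9) is the usual cell-averaged Fourier integral; in a discretized implementation the per-harmonic source amplitudes are typically obtained with an FFT. A minimal 1D sketch under our own naming, not code from the paper:

```python
import numpy as np

def fourier_components(S, orders):
    """Per-harmonic coefficients S_m of a sampled periodic function S(x), approximating
    (1/Lambda) * integral over one period of S(x) exp(-i m K x) dx by a discrete sum."""
    N = len(S)
    coeffs = np.fft.fft(S) / N  # c_m = (1/N) sum_j S_j exp(-2*pi*i*m*j/N)
    return np.array([coeffs[m % N] for m in orders])

# check on a single harmonic: S(x) = exp(i K x) must give only the m = 1 coefficient
x = np.linspace(0.0, 1.0, 64, endpoint=False)
S = np.exp(2j * np.pi * x)
c = fourier_components(S, orders=[-1, 0, 1])
```

The `m % N` indexing maps negative diffraction orders onto the upper FFT bins, which is the standard trick when truncated Fourier (Toeplitz) matrices are assembled later.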
The solution in the periodic domain then explicitly writes E(ρ, x 3 ) = E ext (ρ, x 3 ) + J 3 (ρ, x 3 ) iωε bê 3 − ωµ b m exp(iκ m ρ) ∞ −∞ dx 3 ê eσ m (ê eσ m · J m (x 3 )) +ê hσ m (ê hσ m · J m (x 3 )) exp(ik 3m |x 3 − x 3 |) 2k 3m + k b m exp(iκ m ρ) ∞ −∞ dx 3 ê eσ m (ê hσ m · M m (x 3 )) −ê hσ m (ê eσ m · M m (x 3 )) exp(ik 3m |x 3 − x 3 |) 2k 3m ,(10) and a similar expression holds for the magnetic field. Here k 3m are propagation constants of plane harmonics defined by the dispersion equation κ 2 m +k 2 3m = k 2 b and the condition k 3m + k 3m > 0; andê e,h± m are unit polarization vectors obtained by using wavevectors k ± m = (κ m , ±k 3m ) T in Eqs. (5). Eq. (10) shows that the electric field is a superposition of plane harmonics and a source "delta"-term. Let us introduce the modified fieldẼ 1,2 = E 1,2 ,Ẽ 3 = E 3 − J 3 /iωε b , so that the plane wave decomposition of this field E(ρ, x 3 ) = m ã e+ m (x 3 )ê e+ m +ã e− m (x 3 )ê e− m +ã h+ m (x 3 )ê h+ m +ã h− m (x 3 )ê h− m exp(iκ m ρ)(11) is valid at any space point. Modified amplitudeẼ ext of the external field is identical to nonmodified E ext since the sources of this field are supposed to be outside the region of interest, and we intend to evaluate Eq. (10) outside these sources. We assume then that a decomposition similar to Eq. (11) for the external field amplitudes is known yielding amplitudesã ext,e± m and a ext,h± m . Combining Eqs. 
(10) and (11) we see that the unknown amplitudes are found from integration over the third coordinatẽ a e+ m (x 3 ) =ã ext,e+ m (x 3 ) − x 3 −∞ dζ exp[ik 3m (x 3 − ζ)] 2k 3m ωµ bê e+ m · J m (ζ) − k bê h+ m · M m (ζ) , a h+ m (x 3 ) =ã ext,h+ m (x 3 ) − x 3 −∞ dζ exp[ik 3m (x 3 − ζ)] 2k 3m ωµ bê h+ m · J m (ζ) + k bê e+ m · M m (ζ) , a e− m (x 3 ) =ã ext,e− m (x 3 ) − ∞ x 3 dζ exp[ik 3m (ζ − x 3 )] 2k 3m ωµ bê e− m · J m (ζ) − k bê h− m · M m (ζ) , a h− m (x 3 ) =ã ext,h− m (x 3 ) − ∞ x 3 dζ exp[ik 3m (ζ − x 3 )] 2k 3m ωµ bê h− m · J m (ζ) + k bê e− m · M m (ζ) .(12) These equations can be used to get a formulation of the Generalized Source Method either in Cartesian [17] or in curvilinear [18] coordinates by introducing Generalized Sources related to the fields and performing implicit numerical integration. Instead, in the next section Eqs. (12) are used to obtain analytical S-matrix components of a thin grating layer. S-matrix of a thin grating slice On the way towards analytical S-matrix components we associate a region of interest, or source region, for general solution given by Eqs. (12) with a plane layer x 3 ≤ x 3 ≤ x(1) 3 of thickness ∆x 3 = x a e± m (x c 3 ± ∆x 3 /2) ≈ã ext,e± m (x c 3 ± ∆x 3 /2) − ∆x 3 exp(ik 3m ∆x 3 /2) 2k 3m ωµ bê e± m · J m (x c 3 ) − k bê h± m · M m (x c 3 ) , (13a) a h± m (x c 3 ± ∆x 3 /2) ≈ã ext,h± m (x c 3 ± ∆x 3 /2) − ∆x 3 exp(ik 3m ∆x 3 /2) 2k 3m ωµ bê h± m · J m (x c 3 ) + k bê e± m · M m (x c 3 ) ,(13b) Integration in the first pair of Eqs. (12) is performed up to x 3 = x c 3 + ∆x 3 /2, and in the second pair -from x 3 = x c 3 − ∆x 3 /2. Sources that produce fields with amplitudesã ext m can be located anywhere outside the layer. To get closed form equations J and M should be related to the fields. 
In the simplest case one may take J = −iω(ε − ε b )E, and M = 0, as is done within the Generalized Source Method for gratings whose permittivity is represented by smooth spatial functions [17] or in other Volume Integral Equation methods (e.g., [19]). When a formulation includes a correct treatment of the boundary conditions to a corrugation interface in the Fourier domain [20] or the Generalized Metric Sources, which appear in a curvilinear formulation of the problem [21], the dependence of the sources from the field can be more complicated. Generally, such dependence writes as J = −iωε b Ω E E M = −iωµ b Ω H H(14) with 3 × 3 matrices Ω E,H , whose explicit form is supposed to be known, but is not needed for the derivation in this section. In order to operate with plane harmonic amplitudes only we introduce matrices composed of column vectors given by Eq. (5): V = ê e+ ,ê h+ ,ê e− ,ê h− , W = ê h+ , −ê e+ ,ê h− , −ê e− .(15) Additionally, let us denote amplitude vectors of upward and downward propagating harmonics asã ± m = (ã e± m ,ã h± m ) T . When substituting Eq. (14) into Eq. (13) one should evaluate the sources, and hence the local field amplitudes, at the layer center x c 3 . The resulting equation on the unknown amplitude vector can be written via some matrix operator Φ mn (x c 3 ) (to be explicitly given below) as ã + m (x (2) 3 ) a − m (x (1) 3 ) = ã ext,+ m (x (2) 3 ) a ext,− m (x (1) 3 ) + ∆x 3 Φ mn (x c 3 ) ã + n (x c 3 ) a − n (x c 3 ) + O((∆x 3 ) 2 ).(16) One can find the unknown vector in the right-hand part by writing out an equation analogous to Eq. (13) when both pairs of Eqs. (12) are evaluated at x 3 = x c 3 : ã + m (x c 3 ) a − m (x c 3 ) = ã ext,+ m (x c 3 ) a ext,− m (x c 3 ) + ∆x 3Φmn (x c 3 ) ã + n (x c 3 ) a − n (x c 3 ) + O((∆x 3 ) 2 ),(17) and the latter self-consistent linear algebraic equation is solved neglecting the O((∆x 3 ) 2 ) terms. OperatorΦ mn here differs from Φ mn only by the exponential factor exp(ik 3m ∆x 3 /2). 
However, direct substitution of Eq. (17) into Eq. (16) shows that the inversion would provide an excessive accuracy, and it is suffices to take only the zero-order term of Eq. (17) into account. Thus, the local field amplitudes in the right-hand part of Eq. (16) can be replaced by the external ones. These amplitudes evaluated at the layer center are related with the known amplitudes at the layer boundaries through propagation factors:ã ext, + (x c 3 ) =ã ext,+ (x(1) 3 ) exp(ik 3n ∆x 3 /2), and ã ext,− (x c 3 ) =ã ext,− (x(2) 3 ) exp(ik 3n ∆x 3 /2). Then, Eq. (13) transforms to the following approximate relation: ã + m (x (2) 3 ) a − m (x (1) 3 ) = ã ext,+ m (x (1) 3 ) a ext,− m (x (2) 3 ) exp(ik 3m ∆x 3 ) + ik b ∆x 3 2 k b k 3m exp(ik 3m ∆x 3 /2) × n exp(ik 3n ∆x 3 /2) (V T ) m Ω Emn V n + (W T ) m Ω Hmn W n ã ext,+ n (x (1) 3 ) a ext,− n (x (2) 3 ) ,(18) where Ω E,Hmn are components of the Fourier block-Toeplitz matrices obtained by the Fourier transform of corresponding matrices Ω E,H evaluated at coordinate x c 3 . The amplitudes of the first term in the right-hand side were translated using propagation factor exp(ik 3n ∆x 3 ) so as to get identical vectors of external amplitudes in both terms. The accuracy of the derived equation is similar to other Fourier methods as they treat grating structures within each thin slice as homogeneous along the vertical coordinate. Eq. (18) directly provides relations between incoming and outgoing wave amplitudes for diffraction on a thin grating slice. The above derivation supposes that external field amplitudes can be generated by any sources located outside the layer x (1) 3 ≤ x 3 ≤ x (2) 3 . Since only amplitudes propagating toward the layer (incoming) are present in the right-hand part of Eq. (18) we can associate them with local fields at boundaries of the layer and leave out the superscript "ext". By simple reordering of these equations one can compose an S-matrix for the corresponding layer. 
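To see the structure of Eq. (18), it helps to reduce it to a single TE harmonic in Cartesian coordinates, where the slice operator becomes a 2 × 2 matrix; the sketch below (our own simplification, not the paper's implementation) recovers pure propagation when the source coupling vanishes:

```python
import numpy as np

def slice_smatrix_te(k_b, k3, dx, omega_e):
    """2x2 S-matrix of one thin slice for a single TE harmonic, following the
    pattern of Eq. (18): omega_e is the scalar source coupling (the [eps/eps_b] - 1
    term), zero for a slice of the homogeneous basis medium."""
    phase = np.exp(1j * k3 * dx)  # exp[i(k3m + k3n)*dx/2] with m = n
    coupling = 0.5j * k_b * dx * (k_b / k3) * phase * omega_e
    S11 = S22 = coupling
    S12 = S21 = phase + coupling
    return np.array([[S11, S12], [S21, S22]])

# empty-layer limit: no reflection, transmission is the free propagation phase
S = slice_smatrix_te(k_b=2 * np.pi, k3=2 * np.pi, dx=1e-3, omega_e=0.0)
```

This limit is a useful unit test for any implementation: with the sources switched off, a slice must act as a pure phase delay.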
Given a planar structure bounded by the planes x_3 = x_3^low and x_3 = x_3^up, we will refer to the S-matrix of this structure as a 2 × 2 block matrix that relates the plane-wave amplitudes at the boundaries as follows:

(ã⁻_m(x_3^low); ã⁺_m(x_3^up)) = Σ_n (S_{11,mn}, S_{12,mn}; S_{21,mn}, S_{22,mn}) (ã⁺_n(x_3^low); ã⁻_n(x_3^up)).  (19)

To apply Eqs. (18), (19) to a deep structure, the grating layer should be divided into a number of slices, analogously to the Fourier Modal Method (FMM) and other Fourier methods. However, the calculation of eigenmodes is no longer needed, since the S-matrix components for each slice are readily available. Once the S-matrix of each slice is attained via Eq. (18), a corresponding matrix for the whole grating layer can be calculated by the S-matrix propagation algorithm, which is known to be numerically stable [3,4]. The algorithm is based on the following multiplication rule: given two S-matrices S^(1,2) of planar structures occupying adjacent plane layers x_3^(1) ≤ x_3 ≤ x_3^(2) and x_3^(2) ≤ x_3 ≤ x_3^(3) respectively, the components of the S-matrix that relates amplitudes at the interfaces x_3 = x_3^(1) and x_3 = x_3^(3) are obtained by combining S^(1) and S^(2).

Examples

Matrices Ω_{E,H} generally depend on a particular implementation of the Fourier approach and follow from numerous results attained by different authors on the Fourier Modal Method, the Differential Method, the C-method, and the Generalized Source Method, together with other volume-integral implementations (e.g., [20,22,23,24,25]). To acquaint the reader with possible implementations of the previous section, we provide here two illustrative examples of widely used gratings, followed by a derivation of an S-matrix for sinusoidally corrugated layers of 2D materials.

1D lamellar grating

A 1D lamellar grating is one of the simplest but nevertheless among the most practically important examples of periodic corrugations. Consider a formulation in Cartesian coordinates. Then all components of Ω_H are zero, and the matrix Ω_E is diagonal.
Correct factorization of the products of discontinuous functions [26,27,20,22] yields Ω Emn =   [ε b /ε] −1 mn − δ mn [ε/ε b ] mn − δ mn δ mn − [ε/ε b ] −1 mn   .(21) Here, periodicity along x 1 is assumed, and [ε/ε b ] −1 mn , [ε b /ε] −1 mn denote inverted truncated Fouriermatrices of periodic permittivity functions. The permittivity function in the considered case explicitly writes ε(x 1 ) = ε 1 , (k − α/2)Λ ≤ x 1 < (k + α/2)Λ, and ε(x 1 ) = ε 2 , (k + α/2)Λ ≤ x 1 < (k + 1 − α/2)Λ with k ∈ Z, 0 < α < 1, and ε 1,2 = const, and has the following elements of the Fourier matrix: [ε/ε b ] mn = ε 2 ε b δ m,n + α ε 1 − ε 2 ε b sinc[πα(m − n)] In the collinear diffraction case when k (0) 2 = 0 the S-matrix splits into two independent parts for the TE and TM polarization separately. We get for the TE polarization S (e) 11mn = S (e) 22mn = ik b ∆x 3 2 k b k 3m exp[i(k 3m + k 3n )∆x 3 /2] ([ε/ε b ] mn − δ mn ) , S (e) 12mn = S (e) 21mn = exp(ik 3m ∆x 3 ) + ik b ∆x 3 2 k b k 3m exp[i(k 3m + k 3n )∆x 3 /2] ([ε/ε b ] mn − δ mn ) ,(22) and for the TM polarization S (h) 11mn = S (h) 22mn = ik b ∆x 3 2 k b k 3m exp[i(k 3m + k 3n )∆x 3 /2] × − k 3m k b [ε b /ε] −1 mn − δ mn k 3n k b + k 1m k b δ mn − [ε/ε b ] −1 mn k 1n k b (23a) S (h) 12mn = S (h) 21mn = exp(ik 3m ∆x 3 ) + ik b ∆x 3 2 k b k 3m exp[i(k 3m + k 3n )∆x 3 /2] × k 3m k b [ε b /ε] −1 mn − δ mn k 3n k b + k 1m k b δ mn − [ε/ε b ] −1 mn k 1n k b (23b) Independence of function ε(r) of coordinate x 3 allows calculating the grating S-matrix with a reduced complexity since in this case the expressions in Eqs. (22), (23) should be evaluated only once for a single thin layer. Suppose one starts with S-matrix S (0) of a layer with thickness ∆x 3 . S-matrix S (k) of a layer having thickness 2 k ∆x 3 is a composition of matrices for half-depth layers: S (k) = S (k−1) * S (k−1) . Thus, for x 3 -invariant corrugations the complexity of the method reduces down to O(N 3 F log N S ). 
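The composition rule behind the doubling recursion S^(k) = S^(k−1) ⋆ S^(k−1) is commonly written as the numerically stable Redheffer star product referred to in [3,4]; the sketch below is a generic block-matrix implementation with our own helper names, not code from the paper:

```python
import numpy as np

def star(SA, SB):
    """Redheffer star product of two block S-matrices (layer SA below layer SB).
    Each argument is ((S11, S12), (S21, S22)) in the convention of Eq. (19):
    S11 = reflection from below, S21 = upward transmission, and so on."""
    (A11, A12), (A21, A22) = SA
    (B11, B12), (B21, B22) = SB
    I = np.eye(A11.shape[0])
    F = np.linalg.inv(I - B11 @ A22)  # resums multiple reflections between the layers
    G = np.linalg.inv(I - A22 @ B11)
    S11 = A11 + A12 @ B11 @ G @ A21
    S12 = A12 @ F @ B12
    S21 = B21 @ G @ A21
    S22 = B22 + B21 @ A22 @ F @ B12
    return ((S11, S12), (S21, S22))

def double(S, k):
    """S-matrix of a layer 2**k slices thick for an x3-invariant grating:
    repeated self-composition gives the O(log N_S) scaling noted above."""
    for _ in range(k):
        S = star(S, S)
    return S

def propagation(phi, n=2):
    """S-matrix of a uniform layer: no reflection, pure phase transmission."""
    Z, P = np.zeros((n, n), complex), np.exp(1j * phi) * np.eye(n)
    return ((Z, P), (P, Z))
```

Composing two uniform layers just adds their propagation phases, which makes a convenient unit check for both `star` and `double`.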
Figures 1 and 2 demonstrate the convergence of the developed S-matrix method and show the comparison with solutions obtained by the FMM, as described in [28], for lamellar dielectric and metallic gratings respectively. The maximum absolute difference in the corresponding S-matrix components is plotted along the vertical axis. The grating parameters used in the calculations are indicated in the figure captions. The difference of the obtained solutions from the FMM results is seen to be much below the accuracy of both methods, and depends on the initial layer depth Δx_3 (see the previous paragraph), which was taken to be 2^−30 h. The FMM is known to be particularly efficient for gratings with vertical walls, so the calculation time of the direct S-matrix method in the presented example was about 10 times larger than that of the FMM for each value of N_F, though this time difference can be reduced by increasing the value of Δx_3 as long as the reduction in accuracy can be tolerated.

Figure 1: Convergence of the direct S-matrix method for a lamellar dielectric grating, Eqs. (22), (23), and the comparison with S-matrices calculated by the Fourier Modal Method. The maximum absolute difference between corresponding S-matrix components is plotted against the inverse number of Fourier harmonics. The grating parameters are: period-to-wavelength ratio Λ/λ = 1.5, depth-to-wavelength ratio h/λ = 0.5, and α = 0.5. The substrate and cover refractive indices are 1.5 and 1 respectively. The grating permittivity is ε_1 = 6.25.

Figure 2: Same as Fig. 1, but for a metallic grating of permittivity ε_1 = −9.6 + 1.1i.

1D sinusoidal grating in curvilinear coordinates

The necessity of the magnetic sources in the above derivations is due to the previously developed curvilinear-coordinate Generalized Source Method with effective Generalized Metric Sources [21,18].
The core idea is to fit a grating corrugation profile with a suitable cuvilinear coordinate transformation (x 1 , x 2 , x 3 ) → (z 1 , z 2 , z 3 ) similar to what is done in the C-method [29], while the chosen curvilinear coordinates should continuously become Cartesian in a region near the grating. Fig. 3 demonstrates an example of a periodic corrugation with coordinate planes of a new system. This approach exploits the similarity between Maxwell's operators in the Cartesian and the curvilinear metric (see e.g. Chapter 8 of [1]), and a possibility to treat metric contributions as source terms [21]. Under this rationale the Generalized Sources derived from local metric variations read J α = −iωε b ε ε b √ gg αβ − δ αβ E β , M α = −iωµ b µ µ b √ gg αβ − δ αβ H β ,(24) where g αβ are local metric tensor components, g = 1/ det{g αβ }, and summation over the repeated indices is implied. These sources can be directly substituted into Eqs. (4) with functions Eqs. (6), (7) under direct replacement of Cartesian coordinates (x 1 , x 2 , x 3 ) with curvilinear ones (z 1 , z 2 , z 3 ). The approach yields matrices Ω that depend on local metric and medium properties [21]: Ω E,Hmn =   η E,H /( √ gg 33 ) mn − δ mn 0 [g 13 /g 33 ] mn 0 [η E,H √ g] mn − δ mn 0 [g 13 /g 33 ] mn 0 δ mn − 1/(η E,H √ gg 33 ) mn   . (25) with η E = ε/ε b , η H = µ/µ b . The latter two fractions are constant within each slice in curvilinear coordinates, and can be taken out of the Fourier matrices. 
S-matrices for the two polarizations then explicitly read: S (e,h) 11mn = ik b ∆z 3 2 k b k 3m exp[i(k 3m + k 3n )∆z 3 /2] η E,H [ √ g] mn − δ mn − k 3m k b η H,E 1 √ gg 33 mn − δ mn k 3n k b + k 1m k b δ mn − 1 η H,E 1 √ gg 33 mn k 1n k b + k 3m k b g 13 g 33 mn k 1n k b − k 1m k b g 31 g 33 mn k 3n k b (26a) S (e,h) 12mn = exp(ik 3m ∆z 3 ) + ik b ∆z 3 2 k b k 3m exp[i(k 3m + k 3n )∆z 3 /2] η E,H [ √ g] mn − δ mn + k 3m k b η H,E 1 √ gg 33 mn − δ mn k 3n k b + k 1m k b δ mn − 1 η H,E 1 √ gg 33 mn k 1n k b − k 3m k b g 13 g 33 mn k 1n k b − k 1m k b g 31 g 33 mn k 3n k b (26b) S (e,h) 21mn = exp(ik 3m ∆z 3 ) + ik b ∆z 3 2 k b k 3m exp[i(k 3m + k 3n )∆z 3 /2] η E,H [ √ g] mn − δ mn + k 3m k b η H,E 1 √ gg 33 mn − δ mn k 3n k b + k 1m k b δ mn − 1 η H,E 1 √ gg 33 mn k 1n k b + k 3m k b g 13 g 33 mn k 1n k b + k 1m k b g 31 g 33 mn k 3n k b (26c) S (e,h) 22mn = ik b ∆z 3 2 k b k 3m exp[i(k 3m + k 3n )∆z 3 /2] η E,H [ √ g] mn − δ mn − k 3m k b η H,E 1 √ gg 33 mn − δ mn k 3n k b + k 1m k b δ mn − 1 η H,E 1 √ gg 33 mn k 1n k b − k 3m k b g 13 g 33 mn k 1n k b + k 1m k b g 31 g 33 mn k 3n k b (26d) These equations have a symmetric form relative to a TE/TM polarization change contrary to the previous example, which is a consequence of the symmetry of the Generalized Metric sources in Eq. (24). In case of a sinusoidal corrugation the coordinate transformation is defined as z 1,2 = x 1,2 , and x 3 = z 3 + (1 − |z 3 |/b)a sin(Kx 1 ), where h = 2a is the corrugation depth, and −b ≤ x 3 ≤ b, b ≥ a is a region with curvilinear metric (see Fig. 3 for illustration and [21] for details). 
The components of Fourier matrices for sinusoidally corrugated gratings are found analytically: [ √ g] mn = δ m,n − 1 2 α sign(z 3 )(δ m,n+1 + δ m,n−1 ) 1 √ gg 33 mn = 1 + 2χ 2 − 1 √ 2χ 2k δ m−n,2k 1 + 2χ 2 − α sign(z 3 )δ m−n,2k+1 1 + 2χ 2 − 1 2χ 2 g 13 g 33 mn = i √ 2χ 1 + 2χ 2 − 1 √ 2χ 2k α sign(z 3 )δ m−n,2k − δ m−n,2k+1 1 + 2χ 2 − 1 1 + 2χ 2(27) with α = a/b and χ = (1 − |z 3 |/b) 2 (αKh) 2 /2. The two considered examples demonstrate the possibility to derive S-matrix components explicitly, though the resulting expressions can be rather bulky. Therefore, we restrict ourselves here to these two examples for gratings of bulk materials as S-matrix components in other various cases can be attained with aids of the vast literature on the Fourier methods and general equations of the previous section. The method has a second order polynomial convergence with regard to the slice number, and typical convergence plots are quite similar to those presented for the GSM in [17,21,18]. The convergence rate relative to the number of Fourier orders for profiled gratings and S-matrices written in Cartesian coordinates is polynomial, and for continuously differentiable profiles with curvilinear coordinate S-matrices is exponential, again, similarly to the GSM [17], and the GSMCC [21] respectively. Corrugated 2D material We proceed by modifying the results of the previous subsection and consider a layer of 2D material, e.g., graphene, on top of a corrugated substrate. Here we focus on 1D holographic gratings, whose profiles can be well approximated by sinusoidal functions introduced above. Electric currents in such materials depend only on tangential electric field components. 
Therefore, within the rationale of a curvilinear coordinate transformation, the relation between the current and the field via the surface conductivity σ_s writes as follows:

J^α = σ_s δ(z_3) (√g/g_{αα}) (E · e_α) = σ_s (√g/g_{αα}) δ(z_3) E^α,  α = 1, 2,   (28)

where upper and lower indices distinguish contravariant and covariant vector components, and e_α = (∂x^β/∂z^α) ê_β are tangent vectors to the curvilinear coordinate planes. Here we suppose the 2D material layer to coincide with the surface z_3 = 0. It is seen that the normalization factor √g/g_{αα} makes the effective conductivity in the curvilinear metric periodic even when the effect of the corrugation on the conductivity itself is neglected. A possible impact of the periodicity on σ_s [30] can also be included in the present method straightforwardly, though we do not account for it in the following examples. Substitution of the sources given by Eq. (28) into Eq. (12) yields the relations

ã^{e±}_m(±0) = ã^{ext,e±}_m(±0) − (ωµ_b/2k_{m3}) ê^{e±}_m · J_m(0),
ã^{h±}_m(±0) = ã^{ext,h±}_m(±0) − (ωµ_b/2k_{m3}) ê^{h±}_m · J_m(0).   (29)

There are two differences between the latter equations and Eqs. (13).

(ã^{h−}_m(−0), ã^{h+}_m(+0))^T = [δ_{mn} + (√g/g^{11})_{mn} ζ^h_n]^{−1} × [[(√g/g^{11})_{mn} ζ^h_n, δ_{mn}], [δ_{mn}, (√g/g^{11})_{mn} ζ^h_n]] (ã^{h+}_n(−0), ã^{h−}_n(+0))^T,   (30)

where ζ^e_m = σ_s ωµ_b/2k_{m3} and ζ^h_m = σ_s k_{m3}/2ωε_b. The inversion implies that the corresponding matrix should be composed of the truncated Fourier matrix {√g/g^{11}}_{m,n=1}^N and then inverted numerically. Eq. (30) consists of the known plane-wave reflection and transmission coefficients for the TE polarization [31]. Similarly, Eq. (31) for the TM polarization would reduce to such coefficients in the absence of the corrugation, when √g = g^{11} = 1. It is important to note that Eqs. (30), (31) should be applied together with the results of the previous subsection.

Resonant reflection by corrugated graphene

The derived S-matrices for 1D holographic gratings, Eqs. (26), and for a corrugated 2D material layer, Eqs.
(30) and (31) can be used to simulate the optical response of graphene sheets covering periodically structured substrates of arbitrary corrugation depth. The present method is superior to the approaches used in [8,9] since it does not rely on the Rayleigh hypothesis, and consequently does not suffer from the corresponding convergence issues inherent to Rayleigh methods. Consider a graphene monolayer on top of a corrugated Si substrate at room temperature. The silicon permittivity around 10 THz can be taken to be constant, approximately equal to 11.5. The graphene dispersion at room temperature [32] is described by the equation

σ(ω)/σ_0 = (8i/π) [T/(ω + iτ^{−1})] log[2 cosh(E_F/2T)] + H(ω/2) + (4iω/π) ∫_0^∞ dε [H(ε) − H(ω/2)]/(ω² − 4ε²),   (32)

where σ_0 = e²/4ℏ and H(ε) = sinh(ε/T)/[cosh(E_F/T) + cosh(ε/T)]. Here the Fermi energy depends on the applied gate voltage and can be tuned in a wide range. The relaxation rate τ^{−1} depends both on the quality of the graphene (which in turn depends on the fabrication process), on the quality of the substrate, and on the presence of a BN interlayer, so that τ can vary by more than an order of magnitude, 4·10^{−14} s ≲ τ ≲ 10^{−12} s (e.g., see the experimental results in [33,34,35]). For the further examples we assume τ = 10^{−13} s and E_F = 0.4 eV. Graphene sheets are known to support highly confined surface plasmon-polaritons in the terahertz band. The condition

f(ω, k_1) = σ(ω)/ω + ε_1(ω)/√(ω²ε_1(ω)µ_0 − k_1²) + ε_2(ω)/√(ω²ε_2(ω)µ_0 − k_1²) = 0   (33)

defines the dispersion of the TM surface plasmons in graphene [8]. Here ε_{1,2} are the permittivities of the homogeneous media below and above the 2D sheet, and k_1 is the projection of the wavevector on the sheet plane. The condition k_1 ≫ √(ω²εµ_0) provides a good analytical approximation in the case of low-absorbing substrates. In the case of a periodically corrugated sheet, a surface plasmon wave should be coupled with an evanescent grating order to attain a resonance condition [36]. Large values of k_1 require the use of small-period gratings.
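Equation (32) is straightforward to evaluate numerically. A sketch in natural units (ħ = k_B = 1, all energies in eV; the parameter values follow the text, the function names are ours, and the numerically stable form of H(ε) is an implementation detail, not from the paper):

```python
import numpy as np
from scipy.integrate import quad

def graphene_sigma(w, ef=0.4, temp=0.025, inv_tau=6.58e-3):
    """sigma(w)/sigma_0 from Eq. (32); w, ef, temp, inv_tau in eV.
    inv_tau = 6.58e-3 eV corresponds to tau ~ 1e-13 s."""
    def big_h(e):
        # H(e) = sinh(e/T) / [cosh(E_F/T) + cosh(e/T)], rewritten to
        # avoid overflow of sinh/cosh at large e
        x = e / temp
        ex = np.exp(-x)
        c = np.cosh(ef / temp)
        return (1.0 - ex * ex) / (2.0 * c * ex + 1.0 + ex * ex)

    intra = (8j / np.pi) * temp / (w + 1j * inv_tau) \
        * np.log(2.0 * np.cosh(ef / (2.0 * temp)))

    def integrand(e):
        d = w * w - 4.0 * e * e
        if abs(d) < 1e-14:       # removable singularity at e = w/2
            return 0.0
        return (big_h(e) - big_h(w / 2.0)) / d

    head, _ = quad(integrand, 0.0, 10.0 * ef, points=[w / 2.0], limit=200)
    tail, _ = quad(integrand, 10.0 * ef, np.inf, limit=200)
    return intra + big_h(w / 2.0) + (4j * w / np.pi) * (head + tail)

sigma = graphene_sigma(0.04)     # photon energy ~ 10 THz
```

For ω below 2E_F the response is dominated by the Drude-like intraband term, so Im σ is large and positive, which is what makes the TM plasmon of Eq. (33) possible.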
Fig. 4 demonstrates the reflection resonance for a sinusoidally corrugated silicon substrate with a graphene sheet on top of it, with grating period Λ = 0.8 µm and varying corrugation depth. Normal incidence is chosen. A single feasible resonance peak for small ratios h/Λ corresponds to the excitation of the surface plasmon-polariton by the first grating order. When the corrugation depth-to-period ratio increases, this peak redshifts (similar behaviour can be seen in Fig. 5 of [9]). The reason is the decrease of the local curvature radius, which affects the surface plasmon dispersion, as Eq. (33) is no longer rigorous for corrugated layers. Interestingly, this is opposite to the blueshift of surface plasmon-polaritons supported by curved metal-dielectric interfaces [37]. In addition to the redshift, secondary resonant peaks become significant. These peaks are due to plasmon coupling with higher grating orders. A further increase of the depth-to-period ratio results in the disappearance of significant resonances within the investigated frequency band. In experiments, silicon substrates are always covered by a layer of silicon dioxide whose thickness can vary from a few to hundreds of nanometers. SiO2 is generally a bad material for the terahertz band due to high resonant absorption (see the measurements of amorphous silica permittivity in [38]). The dispersion obtained in [38] can be used to estimate the influence of silica layers on the resonant curves demonstrated in Fig. 4. Fig. 5 shows such influence for a grating with h/Λ = 0.2 and different thicknesses h_SiO2 of the silica interlayer. Few-nanometer-thick layers slightly blueshift the resonance peak, whereas for h_SiO2 ∼ 100 nm the resonant absorption in silica itself around 14 THz prominently affects the reflection curve.

Conclusion

To sum up, we derived a general form of the S-matrix in the Fourier basis, Eq. Fig.
4, but the graphene sheet is in contact with an intermediate silica layer of varying thickness h_SiO2. to attain explicit analytical S-matrix components, as shown for 1D lamellar and sinusoidally corrugated gratings of bulk materials. S-matrices for other types of gratings, including complex 2D periodic diffractive optical elements and metasurfaces, can be further derived in a similar manner on the basis of all the previous results for the Fourier Modal Method and other Fourier-space methods (e.g., [22,23,24,25]), and there is no need to solve the eigenvalue problem in each slice [39]. Moreover, the derived analytical S-matrix components can be directly used for the resonant analysis of grating structures proposed in [40]. The obtained S-matrix components have second-order accuracy with respect to the spatial discretization of the grating layer, which is similar to other Fourier-space methods [41]. In addition, we expressed the S-matrix of a corrugated layer of a 2D material with a given conductivity, which can be useful for further research on the optical response of metasurfaces covered with such materials and of multilayer quasi-2D structures.

Here the differentiation is performed with respect to the vector r. The two-dimensional Fourier transform of the scalar Green's function writes explicitly (see, e.g., 3.876.1, 2 and 6.667.3, 4 in [42], or the Appendix in [14]):

∫ d²ρ′ exp(−iκρ′) exp(ik_b|r − r′|)/|r − r′| = (2πi/k_3) exp(−iκρ) exp(ik_3|x_3 − x_3′|),

where κ = (k_1, k_2)^T, ρ = (x_1, x_2)^T, and k_3² = k_b² − κ², k_3′ + k_3″ > 0. Substitution of the latter equation into Eq. (35), and performing the differentiation in Eq. (36), yield Eqs. (6) and (7). To arrive at the final relations one also needs the decomposition of the 3×3 unit matrix

I = (1/k_b²) [k^±(k^±)^T + ê^{e±}(ê^{e±})^T + ê^{h±}(ê^{h±})^T]

together with the transformation relations k^± × ê^{e±} = −k_b ê^{h±}, k^± × ê^{h±} = k_b ê^{e±}.
If ∆x_3 → 0, the integration in Eqs. (12) reduces to a multiplication of the integrands by ∆x_3. Let us denote the coordinate of the layer center as x_3^c = x_3 + ∆x_3/2. Once the incoming wave amplitudes at the layer boundaries are known, Eqs. (12) yield the diffracted amplitudes in the form (20). Due to the presence of matrix inversions in Eq. (20), the complexity of the algorithm can be estimated as O(N_F³ N_S), where N_F is the number of Fourier harmonics and N_S is the number of slices.

Figure 1: The convergence of the method based on the direct S-matrix calculation, Eqs. (

Figure 2: Same as in

Figure 3: Illustration of the Curvilinear Coordinate Generalized Source Method idea: Maxwell's equations are written in a curvilinear coordinate system (z_1, z_2, z_3) such that one of its coordinate planes exactly fits the corrugation profile, and these coordinates continuously become Cartesian (x_1, x_2, x_3) outside the vicinity of the grating. Dashed lines demonstrate the coordinate planes z_3 = const.

Due to the presence of the delta-function in Eq. (28), Eqs. (29) are exact, and terms proportional to the layer thickness ∆z_3 with the curvilinear sources of Eq. (24) are absent (these are the two differences from Eqs. (13)). The external field is continuous across the z_3 = 0 plane for both polarizations. The electric current at the layer location, J_m(0), depends on the continuous tangential electric field. In the case of a 1D corrugation along the x_1 direction and collinear diffraction with k_{m2} = 0, this tangential field reads Ẽ_{m2}(0) = ã^{e+}_m(±0) + ã^{e−}_m(±0) for the TE polarization, and Ẽ_{m1}(0) = (k_{3m}/k_b)[ã^{h−}_m(±0) − ã^{h+}_m(±0)] for the TM polarization. Note that in this case the field which contributes to the source cannot be substituted by the external field, as was done in the derivation of Eq. (18), and a self-consistent equation system should be solved.
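The slice-by-slice composition behind the O(N_F³ N_S) estimate is the standard Redheffer star product of block S-matrices. A generic sketch under our own block convention ([outgoing forward; outgoing backward] = S [incoming forward; incoming backward]); it is not tied to the specific matrices of this paper:

```python
import numpy as np

def star(sa, sb):
    """Redheffer star product of two block S-matrices, sa followed by sb.
    Each argument is a tuple (S11, S12, S21, S22) of equal-size blocks."""
    a11, a12, a21, a22 = sa
    b11, b12, b21, b22 = sb
    eye = np.eye(a11.shape[0])
    x = np.linalg.inv(eye - a12 @ b21)   # resums multiple reflections
    s11 = b11 @ x @ a11
    s12 = b12 + b11 @ x @ a12 @ b22
    s21 = a21 + a22 @ b21 @ x @ a11
    s22 = a22 @ (eye + b21 @ x @ a12) @ b22
    return s11, s12, s21, s22

# two scalar (1x1) "slices" with reflection r and transmission t
sa = tuple(np.array([[v]]) for v in (0.9, 0.2, 0.2, 0.9))
sb = tuple(np.array([[v]]) for v in (0.7, 0.3, 0.3, 0.7))
s11, s12, s21, s22 = star(sa, sb)
```

The composite transmission reproduces the familiar Fabry-Perot sum t_B t_A/(1 − r_A r_B); the cubic cost of the inversion, times the number of slices, gives the quoted O(N_F³ N_S) scaling.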
This yields the following S-matrix relations for the two polarizations:

Figure 4: Dependence of the zero-order reflection efficiency on the frequency for a normally incident plane wave and a corrugated graphene sheet placed on top of a silicon substrate with corrugation period 0.8 µm and varying corrugation depth.

Figure 5: Similar to

Acknowledgements

The work was supported by the Russian Scientific Foundation, Grant No. 17-79-20345.

Appendix

This Appendix provides a sketch of the derivation of Eqs. (6), (7). Due to the linearity of the excitations, the Helmholtz Eqs. (3) can be considered separately for either the electric or the magnetic sources. Suppose M = 0, and pass to the scalar and vector potentials ϕ and A. Under the Lorentz gauge condition ωε_b µ_b ϕ + i∇A = 0, Maxwell's equations reduce to the Helmholtz equation for the vector potential, ∆A + k_b² A = −µ_b J, whose solution writes via the scalar Green's function g_0. Combining Eqs. (34), (35), the gauge condition, and comparing with Eqs. (4), one can relate the tensor Green's functions to the scalar one:

References

[1] E. Popov, ed., Gratings: Theory and Numeric Applications. Institut Fresnel, AMU, 2012.
[2] D. Y. K. Ko and J. C. Inkson, "Matrix method for tunneling in heterostructures: Resonant tunneling in multilayer systems," Phys. Rev. B, vol. 38, pp. 9945-9951, 1988.
[3] N. P. K. Cotter, T. W. Preist, and J. R. Sambles, "Scattering-matrix approach to multilayer diffraction," J. Opt. Soc. Am. A, vol. 12, pp. 1097-1103, 1995.
[4] L.
Li, "Formulation and comparison of two recursive matrix algorithms for modeling layered diffraction gratings," J. Opt. Soc. Am. A, vol. 13, pp. 1024-1035, 1996.
[5] N. A. Gippius and S. G. Tikhodeev, "The scattering matrix and optical properties of metamaterials," Phys. Usp., vol. 52, pp. 967-971, 2009.
[6] M. G. Moharam, D. A. Pommet, and E. B. Grann, "Stable implementation of the rigorous coupled-wave analysis for surface-relief gratings: enhanced transmittance matrix approach," J. Opt. Soc. Am. A, vol. 12, pp. 1077-1086, 1995.
[7] J. E. Sipe, "New Green-function formalism for surface optics," J. Opt. Soc. Am. B, vol. 4, pp. 481-489, 1987.
[8] Y. V. Bludov, A. Ferreira, N. M. R. Peres, and M. I. Vasilevskiy, "A primer of surface plasmon-polaritons in graphene," Int. J. Mod. Phys. B, vol. 27, pp. 1341001-74, 2013.
[9] T. M. Slipchenko, M. L. Nesterov, L. Martin-Moreno, and A. Y. Nikitin, "Analytical solution for the diffraction of an electromagnetic wave by a graphene grating," J. Opt., vol. 15, pp. 114008-11, 2013.
[10] J. Wauer and T. Rother, "Considerations to Rayleigh's hypothesis," Opt. Comm., vol. 282, pp. 339-350, 2009.
[11] A. V.
Tishchenko, "Rayleigh was right: Electromagnetic fields and corrugated interfaces," Opt. Photonic News, vol. 21, pp. 50-54, 2010.
[12] L. B. Felsen and N. Marcuvitz, Radiation and Scattering of Waves, Ch. 1. Prentice-Hall, 1972.
[13] R. Petit, ed., Electromagnetic Theory of Gratings, Ch. 5. Springer-Verlag, 1980.
[14] B. A. Munk and G. A. Burrel, "Plane wave expansion for arrays of arbitrarily oriented piecewise linear elements and its application in determining the impedance of a single linear antenna in a lossy half-space," IEEE Trans. Antennas Propagat., vol. AP-27, pp. 331-343, 1979.
[15] M. S. Tomaŝ, "Green function for multilayers: Light scattering in planar cavities," Phys. Rev. A, vol. 51, pp. 2545-2559, 1995.
[16] B. Amorim, P. A. D. Goncalves, M. I. Vasilevskiy, and N. M. R. Peres, "Impact of graphene on the polarizability of a neighbour nanoparticle: A dyadic Green's function study," Appl. Sci., vol. 7, p. 1158, 2017.
[17] A. A. Shcherbakov and A. V. Tishchenko, "New fast and memory-sparing method for rigorous electromagnetic analysis of 2D periodic dielectric structures," J. Quantitative Spectrosc. Radiat. Transfer, vol. 113, pp. 158-171, 2012.
[18] A. A. Shcherbakov and A. V. Tishchenko, "Generalized source method in curvilinear coordinates for 2D grating diffraction simulation," J. Quantitative Spectrosc. Radiat. Transfer, vol. 187, pp. 76-96, 2017.
[19] O. J. F. Martin and N. B. Piller, "Electromagnetic scattering in polarizable backgrounds," Phys. Rev. E, vol. 58, pp. 3909-3915, 1998.
[20] L. Li, "Use of Fourier series in the analysis of discontinuous periodic structures," J. Opt. Soc. Am. A, vol. 13, pp. 1870-1876, 1996.
[21] A. A. Shcherbakov and A. V. Tishchenko, "Efficient curvilinear coordinate method for grating diffraction simulation," Opt. Express, vol. 21, pp. 25236-25247, 2013.
[22] E. Popov and M. Nevière, "Maxwell equations in Fourier space: fast-converging formulation for diffraction by arbitrary shaped, periodic, anisotropic media," J. Opt. Soc. Am. A, vol. 18, pp. 2886-2894, 2001.
[23] G. Granet and J.-P. Plumey, "Parametric formulation of the Fourier modal method for crossed surface-relief gratings," J. Opt. A: Pure Appl. Opt., vol. 4, pp. S145-S149, 2002.
[24] T. Schuster, J. Ruoff, N. Kerwien, S. Rafler, and W. Osten, "Normal vector method for convergence improvement using the RCWA for crossed gratings," J. Opt. Soc. Am. A, vol. 24, pp. 2880-2890, 2007.
[25] M. C. van Beurden and I. D. Setija, "Local normal vector field formulation for periodic scattering problems formulated in the spectral domain," J. Opt. Soc. Am. A, vol. 34, pp. 224-233, 2017.
[26] G. Granet and B. Guizal, "Efficient implementation of the coupled-wave method for metallic lamellar gratings in TM polarization," J. Opt. Soc. Am. A, vol. 13, pp. 1019-1023, 1996.
[27] P. Lalanne and G. M. Morris, "Highly improved convergence of the coupled-wave method for TM polarization," J. Opt. Soc. Am. A, vol. 13, pp. 779-784, 1996.
[28] I. Gushchin and A. V. Tishchenko, "Fourier modal method for relief gratings with oblique boundary conditions," J. Opt. Soc. Am. A, vol. 27, pp. 1575-1583, 2010.
[29] J. Chandezon, D. Maystre, and G. Raoult, "A new theoretical method for diffraction gratings and its numerical application," J. Opt. Paris, vol. 11, p. 235, 1980.
[30] A. Isacsson, L. M. Jonsson, J. M. Kinaret, and M. Jonson, "Electronic superlattices in corrugated graphene," Phys. Rev. B, vol. 77, pp. 035423-6, 2008.
[31] T. Stauber, N. M. R. Peres, and A. K. Geim, "Optical conductivity of graphene in the visible region of the spectrum," Phys. Rev. B, vol. 78, pp. 085432-8, 2008.
[32] L. A. Falkovsky and S. S. Pershoguba, "Optical far-infrared properties of a graphene monolayer and multilayer," Phys. Rev. B, vol. 76, pp. 153410-4, 2007.
[33] X. Li, W. Cai, J. An, S. Kim, J. Nah, D. Yang, R. Piner, A. Velamakanni, I. Jung, E. Tutuc, S. K. Banerjee, L. Colombo, and R. S. Ruoff, "Large-area synthesis of high-quality and uniform graphene films on copper foils," Science, vol. 324, pp. 1312-1314, 2009.
[34] C. R. Dean, A. F. Young, I. Meric, C. Lee, L. Wang, S. Sorgenfrei, K. Watanabe, T. Taniguchi, P. Kim, K. L. Shepard, and J. Hone, "Boron nitride substrates for high-quality graphene electronics," Nature Nanotechnol., vol. 5, pp. 722-726, 2010.
[35] A. S. Mayorov, R. V. Gorbachev, S. V. Morozov, L. Britnell, R. Jalil, L. A. Ponomarenko, P. Blake, K. S. Novoselov, K. Watanabe, T. Taniguchi, and A. K. Geim, "Micrometer-scale ballistic transport in encapsulated graphene at room temperature," Nano Lett., vol. 11, pp. 2396-2399, 2011.
[36] I. Avrutsky, Y. Zhao, and V. Kochergin, "Surface-plasmon-assisted resonant tunneling of light through a periodically corrugated thin metal film," Opt. Lett., vol. 25, pp. 595-597, 2000.
[37] H. Lee and D. Kim, "Curvature effects on flexible surface plasmon resonance biosensing: segmented-wave analysis," Opt. Express, vol. 24, pp. 11994-12006, 2016.
[38] S. I. Popova, T. S. Tolstykh, and V. T. Vorobev, "Optical characteristics of amorphous quartz in the 1400-200 cm⁻¹ region," Opt. Spectrosc., vol. 33, pp. 444-445, 1972.
[39] M. Liscidini, D. Gerace, L. C. Andreani, and J. E. Sipe, "Scattering-matrix analysis of periodically patterned multilayers with asymmetric unit cells and birefringent media," Phys. Rev. B, vol. 77, pp. 035324-11, 2008.
[40] D. A. Bykov and L. L. Doskolovich, "Numerical methods for calculating poles of the scattering matrix with applications in grating theory," J. Lightwave Technol., vol. 31, pp. 793-801, 2013.
[41] E. Popov, M. Nevière, B. Gralak, and G. Tayeb, "Staircase approximation validity for arbitrary-shaped gratings," J. Opt. Soc. Am. A, vol. 19, pp. 33-42, 2002.
[42] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products. Academic Press, 2007.
Perturbative fragmentation of vector colored particle into bound states with a heavy antiquark
V. V. Kiselev and A. E. Kovalsky
Russian State Research Center "Institute for High Energy Physics", Protvino, Moscow Region, 142284 Russia
The fragmentation function of a vector particle into possible bound S-wave states with a heavy antiquark is calculated in the leading order of perturbative QCD for high energy processes at large transverse momenta, with different behaviours of the anomalous chromomagnetic moment. One-loop equations are derived for the evolution of the fragmentation function moments, which is caused by the emission of hard gluons by the vector particle. The integrated probabilities of fragmentation are given. The distribution of the bound state over the transverse momentum with respect to the axis of fragmentation is calculated in the scaling limit.
10.1134/1.1312899
https://arxiv.org/pdf/hep-ph/9908321v1.pdf
hep-ph/9908321
11 Aug 1999

Introduction

An interesting problem concerning properties of interaction beyond the Standard Model is to study the production of hadrons containing leptoquarks [1], which are scalar and vector color-triplet particles appearing in Grand Unification Theories, provided that their total width is much less than the QCD-confinement scale, Γ_LQ ≪ Λ_QCD. Recently the production of (qLQ)-baryons in the case of a scalar leptoquark was discussed [2]. In this work we study the high energy production of heavy leptoquarkonium containing a vector leptoquark. These results can be used to calculate the fragmentation of vector local diquarks into baryons (a similar approach was applied to the production of Ω_ccc in [3]). For the sake of convenience the local color-triplet vector field will be referred to as the leptoquark in this paper. A new problem arising in this case is the choice of the lagrangian for the vector leptoquark interaction with gluons.
Indeed, to the lagrangian of a free vector field, −(1/2)H_µν H̄^µν, where H_µν = ∂_µU_ν − ∂_νU_µ and U_µ is the complex vector field, with derivatives substituted by covariant ones, we can add a gauge invariant term proportional to S^{αβ}_{µν} G^{µν} U_β Ū_α, where S^{αβ}_{µν} = (1/2)(δ^α_µ δ^β_ν − δ^α_ν δ^β_µ) is the tensor of spin and G_µν is the gluon field strength tensor. This leads to the appearance of a parameter in the gluon-leptoquark vertex (the so-called anomalous chromomagnetic moment, see Section 2). In this work we discuss the production of a spin-1/2 bound state containing the heavy vector particle at various values of this parameter. At high transverse momenta, the dominant production mechanism for the heavy leptoquarkonium bound states is the leptoquark fragmentation, which can be calculated in perturbative QCD [4] after the isolation of the soft binding factor extracted from the non-relativistic potential models [5,6]. The corresponding fragmentation function is universal for any high energy process of direct leptoquarkonium production. In the leading α_s-order, the fragmentation function has a scaling form, which is the initial condition for the perturbative QCD evolution caused by the emission of hard gluons by the leptoquark before the hadronization. The corresponding splitting function differs from that for the heavy quark because of the spin structure of the gluon coupling to the leptoquark. In this work the scaling fragmentation function is calculated in Section 2 for two different cases of the anomalous chromomagnetic moment behaviour. The limit of an infinitely heavy leptoquark, m_LQ → ∞, is obtained from the full QCD consideration of the fragmentation. The distribution of the bound state over the transverse momentum with respect to the axis of fragmentation is calculated in the scaling limit in Section 3.
The splitting kernel of the DGLAP evolution is derived in Section 4, where the one-loop equations of the renormalization group for the moments of the fragmentation function are obtained and solved. These equations are universal, since they do not depend on whether the leptoquark is bound or free at low virtualities, where the perturbative evolution stops. The integrated probabilities of leptoquark fragmentation into the heavy leptoquarkonia are evaluated in Section 5. The results are summarized in the Conclusion.

Fragmentation function in leading order

The contribution of fragmentation to the direct production of heavy leptoquarkonium has the form

dσ[l_H(p)] = ∫_0^1 dz dσ̂[LQ(p/z), µ] D_{LQ→l_H}(z, µ),

where dσ is the differential cross-section for the production of the leptoquarkonium with 4-momentum p, dσ̂ is that of the hard production of the leptoquark with the scaled momentum p/z, and D is interpreted as the fragmentation function depending on the fraction of momentum carried by the bound state. The value of µ determines the factorization scale. In accordance with the general DGLAP evolution, the µ-dependent fragmentation function satisfies the equation

∂D_{LQ→l_H}(z, µ)/∂ ln µ = ∫_z^1 (dy/y) P_{LQ→LQ}(z/y, µ) D_{LQ→l_H}(y, µ),   (1)

where P is the kernel caused by the emission of hard gluons by the leptoquark before the production of the heavy quark pair. Therefore, the initial form of the fragmentation function is determined by the diagram shown in Fig. 1, and, hence, the corresponding initial factorization scale is equal to µ = 2m_Q. Furthermore, this function can be calculated as an expansion in α_s(2m_Q). The leading order contribution is evaluated in this Section.
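The right-hand side of Eq. (1) is a Mellin convolution. A toy numerical sketch of a single evolution step (the kernel and initial function below are placeholders chosen for an easy analytic check, not the paper's):

```python
import numpy as np

def dglap_rhs(splitting, frag, z, n=2001):
    """Evaluate int_z^1 dy/y P(z/y) D(y), the r.h.s. of the DGLAP-type
    evolution equation (1), with the trapezoidal rule."""
    y = np.linspace(z, 1.0, n)
    f = splitting(z / y) * frag(y) / y
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(y))

# check: with P = 1 and D(y) = y the integrand is 1, so the result is 1 - z
val = dglap_rhs(lambda x: np.ones_like(x), lambda y: y, 0.3)
```

In a realistic evolution this convolution is iterated in ln µ with the plus-regularized kernel of Section 4.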
Consider the fragmentation diagram in the system where the momentum of the initial leptoquark has the form q = (q_0, 0, 0, q_3) and the leptoquarkonium one is p, so that q² = s, p² = M². In the static approximation for the bound state of the leptoquark and the heavy quark, the quark mass is expressed as m_Q = rM, and the leptoquark mass equals m = (1 − r)M = r̄M. The gluon-vector-leptoquark vertex has the form

T^{VVg}_{αµν} = −i g_s t^a [g_{µν}(q + r̄p)_α − g_{µα}((1 + ae) r̄p − ae q)_ν − g_{να}((1 + ae) q − ae r̄p)_µ],   (2)

where ae is the anomalous chromomagnetic moment and t^a is the QCD group generator. The sum over the vector leptoquark polarizations with the momentum q (q² = s) depends on the choice of the gauge of the free-field lagrangian (for example, the Stueckelberg gauge), but the fragmentation function is a physical quantity and must not depend on the gauge parameter changing the contribution of the longitudinal components of the vector field. So the sum over polarizations can be taken in the form

P(q)_{µν} = −g_{µν} + q_µ q_ν/s.

The matrix element of the fragmentation into the baryon with spin 1/2 has the form

M = −(2√2 πα_s/(9√M³)) (R(0)/(r r̄(s − m²)²)) P(q)_{νδ} P(r̄p)_{µη} T^{VVg}_{αµν} ρ^{αβ} q̄ γ_β (p − M) γ_η γ_5 l_H M^δ_0,   (3)

where the sum over the gluon polarizations is written in the axial gauge with n = (1, 0, 0, −1),

ρ_{µν}(k) = −g_{µν} + (k_µ n_ν + k_ν n_µ)/(k·n),

with k = q − (1 − r)p. The spinors l_H and q̄ correspond to the leptoquarkonium and the heavy quark associated with the fragmentation, M_0 denotes the matrix element for the hard production of the leptoquark at high energy, and R(0) is the radial wave-function at the origin. The matrix element squared and summed over the helicities of the particles in the final state has the form

|M|² = W_{µν} M^µ_0 M̄^ν_0.

In the limit of high energies, q·n → ∞, W_{µν} behaves like

W_{µν} = −g_{µν} W + R_{µν},   (4)

where R_{µν} can depend on the gauge parameters and leads to scalar formfactor terms, which are small in comparison with W in the limit q·n → ∞.
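The axial-gauge polarization sum ρ_µν(k) entering Eq. (3) satisfies n^µ ρ_µν = 0 identically when n is lightlike; a quick numerical check of this property (our own sketch, with an arbitrary test momentum k):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric

def axial_sum(k, n):
    """rho_{mu nu}(k) = -g_{mu nu} + (k_mu n_nu + k_nu n_mu)/(k.n),
    with all indices lowered."""
    k_lo, n_lo = g @ k, g @ n
    kn = k @ g @ n
    return -g + (np.outer(k_lo, n_lo) + np.outer(n_lo, k_lo)) / kn

n = np.array([1.0, 0.0, 0.0, -1.0])           # lightlike gauge vector, n.n = 0
k = np.array([2.0, 0.3, 0.4, 1.0])            # arbitrary gluon momentum
rho = axial_sum(k, n)
transversality = n @ rho                      # n^mu rho_{mu nu}, should vanish
```

Since n·n = 0, the n_ν terms cancel exactly, which is the statement that unphysical gluon polarizations drop out in this gauge.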
Define z = p·n/(q·n). The fragmentation function is determined by the expression [7]

D(z) = (1/16π²) ∫ ds θ(s − M²/z − m_Q²/(1 − z)) W,

where W is defined in (4). The integral in the expression for the fragmentation function diverges logarithmically at a constant value of the anomalous chromomagnetic moment unless κ = −1. In this work we consider two sets for the behaviour of the anomalous chromomagnetic moment.

Set I is defined by κ = −1. Here we observe that the obtained fragmentation function coincides with that for the scalar leptoquark [3] up to the factor of 1/3:

D(z) = (8α_s²/243π) (|R(0)|²/(M³ r² r̄²)) z²(1 − z)²/[1 − r̄z]⁶ · [3 + 3r² − (6 − 10r + 2r² + 2r³)z + (3 − 10r + 14r² − 10r³ + 3r⁴)z²],   (5)

which tends to

D̃(y) = (8α_s²/243π) (|R(0)|²/m_Q³) (y − 1)²/(r y⁶) (8 + 4y + 3y²)   (6)

at r → 0 and y = (1 − (1 − r)z)/(rz). The limit of D̃(y) is in agreement with the general consideration of the 1/m-expansion for the fragmentation function [8], where

D̃(y) = a(y)/r + b(y).

The dependence on y in a(y) is the same as for the fragmentation of a heavy quark into quarkonium [7].

Set II: κ behaves like −1 + AM²/(s − m_LQ²). The obtained fragmentation function is

D(z) = (8α_s²/243π) (|R(0)|²/(16 M³ r² r̄²)) z²(1 − z)²/[1 − r̄z]⁶ · {16(3 + 3r² − 6z + 10rz − 2r²z − 2r³z + 3z² − 10z²r + 14z²r² − 10z²r³ + 3z²r⁴) + A(3A + 24r − 6zA − 2rzA − 32rz − 16r²z + 3z²A + 2z²rA + 8z²r + 3z²r²A − 32z²r² + 24z²r³)}.   (7)

The perturbative functions in the leading α_s order are shown in Fig. 2 at r = 0.02. We see hard distributions, which become softer with the evolution (see ref. [2]).

Transverse momentum of leptoquarkonium

In the system with an infinite momentum of the fragmenting leptoquark, its invariant mass is expressed through the fraction z of the longitudinal leptoquark momentum and the transverse momentum p_T with respect to the fragmentation axis (see Fig.
1) as

s = m² + (M²/(z(1 − z))) [(1 − (1 − r)z)² + t²],

where t = p_T/M. The calculation of the diagram in Fig. 1 gives the double distribution

d²P/(ds dz) = D(z, s),

where the Set I function D has the form

D(z, s) = (256α_s²/81π) (|R(0)|²/(r² r̄² M³)) [1 − r̄z]²/(s − m²)⁴ · { r r̄² + r̄(1 + r − z(1 + 4r − r²)) (s − m²)/M² − z(1 − z) [(s − m²)/M²]² }.   (9)

For Set II, we find

D(z, s) = (8α_s²/81π) (|R(0)|²/(r² r̄² M³)) [1 − r̄z]²/(s − m²)⁴ · { 8r(A − 4 + 4r)²(1 − z + rz)² + 2(−A − 4 − 4r + zA + 4z − rzA + 16rz − 4r²z)(1 − z + rz)(A − 4 + 4r) (s − m²)/M² − 32(1 − z)z [(s − m²)/M²]² }.   (10)

The distribution of the leptoquarkonium over the transverse momentum can be obtained by integration over z:

D(t) = ∫₀¹ dz D(z, s) · 2M² t/(z(1 − z)).

For Set I we have

D(t) = (64α_s²/81π) (|R(0)|²/(3(1 − r)⁵ M³)) (1/t⁶) { t(30r³ − 30r⁴ − 61t²r + 45r²t² + 33r³t² − 17r⁴t² + 3t⁴ − 9rt⁴ + 15r²t⁴ − 9r³t⁴) − (30r⁴ − 99r²t² − 54r³t² + 27r⁴t² + 9t⁴ + 18rt⁴ − 6r²t⁴ + 18r³t⁴ + 3r⁴t⁴ + 3t⁶ − 6rt⁶ + 9r²t⁶) arctan[(1 − r)t/(r + t²)] + 24(2r³t + rt³ + r²t³) ln[r²(1 + t²)/(r² + t²)] }.   (11)

The distribution for Set II is given in the Appendix. The typical form of the distribution over the transverse momentum is shown in Fig. 3.

Hard gluon emission

The one-loop contribution can be calculated in the way described in the previous Sections. This term does not depend on the part of the leptoquark–gluon vertex with the anomalous chromomagnetic moment; therefore, the splitting kernel coincides with that for the scalar leptoquark. It equals

P_{LQ→LQ}(x, µ) = (4α_s(µ)/3π) [2x/(1 − x)]₊,   (12)

where the "plus" denotes the ordinary action:

∫₀¹ dx f₊(x) g(x) = ∫₀¹ dx f(x) [g(x) − g(1)].

The scalar-leptoquark splitting function can be compared with that of the heavy quark,

P_{Q→Q}(x, µ) = (4α_s(µ)/3π) [(1 + x²)/(1 − x)]₊,

which has the same normalization factor at x → 1.
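The "plus" prescription can be checked numerically: by definition the regularized integral of f₊ against a smooth g is ∫₀¹ f(x)[g(x) − g(1)]dx, which is finite even though f(x) = 2x/(1 − x) diverges at x → 1. A minimal sketch in Python (the overall factor 4α_s/3π is dropped, and the test function g(x) = x² is our own choice):

```python
# Plus-prescription: int_0^1 f_+(x) g(x) dx = int_0^1 f(x) [g(x) - g(1)] dx
def plus_integral(f, g, n=200000):
    # midpoint rule; the subtracted integrand is finite as x -> 1
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += f(x) * (g(x) - g(1.0)) * h
    return total

f = lambda x: 2.0 * x / (1.0 - x)   # leptoquark splitting kernel, prefactor dropped
g = lambda x: x * x                 # smooth test function

# analytically 2x(x^2 - 1)/(1 - x) = -2x(1 + x), whose integral is -5/3
print(plus_integral(f, g))  # ≈ -1.6667
```

The subtraction g(x) − g(1) cancels the 1/(1 − x) pole exactly, so an ordinary quadrature rule converges without any cutoff.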
Furthermore, multiplying the evolution equation by z^n and integrating over z, one gets from (1) the µ-dependence of the moments a^(n) with one-loop renormalization-group accuracy:

∂a^(n)/∂ ln µ = −(8α_s(µ)/3π) (1/2 + … + 1/(n + 1)) a^(n),  n ≥ 1.   (13)

At n = 0 the right-hand side of (13) equals zero, which means that the integrated probability of leptoquark fragmentation into the heavy leptoquarkonium is not changed during the evolution; it is determined by the initial fragmentation function calculated perturbatively in Section 2. The solution of equation (13) has the form

a^(n)(µ) = a^(n)(µ₀) [α_s(µ)/α_s(µ₀)]^{(16/3β₀)(1/2 + … + 1/(n+1))},   (14)

where one has used the one-loop expression for the QCD coupling constant,

α_s(µ) = 2π/(β₀ ln(µ/Λ_QCD)),

where β₀ = 11 − 2n_f/3, with n_f being the number of quark flavours with m_q < µ < m_LQ. Relation (14) is a universal one, since it is independent of whether the leptoquark is free or bound at virtualities less than µ₀. We can use the evolution for the fragmentation into the heavy leptoquarkonium. The leptoquark can lose about 20% of its momentum before the hadronization [2].

Integrated probabilities of fragmentation

As mentioned above, the evolution conserves the integrated probability of fragmentation, which can be calculated explicitly:

∫ dz D(z) = (8α_s²/81π) (|R(0)|²/(16 m_Q³)) w(r).   (15)

For Set I, we have

w(r) = 16[(8 + 15r − 60r² + 100r³ − 60r⁴ − 3r⁵) + 30r(1 − r + r² + r³) ln r] / [15(1 − r)⁷].   (16)

For Set II, we find (A = 3)

w(r) = [(143 + 701r − 1882r² + 3250r³ − 3245r⁴ + 2017r⁵ − 936r⁶ − 48r⁷) + 30r(25 − 21r + 43r² + r³ + 8r⁴ + 16r⁵) ln r] / [15(1 − r)⁹].   (17)

The w(r) functions are shown in Fig. 4 at low r.
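These closed forms lend themselves to quick numerical sanity checks. The sketch below (Python; the overall constant 8α_s²|R(0)|²/(243πM³r²r̄²) is set to 1, which is our simplification) verifies that the Set I fragmentation function (5) vanishes at the endpoints z = 0, 1 and is positive in between, and that the Set I weight w(r) of (16) approaches 16·8/15 = 128/15 as r → 0, since the r ln r term dies off:

```python
import math

def D_setI(z, r):
    """Set I fragmentation function (5), overall constant factor dropped."""
    rb = 1.0 - r  # \bar r
    poly = (3 + 3*r**2 - (6 - 10*r + 2*r**2 + 2*r**3)*z
            + (3 - 10*r + 14*r**2 - 10*r**3 + 3*r**4)*z**2)
    return z**2 * (1 - z)**2 / (1 - rb*z)**6 * poly

def w_setI(r):
    """Integrated-probability weight (16) for Set I."""
    return 16.0 * ((8 + 15*r - 60*r**2 + 100*r**3 - 60*r**4 - 3*r**5)
                   + 30*r*(1 - r + r**2 + r**3)*math.log(r)) / (15.0*(1 - r)**7)

print(D_setI(0.0, 0.02), D_setI(1.0, 0.02))  # both 0: the z^2 (1-z)^2 prefactor
print(D_setI(0.5, 0.02) > 0)                 # True
print(w_setI(1e-8), 128.0/15.0)              # w(r) -> 128/15 as r -> 0
```

The small-r limit of w(r) is a useful regression check when transcribing the polynomial coefficients of (16).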
Conclusion

In this work the dominant mechanism for the production of spin-1/2 bound states of a local colour-triplet vector field with a heavy antiquark has been considered for high-energy processes at large transverse momenta, where fragmentation contributes as the leading term. We investigate two sets for the behaviour of the anomalous chromomagnetic moment. Set I is defined by κ = −1 (the expression for the fragmentation function diverges logarithmically at a constant value of the anomalous chromomagnetic moment unless κ = −1). Here we observe that the obtained fragmentation function coincides with that for the scalar leptoquark up to the factor of 1/3:

D(z) = (8α_s²/243π) (|R(0)|²/(M³ r² r̄²)) z²(1 − z)²/[1 − r̄z]⁶ · [3 + 3r² − (6 − 10r + 2r² + 2r³)z + (3 − 10r + 14r² − 10r³ + 3r⁴)z²],   (18)

where r is the ratio of the heavy quark mass to the mass of the bound state. In the infinitely heavy leptoquark limit, D(z) takes a form that agrees with what is expected from the general consideration of the 1/m-expansion for the fragmentation function. Set II is defined by κ = −1 + AM²/(s − m_LQ²). The fragmentation function for Set II differs from that for Set I:

D(z) = (8α_s²/243π) (|R(0)|²/(16 M³ r² r̄²)) z²(1 − z)²/[1 − r̄z]⁶ · {16(3 + 3r² − 6z + 10rz − 2r²z − 2r³z + 3z² − 10z²r + 14z²r² − 10z²r³ + 3z²r⁴) + A(3A + 24r − 6zA − 2rzA − 32rz − 16r²z + 3z²A + 2z²rA + 8z²r + 3z²r²A − 32z²r² + 24z²r³)}.   (19)

The distribution of the bound state over the transverse momentum with respect to the axis of fragmentation is calculated in the scaling limit. The corresponding distribution functions are given by expression (11) for Set I and by the expression in the Appendix for Set II. The hard-gluon corrections caused by the splitting of the vector leptoquark are taken into account, so that the evolution kernel has the form

P_{LQ→LQ}(x, µ) = (4α_s(µ)/3π) [2x/(1 − x)]₊,   (20)

which results in the corresponding one-loop equations for the moments of the fragmentation function (see eqs. (13), (14)). The numerical estimates show that the probabilities of fragmentation into bound states with c- and b-quarks of a heavy vector leptoquark with mass about 400 GeV are of the order of 10^−(3−4). This suppression makes the experimental observation of such states rather difficult. However, we can use the perturbative expressions as a model for fragmentation into doubly heavy baryons, where the integrated probabilities are of the order of 10^−(1−2). The results will be used elsewhere to investigate the fragmentation of doubly heavy vector diquarks into baryons.

Figure 1: The diagram of leptoquark fragmentation into the heavy leptoquarkonium.

Figure 2: The fragmentation function of the leptoquark into the heavy leptoquarkonium; the N-factor is determined by N = 3/(r²(1 − r)²). The fragmentation function for Set I is shown by the dashed line, that for Set II (A = 3) by the solid line, at r = 0.02.

Figure 3: The distributions over the transverse momentum with respect to the axis of fragmentation; the N_t-factor is determined by N_t = 4/(r²(1 − r)⁷). The dashed line represents the result for Set I, the solid line that for Set II.

Figure 4: The w-functions for the leptoquark fragmentation into the heavy leptoquarkonium versus the fraction r = m_Q/M. The curves correspond to the Sets as in Fig. 3.

In paper [2] an arithmetic error was made, which slightly affects the final result at small r.

Appendix

The expression for the distribution over the transverse momentum for Set II (κ = −1 + 3M²/(s − m_LQ²)) is given by

This work was in part supported by the Russian Foundation for Basic Research, grants 99-02-16558 and 96-15-96575.

[1] W. Buchmüller, R. Rückl, D. Wyler, Phys. Lett. B191, 442 (1987).
[2] V.V. Kiselev, Phys.
Rev. D58, 054008 (1998); V.V. Kiselev, Phys. Atom. Nucl. 62, 300 (1999) [Yad. Fiz. 62, 335 (1999)].
[3] V.A. Saleev, hep-ph/9906515 (1999).
[4] E. Braaten, S. Fleming, T.C. Yuan, Ann. Rev. Nucl. Part. Sci. 46, 197 (1997).
[5] A. Martin, Phys. Lett. 93B, 338 (1980).
[6] W. Buchmüller, C.H.H. Tye, Phys. Rev. D24, 132 (1981).
[7] E. Braaten, T.C. Yuan, Phys. Rev. Lett. 71, 1673 (1993); E. Braaten, K. Cheung, T.C. Yuan, Phys. Rev. D48, 4230 (1993); E. Braaten, K. Cheung, T.C. Yuan, Phys. Rev. D48, 5049 (1993).
[8] R.L. Jaffe, L. Randall, Nucl. Phys. B415, 79 (1994).
[]
[ "Dark Universe and distribution of Matter as Quantum Imprinting: the Quantum Origin of Universe", "Dark Universe and distribution of Matter as Quantum Imprinting: the Quantum Origin of Universe" ]
[ "Ignazio Licata \nISEM\nInstitute for Scientific Methodology\nVia Ugo La Malfa n. 153 90146PalermoItaly\n", "Gerardo Iovane \nISEM\nInstitute for Scientific Methodology\nVia Ugo La Malfa n. 153 90146PalermoItaly\n\nDepartment of Computer Science\nUniversity of Salerno\nVia Giovanni Paolo II132, 84084Fisciano (Sa)Italy\n", "Leonardo Chiatti \nAUSL VT Medical Physics Laboratory\nVia Enrico Fermi 15, Lorentzstraße 1901100, 76135Viterbo, Karlsruhe, Karlsruhe(Italy) e ZKMGermany\n", "Elmo Benedetto \nDepartment of Computer Science\nUniversity of Salerno\nVia Giovanni Paolo II132, 84084Fisciano (Sa)Italy\n\nDepartment of Engineering\nUniversity of Sannio\nPiazza Roma 2182100BeneventoItaly\n", "Fabrizio Tamburini " ]
[ "ISEM\nInstitute for Scientific Methodology\nVia Ugo La Malfa n. 153 90146PalermoItaly", "ISEM\nInstitute for Scientific Methodology\nVia Ugo La Malfa n. 153 90146PalermoItaly", "Department of Computer Science\nUniversity of Salerno\nVia Giovanni Paolo II132, 84084Fisciano (Sa)Italy", "AUSL VT Medical Physics Laboratory\nVia Enrico Fermi 15, Lorentzstraße 1901100, 76135Viterbo, Karlsruhe, Karlsruhe(Italy) e ZKMGermany", "Department of Computer Science\nUniversity of Salerno\nVia Giovanni Paolo II132, 84084Fisciano (Sa)Italy", "Department of Engineering\nUniversity of Sannio\nPiazza Roma 2182100BeneventoItaly" ]
[]
In this paper we analyze the Dark Matter problem and the distribution of matter through two different approaches, which are linked by the possibility that the solution of these astronomical puzzles should be sought in the quantum imprinting of the Universe. The first approach is based on a cosmological model formulated and developed in the last ten years by the first and third authors of this paper; the so-called "Archaic Universe". The second approach was formulated by Rosen in 1993 by considering the Friedman-Einstein equations as a simple one-dimensional dynamical system reducing the cosmological equations in terms of a Schrödinger equation. As an example, the quantum memory in cosmological dynamics could explain the apparently periodic structures of the Universe while Archaic Universe shows how the quantum phase concerns not only an ancient era of the Universe, but quantum facets permeating the entire Universe today.
10.1016/j.dark.2018.11.001
[ "https://arxiv.org/pdf/1905.02389v1.pdf" ]
125,421,161
1905.02389
02025e90a559ef65fe44c39b73bffaa3e66584ff
Dark Universe and distribution of Matter as Quantum Imprinting: the Quantum Origin of Universe Ignazio Licata ISEM Institute for Scientific Methodology Via Ugo La Malfa n. 153 90146PalermoItaly Gerardo Iovane ISEM Institute for Scientific Methodology Via Ugo La Malfa n. 153 90146PalermoItaly Department of Computer Science University of Salerno Via Giovanni Paolo II132, 84084Fisciano (Sa)Italy Leonardo Chiatti AUSL VT Medical Physics Laboratory Via Enrico Fermi 15, Lorentzstraße 1901100, 76135Viterbo, Karlsruhe, Karlsruhe(Italy) e ZKMGermany Elmo Benedetto Department of Computer Science University of Salerno Via Giovanni Paolo II132, 84084Fisciano (Sa)Italy Department of Engineering University of Sannio Piazza Roma 2182100BeneventoItaly Fabrizio Tamburini Dark Universe and distribution of Matter as Quantum Imprinting: the Quantum Origin of Universe In this paper we analyze the Dark Matter problem and the distribution of matter through two different approaches, which are linked by the possibility that the solution of these astronomical puzzles should be sought in the quantum imprinting of the Universe. The first approach is based on a cosmological model formulated and developed in the last ten years by the first and third authors of this paper; the so-called "Archaic Universe". The second approach was formulated by Rosen in 1993 by considering the Friedman-Einstein equations as a simple one-dimensional dynamical system reducing the cosmological equations in terms of a Schrödinger equation. As an example, the quantum memory in cosmological dynamics could explain the apparently periodic structures of the Universe while Archaic Universe shows how the quantum phase concerns not only an ancient era of the Universe, but quantum facets permeating the entire Universe today. Introduction. 
Return to the Foundations: the peaceful cosmological coexistence

Many contemporary studies of the formation and distribution of matter in cosmology are dominated by phenomenological approaches based on modifications of the standard model [1]. In this paper, we explore two approaches that go in the foundational direction. By this term we mean the more general conditions that can be imposed in cosmology before the Planck era and inflation, assuming that Quantum Mechanics (QM) and General Relativity (GR) are indeed the correct frameworks within which to identify these conditions. In spite of its generality, this line of attack on the problem is very selective and rich in information. We know, for example, that a "peaceful cosmological coexistence" implies the removal of the singularity and the introduction of an imaginary time, a precursor of ordinary time, so as to replace the old image of the "thermodynamic balloon" with a wave function of the Universe calculated through the Feynman path integral. This, as is well known, is the essence of the Hartle-Hawking proposal [2], and the wave function of the Universe thus calculated can satisfy the Wheeler-DeWitt equation [3]. One of the most radical recent proposals on the question is the theory of the Archaic Universe, developed by two of the authors starting from de Sitter Projective Relativity; it can be considered an extension of the Hartle-Hawking condition [4]. This theory was actually formulated to solve a different problem, that of reconciling Projective General Relativity (PGR) with the cosmological principle. This reconciliation is not trivial, because PGR represents the extension of de Sitter relativity to the general non-empty case. According to this theory every observer, in any epoch, is placed at the chronological distance t0 ≈ ±10^18 s from the two sheets (past and future) of a de Sitter horizon.
The problem arises from how to define a cosmic time consistently in these circumstances. The Archaic Universe is a hypothetical predecessor (not in a chronological but in an ontological sense, hence the adjective "archaic") of space-time. This precursor consists of the four-dimensional surface of one half of the five-dimensional hyper-sphere of radius r = ct0. The equator of this half hyper-sphere lies on the hyperplane x0 = 0, and the intersections with the various planes at constant x0, 0 ≤ x0 ≤ ct0, constitute the various three-dimensional "parallels". It is imagined that the surface of the hemisphere is the seat of virtual processes originating on the equator, each of which ends on the parallel corresponding to a specific value of x0. The processes ending on the parallel corresponding to a given value of x0 are interpreted as virtual fluctuations of "duration" x0/c (and therefore, according to the indetermination principle, of energy ħc/x0) of a pre-vacuum, precursor of the current vacuum. The formal temperature ħc/kx0 is associated with these processes, where k is the Boltzmann constant. The formation of the Universe is described as a set of "nucleation" processes starting from a "universal reservoir" consisting of the virtual processes "written" on the four-dimensional surface [5][6][7][8]. The characteristics of this nucleation, as we shall see, suggest interesting hypotheses about the nature of dark matter (DM). Another foundational approach is due to Nathan Rosen, Einstein's historical collaborator, and is based on the so-called cosmological Schrödinger equation [9]. At first sight, speaking of a Schrödinger equation in cosmology may seem absurd, but it is not. For example, in a recent paper [10], the author shows that secular perturbations of self-gravitating disks exhibit a mathematical similarity to quantum scattering theory, obeying a time-dependent Schrödinger equation.
It is well known that gravity induces quantum effects, such as the evaporation of black holes, and the absence of an accepted theory of quantum gravity remains a deep problem in theoretical physics. The canonical quantization of GR leads to the Wheeler-DeWitt equation, introducing the so-called Superspace, an infinite-dimensional space of all possible 3-metrics. Rosen, instead, preferred to start his work from the classical cosmological equations, using a simplified quantization scheme. Consider a homogeneous isotropic Universe. It is described by the Robertson-Walker line element and, assuming standard perfect-fluid matter, we deduce the Friedmann equations (the dot denotes the derivative of the scale factor a with respect to cosmic time; units with c = 1):

ä/a = −(4πG/3)(ρ + 3p),  (ȧ/a)² = (8πG/3)ρ − k/a²,  ρ̇ + 3(ȧ/a)(ρ + p) = 0.   (1)

For a complete description we also have to include an equation of state characterizing the thermodynamical conditions of matter, namely

p = (γ − 1)ρ,   (2)

where p is the pressure, ρ the energy density, and the parameter γ lies in the interval 1 ≤ γ ≤ 2. In particular, γ = 1 characterizes the matter-dominated era and the value γ = 4/3 the radiation-dominated era. The solution of the third equation of (1) is

ρ = ρ0 (a0/a)^{3γ},   (3)

where the subscript 0 means that the physical quantities are evaluated at the present time. In [9] the author suggests multiplying the second relation in (1) by (1/2)ma², with m an auxiliary mass parameter, which gives

(1/2)mȧ² − (4πG/3)mρa² = −(1/2)mk.   (4)

Moreover, the author interprets this cosmological equation as an energy conservation law, writing

T + U = E,   (5)

so that the kinetic energy becomes

T = (1/2)mȧ²,   (6)

and the potential energy

U(a) = −(4πG/3)mρa².   (7)

Finally, for the total energy we obtain

E = −(1/2)mk.   (8)

By considering the Newtonian equation of motion

mä = −∂U/∂a,   (9)

and by defining the momentum associated with the mass m as

p = mȧ,   (10)

the Hamiltonian is

Ħ = p²/2m + U(a).   (11)

At this point, Rosen uses the standard first-quantization procedure

p → −iħ ∂/∂a,  Ħ → iħ ∂/∂t,   (12)

obtaining the Schrödinger equation

iħ ∂ψ/∂t = Ħψ,   (13)

where ψ = ψ(a, t) is the wave function of the Universe.
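For the matter-dominated era (γ = 1), the density scaling ρ ∝ a⁻³ makes the potential in the Hamiltonian Coulomb-like, U(a) ∝ −1/a, so the stationary states of the cosmological Schrödinger equation form a hydrogen-like tower E_n ∝ −1/n². A finite-difference sketch in Python with NumPy can recover this; the units ħ = m = 1 and the unit potential strength are our own normalizations, not the paper's:

```python
import numpy as np

# Stationary states of H = -(1/2) d^2/da^2 - 1/a on a > 0 with psi(0) = 0:
# the matter-era (gamma = 1) form of the Rosen Hamiltonian, in units where
# hbar = m = 1 and the Coulomb-like coupling is normalized to 1.
N, L = 2000, 60.0
h = L / (N + 1)
a = h * np.arange(1, N + 1)          # interior grid points

# three-point kinetic term (tridiagonal) plus the diagonal potential
main = 1.0 / h**2 - 1.0 / a
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)

print(E[:3])  # ~ [-1/2, -1/8, -1/18]: the hydrogen-like tower E_n = -1/(2 n^2)
```

The lowest eigenvalues reproduce E_n = −1/(2n²) to a few parts in a thousand, illustrating that the matter-era "miniuniverse" problem is formally the radial hydrogen atom.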
These are two different lines of attack on the problem, but both can be considered a way to constrain the solutions of the Wheeler-DeWitt equation and overcome the well-known interpretative problems related to a "wave function of the Universe". In the case of the Archaic Universe we select the boundary conditions that lead to the observed Universe, avoiding the proliferation of cosmological models; Rosen's equation, on the other hand, is like introducing a Feynman path built on the Robertson-Walker scenario. It is no coincidence that these two theories imply interesting consequences for the distribution of matter and for DM, because both impose an "imprinting" ab initio on the dynamics of the Universe in time. This review is structured as follows: sections 1-5 briefly recall the conceptual and mathematical premises of the Archaic Universe, the origin of inertia and the problem of DM. In sections 6 and 7 the Rosen theory and its consequences for the distribution of matter are developed.

The Archaic Universe, a brief description.

The term "pre-vacuum" used in the Introduction means the existence, on the hemi-spherical surface, of a special class of four-dimensional frames of reference with a "time" axis x0, connected by three-dimensional roto-translations; the latter are isomorphic to transformations of the de Sitter group. In these special frames the virtual particles of the pre-vacuum have null momentum and are therefore considered "at rest". The observers placed at the origins of these frames do not experience any "wind of aether" associated with any virtual process. The basic idea in this framework is that the collapse of the quantum-mechanical wave function is an objective process, consisting in the temporal localization of elementary particles on intervals of the order of q0 ≈ t0/ND, where ND ≈ 10^41 is the Dirac number [7]. For values of x0 lower than cq0 ≈ 10^−13 cm, the collapses are inhibited, and therefore no real interaction occurs.
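The orders of magnitude quoted here can be checked directly: with t0 ≈ 10^18 s and N_D ≈ 10^41, the localization scale q0 ≈ t0/N_D corresponds to a light-travel distance c·q0 of order 10^−13 cm, i.e. of the classical electron radius. A two-line check in Python (the rounded input values are ours):

```python
c = 3.0e10    # cm/s
t0 = 1.0e18   # s, de Sitter time scale
ND = 1.0e41   # Dirac number
q0 = t0 / ND  # s, temporal-localization scale

print(c * q0)  # 3e-13 cm: of the order of the classical electron radius
```

The coincidence of c·q0 with the classical electron radius is the numerical content of the identification made later in the text.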
For higher values, wave-function collapses can occur; in other words, the pre-vacuum is unstable with respect to the temporal localization of its particles. The virtual fluctuations of "temporal" extension x0 > cq0 are then converted, through the collapse of their wave functions, into real particles. In other words, a process of nucleation of real matter occurs that "empties" the pre-vacuum, converting it into matter and ordinary vacuum. This nucleation, seen from the spatio-temporal domain, is the big bang. Space-time is not connected with the Archaic Universe dynamically, but through three distinct transformations of coordinates: first of all, a Wick rotation, which transforms the half hyper-sphere into a portion of hyperboloid; then a gnomonic projection of this hyperboloid onto the tangent plane; finally, a scale contraction by a factor ≈ ND. These three operations transform the privileged frames of reference of the unique archaic space into a triple continuous infinity of spaces (one for each tangent point), connected by transformations of the de Sitter group (in a sandwich between two inverse scale transformations). These are the private space-times of the fundamental observers, i.e. the global frames of reference used by these observers to coordinate the events. It is to these different (but equivalent) "points of view on the world" that the gravitational equations of the PGR are applied. The conditions of homogeneity and "spatial" isotropy of the pre-vacuum induce the cosmological principle, which thus becomes a condition to be applied to the solution of these equations. Similarly to conventional Friedmann cosmology, the distance scale R(t) must be determined; t is the cosmic time. The essential difference with respect to the conventional approach is, however, that the cosmological model is univocally determined, and it is a flat model with a positive cosmological constant. This scheme does not require inflation, although it can be added.
The situation at the big bang can therefore be described as follows. Consider a fundamental observer O that emits (receives) a signal consisting of a particle P. In the private space-time of O (of radius ≈ cq0 at the big bang) the propagation of the signal from P to O, or vice versa, is described by a translation on space-time characterized by a set of known parameters. The private space-time of P is instead connected to that of O by a de Sitter translation with parameters ≈ ND times larger (in a sandwich between two inverse scale transformations of factors respectively ≈ ND and 1/ND). The gravitational equations define the metric of the private space-time of O as a function of the distribution of matter-energy in that space-time, but the solutions of these equations, related to different values of O, are connected by a global transformation of coordinates on the de Sitter "public" space-time of radius r = ct0. It is important to understand that the Archaic Universe does not "precede" the current Universe chronologically. Rather, it is a kind of boundary condition: the PGR equations are applied to the private space-time of a generic fundamental observer, and the private space-times of the various observers are connected by coordinate transformations of the Archaic Universe. In a sense, therefore, the archaic layer of physical reality is always present, though it cannot be directly probed. The situation is very similar to that of observers on the Earth's surface who are unable to travel. They can deduce the spherical shape of the Earth from various observations (curvature of the sea surface, shape of the Earth's shadow in lunar eclipses, etc.), but they cannot verify it directly. The spherical shape of the Earth is in fact a global property that can be directly experienced only by traveling over the Earth's surface or by comparing data coming from differently positioned observers.
In the following sections we will see how, in a slight extension of this scenario, dark matter can find its place as a "fossil" of the archaic pre-vacuum. We will limit our analysis to the exposition of the essential ideas on this topic, referring to other works in the literature [11][12][13] for a comparison with the observational data. As we will see, the idea is that a specific quantization of inertia formulated in the Archaic Universe leads, in private space, to local effects (the clustering of dark matter) that depend on the global constant t0.

2. "Archaic" origin of the inertia principle (and its violation)

In the previous presentation of the Archaic Universe, the parallel of coordinate x0 (counted from the equator) of the hyper-sphere was considered isothermal, at the temperature ħc/kx0 (k is the Boltzmann constant). However, we can assume the possibility that on this parallel there is an excess of temperature u (assumed positive-valued, as discussed below in the text), and a temperature gradient ∇u. As we will see, the ratio between the maximum value of u and the background value ħc/kx0 (where 0 ≤ x0 ≤ 10^−13 cm) is ≤ 10^−39. Therefore, the generalization we are considering leaves the theoretical framework of the Archaic Universe, defined in the previous section, unchanged. At the origin of each of the privileged frames of reference considered in the previous section, the pre-vacuum is at rest for each value of x0/c. But there may also be the more general case in which the motion of the particles of the pre-vacuum with respect to that origin (and in correspondence thereof) has zero mean over long enough intervals of x0/c. We assume that each of these frames is associated with a spherical spatial bubble of radius r centered on its origin, concentric with a second bubble of radius d ≤ r within which the excess of temperature u with respect to ħc/kx0 is constant.
We assume that u is null for distances from the origin x > r, while it takes intermediate values for d ≤ x ≤ r. These intermediate values will be defined by the static limit of the thermal conduction equation, i.e. by the Laplace equation Δu = 0. Let us now go to a point of the archaic space whose distance from the origin of the frame of reference is x, with 0 ≤ x ≤ r. At this point it is possible to define the thermal energy ku, and the pulsation ω = 2π/T given by the relation ħω = ku. The fundamental assumption is that a material point in x accelerates towards the origin, with an acceleration a given by the relation (a = |a|):

a = ± 4π²x/T²,  T = T(x),   (14)

where the sign of the second side of the equation is positive in the region 0 ≤ x ≤ d and negative in the region d ≤ x ≤ r (that is, the acceleration is maximum for x = d). It should be noted that this is equivalent, in relativistic terms, to assuming an appropriate Newtonian static metric within the bubble of radius r centered at x = 0. We will call this bubble a pre-vacuum "molecule". By defining the time period T = h/ku, we have T = T1 for 0 ≤ x ≤ d and T = ∞ for x ≥ r. The values of T in the spherical corona d ≤ x ≤ r will be obtained from the solution of the Laplace equation. We note that positive values of the period T (the only ones with physical meaning) imply u > 0, as requested. Before going on, let us consider T0, a sort of "internal period" of the origin x = 0 of the frame of reference; we will return later to the connection of T0 with T1. Here, we hypothesize that in the Archaic Universe the origin of a frame of reference is endowed with a finite extension in x0/c equal to q0, a typical scale of the temporal localization of elementary particles [8][9][10][11][12][13][14]. The random choice of a particular origin on the x0/c axis thus corresponds to an a priori probability equal to the ratio q0/t0 between the interval q0 and the extension of the x0/c axis, and to an information 1/α = −log₂(q0/t0).
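The identification of the a priori information −log₂(q0/t0) with 1/α can itself be checked numerically: since q0/t0 ≈ 1/N_D, the information content is log₂(N_D) ≈ log₂(10^41) ≈ 136 bits, indeed close to the inverse fine-structure constant ≈ 137. A sketch in Python (the rounded value of N_D is ours):

```python
import math

ND = 1.0e41                 # Dirac number, so q0/t0 = 1/ND
info_bits = math.log2(ND)   # a priori information -log2(q0/t0)
alpha_inv = 137.035999      # inverse fine-structure constant

print(info_bits)            # ~136.2, close to 1/alpha ~ 137
```

The agreement to within about one bit is what motivates the identification of α with the fine-structure constant in the text.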
If the entropy _ C C a . 5 C C 7 = C C(15) is interpreted as an action (in units of the Planck's constant h) expressed in the interval q0, it is possible to define an energy as a ratio of this action and q0; this ratio is, in conventional units, h/at0. It is then possible to define a new action as a product of h/at0 for an interval of extension T0 of the variable x0/c. If we assume that this action is an integral multiple of h, we have T0 = nat0, with n = 0, 1, 2,… . In other works we have identified q0 with the time necessary for light to travel the classical radius of the electron, and a with the fine structure constant [14]. It should be noted that in the "Einsteinian" limit of an infinite de Sitter time, that is for t0 ® ¥, a finite (and undetermined) value of T0 is only possible for n = 0. The case n = 0 corresponds to the unexcited pre-vacuum; its particles are at rest with respect to the origins of a continuum of frames of reference that will become, after the big bang, the ordinary inertial background of the fundamental observers. At finite values of the energy can correspond, however, a finite number of excitations n ¹ 0, the "excitations of the inertial field a". In this sense, every point in space is a potential oscillator. The maximum excitation corresponds to n = 137 » 1/a, because in this case we have T0 = t0. Since in the Einsteinian limit t0 ® ¥ only the value n = 0 survives, the pre-vacuum molecules (and the related dark matter clusters that we are going to describe) are an effect of the finiteness of t0. Moreover is u » h/kat0, so the ratio between this excess and the background value ħc/kx0 (where 0 £ x0 £ 10 -13 cm) is £ 10 -39 . The link between T0 and the molecular radius r is defined by the following relation: C = 4 . C . (16) that is: = . . C 4 .(17) The quantization of T0 and relations (14), (17) are the basic assumptions of this model. 
"After" the big bang For t » q0 the size of the private space of a fundamental observer is » cq0 (the initial singularity is removed in this description) and the origins of all the frames of reference with n ¹ 0, which form a finite set, are concentrated in this space. In a point whose distance from the i-th of these origins is xi, a test body undergoes a total acceleration given by the sum of all the accelerations a(xi) relative to each origin, each computed without taking into account neither R(t) nor the initial contraction for a scale factor » ND; that is, the "archaic" value of the xi is used. With the increase of t the various contributions a(xi) overlap less and less due to the expansion of space and they are therefore less and less averaged. There is therefore a gradual (apparent) growth of the dark matter. When the space radius is ct0 the situation existing in the Archaic Universe is restored. The dark matter thus ceases to appear and from then on is simply diluted (this happens roughly for t » 7´10 9 y ) [15]. This leads to the appearance of more regular morphologies for matter clusters favored by the molecules. Dark Matter Let's now return to consider the single molecule. The field a cannot be outgoing from the origin, because in this case a body subject only to this field (that is, whose acceleration coincides with a) would be rejected by the origin and would not have a defined period of oscillation. This period is defined in the interval 0 £ x £ d where it is constant and equal to T1 (for an harmonic oscillator). The fact that acceleration is directed towards the origin makes the molecule a region where the condensation of matter is favored. Within this region the acceleration increases linearly, according to (14), from a minimum value (which must be zero for reasons of symmetry) to a maximum value amax From (19) and (16) it is then obtained immediately s = g 2 . Within the sphere 0 £ x £ r the Laplace equation takes the form: . 
(d/dx)(x² du/dx) = 0    (20)

That is:

u = C₀/x + C₁    (21)

where C₀, C₁ are constants. For 0 ≤ x ≤ d we have T = h/ku = T₁, thus C₀ = 0, C₁ = h/(kT₁). The solution in the spherical crown d ≤ x ≤ r will be obtained by posing u = h/(kT₁) for x = d, u = 0 for x = r. We have:

u = (h/(kT₁)) · (1/x − 1/r) / (1/d − 1/r)    (22)

Remembering that T = h/ku, one has:

T = T₁ · x(r − d) / (d(r − x))    (23)

For x ≪ r the following linear approximation holds:

T ≈ A(x − d) + B;   A = T₁(r − d)/(dr);   B = T₁    (24)

For 0 ≤ x ≤ d the equation (18) can be rewritten as:

a/a_max = x/d    (25)

Instead, for d ≤ x ≤ r, we have from (24):

a = 4π² x / [A(x − d) + T₁]²    (26)

and thus:

a/a_max = (x/d) · T₁² / [A(x − d) + T₁]²    (27)

Let us consider, within the limits of the linear approximation (24), x values such that (x − d)/d ≪ ABa_max/4π² = (1 − d/r)². The further approximation applies to them:

a/a_max ≈ 1 − (2A/T₁ − 1/d)(x − d)    (28)

The two expressions obtained respectively for 0 ≤ x ≤ d and d ≤ x ≤ r (exact the first one, approximate the second one) can be combined in a single expression as follows:

a(x)/a_max = Θ(x − d)[1 − (2A/T₁ − 1/d)(x − d)] + (x/d)[1 − Θ(x − d)]    (29)

The application of the linear approximation to observational data seems to indicate a preference (whose significance is still an open problem) for three cases: (I) completely harmonic case with d ≈ r, s ≈ 1, for galaxy clusters; (II) s = d/r = f ≪ 1 for single galaxies; (III) s = 1, f not negligible, for single galaxies. These three cases can be summarized, respectively, in the three groups of equations (30), (31), (32). The application of (30), (31), (32) to concrete cases is examined in other works [12,13]; the fitting of the experimental data is overall satisfactory. We note that f ≈ s (or a_max ≈ c/t₀), regardless of the specific case. It is necessary to underline that the existence of a "molecule" is completely independent from the possible presence of baryonic matter inside it; in this context, therefore, a thickening of dark matter is possible in areas where baryonic matter is absent.
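The period profile in the spherical corona can be sanity-checked numerically. The sketch below assumes the Laplace-equation solution T(x) = T₁·x(r − d)/(d(r − x)) and its linear coefficient A = T₁(r − d)/(dr), with purely illustrative parameter values; it verifies continuity with T = T₁ at x = d, the divergence of the period approaching x = r, and the quality of the linear approximation for x ≪ r:

```python
# Numerical sanity check of the period profile in the corona d <= x <= r
# (illustrative parameter values; not fitted to any galaxy).
T1, d, r = 1.0, 0.1, 100.0   # arbitrary units

def T(x):
    """Period from the Laplace solution, valid for d <= x < r."""
    return T1 * x * (r - d) / (d * (r - x))

A = T1 * (r - d) / (d * r)          # slope of the linear approximation

assert abs(T(d) - T1) < 1e-12       # continuity with the inner region at x = d
assert T(0.99 * r) > 50 * T1        # the period diverges as x -> r

# linear approximation T ~ A*(x - d) + T1 holds for x << r
x = 5 * d
rel_err = abs(T(x) - (A * (x - d) + T1)) / T(x)
assert rel_err < 0.05
print(f"relative error of the linear approximation at x = 5d: {rel_err:.3f}")
```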
In particular, systems such as the "Bullet Cluster" do not seem to pose significant interpretative problems [12]. For case (I) the profile reads:

a(x) = a_max (x/r),  0 < x < r;   a(x) = 0,  x > r    (30)

Scale Law

Regardless of the specific case and the applicability of the linear approximation, it is possible to derive a universal law of scale invariance [13]. Let us consider again the expression of the acceleration for 0 ≤ x ≤ d and introduce the dark mass M(x) in the following way:

a(x) = a_max (x/d) = (c/t₀)(x/(sr)) = G M(x)/x²    (33)

where G is Newton's gravitational constant. By defining the density µ of dark matter through the relationship:

M(x) = (4/3) π µ x³    (34)

and posing c/t₀ = 6.2×10⁻¹⁰ m s⁻², the following result is immediately derived:

µ = 3c/(4πG t₀ s r) = 2.12 kg m⁻² / (s r) = 9×10² M_⊙ pc⁻² / (s r)    (35)

where M_⊙ is the solar mass. That is:

log₁₀[µ s r / (M_⊙ pc⁻²)] = 2.95 ≈ 3    (36)

To compare this result with the observations one can take the central density of dark matter of a galaxy as an estimator of µ and the length r₀ of the Burkert profile as an estimator of sr [13]. With these identifications, the obtained relationship is confirmed by extensive observational research related to galaxies of all sizes, from elliptical dwarfs to giant spirals [16]. The validity of this scale law allows one to define the parameter s in terms of µ, which is probably a condition expressing an accidental fact, and r, fixed by the quantum number n. We conclude by returning to the topic of the relationship between cosmos and particles, which inspired the definition of T₀. Let us define the "electron density" µₑ through the relation:

µₑ = 3mₑ/(4π rₑ³),   rₑ = e²/(mₑc²)    (37)

where mₑ is the mass of the electron at rest. As can be seen, a radius equal to the classical radius is attributed to the electron. Then we have, within one order of magnitude, the following numerical coincidence:

µ s r ~ µₑ rₑ    (38)

In other words, the electron satisfactorily matches the observed scale invariance for galaxies.
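The scale law can be checked with textbook constants. Assuming the combination µ·s·r = 3c/(4πGt₀) and the quoted c/t₀ = 6.2×10⁻¹⁰ m s⁻², the sketch below evaluates it in SI units and in M_⊙ pc⁻²; the result lands at log₁₀ ≈ 3, consistent with eq. (36) (small differences from the 2.12 kg m⁻² quoted in the text come from the adopted constants):

```python
import math

# Numerical check of the scale law: mu * (s*r) = 3c/(4*pi*G*t0).
G    = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c_t0 = 6.2e-10          # c/t0 as quoted in the text, m/s^2

mu_sr = 3.0 * c_t0 / (4.0 * math.pi * G)    # kg/m^2

# convert to solar masses per square parsec
M_sun = 1.989e30        # kg
pc    = 3.086e16        # m
mu_sr_astro = mu_sr * pc**2 / M_sun         # M_sun / pc^2
logval = math.log10(mu_sr_astro)

print(f"mu*s*r = {mu_sr:.2f} kg/m^2 = {mu_sr_astro:.0f} M_sun/pc^2 (log10 = {logval:.2f})")
```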
The last relation can also be deduced theoretically by assuming that the ratio between the classical radius of the electron and ct₀ is equal to that between its gravitational and electrostatic self-energies. This ratio, as is known from the Dirac and Eddington cosmologies, is ≈ 1/N_D.

In Search of Quantum Traces

After Rosen's paper, a group of Italian scientists tried to apply his simple scheme of cosmological quantization to explain the apparent regularity in the galaxy distribution [17]. Indeed it seems that the distribution of galaxies is non-random and some unknown mechanism has led to the formation of regularity in the galaxy distribution with a characteristic scale of 128h⁻¹ Mpc, with 0.5 ≤ h ≤ 1 [18]. In [17], unlike Rosen, the authors interpret the mass in the equation as the mass of a galaxy and for this reason, in their interpretation, a galaxy has the probability |ψ|² of being at a given scale factor a(t). The relation between the scale factor and the red-shift is

ȧ/a = −ż/(1 + z)    (39)

so

a₀/a = 1 + z    (40)

and we can then obtain the probability amplitude to find a given galaxy at a given red-shift z, at time t. To complete this analysis, by using standard quantum mechanics and equation (7), the Schrödinger stationary equation can be written in the form of equation (44). The authors examine the equation (44) in detail and, for mathematical insights, we recommend reading the paper [17]. Indeed the authors show that there are several possibilities to get oscillatory solutions. They depend on the type of cosmic fluid and the type of spatial metric. For example, in a dust model with cosmological constant, they find solutions in terms of Bessel functions that have an asymptotic behavior in good agreement with the experimental data. Obviously a good result is to get 8 oscillations with a periodicity of 128h⁻¹ Mpc in 2000h⁻¹ Mpc, that is, in a red-shift range from z = 0 to z = 0.5.
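The redshift relation used above, ȧ/a = −ż/(1 + z) with a₀/a = 1 + z, is an exact identity and can be confirmed by finite differences on any toy scale factor. The sketch below uses an illustrative matter-dominated a(t) = t^(2/3):

```python
# Finite-difference check: with a0/a = 1+z one has adot/a = -zdot/(1+z).
# Toy matter-dominated scale factor a(t) = t**(2/3) (illustrative only).
a0 = 2.0 ** (2.0 / 3.0)              # scale factor "today", at t = 2

def a(t): return t ** (2.0 / 3.0)
def z(t): return a0 / a(t) - 1.0     # definition of the red-shift

t, h = 1.0, 1e-6
adot = (a(t + h) - a(t - h)) / (2 * h)   # central differences
zdot = (z(t + h) - z(t - h)) / (2 * h)

lhs = adot / a(t)
rhs = -zdot / (1.0 + z(t))
assert abs(lhs - rhs) < 1e-6
print(f"adot/a = {lhs:.6f}, -zdot/(1+z) = {rhs:.6f}")
```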
In our opinion, since the quantum imprinting took place during the radiation era with γ = 4/3 and, furthermore, it seems that space is flat [19], it would above all be interesting to analyze the solutions of the equation when P is slowly varying; then the following condition is obtained:

k > (8πG/3) ρ a² + (Λ/3) a²

This condition implies that: for k = 0 we do not have solutions obeying this condition unless Λ < 0; for k = 1 we can have accelerated expansion if the Universe is closed; for k = −1 the condition is valid only if Λ < 0. From these results one obtains oscillatory solutions for P > 0 and P slowly varying, a sort of breathing mode, and solutions with negative cosmological constant¹.

The Dark Side of the Universe

Over the years, the Universe has become very "dark". Indeed, it is well known that the best description of cosmological evolution is the so-called ΛCDM model. This model is a consequence of experimental observations of supernova redshifts [21][22], clustering of galaxies [23][24][25] and the Cosmic Microwave Background (CMB) [26][27][28]. The basic ingredients of the ΛCDM model are 69.2% of Dark Energy (DE), the cosmological constant that is the cause of the accelerated expansion of the Universe, and 30.8% of self-gravitating matter, of which only 4.9% is ordinary luminous matter while the remaining 25.9% of the energy density would be composed of Dark Matter (DM). The hypothesis of DM was born when F. Zwicky measured the velocity dispersion of the Coma cluster of galaxies [29]. Experimental observations, mainly due to Vera Rubin's research group, clearly show that galactic rotational speeds do not decrease in the Keplerian way, and the main problem concerns especially areas outside the luminous part of the galaxy. In these areas there are neutral hydrogen clouds at large distances from the rotational centre and they too are moving at constant tangential velocity although, in those regions, the sky is totally dark.
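The case analysis above can be made concrete with a small numerical scan. The sketch assumes the condition in the reconstructed form k > (8πG/3)ρa² + (Λ/3)a² (my reading of the garbled inequality) and illustrative values of ρ, Λ and a; it confirms the stated pattern: k = 0 and k = −1 fail for Λ ≥ 0, k = +1 can hold, and a sufficiently negative Λ rescues the flat case:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2

def condition(k, rho, Lam, a):
    """Reconstructed slow-variation condition: k > (8*pi*G/3)*rho*a**2 + (Lam/3)*a**2."""
    return k > (8.0 * math.pi * G / 3.0) * rho * a**2 + (Lam / 3.0) * a**2

rho, a = 1e-26, 1.0              # illustrative density (kg/m^3) and scale factor

# with Lam >= 0 the right-hand side is positive:
ok_flat   = condition(0,  rho, 1e-52, a)    # k = 0  -> fails
ok_closed = condition(1,  rho, 1e-52, a)    # k = +1 -> can hold
ok_open   = condition(-1, rho, 1e-52, a)    # k = -1 -> fails

# with a sufficiently negative Lam even k = 0 satisfies the condition:
ok_flat_neg = condition(0, rho, -1e-34, a)

print(ok_flat, ok_closed, ok_open, ok_flat_neg)
```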
A minority of scientists seek to change in various ways the gravitational law or Newton's second law, and certainly the most famous attempt is due to the Israeli scientist Mordehai Milgrom. He, from 1983 to now, has developed the so-called MOND (Modified Newtonian Dynamics) theory in order to explain a variety of astronomical phenomena without requiring the presence of DM [30][31]. Although there are models that try to interpret the experimental results without "dark" components [32][33][34], almost all of the scientific community believes that DE and DM are inevitable. For example, in a recent paper [35] the authors, starting from data reported for main sequence stars under the Sloan Digital Sky Survey collaboration, showed that the periodic spectral modulations could be due to DM effects. Returning to the topic of our paper, we must observe that the cosmological Schrödinger equation has also been applied to the search for DM candidates and to analyze the origin and nature of DE [36][37]. In [36] the author analyzes the numerical solutions of (44) in two remarkable cases with a different interpretation of the mass. He predicts, in this way, that DM is composed of a quantum particle of very low mass and is clustered around the luminous matter of galaxies. Let us remember the relations useful for implementing the equation in a mathematical software to find numerical solutions¹. In a recent paper [37], instead, the authors follow Rosen's original paper, that is, the mass in the equation is interpreted as the mass of the whole Universe. Moreover, they look at the cosmological constant not as a perturbation of the potential energy, but as related to the total energy.

¹ If we also analyse the equation in the form ψ″ + Pψ = 0, P > 0 is always valid, while P < 0 is never satisfied. In this case we have only oscillatory regimes, like the DM-axion stellar oscillations.
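Since the text points at implementing the equation numerically, a minimal sketch of such an integration is given below. It assumes the generic form ψ″(a) + Pψ(a) = 0 with a constant P > 0, for which the solution must oscillate (for P = ω² the exact solution with ψ(0) = 0, ψ′(0) = ω is sin(ωa)); this is an illustration, not the specific potential of [36] or [37]:

```python
import math

# Velocity-Verlet integration of psi'' + P*psi = 0 with constant P > 0.
P  = 4.0                 # P > 0 -> oscillatory regime
om = math.sqrt(P)
da = 1e-4
psi, dpsi = 0.0, om      # initial conditions matching sin(om*a)

aa = 0.0
while aa < 2.0 * math.pi / om:      # integrate over one full period
    acc = -P * psi                  # psi'' = -P*psi
    psi += dpsi * da + 0.5 * acc * da**2
    acc_new = -P * psi
    dpsi += 0.5 * (acc + acc_new) * da
    aa += da

# the numerical solution tracks the exact oscillation sin(om*a)
err = abs(psi - math.sin(om * aa))
print(f"error after one period: {err:.2e}")
```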
Finally, they make a suitable change of variables of the form (51), involving two arbitrary constants. By fixing a suitable boundary condition on the eigenfunctions of the Schrödinger equation, they calculate the corresponding eigenvalues of the energy, obtaining that the wave function of the Universe is a Bessel function of purely imaginary order. This result seems very interesting because Bessel functions occur in many branches of mathematical physics and, in particular, the Bessel functions with purely imaginary order have many applications in physics, as one can read in [38]. The most important result is that DE can be seen as the energy of the quantum state of the Universe. The Universe is not in a fundamental state with zero energy, but it is in an excited state with a non-vanishing value of energy.

Conclusions

In this paper we have reviewed the recent developments of Rosen's approach to quantum cosmology and the concept of the Archaic Universe. The first approach is conceptually very simple. The cosmological quantum equation is nothing else but the quantum counterpart of the classical Einstein first order equation. The origin of the cosmological constant is still unknown and, following Rosen quantization, it is possible to explain dark energy, through a complex mathematical formalism, as a simple excited state of the Universe. Furthermore, if we change the interpretation of the mass in the equation, it is possible to use this approach to search for Dark Matter particles. Finally, the oscillating solutions can be connected in a natural way to the correlation function, which gives the features of the distribution of superclusters of galaxies. The Archaic Universe, instead, proposes an interpretation of DM and of the morphogenesis linked to a reading of the "big bang" as nucleation in a vacuum geometrically constrained and observed under particular conditions of projectivity [39].
Nucleation is a quantum process, and DM is a global systemic effect of flocculation of matter. The Archaic Universe is compatible with it but does not necessarily require a "particle candidate". Why should we deal with these theories? In these years the Standard Model has had an increasing number of confirmations and the open questions, at the moment, do not seem to require deep structural changes. Apart from the singular case of the Peccei-Quinn-Wilczek-'t Hooft axion, the reading of DM in particle terms does not seem to be the only one. We believe that theories that explore the possibility of explaining DM in systemic terms, but not based on ad hoc assumptions, albeit brilliant, deserve attention; for example, MOND theories. In particular, the theories considered here were born as promising models of quantum cosmology that seem to merge gravity and quantum physics in an organic way. Naturally the future of these theories will be decided by a close comparison with observational data and with particle models of DM [40][41].

a_max = 4π²d/T₁². Let us pose T₁ = T₀g. If we indicate with sr the value of x at which the extension of the acceleration profile evaluated for 0 ≤ x ≤ d equals c/t₀, we have:

4π² (s r) / T₁² = c/t₀    (19)

By posing d = fr, with 0 ≤ f ≤ 1, this expression becomes λ/r, where λ = 1/[f(1 − f)²]; it coincides with 1/d in the limit f ≪ 1.

ψ″ + [E + P(a)]ψ = 0    (46)

More in general, if one considers equation (46) as

References

[1] C. Arina, "Impact of cosmological and astrophysical constraints on dark matter simplified models", arXiv:1805.04290.
[2] J. Hartle and S. Hawking, "Wave function of the Universe", Phys. Rev. D 28, 2960 (1983).
[3] B. S. DeWitt, "Quantum Theory of Gravity. I. The Canonical Theory", Phys. Rev. 160, 1113 (1967).
[4] I. Licata et al., "De Sitter Projective Relativity", SpringerBriefs in Physics (2017).
[5] I. Licata and L. Chiatti, "The Archaic Universe: Big Bang, Cosmological Term and the Quantum Origin of Time in Projective Cosmology", Int. Journ. Theor. Phys. 48, 1003-1018 (2009).
[6] I. Licata and L. Chiatti, "Archaic Universe and cosmological model: 'big bang' as nucleation by vacuum", Int. Journ. Theor. Phys. 49(10), 2379-2402 (2010).
[7] I. Licata and L. Chiatti, "Timeless approach to quantum jumps", Quanta 4(1), 10-26 (2015).
[8] L. Chiatti and I. Licata, "Particle model from quantum foundations", Quantum Stud. Math. Found. 4, 181-204 (2017).
[9] N. Rosen, "Quantum mechanics of a miniuniverse", Int. J. Theor. Phys. 32, 1435-1440 (1993).
[10] K. Batygin, "Schrödinger evolution of self-gravitating discs", Mon. Not. R. Astron. Soc. 475(4) (2018).
[11] L. Chiatti, "A possible mechanism for the origin of inertia in de Sitter-Fantappié-Arcidiacono projective relativity", El. Journ. Theor. Phys. 9(26), 11-26 (2012).
[12] L. Chiatti, "Cosmos and Particles, a different view of dark matter", The Open Access Astronomy Journal 5, 44-53 (2012).
[13] L. Chiatti, "Dark matter in galaxies: a relic of a pre-big bang era?", Quantum Matter Journal 3(3), 284-288 (2014).
[14] L. Chiatti, "Quantum jumps and electrodynamical description", Int. Journ. Quantum Found. 3, 100-118 (2017).
[15] L. Chiatti, "De Sitter Relativity and Cosmological Principle", The Open Access Astronomy Journal 4, 27-37 (2011).
[16] P. Salucci et al., "The universal rotation curve of spiral galaxies - II. The dark matter distribution out to the virial radius", Mon. Not. R. Astron. Soc. 378(1), 41-47 (2007).
[17] S. Capozziello et al., "Oscillating Universes as eigensolutions of cosmological Schrödinger equation", Int. J. Mod. Phys. D 9, 143 (2000).
[18] T. Broadhurst et al., "Large-scale distribution of galaxies at the Galactic poles", Nature 343, 726-728 (1990).
[19] Planck Collaboration (P. A. R. Ade et al.), Astron. Astrophys. 594, A13 (2016), arXiv:1502.01589.
[20] S. Esposito et al., "Ettore Majorana: Unpublished Research Notes on Theoretical Physics", Springer (2009).
[21] S. Perlmutter et al., "Measurements of omega and lambda from 42 high-redshift supernovae", Astrophys. J. 517(2), 565-586 (1999).
[22] M. Kowalski et al., "Improved cosmological constraints from new, old and combined supernova datasets", Astrophys. J. 686, 749 (2008).
[23] V. Springel et al., "Simulations of the formation, evolution and clustering of galaxies and quasars", Nature 435(7042), 629-636 (2005).
[24] W. J. Percival et al., "Baryon acoustic oscillations in the SDSS data release 7 Galaxy sample", Mon. Not. R. Astron. Soc. 401, 2148 (2010).
[25] B. A. Reid et al., "Cosmological constraints from the clustering SDSS DR7 Luminous Red Galaxies", Mon. Not. R. Astron. Soc. 404, 60 (2010).
[26] WMAP Collaboration (L. Verde et al.), "First-year Wilkinson microwave anisotropy probe (WMAP) observations: determination of cosmological parameters", Astrophys. J. Suppl. Ser. 148(1), 175-194 (2003).
[27] C. L. Bennett et al. (WMAP Collaboration), "First-year Wilkinson microwave anisotropy probe (WMAP) observations: preliminary maps and basic results", Astrophys. J. Suppl. Ser. 148(1), 1-27 (2003).
[28] D. N. Spergel et al., "Three-year Wilkinson microwave anisotropy probe (WMAP) observations: implications for cosmology", Astrophys. J. Suppl. Ser. 170(2), 377-408 (2007).
[29] F. Zwicky, "Spectral displacement of extra galactic nebulae", Helv. Phys. Acta 6, 110-127 (1933).
[30] M. Milgrom, "A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis", Astrophysical Journal 270, 365-370 (1983).
[31] M. Milgrom, "A modification of the Newtonian dynamics - Implications for galaxies", Astrophysical Journal 270, 371-389 (1983).
[32] I. Licata et al., "A machian request for the equivalence principle in extended gravity and nongeodesic motion", Gravit. Cosmol. 22, 48 (2016).
[33] A. Feoli and E. Benedetto, "Little Perturbations Grow up... Without Dark Matter", Int. J. Theor. Phys. 51(3), 690-697 (2012).
[34] A. Feoli and E. Benedetto, "Dark energy or local acceleration?", Gravitation and Cosmology 23(3), 240-244 (2017).
[35] F. Tamburini and I. Licata, "Can the periodic spectral modulations observed in 236 Sloan Sky Survey stars be due to dark matter effects?", Phys. Scr. 92 (2017).
[36] A. Feoli, "Some predictions of the cosmological Schrödinger equation", Int. J. Mod. Phys. D 12, 1475-1485 (2003).
[37] A. Feoli et al., "Is Dark Energy due to an excited quantum state of the Universe?", Eur. Phys. J. Plus 132, 211 (2017).
[38] C. J. Chapman, "The asymptotic theory of dispersion relations containing Bessel functions of imaginary order", Proceedings of the Royal Society A (2012).
[39] I. Licata, "Universe without singularities. A group approach to de Sitter cosmology", EJTP 3(10), 211-224 (2006).
[40] L. Baudis, "The Search for Dark Matter", European Review 26(1), 70-81 (2018).
[41] G. Bertone, "Behind the Scenes of the Universe: From the Higgs to Dark Matter", Oxford Univ. Press (2014).
Development of a compact HAPG crystal Von Hamos X-ray spectrometer for extended and diffused sources

A. Scordo, C. Curceanu, M. Miliucci, F. Sirghi, J. Zmeskal
Laboratori Nazionali di Frascati, INFN, Frascati (Rome), Italy; Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH), Magurele, Romania; Stefan-Meyer-Institut für subatomare Physik, Vienna, Austria

arXiv:1903.02826 [physics.ins-det], https://arxiv.org/pdf/1903.02826v1.pdf

Abstract: Bragg spectroscopy is one of the best established experimental methods for high energy resolution X-ray measurements; however, this technique is limited to the measurement of photons produced from well collimated (tens of microns) or point-like sources and becomes quite inefficient for photons coming from extended and diffused sources. The possibility to perform simultaneous measurements of several energies is strongly demanded when low rate signals are expected and single angular scans require long exposure times. A prototype of a high resolution and high precision X-ray spectrometer, working also with extended isotropic sources, has been developed by the VOXES collaboration at the INFN Laboratories of Frascati, using Highly Annealed Pyrolitic Graphite (HAPG) crystals in a "semi"-Von Hamos configuration, in which the position detector is rotated with respect to the standard Von Hamos one, to increase the dynamic energy range. The aim is to deliver a cost effective system having an energy resolution at the level of eV for X-ray energies from about 2 keV up to tens of keV, able to perform sub-eV precision measurements with non point-like sources. The proposed spectrometer has possible applications in several fields, going from fundamental physics to quantum mechanics tests, synchrotron radiation and X-FEL applications, astronomy, medicine and industry. In particular, this technique is fundamental for a series of nuclear physics measurements like, for example, the energies of the radiative transitions of exotic atoms, which allow to extract fundamental parameters of low energy QCD in the strangeness sector. In this work, the working principle of the spectrometer is presented, together with the tests and the results, in terms of resolution and source size, obtained for Fe(Kα1,2), Cu(Kα1,2), Ni(Kβ), Zn(Kα1,2), Mo(Kα1,2) and Nb(Kβ) lines.
(Dated: March 8, 2019)

PACS numbers: 07.85.-m, 07.85.Fv, 07.85.Nc, 42.70.-a, 61.10.Nz, 07.60.-j
INTRODUCTION

* Electronic address: [email protected]

The possibility to perform high precision measurements of soft X-rays, strongly demanded in many fields of fundamental science, from particle and nuclear physics to quantum mechanics, as well as in astronomy and in several applications using synchrotron light sources or X-FEL beams, in biology, medicine and industry, is still today a big challenge. These measurements are even more difficult when they have to be performed in accelerator environments where, depending on the different machines, various kinds of hadronic and electromagnetic backgrounds are present. Typical large area spectroscopic detectors used for wide and isotropic targets in accelerator environments are solid state devices, like the Silicon Drift Detectors (SDDs), recently employed by the SIDDHARTA experiment [1] for exotic atom transition line measurements at the DAΦNE e+e− collider of the INFN National Laboratories of Frascati [2]. The intrinsic resolution of such detectors is nevertheless limited to 120 eV FWHM by the electronic noise and the Fano Factor, making them unsuitable for those cases in which the photon energy has to be measured with a precision below 1 eV. Few-eV resolutions have become achievable using superconducting microcalorimeters, like the Transition Edge Sensors developed at NIST [3], able to obtain a few eV FWHM at 6 keV; in spite of this excellent resolution, these kinds of detectors still have some limitations: a very small active area, prohibitively high costs of the complex cryogenic system needed to reach the operational temperature of 50 mK, and a response function which is still not properly under control.
arXiv:1903.02826v1 [physics.ins-det] 7 Mar 2019

As a third possibility, Bragg spectroscopy is one of the best established high resolution X-ray measurement techniques; however, when the photons emitted from extended sources (like a gaseous or liquid target) have to be measured, this method has until now been ruled out by the constraint to reduce the dimension of the target to a few tens of microns [4] [5]. Experiments performed in the past at the Paul Scherrer Institute (PSI), measuring pionic atoms [6] [7], pioneered the possibility to combine Charged Coupled Device detectors (CCDs) with silicon crystals, but the energy range achievable with that system was limited to a few keV due to the crystal structure, and the low intrinsic reflection efficiency of silicon required the construction of a very large spectrometer. The possibility to perform other fundamental measurements, like the precision determination of the K− mass measuring the radiative kaonic nitrogen transitions at the DAΦNE collider [8], has also been investigated, but the estimated efficiency of the proposed spectrometer was not sufficient to reach the required precision. Recently, the development of Pyrolytic Graphite mosaic crystals [9] renewed the interest in Bragg spectrometers as possible candidates also for X-ray measurements of millimetric isotropic sources in accelerator environments. Mosaic crystals consist of a large number of nearly perfect small pyrolytic graphite crystallites, randomly misoriented around the lattice main direction; the FWHM of this random angular distribution is called mosaicity (ωFWHM), and it makes it possible for even a photon not reaching the crystal at the exact Bragg energy-angle relation to find a properly oriented crystallite and be reflected [10].
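This mosaic-reflection mechanism can be illustrated with a simple toy model (our assumption, not a formula from the text): treat the crystallite orientations as Gaussian-distributed with the quoted mosaicity as FWHM, so a ray off the exact Bragg angle by δθ is reflected with a relative Gaussian weight. The 2.35 FWHM-to-σ factor matches the σω = ωFWHM/2.35 definition used later in the text.

```python
import math

def mosaic_reflection_weight(delta_theta_deg, mosaic_fwhm_deg=0.1):
    """Relative chance that a ray off the exact Bragg angle by delta_theta_deg
    still finds a properly oriented crystallite (Gaussian mosaic toy model)."""
    sigma = mosaic_fwhm_deg / 2.35          # FWHM -> Gaussian sigma
    return math.exp(-0.5 * (delta_theta_deg / sigma) ** 2)

# Within one sigma of mosaic spread the reflection is still likely;
# half a degree away it is essentially impossible.
w_on_axis = mosaic_reflection_weight(0.0)
w_far_off = mosaic_reflection_weight(0.5)
```

In this picture the mosaic spread is exactly what lets slightly mismatched photons still be reflected efficiently.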
This, together with a lattice spacing of 3.354 Å, makes them highly efficient in diffraction in the 2-20 keV energy range for the n=1 reflection order, while higher energies can be reached at higher reflection orders. Thanks to their production mechanism, Highly Annealed Pyrolytic Graphite (HAPG) crystals can be realised with different ad-hoc geometries, making them suitable to be used in the Von Hamos configuration [11], combining the dispersion of a flat crystal with the focusing properties of cylindrically bent crystals. The main problem to overcome is represented by the source size; Von Hamos spectrometers have been extensively used in the past, providing very promising results in terms of spectral resolution [4] [12] [13] [14], but all the available works in the literature report measurements done in conditions giving an effective source dimension of some tens of microns; this configuration is achieved either with microfocused X-ray tubes, or with a set of slits and collimators placed before the target to minimize the activated area. In this work we investigate the possibility to combine the properties of HAPG crystals with the vertical focus of the Von Hamos configuration, to realize a spectrometer able to maintain a resolution in the order of 0.1% (FWHM/E) for energies below 10 keV, and of 0.5% up to 20 keV, using a source size ranging from 500 µm to 2 mm in the Bragg dispersion plane. In Section II the experimental setup and the spectrometer geometry are presented, while in Section III the experimental results obtained for the Fe(Kα1,2), Cu(Kα1,2), Ni(Kβ), Zn(Kα1,2), Mo(Kα1,2) and Nb(Kβ) line measurements are reported.

II. SPECTROMETER SETUP AND GEOMETRY

A. Von Hamos geometry

The spectrometer configuration used in the measurements presented in this work is the Von Hamos one, in which the X-ray source and the position detector are placed on the axis of a cylindrically bent crystal (see Fig.
1); this geometrical scheme allows an improvement in the reflection efficiency due to the vertical focusing. As a consequence, for each X-ray energy the source-crystal (L1) and the source-detector (L2) distances are determined by the Bragg angle (θB) and the curvature radius of the crystal (ρc):

L1 = ρc / sin θB    (1)

L2 = −L1 cos φ = L1 cos θB    (2)

In Fig. 1, a schematic of the dispersive plane is shown, where the X-ray source is sketched in orange, the HAPG crystal in red and the position detector in blue and green for the standard and the "semi" Von Hamos configuration (see Section III A), respectively. In the figure, ρc is the crystal curvature radius, θB is the Bragg angle, φ = π − θB, θM is the position detector rotation angle with respect to the standard Von Hamos configuration, L1 is the source-crystal distance and L2 is half the resulting source-detector distance.

Spectrometer components

The setup used to perform the reported measurements is shown in Fig. 2. A single or multi element thin foil (target) is placed inside an aluminum box and activated by a XTF-5011 Tungsten anode X-ray tube, produced by OXFORD INSTRUMENTS, placed on top of the box; the center of the foil, placed on a 45° rotated support prism, represents the origin of the reference frame in which Z is the direction of the characteristic photons emitted by the target, forming with the X axis the Bragg reflection plane, while Y is the vertical direction, along which the primary photons generated by the tube are shot.
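A quick numerical check of this geometry (a sketch of ours, not code from the paper): with hc ≈ 12398.42 eV·Å and the graphite (002) spacing d = 3.354 Å (the value that reproduces the Bragg angles of Table I), the Bragg law nλ = 2d sin θB gives θB, and the distances follow as L1 = ρc/sin θB and L2 = L1 cos θB (equivalently −L1 cos φ with φ = π − θB), consistent with Table II.

```python
import math

HC = 12398.42    # hc in eV*Angstrom
D_002 = 3.354    # graphite (002) lattice spacing in Angstrom

def bragg_angle_deg(energy_ev, order=1):
    """Bragg angle from n*lambda = 2*d*sin(theta_B)."""
    wavelength = HC / energy_ev
    return math.degrees(math.asin(order * wavelength / (2.0 * D_002)))

def von_hamos_distances(theta_b_deg, rho_c_mm):
    """Source-crystal distance L1 and half source-detector distance L2."""
    theta = math.radians(theta_b_deg)
    l1 = rho_c_mm / math.sin(theta)   # L1 = rho_c / sin(theta_B)
    l2 = l1 * math.cos(theta)         # L2 = -L1*cos(phi), with phi = pi - theta_B
    return l1, l2

# Cu(K_alpha1), 8047.78 eV, on the 206.7 mm radius crystal (cf. Tables I-II)
theta_cu = bragg_angle_deg(8047.78)
l1_cu, l2_cu = von_hamos_distances(theta_cu, 206.7)
```

With these constants the computed angle is ≈ 13.28° and (L1, L2) ≈ (900, 876) mm, matching the Cu row of Table II to within about a millimetre.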
Two adjustable motorized slits (STANDA 10AOS10-1) with 1 µm resolution are placed after the 5.9 mm diameter circular exit window of the aluminum box in order to shape the outcoming X-ray beam. The various HAPG crystals used, having a thickness of 100 µm and a mosaicity of (0.1 ± 0.03)°, are deposited on Thorlabs N-BK7 30 × 32 mm² uncoated Plano-Concave Cylindrical lenses of different curvature radii, held by a motorized mirror mount (STANDA 8MUP21-2) with double axis 1 arcsec resolution and coupled to a 0.01 µm motorized vertical translation stage (STANDA 8MVT40-13), a 4.5 arcsec resolution rotation stage (STANDA 8MR191-28) and two motorized 0.156 µm linear translation stages (STANDA 8MT167-25LS). The position detector is a commercial MYTHEN2-1D 640 channel strip detector produced by DECTRIS (Zurich, Switzerland), having an active area of 32 × 8 mm² and whose strip width and thickness are, respectively, 50 µm and 420 µm; the MYTHEN2-1D detector is also coupled to a positioning motorized system identical to the one for the HAPG holder. Finally, a standard Peltier cell is kept on top of the strip detector in order to stabilize its temperature in the working range of 18-28 °C. The resulting 10-axis motorized positioning system is mounted on a set of Drylin rails and carriers to ensure better stability and alignment and, in addition, to easily adjust the source-crystal-detector positions for each energy to be measured.

B. Target illuminated region

The region of the target illuminated by the X-ray tube is constant and can be calculated starting from the initial anode position h = 95.3 mm and the angular conical aperture α = 11° of the tube (see Fig. 3). The intersection of the conical beam with the prism, rotated at 45°, turns out to be elliptical, with axes:

ρ = ρx = h tan α = 95.3 mm × tan(11°) = 18.524 mm    (3)

ρ' = ρ sin(π/2 + α) / sin(π/4 − α) = 32.52 mm    (4)

ρy = ρ' sin(π/4) = 22.99 mm    (5)

C.
Source size in dispersion plane

As pointed out in Section I, the actual limitation on the possible usage of crystal spectrometers for extended targets is represented by the requirement of a point-like source; however, using a pair of slits like the ones described in the previous section, it is possible to shape the beam of X-rays emitted by an extended and diffused target in such a way as to simulate a virtual point-like source. Referring to Fig. 4, this configuration is obtained by setting the position (z1 and z2) and the aperture (S1 and S2) of each slit in order to create a virtual source between the two slits (z_f, green solid lines in Fig. 4), an angular acceptance ∆θ′, and an effective source S′0 (green) which can be, in principle, as wide as necessary. The ∆θ′ angular acceptance can also be set to any value, provided it is large enough to ensure that all the θB corresponding to the lines to be measured are included; for example, if both Ti(Kα1) (4510.84 eV, θB = 24.19°) and Ti(Kα2) (4504.86 eV, θB = 24.22°) have to be measured, taking also into account the mosaicity (σω = ωFWHM/2.35), the condition is ∆θ′ + 6σω ≥ 0.03°, since for each photon direction there is a non-zero probability to find a properly oriented crystallite within 6σω. These X-ray beam shape characteristics overlap with a second component, depicted as a dotted green line in Fig. 4, due to the X-rays emitted in the inner part of the target (S0) with a smaller angular divergence ∆θ ≤ ∆θ′. In Fig. 4, z_h and S_h are the position and the diameter of the circular exit window of the aluminum box front panel, respectively. In practice, increasing the effective source size introduces a background, as shown in Fig. 5; for simplicity we describe this situation only for the dotted green lines scheme of Fig. 4, but it applies also in the solid line case.
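The two-slit construction just described can be sketched numerically (helper names are ours; the relations are our reading of the Fig. 4 geometry, formalized in eqs. (6)-(8) below): the edge rays of the desired effective source cross at a virtual point z_f between the slits, which fixes both apertures, and a separate check verifies that the chosen divergence, broadened by 6σω of mosaic spread, covers the Bragg-angle separation of the lines to be measured. Numerical settings are taken from the Cu(Kα1,2) entries of Tables I-II.

```python
import math

def slit_openings(s0, dtheta_deg, z1, z2):
    """Slit apertures S1, S2 selecting an effective source of width s0 (mm)
    with divergence dtheta_deg from an extended target (cf. eqs. 6-8)."""
    z_f = (s0 / 2.0) / math.tan(math.radians(dtheta_deg) / 2.0)  # edge-ray crossing point
    s1 = (z_f - z1) / z_f * s0
    s2 = (z2 - z_f) / z_f * s0
    return z_f, s1, s2

def acceptance_ok(dtheta_deg, theta_b1_deg, theta_b2_deg, mosaic_fwhm_deg=0.1):
    """True if the divergence plus 6 sigma of mosaic spread spans both lines."""
    sigma = mosaic_fwhm_deg / 2.35
    return dtheta_deg + 6.0 * sigma >= abs(theta_b2_deg - theta_b1_deg)

# Cu(K_alpha1,2) run: 1.2 mm effective source, 0.5 deg divergence,
# slits at z1 = 60 mm and z2 = 820 mm (Table II)
z_f, s1, s2 = slit_openings(1.2, 0.5, 60.0, 820.0)
covered = acceptance_ok(0.5, 13.28, 13.31)
```

The virtual crossing point lands at z_f ≈ 137.5 mm, between the two slits as required, and the 0.5° acceptance easily covers the 0.03° Kα1-Kα2 separation.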
The yellow lines in Fig. 5 represent the photons which, matching the Bragg condition (θB, λB), form the signal peak in the reflected Bragg spectrum (we call them nominal from now on). Since the photons are isotropically emitted from the whole target foil, some of them may have the correct energy and angle to be reflected but originate from a point of the target near the nominal one (on the yellow line). As long as this mislocation is below the limit given by the mosaic spread of the crystal, such photons are also reflected under the signal peak, worsening the spectral resolution (solid green lines in the figure); on the contrary, when this mislocation exceeds this limit (solid red line), these photons are reflected outside the signal peak. In the same way, photons not emitted parallel to the nominal ones may still impinge on the HAPG crystal with an angle below its mosaic spread (dashed green lines) and then be reflected under the signal peak, also affecting the spectral resolution. On the contrary, if the impinging angle is outside this limit, those photons are not reflected onto the position detector. As a consequence, for each energy it is possible to find the right slit configuration leading to the maximum source size while keeping the resolution below the desired limit.

FIG. 5. Signal and background scheme (not to scale); two slits S1 and S2 are used to shape the X-ray beam divergence (∆θ′) and effective source size (S′0). Depending on the HAPG mosaic spread, photons emitted parallel (solid lines) and not parallel (dashed lines) to the nominal one matching the Bragg condition (yellow) are shown; some of them are reflected under the signal peak (solid and dashed green), some are a source of background (solid red), some are not reflected (dashed red). See the text for more details.
For each chosen ∆θ′, S′0 pair, the corresponding values of the slit apertures S1 and S2 can be found; first, we define the position of the intersection point z_f:

z_f = (S′0/2) cot(∆θ′/2)    (6)

Then, the two slit apertures are defined by:

S1 = [(z_f − z1)/z_f] S′0    (7)

S2 = [(z2 − z_f)/z_f] S′0    (8)

Concerning the second component, the ∆θ and S0 parameters can be obtained from z1, S1 and z2, S2 as:

∆θ = tan⁻¹[(S2 − S1)/(z2 − z1)]    (9)

S0 = S2 − 2 z2 tan(∆θ/2)    (10)

In Fig. 6, the beam configuration in the vertical plane, orthogonal to the dispersive one, is shown. The vertical spread of the X-ray beam is fixed by the slit positions (z1, z2) and their frame size (A_s), together with the exit circular hole in the front panel of the setup box (z_h, A_h). The φ angle is then defined as:

tan φ = (A_s + A_h) / [2(z2 − z_h)]    (11)

We first define, as in the horizontal case, the position of the intersection point z_f:

z_f = z2 − A_s/(2 tan φ)    (12)

The vertical spreads of the beam on the target (A0) and on the HAPG crystal (A_c) are then:

A0 = 2 z_f tan φ    (13)

A_c = 2 (z_c − z_f) tan φ    (14)

where z_c is the crystal position (corresponding to L1 in Fig. 1). The final effective source area is then determined by S′0 × A0.

III. EXPERIMENTAL RESULTS

In this section we present the spectra obtained for different lines (see Table I); for each measurement, the corresponding geometrical parameters are listed in Table II, where θB_set is the central Bragg angle value used for the calculation. The crystal curvature radius is chosen as a compromise between energy resolution and signal rate, since a higher ρc leads to longer paths, meaning better resolution but higher X-ray absorption in air. The slit positions z1 and z2 are chosen so as to have a vertical dispersion at the HAPG position smaller than the crystal size (30 mm).

A. "semi"-Von Hamos configuration

One of the most critical parameters of an X-ray spectrometer is its dynamic range; in particular, the possibility to record multiple lines in a single spectrum is crucial if low rate physical processes have to be measured simultaneously, and it also allows online calibration. A possible method to increase the dynamic range is to rotate the position detector by an angle θM, as shown in Figs. 7 and 8, where the comparisons between two measurements in the standard (red) and "semi" (blue) Von Hamos configurations are reported. The spectra in the main frame are obtained from the MYTHEN2-1D position detector (50 µm per channel), while in the upper left inset the corresponding calibrated spectra are shown. From the latter, one can immediately see how the dynamic range (in eV) of the "semi" Von Hamos spectrum is wider than the standard one. In the upper right inset, a schematic of the two configurations is shown where the X-ray source, the HAPG crystal, the position detector and its illuminated region are colored in light blue, green, red and yellow, respectively.

FIG. 7. Comparison between the standard (red) and the "semi" (blue) Von Hamos configuration uncalibrated spectra of Cu(Kα1,2) lines. In the inset, a schematic of the two configurations is shown where the X-ray source, the HAPG crystal, the position detector and its illuminated region are colored in light blue, green, red and yellow, respectively.

FIG. 8. Comparison between the standard (red) and the "semi" (blue) Von Hamos configuration calibrated spectra of Cu(Kα1,2) lines. In the inset, a schematic of the two configurations is shown where the X-ray source, the HAPG crystal, the position detector and its illuminated region are colored in light blue, green, red and yellow, respectively.

In order to check how the peak resolution is influenced by this rotation, we report as an example in Fig.
9 the results of a set of measurements of the Cu(Kα1,2) lines, for different θM values ranging from 0° to −90° + θB, using a 206.7 mm radius HAPG with S′0 = 1.2 mm and ∆θ′ = 0.5°. The θM = 0 position corresponds to the "semi" Von Hamos configuration, in which the position detector is rotated in order to have the photons reflected at the nominal Bragg angle θB impinging orthogonally on the detector surface. The fitting function is a double Gaussian with common σ for the Cu lines and a polynomial for the background. In the top panel of Fig. 9 the dynamic range, defined as Emax − Emin (eV) of the calibrated spectrum, is plotted, while in the bottom one the resolutions (σ) of the Kα1,2 peaks are shown. From these results, one can appreciate how the dynamic range is strongly increased while the resolution remains constant within the error bars. This very important result triggered the decision to perform all the subsequent measurements in the "semi" Von Hamos configuration.

B. Measured spectra

In this section we present the results of the measurements listed in Table II. For each measurement, the spectra have been acquired with different S′0 and ∆θ′ settings but with a fixed integration time; in Figs. 10, 11, 12 and 13 we show only one fitted spectrum per measurement, corresponding to the best achieved precision on the line position. The individual peak fitting functions are Gaussians with a common σ parameter for Kα1,2. The Kα1, Kα2 and Kβ fitting functions correspond to the green, violet and light blue curves, respectively, and are, together with a polynomial background (blue) and the overall fit (red), superimposed on the calibrated spectrum. The values of the S′0, ∆θ′ pair, the corresponding S0, ∆θ pair and the curvature radius ρc of the used crystal are also reported in the figures.
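The in-spectrum calibration mentioned above can be illustrated with a two-point linear calibration; the Cu energies are from Table I, while the channel centroids here are purely hypothetical placeholders (the paper does not quote them).

```python
# Two-point linear calibration from two known lines in the same spectrum.
# Energies are the tabulated Cu K_alpha values (Table I); the channel
# centroids c1, c2 are hypothetical fit results, for illustration only.
e1, e2 = 8047.78, 8027.83     # eV: Cu K_alpha1, K_alpha2
c1, c2 = 410.0, 455.0         # MYTHEN2-1D strip channels (assumed)

gain = (e2 - e1) / (c2 - c1)  # eV per channel (negative here: energy falls with channel)
offset = e1 - gain * c1

def channel_to_energy(channel):
    """Linear calibration E(channel) = gain * channel + offset."""
    return gain * channel + offset
```

With two lines this determines the gain and offset exactly; additional lines in the same spectrum would overconstrain the fit and test the linearity of the dispersion.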
The Fe(Kα1,2) measurement has been performed using a smaller curvature radius crystal with respect to the Cu(Kα1,2) one, since the air absorption at 6 keV is higher than at 8 keV, so the photon paths (source-crystal and crystal-detector) had to be kept as short as possible. In both these measurements the angular acceptance could be set to small values because the two lines of interest are only a few hundredths of a degree apart; nevertheless, spectra with resolutions of σ ≈ 4 eV have been obtained for both Fe and Cu with effective source sizes of 500 µm and 1.2 mm, respectively. These values are already one order of magnitude higher than the ones presented in Section I. The effective source size is even wider in the case of the Mo/Nb measurement at higher energies, where resolutions of σ ≈ 40 eV have been obtained using a ρc = 77.5 mm radius HAPG crystal, to avoid the almost 2 m path lengths that would result from bigger ρc values. It has to be mentioned that this worsening in resolution could be avoided by exploiting the n=2 reflection order; this would correspond to the measurement of a line around 9 keV for n=1, leading to a resolution similar to the one obtained for the Cu target. As a drawback, this would require a longer exposure time, since the n=2 peak reflectivity is an order of magnitude smaller than the n=1 one [10]. The possibility to exploit a wide dynamic range is highlighted by the Cu/Zn/Ni spectrum, in which the combined effect of the ∆θ′ angular acceptance and the mosaicity allows lines which are almost 600 eV apart to be recorded in a single measurement. It has to be noticed that, since the lowest and highest parts of the spectrum exploit only the tail of the mosaicity curve, the Cu(Kα2) and the Zn(Kα1) peaks are a bit suppressed; this is the reason why they do not show the standard 0.5 ratio between Kα2 and Kα1.

IV.
CONCLUSIONS

In this paper we presented the results obtained with the VOXES compact Von Hamos spectrometer based on mosaic crystals of pyrolytic graphite (HAPG); in particular, we demonstrated how the proposed spectrometer allows energy resolutions of a few eV to be obtained when used with extended and diffused isotropic sources. The possibility to use it in a "semi" Von Hamos configuration, leading to a wider energy range with no loss in resolution, was also confirmed. Another very important feature of this spectrometer (and of Bragg spectrometers in general) is the possibility to have almost background-free spectra because of the non-linear total path of the photons; indeed, placing a soft X-ray shielding of a few mm of plastic material along the L2 path will prevent background scattered photons coming from the activated target from arriving directly on the position detector. A confirmation of this feature can be deduced from the errors obtained on the peak energy positions shown in our spectra, which follow the pure statistical behaviour of a Gaussian, background-free signal, having δE ≈ σ/√N. This, together with a proper shielding around the position detector itself, to be adapted to the different hadronic and electromagnetic machine backgrounds, makes this spectroscopy technique a serious candidate for high precision X-ray measurements of diffused and extended sources in accelerator environments.

ACKNOWLEDGMENTS

This work is supported by the 5th National Scientific Committee of INFN in the framework of the Young Researcher Grant 2015, n. 17367/2015. We thank the LNF and SMI staff, in particular the LNF SPCM service and Doris Pristauz-Telsnigg, for the support in the preparation of the setup.

FIG. 1. Von Hamos geometry: schematic of the dispersive plane (not to scale).

FIG. 2. Spectrometer setup picture.

FIG. 3. Schematics of the illuminated region of the target (not to scale). See text for more details.

FIG. 4.
Beam geometry on the dispersive plane (not to scale): the position (z1 and z2) and the aperture (S1 and S2) of two slits are used to create a virtual source between the two slits (z_f, green solid lines), an effective source S′0 (green) and an angular acceptance ∆θ′; the HAPG crystal is pictured in red, the two slits are shown in black, while z_h and S_h are the position and the diameter of the circular exit window of the aluminum box front panel (light blue), respectively.

FIG. 6. Beam geometry on the focusing plane. The vertical spread of the X-ray beam is fixed by the slit positions (z1, z2) and their frame size (A_s), together with the exit circular hole in the front panel of the setup box (z_h, A_h). See text for more details.

FIG. 9. Results of the Cu(Kα1,2) lines measured with a 206.7 mm radius HAPG for different rotation angles of the position detector θM, where θM = 0 refers to the "semi" Von Hamos configuration; in the top panel the dynamic range (eV), defined as Emax − Emin of the calibrated spectrum, is plotted, while in the bottom one the resolutions (σ) of the Kα1,2 peaks are shown. The fitting function is a double Gaussian with common σ for the Cu lines and a polynomial for the background. In this measurement, S′0 = 1.2 mm and ∆θ′ = 0.5°.

FIG. 10. Fitted spectrum of Fe(Kα1,2) lines: Kα1, Kα2, polynomial background and total fitting function correspond to the green, violet, blue and red curves, respectively.

FIG. 11. Fitted spectrum of Cu(Kα1,2) lines: Kα1, Kα2, polynomial background and total fitting function correspond to the green, violet, blue and red curves, respectively.

FIG. 12. Fitted spectrum of Mo(Kα1,2)+Nb(Kβ) lines: the Kα1, Kα2 and Kβ fitting functions correspond to the green, violet and light blue curves, respectively, and are, together with a polynomial background (blue) and the overall fit (red), superimposed on the calibrated spectrum.

FIG. 13.
Fitted spectrum of Cu(Kα1,2)+Ni(Kβ)+Zn(Kα1,2) lines: the Kα1, Kα2 and Kβ fitting functions correspond to the green, violet and light blue curves, respectively, and are, together with the overall fit (red), superimposed on the calibrated spectrum.

TABLE I. List of the X-ray lines used in this work and the corresponding Bragg angles θB.

Line       E (eV)      θB (°)
Fe(Kα1)     6403.84    16.77
Fe(Kα2)     6390.84    16.81
Cu(Kα1)     8047.78    13.28
Cu(Kα2)     8027.83    13.31
Ni(Kβ)      8264.66    12.92
Zn(Kα1)     8638.86    12.35
Zn(Kα2)     8615.78    12.39
Mo(Kα1)    17479.34     6.07
Mo(Kα2)    17374.30     6.11
Nb(Kβ)     18622.50     5.70

TABLE II. List of the measurements presented in this work and their main beam parameters.

Line                          θB_set (°)  ρc (mm)  L1 (mm)  L2 (mm)  z1 (mm)  z2 (mm)  A0 (mm)
Fe(Kα1,2)                       16.77      103.4   358.46   343.15     76       257     14.14
Cu(Kα1,2)                       13.28      206.7   900.54   876.33     60       820      8.09
Cu(Kα1,2)+Ni(Kβ)+Zn(Kα1,2)      12.92      103.4   463.07   451.28     75       352     11.52
Mo(Kα1,2)+Nb(Kβ)                 6.07       77.5   733.35   729.18     98.7     649.7   13.97

[1] M. Bazzi et al. (SIDDHARTA), Phys. Lett. B704, 113 (2011), arXiv:1105.3090 [nucl-ex].
[2] A. Gallo et al., Conf. Proc. C060626, 604 (2006) (Proceedings of the 10th European Particle Accelerator Conference, EPAC 2006, Edinburgh, UK, June 26-30, 2006).
[3] W. B. Doriese et al., Review of Scientific Instruments 88, 053108 (2017).
[4] H. Legall et al., Proceedings of FEL 2006, 798 (2006).
[5] R. Barnsley et al., Rev. Sci. Instrum. 74, 2388 (2003).
[6] D. F. Anagnostopoulos et al., Hyperfine Interact. 138, 131 (2001) (Proceedings of the International RIKEN Conference on Muon Catalyzed Fusion and Related Exotic Atoms, Shimoda, Japan, April 22-26, 2001).
[7] M. Trassinelli et al., Phys. Lett. B759, 583 (2016), arXiv:1605.03300 [physics.atom-ph].
[8] G. Beer et al., Phys. Lett. B535, 52 (2002).
[9] M. Sanchez del Rio et al., Proceedings of SPIE 3448, 246 (1998).
[10] M. Gerlach et al., J. Appl. Cryst. 48 (2015).
[11] L. V. Von Hamos, Ann. Physik 409, 716 (1933).
[12] A. P. Shevelko et al., Rev. Sci. Instr. 73, 3458 (2002).
[13] U. Zastrau et al., JINST 8, P10006 (2013).
[14] L. Anklamm et al., Rev. Sci. Instr. 85, 053110 (2014).
When Does Division of Labor Lead to Increased System Output?

Emmanuel Tannenbaum
Department of Chemistry, Ben-Gurion University of the Negev, 84105 Be'er-Sheva, Israel

This paper develops a set of simplified dynamical models with which to explore the conditions under which division of labor leads to optimized system output, as measured by the rate of production of a given product. We consider two models: In the first model, we consider the flow of some resource into a compartment, and the conversion of this resource into some product. In the second model, we consider the resource-limited growth of autoreplicating systems. In this case, we divide the replication and metabolic tasks among different agents. The general features that emerge from our models is that division of labor is favored when the resource to agent ratio is at intermediate values, and when the time cost associated with transporting intermediate products is small compared to characteristic process times. We discuss the results of this paper in the context of simulations with digital life. We also argue that division of labor in the context of our replication model suggests an evolutionary basis for the emergence of the stem-cell-based tissue architecture in complex organisms.

DOI: 10.1016/j.jtbi.2007.03.020
arXiv: q-bio/0605047 (https://arxiv.org/pdf/q-bio/0605047v1.pdf)
(Dated: 29 May 2006)

Keywords: differentiation, division of labor, replication, metabolism, stem cells, tissue architecture, agent-based models

I. INTRODUCTION

Division of labor is a ubiquitous phenomenon in biology. In sufficiently complex multicellular organisms, various tasks necessary for organismal survival (metabolism, nutrient transport, motion, reproduction, information processing, etc.) are performed by distinct parts of the organism [1,2]. Division of labor is even possible in clonal populations of free-living single-celled organisms [3]. At longer length scales, it is apparent that division of labor is a strong characteristic of community behavior in various animals [4,5].
Human-built modern economies exhibit considerable division of labor (indeed, much research into this phenomenon has been done by economists) [6,7,8,9,10,11,12,13]. Selective pressure for the division of labor in a population of agents (cells, organisms, humans) arises because specialization allows a given agent to optimize its performance of a relatively limited set of tasks. The total system production of a population of differentiated agents can therefore be significantly greater than that of a comparable population of undifferentiated agents. The question that arises, then, is why is division of labor not always observed? For example, while complex multicellular organisms are certainly ubiquitous, approximately 80% of the biomass of the planet is in the form of bacteria. While capable of exhibiting cooperative behavior, bacteria are, for the most part, free-living single-celled organisms. Clearly then, there are regimes where differentiation is not desirable. As a general rule, the more complex the organism, the greater the selective pressure for differentiation of system tasks. This rule is admittedly somewhat circular, since a more complex organism will by definition exhibit more specialization of the component agents. So, to be more precise, the greater the number of agents comprising a system, the greater the selective pressure for differentiation (even this formulation has some ambiguity, because we can arbitrarily define any group of agents to comprise a system, no matter how weak the inter-agent interactions. Nevertheless, despite this ambiguity, we will proceed with this initial "working" rule). The origin of this rule comes from the observation that there is a cost associated with differentiation, namely a time (and energy, though this will be ignored in this paper) cost associated with transporting intermediate products from one part of the system to another.

* Electronic address: [email protected]
As system size grows, presumably the density of agents grows (since the number of agents grows, and since we are grouping all the agents into one system, the inter-agent interactions are sufficiently strong, compared to some reference interaction, to warrant this grouping. Note that increasing the agent density is a simple way to do this, though highly interconnected systems may interact fairly strongly over relatively long distances. The internet is a good example of this). As the density of agents grows, the characteristic time associated with transporting intermediates from one part of the system to another decreases, and so the cost of differentiation decreases (in fairness, the idea of transport costs placing a barrier to differentiation is not originally the author's. In the context of firms, this idea has been presented in the economics literature [10]). In this paper, we develop two sets of models that capture the competition between the benefits of differentiation and the time cost associated with differentiation. In the first model, we consider the flow of some resource into a compartment, and its conversion into some final product. In the undifferentiated case, we assume that there is a single agent capable of converting the resource into final product. In the differentiated case, we assume that the conversion of resource is accomplished in a two-step process, each step of which is carried out by distinct agents specialized for the separate tasks. In the second model, we consider the flow of resource into a region containing replicating agents. We assume that the agents increase their volume so as to maintain a constant, pre-specified population density. In the undifferentiated case, we assume that a given agent can absorb the resource, and process it to produce a new agent. In the differentiated case, we assume a division of labor between replication and metabolism steps.
That is, we assume that a fraction of the agents are specialized so that they cannot replicate, but can only process the resource into an intermediate form. This metabolized resource is then processed by the replicators, which produce new agents. These daughter agents then undergo a differentiation step, where they can either become replicators themselves, or metabolizers. In both the compartment and the replicator models, the general features that emerge are that differentiation is favored when population density is at intermediate levels with respect to resource numbers (when the population density is low, the undifferentiated pathways are favored, while when resources are highly limited, the difference between the undifferentiated and differentiated pathways disappears). In the context of the replication-metabolism model, we argue that this phenomenon suggests an evolutionary basis for the stem-cell-based tissue architecture in complex vertebrate organisms. This paper is organized as follows: In Section II, we develop and discuss the compartment model, involving the conversion of some resource into a final product. In Section III, we develop and discuss the replication-metabolism model. In Section IV, we conclude with a summary of our results and plans for future research.

II. COMPARTMENT MODEL

A. Definition of the model

The compartment model is defined as follows: Some resource, denoted R, flows into a compartment of volume V at a rate f_R. In the undifferentiated case, a single agent, denoted E, processes the resource to produce a final product, denoted P (the term E comes from chemistry, since the chemical analogue of an agent is an enzyme catalyst). In the differentiated case, the processing of R is accomplished by two separate agents, E_1 and E_2. The agent E_1 first converts the resource into an intermediate product R*, and then the agent E_2 converts R* into P.
It should be apparent that separating the tasks associated with converting R to P among two different agents can only increase the total production rate of P if E_1 and E_2 can each perform their individual tasks better than E. Therefore, an implicit assumption here is that, when an agent specializes, its "native" ability to perform a given task can be made better than when an agent is unspecialized. For a simple reason why this is true, let us imagine that E, E_1, E_2 are enzymes, i.e. protein catalysts, whose function is pre-determined by some amino acid sequence of length L. If the alphabet is of size S, then there are S^L distinct sequences that can generate E, E_1, E_2. Assuming that E_1 and E_2 are optimized for their particular functions, we note that, in the absence of any additional information, the probability that E_1 and E_2 are the same is 1/S^L → 0 as L → ∞. Indeed, the average Hamming distance (number of sites where two sequences differ) between any two sequences in the sequence space is given by L(1 − 1/S) → ∞ as L → ∞. Therefore, it is highly likely that E is not optimized for any of the tasks associated with converting R to P, but performs each task with some intermediate efficiency.

B. Undifferentiated model

In order to describe the processes governing the conversion of R to P, we will adopt the language and notation of chemical reaction kinetics. This formalism is very convenient, and is easily translatable into a system of ordinary differential equations. For the undifferentiated model, we have,

E + R → E−R (second-order rate constant k_1)
E−R → E + P (first-order rate constant k_2)
R → decay products (first-order rate constant k_D)    (1)

The first reaction refers to agent E grabbing the resource R (in chemistry, this is referred to as the binding step). At this point, the agent is denoted E−R, to indicate that it is bound to a resource particle.
In the second reaction, the agent processes the resource to form the product P, which it then releases. The last reaction indicates that the resource R has a finite lifetime inside the compartment, and decays with some first-order rate constant k_D. This assumption ensures that the compartment cannot be filled with resource without limit. The finite lifetime can be due to back-diffusion of resource outside the compartment, or simply that the resource does not last forever (some analogies include waiting time of customers at a restaurant before leaving without being served, the characteristic time for a food product to spoil, or the diffusion of solute out of a cell). If n_R, n_E, n_ER denote the number of particles of resource R, unbound agents E, and agent-resource complexes E−R, respectively, then we have,

dn_R/dt = f_R − (k_1/V) n_E n_R − k_D n_R
dn_E/dt = −(k_1/V) n_E n_R + k_2 n_ER
dn_ER/dt = (k_1/V) n_E n_R − k_2 n_ER    (2)

If we define n = n_E + n_ER, then note that dn/dt = dn_E/dt + dn_ER/dt = 0, which implies that n is constant. After some manipulation, we obtain that the steady-state solution of this model is given by,

n_R,ss = k_2 n_ER,ss / [(k_1/V)(n − n_ER,ss)]    (3)

where n_ER,ss satisfies,

n_ER,ss^2 − (n + f_R/k_2 + k_D/(k_1/V)) n_ER,ss + (f_R/k_2) n = 0    (4)

so that,

n_ER,ss = (1/2)[(n + f_R/k_2 + k_D/(k_1/V)) − sqrt((n + f_R/k_2 + k_D/(k_1/V))^2 − 4 (f_R/k_2) n)]    (5)

We take the "−" root because it guarantees that n_ER,ss ≤ n for all positive values of f_R. For small n, a Taylor expansion of the quadratic to first order gives,

n_ER,ss = [(f_R/k_2)/(f_R/k_2 + k_D/(k_1/V))] n    (small n)    (6)

while for large n, Taylor expansion to first order with respect to the remaining terms gives,

n_ER,ss = f_R/k_2    (large n)    (7)

As a rough estimate of where the transition from the small n to the large n behavior occurs, we can equate the two expressions and solve for n. The result is,

n_trans,1 = f_R/k_2 + k_D/(k_1/V)    (8)
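The steady-state expression above is easy to check numerically. The following sketch (with assumed, purely illustrative parameter values; the helper name n_ER_ss is not from the paper) evaluates the "−" root of the quadratic, Eq. (5), and verifies the small-n limit (Eq. 6) and the large-n limit (Eq. 7):

```python
import math

# Assumed illustrative parameters (f_R, k_1/V, k_2, k_D)
f_R, k1_over_V, k2, kD = 10.0, 1.0, 2.0, 0.5

def n_ER_ss(n):
    """'-' root of the steady-state quadratic, Eq. (5)."""
    b = n + f_R / k2 + kD / k1_over_V
    return 0.5 * (b - math.sqrt(b * b - 4.0 * (f_R / k2) * n))

# Small-n limit, Eq. (6): n_ER,ss ~ [(f_R/k2)/(f_R/k2 + kD/(k1/V))] * n
small = 1e-4
pred_small = (f_R / k2) / (f_R / k2 + kD / k1_over_V) * small
assert abs(n_ER_ss(small) - pred_small) / pred_small < 1e-3

# Large-n limit, Eq. (7): n_ER,ss -> f_R/k2 (resource-limited regime)
large = 1e6
assert abs(n_ER_ss(large) - f_R / k2) < 1e-3
```

The two limits meet at n_trans,1 = f_R/k_2 + k_D/(k_1/V) = 5.5 for these values, which is where the production rate switches from agent-limited to resource-limited.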
C. Differentiated model

The conversion of resource R into P via the differentiated pathway occurs via the following sets of chemical reactions:

E_1 + R → E_1−R (second-order rate constant k'_1)
E_1−R → E_1 + R* (first-order rate constant k'_2)
E_2 + R* → E_2−R* (second-order rate constant k'_3)
E_2−R* → E_2 + P (first-order rate constant k'_4)
R → decay products (first-order rate constant k_D)
R* → decay products (first-order rate constant k*_D)    (9)

Note that the intermediate product, R*, is also capable of decaying. As we will see shortly, it is the finite lifetime of the intermediate products that causes the undifferentiated pathway to outperform the differentiated pathway at low agent numbers, and allows for a transition at higher agent numbers, whereby the differentiated pathway overtakes the undifferentiated pathway. In the context of our model, the direct interpretation of the decay term for R* is that the intermediate product has a finite lifetime, due either to diffusion out of the compartment or due to decay into other compounds. More generally, though, this term may refer to an aging cost, and therefore this model may be useful in understanding aspects of networked systems, whose function does not necessarily depend on material transfers, but on information transfers. Information is transmitted between various parts of a system in order to effect system behavior, in response to the state of the system at the time of information transfer. Therefore, there is a time limit during which the information is relevant (because of the dynamic nature of the system and environment), which may be roughly modelled by assuming that the information is "lost" via a first-order decay.
Defining particle and agent numbers analogously to the undifferentiated case, we obtain the system of equations,

dn_R/dt = f_R − (k'_1/V) n_E1 n_R − k_D n_R
dn_R*/dt = k'_2 n_E1R − (k'_3/V) n_E2 n_R* − k*_D n_R*
dn_E1/dt = −(k'_1/V) n_E1 n_R + k'_2 n_E1R
dn_E1R/dt = (k'_1/V) n_E1 n_R − k'_2 n_E1R
dn_E2/dt = −(k'_3/V) n_E2 n_R* + k'_4 n_E2R*
dn_E2R*/dt = (k'_3/V) n_E2 n_R* − k'_4 n_E2R*    (10)

If we define n_1 = n_E1 + n_E1R and n_2 = n_E2 + n_E2R*, then note that dn_1/dt = dn_2/dt = 0, so that n_1 and n_2 are also constant. Proceeding to solve for the steady-state of this model, we obtain,

n_E1R,ss = (1/2)[(n_1 + f_R/k'_2 + k_D/(k'_1/V)) − sqrt((n_1 + f_R/k'_2 + k_D/(k'_1/V))^2 − 4 (f_R/k'_2) n_1)]
n_E2R*,ss = (1/2)[(n_2 + k'_2 n_E1R/k'_4 + k*_D/(k'_3/V)) − sqrt((n_2 + k'_2 n_E1R/k'_4 + k*_D/(k'_3/V))^2 − 4 (k'_2 n_E1R/k'_4) n_2)]    (11)

Now, when n_1 and n_2 are small, it may be shown that to lowest non-vanishing order, the steady state population of E_2−R* is given by,

n_E2R*,ss = [(k'_3/V)/k*_D] (k'_2/k'_4) [(f_R/k'_2)/(f_R/k'_2 + k_D/(k'_1/V))] α(1 − α) n^2    (small n)    (12)

where n ≡ n_1 + n_2, α ≡ n_1/n, and 1 − α = n_2/n. For large values of n, we obtain that,

n_E2R*,ss = f_R/k'_4    (large n)    (13)

As an estimate of where the transition between the small n and large n behavior occurs, we can equate the two expressions and solve for n. We obtain,

n_trans,2 = sqrt[ (1/(α(1 − α))) (k*_D/(k'_3/V)) (f_R/k'_2 + k_D/(k'_1/V)) ]    (14)
D. Comparison of undifferentiated and differentiated models

The small n expression for the rate of production of final product in the undifferentiated case is,

k_2 n_ER = k_2 [(f_R/k_2)/(f_R/k_2 + k_D/(k_1/V))] n    (15)

while the small n expression for the rate of production of final product in the differentiated case is,

k'_4 n_E2R* = k'_2 [(k'_3/V)/k*_D] [(f_R/k'_2)/(f_R/k'_2 + k_D/(k'_1/V))] α(1 − α) n^2    (16)

Note then that for sufficiently small n, the undifferentiated production pathway produces final product more quickly than the differentiated pathway. However, because the rate of production of final product for the undifferentiated pathway initially increases linearly with n, while the rate of production of final product for the differentiated pathway increases quadratically, it is possible that the differentiated pathway eventually overtakes the undifferentiated pathway. The critical n where this is estimated to occur, denoted n_equal, may be estimated by equating the two expressions. The final result is,

n_equal = (1/(α(1 − α))) (k*_D/(k'_3/V)) (f_R/k'_2 + k_D/(k'_1/V))/(f_R/k_2 + k_D/(k_1/V))    (17)

Now, for n_equal to be meaningful, it must occur in a regime where the rate expressions used to obtain it are valid. Therefore, we want n_equal < n_trans,1, n_trans,2. However, we can make an even stronger statement. If n_equal does indeed refer to a point beyond which the differentiated pathway overtakes the undifferentiated pathway, then we should have n_equal < n_trans,2 < n_trans,1. It is possible to show that,

n_trans,2/n_equal = n_trans,1/n_trans,2 = sqrt[ α(1 − α) ((k'_3/V)/k*_D) / (f_R/k'_2 + k_D/(k'_1/V)) ] (f_R/k_2 + k_D/(k_1/V))    (18)

and so, our inequality is equivalent to the condition that,

k*_D/(k'_3/V) < α(1 − α) (f_R/k_2 + k_D/(k_1/V))^2 / (f_R/k'_2 + k_D/(k'_1/V))    (19)

which implies that,

n_equal < f_R/k_2 + k_D/(k_1/V)    (20)

In Figure 4, the parameters are chosen so that the differentiated pathway eventually overtakes the undifferentiated pathway, while in Figure 5, the parameters are chosen so that this is not the case. Note, however, that even in Figure 4, although the differentiated pathway overtakes the undifferentiated pathway, once n becomes very large, the undifferentiated pathway again overtakes the differentiated pathway. This behavior can be explained as follows: When n is very small, the rate at which the intermediate product is "grabbed" by E_2 is small compared to the decay rate, so that much intermediate product is lost. In this regime, the undifferentiated pathway is optimal, for, although E may be less efficient than either E_1 or E_2 at their respective tasks, the overall production rate of P is not reduced by the loss of intermediates. Now, as n increases, the rate of loss of intermediates decreases to an extent such that the increased efficiency associated with differentiation causes the differentiated pathway to overtake the undifferentiated pathway. However, once n increases even further, then there is sufficient quantity of agents in both the undifferentiated and differentiated pathways to process all of the incoming resource, with minimal loss due to decay. At this point, because the production rate of P has become resource limited, the efficiency advantage of the differentiated pathway is considerably reduced, such that the slight cost associated with intermediate decay becomes sufficient to cause the undifferentiated pathway to overtake the differentiated pathway. However, this effect is a small one, since, once n is very large, both the undifferentiated and differentiated pathways perform similarly.

E. When can a differentiated pathway outperform an undifferentiated pathway?

The analysis of the previous section deserves further scrutiny, in order to better understand the circumstances under which differentiation can lead to improved system performance.
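The crossover behavior of the two pathways can be sketched numerically from the steady-state formulas. In the sketch below the helper names (complex_ss, undiff_rate, diff_rate) and all parameter values are assumed for illustration only; note that n_equal from Eq. (17) is itself only a small-n estimate, so parameters here are chosen so that an actual crossover occurs (fast grabbing, slow intermediate decay, and 1/k'_2 + 1/k'_4 < 1/k_2):

```python
import math

def complex_ss(n_tot, influx, k_cat, k_decay, k_bind):
    """'-' root of the steady-state quadratic (the form of Eqs. 5 and 11).
    k_bind is the second-order binding constant already divided by V."""
    b = n_tot + influx / k_cat + k_decay / k_bind
    return 0.5 * (b - math.sqrt(b * b - 4.0 * (influx / k_cat) * n_tot))

def undiff_rate(n, f_R, k1, k2, kD):
    """Production rate k2 * n_ER,ss of the undifferentiated pathway (Eq. 5)."""
    return k2 * complex_ss(n, f_R, k2, kD, k1)

def diff_rate(n, alpha, f_R, k1p, k2p, k3p, k4p, kD, kDs):
    """Production rate k4' * n_E2R*,ss of the differentiated pathway (Eq. 11):
    stage 1 (R -> R*) feeds stage 2 (R* -> P) with influx k2' * n_E1R,ss."""
    n_E1R = complex_ss(alpha * n, f_R, k2p, kD, k1p)
    n_E2Rs = complex_ss((1.0 - alpha) * n, k2p * n_E1R, k4p, kDs, k3p)
    return k4p * n_E2Rs

# Assumed illustrative parameters: 1/8 + 1/8 < 1/2, large k1', k3', small k*_D
f_R, k1, k2, kD = 10.0, 1.0, 2.0, 0.5
k1p, k2p, k3p, k4p, kDs = 10.0, 8.0, 50.0, 8.0, 0.05
alpha = 0.5

# Small n: undifferentiated wins (linear beats quadratic near the origin) ...
assert undiff_rate(1e-4, f_R, k1, k2, kD) > diff_rate(1e-4, alpha, f_R, k1p, k2p, k3p, k4p, kD, kDs)
# ... larger n: the differentiated pathway has overtaken it.
assert diff_rate(3.0, alpha, f_R, k1p, k2p, k3p, k4p, kD, kDs) > undiff_rate(3.0, f_R, k1, k2, kD)
```

Scanning n over a wider range with these functions also reproduces the qualitative picture described above: both rates saturate near f_R once n greatly exceeds n_trans,1, at which point the two pathways perform nearly identically.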
At low agent numbers, the decay of the product intermediates leads to a quadratic increase in system output, and so the undifferentiated pathway outperforms the differentiated pathway. At some point, however, the number of agents is sufficiently large that the decay of both resource and intermediates is minimal, so that it is possible for the differentiated pathway to overtake the undifferentiated pathway. However, this is only possible if the differentiated pathway is more efficient than the undifferentiated pathway. To quantify this notion, assume that n is at some intermediate value, such that k_D and k*_D may be effectively taken to be 0. In this regime, it is possible to show that,

k'_4 n_E2R*,ss = min{k'_2 α n, k'_4 (1 − α) n}    (21)

Essentially, if k'_2 α n > k'_4 (1 − α) n, then the first set of agents are capable of producing intermediate at a rate greater than the second set of agents are capable of processing it, so that the second reaction step is rate limiting. If k'_2 α n < k'_4 (1 − α) n, then the first reaction step is rate limiting. Note then that if one of the reactions is rate limiting, we can adjust the agent fractions to increase the rate of the rate limiting reaction, and thereby increase the overall production rate of P. Therefore, the maximal production rate of P is achieved when k'_2 α n = k'_4 (1 − α) n ⇒ α_optimal = k'_4/(k'_2 + k'_4), so that the maximal production rate of P is given by,

(k'_4 n_E2R*,ss)_max = [k'_2 k'_4/(k'_2 + k'_4)] n    (22)

For the undifferentiated case, the analogous expression is k_2 n, and so we expect that the differentiated pathway can only overtake the undifferentiated pathway when,

k'_2 k'_4/(k'_2 + k'_4) > k_2  ⇒  1/k'_2 + 1/k'_4 < 1/k_2    (23)

Intuitively, this condition makes sense, since 1/k_2 is the characteristic time it takes agent E to convert R to P, while 1/k'_2 and 1/k'_4 are the characteristic times for agents E_1 and E_2 to perform their respective tasks.
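The balancing argument behind Eqs. (21)-(22) can be checked directly; the function names below are assumed, and the rate constants are illustrative:

```python
def diff_rate_no_decay(alpha, n, k2p, k4p):
    """Intermediate-n regime (k_D, k*_D ~ 0): differentiated production rate,
    Eq. (21): the slower of the two stages limits the output."""
    return min(k2p * alpha * n, k4p * (1.0 - alpha) * n)

def optimal_alpha(k2p, k4p):
    """Agent split that balances the two stages: alpha = k4'/(k2' + k4')."""
    return k4p / (k2p + k4p)

k2p, k4p, n = 3.0, 6.0, 1.0
a_opt = optimal_alpha(k2p, k4p)  # 6/9 = 2/3 of agents are E_1
best = diff_rate_no_decay(a_opt, n, k2p, k4p)

# Eq. (22): maximal rate is [k2'*k4'/(k2'+k4')] * n = 2.0 here
assert abs(best - k2p * k4p / (k2p + k4p) * n) < 1e-12
# Any other split of agents does no better:
assert all(diff_rate_no_decay(a / 100, n, k2p, k4p) <= best + 1e-12 for a in range(101))
```

With k'_2 = 3 and k'_4 = 6, the effective rate constant 1/(1/3 + 1/6) = 2 also illustrates condition (23): the differentiated pathway can beat an undifferentiated agent only if k_2 < 2.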
Therefore, differentiation can only overtake nondifferentiation if the characteristic time for the completion of a set of tasks is shorter for the differentiated pathway than it is for the undifferentiated pathway. If 1/k'_2 + 1/k'_4 > 1/k_2, it is in principle possible for the differentiated pathway to overtake the undifferentiated pathway if k'_1 is sufficiently greater than k_1, and if k'_3 is sufficiently large compared to k*_D. Basically, the differentiated agents are not more efficient at actually processing the resource, but they are more efficient at grabbing it, which can give the differentiated pathway an advantage. However, in contrast to the case where 1/k'_2 + 1/k'_4 < 1/k_2, this advantage is only temporary, because once the agent number becomes sufficiently large, the characteristic time to grab resource becomes very small. As a final note, because the condition for differentiation to outperform nondifferentiation at larger agent numbers is 1/k'_2 + 1/k'_4 < 1/k_2, while the agent number n_equal where differentiation overtakes nondifferentiation does not depend on k'_4, it should be apparent that our criterion for n_equal could be inaccurate in actually predicting the location of the cross-over. This is because n_equal is based on the small n region, where the rate of production of P for the differentiated pathway increases quadratically. This is the regime where the production rate of P is limited by intermediate resource decay. However, as 1/k'_2 + 1/k'_4 increases, we expect n_equal to become a better predictor of the cross-over point, since the decay of the intermediate resource becomes a comparatively greater factor in dictating the performance of the differentiated pathway. In any event, though, the expression for n_equal and the condition for the existence of a cross-over are useful, for they indicate that the larger the value of f_R, the larger the value of k*_D that is possible for a cross-over to still occur.
In particular, it suggests that, as long as 1/k'_2 + 1/k'_4 < 1/k_2, then by making f_R sufficiently large for a given k*_D, we will eventually obtain that the differentiated pathway will outperform the undifferentiated pathway at sufficiently high agent numbers. This is indeed what is observed numerically.

III. REPLICATION-METABOLISM MODEL

In this section, we turn our attention to the replication-metabolism model, where a population of agents processes an external resource for the purposes of producing more agents.

A. Definition of the model

We consider a population of replicating agents, relying on the supply of some resource, denoted R. We assume that the resource is supplied to the population at a rate of f_R per unit volume, and that, as the population grows, the volume expands in such a way as to maintain a constant population density ρ. In the undifferentiated model, a single agent, denoted E, processes the resource R and replicates. In the differentiated model, an agent E_1 "metabolizes" the resource to some intermediate R*, and then another agent, denoted E_2, processes the intermediate and reproduces. However, the E_2 agents are responsible for supplying both metabolizers and replicators. Therefore, E_2 produces a "blank" agent, denoted E, which then specializes and becomes either E_1 or E_2.

B. Undifferentiated model

The reactions defining the undifferentiated model are,

E + R → E−R (second-order rate constant k_1)
E−R → E + E (first-order rate constant k_2)
R → decay products (first-order rate constant k_D)    (24)

In terms of population numbers, the dynamical equations for n_E, n_ER and n_R are,

dn_E/dt = −(k_1/V) n_E n_R + 2 k_2 n_ER
dn_ER/dt = (k_1/V) n_E n_R − k_2 n_ER
dn_R/dt = f_R V − (k_1/V) n_E n_R − k_D n_R    (25)

Therefore, defining n = n_E + n_ER, we have,

dn/dt = k_2 n_ER    (26)

Since the population density is ρ, this implies that,

dV/dt = (1/ρ) dn/dt = k_2 V x_ER    (27)

where x_E ≡ n_E/n, x_ER ≡ n_ER/n.
Now, the concentration c_R of the resource R is given by the relation n_R = c_R V, which implies that

dc_R/dt = (1/V)(dn_R/dt − c_R dV/dt) = f_R − (k_1 ρ x_E + k_2 x_ER + k_D) c_R    (28)

Putting everything together, we obtain, finally, the system of equations,

(1/n) dn/dt = k_2 x_ER
dx_E/dt = −k_1 c_R x_E + 2 k_2 x_ER − k_2 x_ER x_E
dx_ER/dt = k_1 c_R x_E − k_2 x_ER − k_2 x_ER^2
dc_R/dt = f_R − c_R (k_1 ρ x_E + k_2 x_ER + k_D)    (29)

We can determine the steady-state behavior of the model by setting the left-hand side of the above system of equations to 0. When ρ → 0, the steady-state solution is characterized by,

c_R,ss = f_R/(k_D + k_2 x_ER,ss)    (ρ → 0)    (30)

where x_ER,ss is the solution to the cubic,

0 = x^3 + (1 + k_D/k_2) x^2 + (k_1 f_R/k_2^2 + k_D/k_2) x − k_1 f_R/k_2^2    (ρ → 0)    (31)

Note that when f_R = 0, we obtain x_ER,ss = 0. Therefore, implicitly differentiating the cubic with respect to f_R gives,

(dx_ER,ss/df_R)_{f_R=0} = k_1/(k_2 k_D)    (32)

and so we have,

x_ER,ss = (k_1/(k_2 k_D)) f_R    (small f_R, ρ → 0)    (33)

When f_R is large, we get x_ER,ss → 1, so that

x_ER,ss = 1    (f_R → ∞, ρ → 0)    (34)

Equating the small f_R and large f_R expressions, we obtain that the transition from small f_R to large f_R behavior is approximated by,

f_R,trans,1(ρ = 0) = k_2 k_D/k_1    (35)

Now, when ρ is large, then the steady-state expression for c_R is approximated by,

0 = dc_R/dt = f_R − c_R k_1 ρ x_E  →  k_1 c_R x_E = f_R/ρ    (36)

and so,

0 = f_R/ρ − k_2 x_ER,ss − k_2 x_ER,ss^2    (37)

from which it follows that,

x_ER,ss = (1/2)[−1 + sqrt(1 + 4 f_R/(k_2 ρ))]    (38)

Since ρ is large, we will approximate this expression further, by taking the first-order expansion in f_R/(k_2 ρ), giving,

x_ER,ss = f_R/(k_2 ρ)    (39)

We can estimate the cross-over from small ρ to large ρ behavior by equating the two expressions. We have two estimates, one for small f_R, and one for large f_R. We obtain,

ρ−_trans,1 = k_D/k_1    (f_R < k_2 k_D/k_1)
ρ+_trans,1 = f_R/k_2    (f_R > k_2 k_D/k_1)    (40)
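The steady state of system (29) can be checked by direct time integration and compared against the ρ → 0 cubic, Eq. (31). The sketch below uses a simple forward-Euler scheme with assumed parameter values (the helper name x_ER_cubic is not from the paper):

```python
# Assumed illustrative parameters; rho is small so Eq. (31) should apply.
k1, k2, kD, f_R, rho = 1.0, 2.0, 0.5, 10.0, 1e-3

def x_ER_cubic(f_R, k1, k2, kD, tol=1e-12):
    """Unique root in (0, 1) of the cubic, Eq. (31), found by bisection."""
    p = lambda x: (x**3 + (1 + kD / k2) * x**2
                   + (k1 * f_R / k2**2 + kD / k2) * x - k1 * f_R / k2**2)
    lo, hi = 0.0, 1.0  # p(0) < 0 and p(1) = 2 + 2*kD/k2 > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Small-f_R and large-f_R limits of the cubic (Eqs. 33 and 34):
assert abs(x_ER_cubic(1e-6, k1, k2, kD) / (k1 * 1e-6 / (k2 * kD)) - 1) < 1e-3
assert x_ER_cubic(1e9, k1, k2, kD) > 0.999

# Forward-Euler integration of Eq. (29) from an all-unbound initial condition.
x_E, x_ER, c_R, dt = 1.0, 0.0, 0.0, 1e-3
for _ in range(200_000):  # integrate to t = 200
    dx_E = -k1 * c_R * x_E + 2 * k2 * x_ER - k2 * x_ER * x_E
    dx_ER = k1 * c_R * x_E - k2 * x_ER - k2 * x_ER**2
    dc_R = f_R - c_R * (k1 * rho * x_E + k2 * x_ER + kD)
    x_E, x_ER, c_R = x_E + dt * dx_E, x_ER + dt * dx_ER, c_R + dt * dc_R

assert abs(x_E + x_ER - 1.0) < 1e-9  # x_E + x_ER = 1 is invariant under Eq. (29)
assert abs(x_ER - x_ER_cubic(f_R, k1, k2, kD)) < 0.01
```

Forward Euler is adequate here because the relaxation rates are O(1) and dt is small; a stiff or production-grade integration would use an adaptive solver instead.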
C. Differentiated model

The reactions defining the differentiated model are,

E_1 + R → E_1−R (second-order rate constant k'_1)
E_1−R → E_1 + R* (first-order rate constant k'_2)
E_2 + R* → E_2−R* (second-order rate constant k'_3)
E_2−R* → E_2 + E (first-order rate constant k'_4)
E → E_1 (first-order rate constant k'_5)
E → E_2 (first-order rate constant k'_6)
R → decay products (first-order rate constant k_D)
R* → decay products (first-order rate constant k*_D)    (41)

Following a procedure similar to the one carried out for the undifferentiated model, we obtain the system of equations,

(1/n) dn/dt = k'_4 x_E2R*
dx_E1/dt = −k'_1 c_R x_E1 + k'_2 x_E1R + k'_5 x_E − k'_4 x_E2R* x_E1
dx_E1R/dt = k'_1 c_R x_E1 − k'_2 x_E1R − k'_4 x_E2R* x_E1R
dx_E2/dt = −k'_3 c_R* x_E2 + k'_4 x_E2R* + k'_6 x_E − k'_4 x_E2R* x_E2
dx_E2R*/dt = k'_3 c_R* x_E2 − k'_4 x_E2R* − k'_4 x_E2R*^2
dx_E/dt = k'_4 x_E2R* − (k'_5 + k'_6) x_E − k'_4 x_E2R* x_E
dc_R/dt = f_R − (k'_1 ρ x_E1 + k_D) c_R − k'_4 x_E2R* c_R
dc_R*/dt = ρ(k'_2 x_E1R − k'_3 c_R* x_E2) − (k*_D + k'_4 x_E2R*) c_R*    (42)

Now, defining x̄_E1 = x_E1 + x_E1R and x̄_E2 = x_E2 + x_E2R*, we obtain,

dx̄_E1/dt = k'_5 x_E − k'_4 x_E2R* x̄_E1
dx̄_E2/dt = k'_6 x_E − k'_4 x_E2R* x̄_E2    (43)

Therefore, at steady-state we have,

x̄_E1,ss/x̄_E2,ss = k'_5/k'_6    (44)

and so, using the relation x̄_E1 + x̄_E2 + x_E = 1 we obtain,

x̄_E1,ss = [k'_5/(k'_5 + k'_6)](1 − x_E,ss)
x̄_E2,ss = [k'_6/(k'_5 + k'_6)](1 − x_E,ss)    (45)

If we let k'_5, k'_6 → ∞ such that k'_5/k'_6 remains constant, then it should be clear that x_E,ss → 0. Intuitively, E differentiates to either E_1 or E_2 as soon as it is produced, so it does not build up in the system. The ratio between k'_5 and k'_6 then dictates the fraction of E_1 and E_2 in the system (allowing k'_5, k'_6 → ∞ essentially amounts to assuming that the differentiation time is zero.
This is of course not true, and future research will need to incorporate positive differentiation times). Defining α = k'_5/(k'_5 + k'_6), we then have x̄_E1,ss = α, and x̄_E2,ss = 1 − α. Therefore, to characterize the system at steady-state, we need to solve four equations, giving the steady-state conditions for x_E1R, x_E2R*, c_R, and c_R*, respectively. The equations are,

0 = k'_1 c_R (α − x_E1R) − k'_2 x_E1R − k'_4 x_E2R* x_E1R
0 = k'_3 c_R* (1 − α − x_E2R*) − k'_4 x_E2R* − k'_4 x_E2R*^2
0 = f_R − c_R (k_D + k'_1 ρ(α − x_E1R)) − k'_4 x_E2R* c_R
0 = ρ(k'_2 x_E1R − k'_3 c_R* (1 − α − x_E2R*)) − k*_D c_R* − k'_4 c_R* x_E2R*    (46)

As with the undifferentiated case, we study the behavior of this system of equations in both the small and large ρ limits. When ρ = 0, we have c_R*,ss = 0 ⇒ x_E2R*,ss = 0 ⇒ c_R,ss = f_R/k_D ⇒ x_E1R,ss = (k'_1 f_R α/k_D)/(k'_2 + k'_1 f_R/k_D). Differentiating the steady-state equations with respect to ρ, and evaluating at ρ = 0, gives,

(dc_R*,ss/dρ)_{ρ=0} = (k'_2/k*_D)(x_E1R)_{ρ=0}
(dx_E2R*,ss/dρ)_{ρ=0} = (k'_3/k'_4)(1 − α)(dc_R*,ss/dρ)_{ρ=0}    (47)

and so, for small ρ, we have,

x_E2R*,ss = [k'_2 k'_3/(k'_4 k*_D)] [(k'_1 f_R/k_D)/(k'_2 + k'_1 f_R/k_D)] α(1 − α) ρ    (small ρ)    (48)

Now for large ρ, our steady-state equations may be reduced to,

0 = k'_1 c_R (α − x_E1R) − k'_2 x_E1R − k'_4 x_E2R* x_E1R
0 = k'_3 c_R* (1 − α − x_E2R*) − k'_4 x_E2R* − k'_4 x_E2R*^2
0 = f_R − k'_1 c_R ρ(α − x_E1R)
0 = k'_2 x_E1R − k'_3 c_R* (1 − α − x_E2R*)    (49)

The third equation gives k'_1 c_R (α − x_E1R) = f_R/ρ, which may be substituted into the first equation to give,

0 = f_R/ρ − x_E1R (k'_2 + k'_4 x_E2R*)    (50)

Solving for x_E1R in terms of x_E2R*, and plugging the resulting expression into the fourth steady-state equation gives, after some manipulation, that x_E2R*,ss is the solution of the cubic,

0 = x_E2R*,ss^3 + (1 + k'_2/k'_4) x_E2R*,ss^2 + (k'_2/k'_4) x_E2R*,ss − k'_2 f_R/(k'_4^2 ρ)
(51)

Now, when f_R = 0, we obtain x_E2R*,ss = 0. From this it is possible to show that,

(dx_E2R*,ss/df_R)_{f_R=0} = 1/(ρ k'_4)    (52)

and so,

x_E2R*,ss = f_R/(k'_4 ρ)    (large ρ)    (53)

As with the undifferentiated case, the transition from small ρ to large ρ behavior may be estimated by equating the two expressions and solving for ρ. The result is,

ρ_trans,2 = sqrt[ (1/(α(1 − α))) (k_D k*_D/k'_3) (1/k'_1 + f_R/(k'_2 k_D)) ]    (54)

D. Comparison of undifferentiated and differentiated models

As a function of f_R, we wish to determine if, as ρ increases, the average growth rate of the differentiated population overtakes the growth rate of the undifferentiated population. If this does indeed happen, then there exists a ρ, denoted ρ_equal, at which the two growth rates are equal. We first consider the regime f_R < k_2 (k_D/k_1). This is the small f_R regime of the undifferentiated population. In this regime, the transition from low ρ to large ρ behavior occurs at ρ_trans,1 = k_D/k_1. For the differentiated pathway, we have ρ_trans,2 = sqrt[(1/(α(1 − α))) (k*_D/(k'_1 k'_3)) (k_D + k'_1 f_R/k'_2)]. Now, we have four possibilities: (1) ρ_equal < ρ_trans,1, ρ_trans,2. (2) ρ_trans,2 < ρ_equal < ρ_trans,1. (3) ρ_trans,1 < ρ_equal < ρ_trans,2. (4) ρ_equal > ρ_trans,1, ρ_trans,2. We can immediately eliminate Cases (2), (3), and (4) as possibilities. For Case (4), we get an undifferentiated rate of k_2 f_R/(k_2 ρ) = f_R/ρ, and a differentiated rate of k'_4 f_R/(k'_4 ρ) = f_R/ρ, and so the two rates are equal. For Case (2), we get an undifferentiated rate of k_1 f_R/k_D, and a differentiated rate of f_R/ρ, so equating gives ρ_equal = k_D/k_1 = ρ_trans,1. Therefore, Case (2) is essentially a limiting case of Case (4), and can also be eliminated. For Case (3), we get an undifferentiated rate of f_R/ρ, and a differentiated rate of (k'_2 k'_3/k*_D)[(k'_1 f_R/k_D)/(k'_2 + k'_1 f_R/k_D)] α(1 − α) ρ, so equating gives ρ_equal = ρ_trans,2.
Therefore, Case (3) is essentially a limiting case of Case (1), and can also be eliminated. For Case (1), we have,

ρ_equal = (1/(α(1 − α))) (k_1 k*_D/k'_3)(1/k'_1 + f_R/(k'_2 k_D))    (55)

Now, we can show that,

ρ_equal/ρ_trans,2 = ρ_trans,2/ρ_trans,1 = k_1 sqrt[ (1/(α(1 − α))) (k*_D/(k'_3 k_D)) (1/k'_1 + f_R/(k'_2 k_D)) ]    (56)

and so, in order for ρ_equal < ρ_trans,1, ρ_trans,2, then we must have,

f_R < (k_D k'_2/k_1)[α(1 − α) k'_3 k_D/(k_1 k*_D) − k_1/k'_1]    (57)

We now consider the case where f_R > k_D k_2/k_1. This is the large f_R regime of the undifferentiated population. Following a similar procedure to the one carried out for the small f_R regime, we can show that the only possible cross-over occurs in the small ρ regimes for both the undifferentiated and differentiated cases. In this regime, we obtain,

ρ_equal = (k_2/f_R)(1/(α(1 − α)))(k_D k*_D/k'_3)(1/k'_1 + f_R/(k'_2 k_D))    (58)

We can show that,

ρ_equal/ρ_trans,2 = ρ_trans,2/ρ_trans,1 = (k_2/f_R) sqrt[ (1/(α(1 − α)))(k_D k*_D/k'_3)(1/k'_1 + f_R/(k'_2 k_D)) ]... 

hold on — reconciling with Eq. (54), this ratio is sqrt[(k_2/f_R)^2 (1/(α(1 − α)))(k_D k*_D/k'_3)(1/k'_1 + f_R/(k'_2 k_D))]    (59)

and so, in order for ρ_equal < ρ_trans,1, ρ_trans,2, we must have,

k*_D/k'_3 < α(1 − α)(f_R/k_2)^2/(k_D/k'_1 + f_R/k'_2)    (60)

In Figure 9, we show a high-f_R plot where the differentiated growth rate overtakes the undifferentiated growth rate. In Figure 10, we show a high-f_R plot where the undifferentiated growth rate stays above the differentiated rate at all values of ρ (these figures are included in the version submitted to The Journal of Theoretical Biology).

E. When can a differentiated population outreplicate an undifferentiated population?

We can subject our replication-metabolism model to a similar analysis to the one applied to the compartment model. First of all, as with the compartment model, we expect that the differentiated pathway can only overtake the undifferentiated pathway, and then maintain a higher replication rate, if 1/k'_2 + 1/k'_4 < 1/k_2.
Again, this condition simply states that the total characteristic time associated with converting resource into a new agent in the differentiated case is less than the total characteristic time in the undifferentiated case. The assumption is that decay costs are negligible, as well as time costs associated with grabbing resource and intermediates. An interesting behavior that occurs with the replication-metabolism model is the different dependence on f_R that the transition population density ρ_equal has in the low-f_R regime and the high-f_R regime. In the high-f_R regime, ρ_equal has a weak dependence on f_R, though it does decrease as f_R increases. This makes sense, for in the high-f_R regime, the growth rate of the undifferentiated population is limited by the rate at which the complex E−R produces new agents. As f_R increases, the cost associated with the decay of the intermediate resource R* decreases, so that the differentiated pathway overtakes the undifferentiated pathway sooner. In the low-f_R regime, ρ_equal increases linearly with f_R, so that, as f_R increases in this regime, the differentiated pathway overtakes the undifferentiated pathway only at higher values of ρ (if it overtakes at all). The reason for this behavior is that at low f_R, the growth rate of the undifferentiated population is resource limited, so that increasing f_R actually increases the growth rate. The effect of this is to push to higher values of ρ the point at which the differentiated agents outreplicate the undifferentiated agents. What is interesting with these patterns of behavior is that they indicate opposite criteria for when a cooperative replicative strategy is favored, depending on the availability of resource: When resources are plentiful, then increasing the resource favors a differentiated replication strategy. However, when resources are limited, then decreasing the resource favors a differentiated replication strategy.
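The opposite f_R-dependencies of the two crossover-density estimates, Eqs. (55) and (58), can be sketched directly. The function names and all parameter values below are assumed for illustration:

```python
def rho_equal_low_fR(alpha, k1, k1p, k2p, k3p, kD, kDs, f_R):
    """Eq. (55): crossover density in the low-f_R regime (f_R < k2*kD/k1)."""
    return (1.0 / (alpha * (1 - alpha))) * (k1 * kDs / k3p) * (1.0 / k1p + f_R / (k2p * kD))

def rho_equal_high_fR(alpha, k2, k1p, k2p, k3p, kD, kDs, f_R):
    """Eq. (58): crossover density in the high-f_R regime (f_R > k2*kD/k1)."""
    return (k2 / f_R) / (alpha * (1 - alpha)) * (kD * kDs / k3p) * (1.0 / k1p + f_R / (k2p * kD))

alpha, k1, k2 = 0.5, 1.0, 2.0
k1p, k2p, k3p, kD, kDs = 1.0, 2.0, 1.0, 0.5, 0.5

# Low-f_R regime: rho_equal grows linearly with f_R ...
lo1 = rho_equal_low_fR(alpha, k1, k1p, k2p, k3p, kD, kDs, 0.01)
lo2 = rho_equal_low_fR(alpha, k1, k1p, k2p, k3p, kD, kDs, 0.02)
assert lo2 > lo1

# ... while in the high-f_R regime it decreases with f_R,
# approaching k2*k*_D/(k3'*k2'*alpha*(1-alpha)) from above.
hi = rho_equal_high_fR(alpha, k2, k1p, k2p, k3p, kD, kDs, 1e6)
limit = k2 * kDs / (k3p * k2p * alpha * (1 - alpha))
assert abs(hi / limit - 1.0) < 1e-3
```

With these values, increasing the resource flow in the scarce-resource regime pushes the crossover density up, while in the abundant-resource regime it pulls it down toward a constant floor, mirroring the discussion above.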
In this vein, it is interesting to note that complex multicellular life is only possible in relatively resource-rich environments. On the other hand, organisms such as the cellular slime mold (Dictyostelium discoideum) transition from a single-celled to a multi-celled life cycle when starved. While we have already postulated one possible reason for this behavior in terms of minimizing overall reproductive costs [14], the behavior indicated in our model may provide another, complementary explanation for the selective advantage of this phenomenon.

IV. CONCLUSIONS AND FUTURE RESEARCH

This paper developed two models with which to compare the performance of undifferentiated and differentiated pathways. The first model considered the flow of resource into a compartment filled with a fixed number of agents, whose collective task is to convert the resource into some final product. The second model considered the replication rate of a collection of agents, driven by an externally supplied resource. By assuming that the resource, and even more importantly, that reaction intermediates, have a finite lifetime, we were able to show that undifferentiated pathways are favored at low agent numbers and/or densities, while differentiated pathways are favored at higher agent numbers and/or densities. An equivalent way of stating this is that differentiation is favored when resources are limited, where resource limitation is measured by the ratio of available resource to agents. One interesting result that emerged from our studies was that, although limited resources favor differentiation (as measured by the resource-agent ratio), for a given set of system parameters, differentiation will be more likely to overtake nondifferentiation at higher population size and/or density if the amount of available resource is increased (although the actual cross-over location will increase as well).
The central reason for this is that the relative decay costs associated with differentiation are decreased as resource is increased. In the context of the replication-metabolism model, we should note that when resources are plentiful, differentiation is favored at lower population densities as the resource flow is increased, while when resources are limited, differentiation is favored at lower population densities as the resource flow is decreased. Regarding the former observation, it should be noted that it has been shown that diversity of replicative strategies is favored at intermediate levels of resources [15]. In digital-life simulations, the authors of Ref. [15] showed that the number of distinct replicating computer programs was maximized at intermediate resource availability, a result consistent with what is observed ecologically. We claim that the results of this paper are consistent with these observations. Regarding the latter observation, we pointed out in the previous section that this behavior is possibly consistent with the behavior of organisms such as the cellular slime mold, which transition from a single-celled to a multi-celled life form when starved. We also posit that the results of the replication-metabolism model suggest a possible evolutionary basis for the stem-cell-based tissue architecture in complex multicellular organisms. Essentially, as population density increases, and therefore as the resource-to-agent ratio decreases, it becomes more efficient for some cells to exclusively focus on replicating and renewing the population, while other cells engage in specialized functions necessary for organismal survival. Of course, our replication-metabolism model is not quite the same as a stem-cell-based tissue architecture. First of all, the stem-cell and tissue-cell population does not collectively grow. Rather, the stem cells periodically divide in order to replace dead tissue cells.
Therefore, the stem-cell-based tissue architecture is a kind of hybrid between our compartment model and our replication-metabolism model. Secondly, our replication-metabolism model assumes that there is a single differentiation step, while in reality a differentiating tissue cell undergoes several divisions and differentiation steps before becoming a mature tissue cell. Finally, our replication-metabolism model assumed that differentiation was instantaneous. In reality, differentiation takes time, and this time cost will affect whether differentiation can overtake non-differentiation, and, if so, will likely delay the critical population density where this happens. Despite these shortcomings, we believe that the models developed here could be used as the basis for more sophisticated models that could produce, via an optimization criterion, the stem-cell-based tissue architecture observed in complex multicellular organisms. This is a subject we leave for future work.

Figures 4 and 5 (of the version submitted to The Journal of Theoretical Biology) show comparisons of the production rates of final product for the undifferentiated and differentiated pathways.

Acknowledgments: This research was supported by the Israel Science Foundation (Alon Fellowship).

References

[1] J.F. Stuefer, H. de Kroon, and H.J. During, "Exploitation of environmental heterogeneity by spatial division of labour in a clonal plant," Functional Ecology 10, 328-334 (1996).
[2] Y. Ben-Shahar, A. Robichon, M.B. Sokolowski, and G.E. Robinson, "Influence of gene action across different time scales on behavior," Science 296, 741-744 (2002).
[3] J.A. Shapiro, "Thinking about bacterial populations as multicellular organisms," Annu. Rev. Microbiol. 52, 81-104 (1998).
[4] G.E. Robinson, "Regulation of division of labor in insect societies," Annu. Rev. Entomology 37, 637-665 (1992).
[5] Z. Huang and G.E. Robinson, "Honeybee colony integration: Worker-worker interactions mediate hormonally regulated plasticity in division of labor," Proc. Natl. Acad. Sci. USA 89, 11726-11729 (1992).
[6] G.S. Becker and K.M. Murphy, "The division of labor, coordination costs, and knowledge," Quarterly Journal of Economics 107, 1137-1160 (1992).
[7] H. Demsetz, "Production, information costs, and economic organization," American Economic Review 62, 777-795 (1972).
[8] A. Strauss, "Work and the division of labor," The Sociological Quarterly 26, 1-19 (1985).
[9] J.F. Francois, "Producer services, scale, and the division of labor," Oxford Economic Papers 42, 715-729 (1990).
[10] P. Bolton, "The firm as a communication network," Quarterly Journal of Economics 109, 809-839 (1994).
[11] C.I. Jones, "R & D-based models of economic growth," J. Pol. Econ. 103, 759-784 (1995).
[12] A.J. Scott, "Industrial organization and location: Division of labor, the firm, and spatial process," Economic Geography 62, 215-231 (1986).
[13] R.H. Hall, N.J. Johnson, and J.E. Haas, "Organizational size, complexity, and formalization," Amer. Soc. Rev. 32, 903-912 (1967).
[14] E. Tannenbaum, "Selective advantage for multicellular replicative strategies: A two-cell example," Phys. Rev. E 73, 010904 (2006).
[15] S.S. Chow, C.O. Wilke, C. Ofria, R.E. Lenski, and C. Adami, "Adaptive radiation from resource competition in digital organisms," Science 305, 84-86 (2004).
[]
[ "Uncertainties in nuclear transition matrix elements for neutrinoless ββ decay II: the heavy Majorana neutrino mass mechanism", "Uncertainties in nuclear transition matrix elements for neutrinoless ββ decay II: the heavy Majorana neutrino mass mechanism" ]
[ "P K Rath \nDepartment of Physics\nUniversity of Lucknow\nLucknow-226007India\n", "R Chandra \nDepartment of Applied Physics\nBabasaheb Bhimrao Ambedkar University\n226025LucknowIndia\n\nDepartment of Physics and Meteorology\nIndian Institute of Technology\nKharagpur-721302India\n", "P K Raina \nDepartment of Physics and Meteorology\nIndian Institute of Technology\nKharagpur-721302India\n\nDepartment of Physics\nIndian Institute of Technology\nRupnagar -140001Ropar, PunjabIndia\n", "K Chaturvedi \nDepartment of Physics\nBundelkhand University\nJhansi-284128India\n", "J G Hirsch \nInstituto de Ciencias Nucleares\nUniversidad Nacional Autónoma de México\n04510MéxicoD.FMéxico\n" ]
[ "Department of Physics\nUniversity of Lucknow\nLucknow-226007India", "Department of Applied Physics\nBabasaheb Bhimrao Ambedkar University\n226025LucknowIndia", "Department of Physics and Meteorology\nIndian Institute of Technology\nKharagpur-721302India", "Department of Physics and Meteorology\nIndian Institute of Technology\nKharagpur-721302India", "Department of Physics\nIndian Institute of Technology\nRupnagar -140001Ropar, PunjabIndia", "Department of Physics\nBundelkhand University\nJhansi-284128India", "Instituto de Ciencias Nucleares\nUniversidad Nacional Autónoma de México\n04510MéxicoD.FMéxico" ]
[]
Employing four different parametrizations of the pairing plus multipolar type of effective twobody interaction and three different parametrizations of Jastrow-type of short range correlations, the uncertainties in the nuclear transition matrix elements M (0ν) N due to the exchange of heavy Majorana neutrino for the 0 + → 0 + transition of neutrinoless double beta decay of 94 Zr, 96 Zr, 98 Mo, 100 Mo, 104 Ru, 110 Pd, 128,130 Te and 150 Nd isotopes in the PHFB model are estimated to be around 35%. Excluding the nuclear transition matrix elements calculated with Miller-Spenser parametrization of Jastrow short range correlations, the uncertainties are found to be smaller than 20%.
10.1103/physrevc.85.014308
[ "https://arxiv.org/pdf/1106.1560v2.pdf" ]
119,279,540
1106.1560
419d32d07460b347b26c8e24d37ad53e38d0b159
Uncertainties in nuclear transition matrix elements for neutrinoless ββ decay II: the heavy Majorana neutrino mass mechanism

16 Jan 2012

P.K. Rath, R. Chandra, P.K. Raina, K. Chaturvedi, and J.G. Hirsch

(Dated: January 28, 2013)

PACS numbers: 21.60.-n, 23.40.-s, 23.40.Hc

I. INTRODUCTION

In addition to establishing the Dirac or Majorana nature of neutrinos, the observation of (ββ)0ν decay is a convenient tool to test lepton number conservation, possible hierarchies in the neutrino mass spectrum, the origin of neutrino mass, and CP violation in the leptonic sector.
Further, it can also ascertain the role of various gauge models associated with all possible mechanisms, namely the exchange of light neutrinos, heavy neutrinos, the right-handed currents in the left-right symmetric model (LRSM), the exchange of sleptons, neutralinos, squarks and gluinos in the R_p-violating minimal supersymmetric standard model, the exchange of leptoquarks, the existence of heavy sterile neutrinos, compositeness, extra-dimensional scenarios and Majoron models, allowing the occurrence of (ββ)0ν decay. Stringent limits on the associated parameters have already been extracted from the observed experimental limits on the half-life of (β−β−)0ν decay [1] and presently, all the experimental attempts are directed towards its observation. The experimental and theoretical studies devoted to (ββ)0ν decay over the past decades have been recently reviewed by Avignone et al. [2] and references therein. Presently, there is an increased interest in calculating reliable NTMEs for (β−β−)0ν decay due to the exchange of heavy Majorana neutrinos, in order to ascertain the dominant mechanism contributing to it [3,4]. The lepton number violating (β−β−)0ν decay has been studied by Vergados by taking a Lagrangian consisting of left-handed as well as right-handed leptonic currents [5]. In the QRPA, the (β−β−)0ν decay due to the exchange of heavy Majorana neutrinos has been studied by Tomoda [6]. The decay rate of the (β−β−)0ν mode in the LRSM has been derived by Doi and Kotani [7]. Hirsch et al. [8] have calculated all the required nuclear transition matrix elements (NTMEs) in the QRPA, and limits on the effective light neutrino mass m_ν, heavy neutrino mass M_N, right-handed heavy neutrino mass M_R, the effective parameters λ and η, and the mixing angle tan ξ have been obtained. The heavy neutrino mechanism has also been studied in the QRPA without [9] and with pn-pairing [10]. In the heavy Majorana neutrino mass mechanism, Šimkovic et al.
[11] have studied the role of induced weak magnetism and pseudoscalar terms, and it was found that they are quite important in the 48Ca nucleus. The importance of the same induced currents in both the light and heavy Majorana neutrino exchange mechanisms has also been studied using the pn-RQRPA [12] as well as the SRQRPA [3]. In spite of the remarkable success of the large scale shell model (LSSM) calculations of the Strasbourg-Madrid group [13], there is a necessity of large configuration mixing to reproduce the structural complexity of medium and heavy mass nuclei. On the other hand, the QRPA and its extensions have emerged as successful models by including a large number of basis states and by correlating the single-β GT strengths and half-lives of (β−β−)2ν decay, in addition to explaining the observed suppression of M2ν [14,15]. In the mass region 90 ≤ A ≤ 150, there is a subtle interplay of pairing and quadrupolar correlations, and their effects on the NTMEs of (β−β−)0ν decay have been studied in the interacting shell model (ISM) [16,17], the deformed QRPA model [18-21], and the projected Hartree-Fock-Bogoliubov (PHFB) model [22,23]. The possibility to constrain the values of the gauge parameters using the measured lower limits on the (β−β−)0ν decay half-lives relies heavily on the model-dependent NTMEs. Different predictions are obtained by employing different nuclear models and, within a given model, by varying the model space, single-particle energies (SPEs) and the effective two-body interaction. In addition, a number of issues regarding the structure of the NTMEs, namely the effect of pseudoscalar and weak magnetism terms on the Fermi, Gamow-Teller and tensor NTMEs [24,25], the role of the finite size of nucleons (FNS) as well as short range correlations (SRC) vis-a-vis the radial evolution of NTMEs [16,26-28], and the value of the axial-vector coupling constant g_A, are also sources of uncertainty and remain to be investigated.
It was observed by Vogel [29] that in the case of the well studied 76Ge, the calculated decay rates T^{0ν}_{1/2} differ by a factor of 6-7 and consequently, the uncertainty in the effective neutrino mass m_ν is about a factor of 2 to 3. Thus, the spread between the calculated NTMEs can be used as a measure of the theoretical uncertainty. In case the (ββ)0ν decay of different nuclei is observed, Bilenky and Grifols [30] have suggested that the results of calculations of NTMEs of (β−β−)0ν decay can be checked by comparing the calculated ratios of the corresponding NTMEs-squared with the experimentally observed values. Bahcall et al. [31] and Avignone et al. [32] have calculated averages of all the available NTMEs, with their standard deviation taken as the measure of theoretical uncertainty. On the other hand, Rodin et al. [33] have calculated nine NTMEs with three sets of basis states and three realistic two-body effective interactions (the charge-dependent Bonn, Argonne and Nijmegen potentials) in the QRPA as well as the RQRPA, and estimated the theoretical uncertainties by making a statistical analysis. It was noticed that the variances are substantially smaller than the average values and that the results of the QRPA, albeit slightly larger, are quite close to the RQRPA values. Faessler and coworkers have further studied uncertainties in NTMEs due to short range correlations using the unitary correlation operator method (UCOM) [26] and the self-consistent coupled cluster method (CCM) [27]. The PHFB model has the advantage of treating the pairing and deformation degrees of freedom on an equal footing and projecting out states with good angular momentum. However, the single β decay rates and the distribution of GT strength, which require the structure of the intermediate odd-Z, odd-N nuclei, cannot be studied in the present version of the PHFB model.
In spite of this limitation, the PHFB model in conjunction with the pairing plus quadrupole-quadrupole (PQQ) interaction [34] has been successfully applied to reproduce the lowest yrast states, the electromagnetic properties of the parent and daughter nuclei, and the measured (β−β−)2ν decay rates [35,36]. In the PHFB formalism, the existence of an inverse correlation between the quadrupole deformation and the size of the NTMEs M2ν, M(0ν) and M(0ν)N has been observed [22,23]. Further, it has been noticed that the NTMEs are usually large for a pair of spherical nuclei, almost constant for small deformations, suppressed depending on the difference in the deformations Δβ2 of the parent and daughter nuclei, and exhibit a well defined maximum when Δβ2 = 0 [22,23]. In Ref. [37], a statistical analysis was performed for extracting uncertainties in eight (twelve) NTMEs for (β−β−)0ν decay due to the exchange of a light Majorana neutrino, calculated in the PHFB model with four different parameterizations of the pairing plus multipolar type of effective two-body interaction [23] and two (three) different parameterizations of the Jastrow type of SRC [27]. Confirming the observation made by Šimkovic et al. [27], it was noticed that the Miller-Spencer type of parametrization is a major source of uncertainty, and its exclusion reduces the uncertainties from 10%-15% to 4%-14%. Presently, the same procedure has been adopted to estimate the theoretical uncertainties associated with the NTMEs M(0ν)N for (β−β−)0ν decay due to the exchange of a heavy Majorana neutrino. In Sec. II, a brief discussion of the theoretical formalism is presented. The results for different parameterizations of the two-body interaction and SRC vis-a-vis the radial evolution of NTMEs are discussed in Sec. III. In the same section, the averages as well as standard deviations are calculated for estimating the theoretical uncertainties. Finally, the conclusions are given in Sec. IV. II.
THEORETICAL FORMALISM

In the charged-current weak processes, the current-current interaction under the assumption of zero-mass neutrinos leads to terms which, except for the vector and axial-vector parts, are proportional to the lepton mass squared, and are hence negligible. However, it has been reported by Šimkovic et al. [24,25] that the contribution of the pseudoscalar term is equivalent to a modification of the axial-vector current due to PCAC and is greater than that of the vector current. The contributions of the pseudoscalar and weak magnetism terms in the mass mechanism can change M(0ν) by up to 30%, and the change in M(0ν)N is considerably larger. In the shell model [16,38], the IBM [39] and the GCM+PNAMP [40], the contributions of these pseudoscalar and weak magnetism terms to M(0ν) have also been investigated. However, it has been reported by Suhonen and Civitarese [41] that these contributions are relatively small and can be safely neglected. Therefore, the investigation of this issue is of definite interest and is reported in the present work.

In the two-nucleon mechanism, the half-life T^{0ν}_{1/2} for the 0^+ → 0^+ transition of (β−β−)0ν decay due to the exchange of a heavy Majorana neutrino between nucleons having finite size is given by [6,7]

[T^{0ν}_{1/2}(0^+ → 0^+)]^{-1} = (m_p / M_N)^2 G_{01} |M^{(0ν)}_N|^2,    (1)

where m_p is the proton mass and

M_N^{-1} = \sum_i U^2_{ei} m_i^{-1},  m_i > 1 GeV,    (2)

and in the closure approximation, the NTME M^{(0ν)}_N is of the form [12,26,27]

M^{(0ν)}_N = -M_{F_h} + M_{GT_h} + M_{T_h},    (3)

where

M_α = \sum_{n,m} \langle 0^+_F | O_{α,nm} τ^+_n τ^+_m | 0^+_I \rangle    (4)

with

O_{F_h} = H_{F_h}(r_{nm}),    (5)
O_{GT_h} = σ_n · σ_m H_{GT_h}(r_{nm}),    (6)
O_{T_h} = [3 (σ_n · \hat{r}_{nm})(σ_m · \hat{r}_{nm}) - σ_n · σ_m] H_{GT_h}(r_{nm}).    (7)

The exchange of heavy Majorana neutrinos gives rise to short-ranged neutrino potentials, which with the consideration of FNS are given by Eq. (8), where f_{α_h}(q r_{nm}) = j_0(q r_{nm}) for α = F as well as GT, and f_{T_h}(q r_{nm}) = j_2(q r_{nm}).
H_{α_h}(r_{nm}) = \frac{2R}{\pi m_p m_e} \int f_{α_h}(q r_{nm}) h_α(q) q^2 \, dq    (8)

Further, h_F(q), h_GT(q) and h_T(q) are written as

h_F(q) = \left(\frac{g_V}{g_A}\right)^2 \left(\frac{Λ_V^2}{q^2 + Λ_V^2}\right)^4,    (9)

h_{GT}(q) = \frac{g_A^2(q^2)}{g_A^2} \left[ 1 - \frac{2}{3} \frac{g_P(q^2) q^2}{g_A(q^2) 2m_p} + \frac{1}{3} \frac{g_P^2(q^2) q^4}{g_A^2(q^2) 4m_p^2} \right] + \frac{2}{3} \frac{g_M^2(q^2) q^2}{g_A^2 4m_p^2}
 ≈ \left(\frac{Λ_A^2}{q^2 + Λ_A^2}\right)^4 \left[ 1 - \frac{2}{3} \frac{q^2}{q^2 + m_π^2} + \frac{1}{3} \frac{q^4}{(q^2 + m_π^2)^2} \right] + \left(\frac{g_V}{g_A}\right)^2 \frac{κ^2 q^2}{6 m_p^2} \left(\frac{Λ_V^2}{q^2 + Λ_V^2}\right)^4,    (10)

h_T(q) = \frac{g_A^2(q^2)}{g_A^2} \left[ \frac{2}{3} \frac{g_P(q^2) q^2}{g_A(q^2) 2m_p} - \frac{1}{3} \frac{g_P^2(q^2) q^4}{g_A^2(q^2) 4m_p^2} \right] + \frac{1}{3} \frac{g_M^2(q^2) q^2}{g_A^2 4m_p^2}
 ≈ \left(\frac{Λ_A^2}{q^2 + Λ_A^2}\right)^4 \left[ \frac{2}{3} \frac{q^2}{q^2 + m_π^2} - \frac{1}{3} \frac{q^4}{(q^2 + m_π^2)^2} \right] + \left(\frac{g_V}{g_A}\right)^2 \frac{κ^2 q^2}{12 m_p^2} \left(\frac{Λ_V^2}{q^2 + Λ_V^2}\right)^4,    (11)

where the form factors are given by

g_A(q^2) = g_A \left(\frac{Λ_A^2}{q^2 + Λ_A^2}\right)^2,  g_M(q^2) = κ g_V \left(\frac{Λ_V^2}{q^2 + Λ_V^2}\right)^2,  g_P(q^2) = \frac{2 m_p g_A(q^2)}{q^2 + m_π^2} \frac{Λ_A^2 - m_π^2}{Λ_A^2},    (12)

with g_V = 1.0, g_A = 1.254, κ = μ_p − μ_n = 3.70, Λ_V = 0.850 GeV.

The short range correlations (SRC) arise mainly from the repulsive nucleon-nucleon potential due to the exchange of ρ and ω mesons and have been incorporated by using an effective transition operator [42], the exchange of the ω meson [43], the UCOM [26,44] and the self-consistent CCM [27]. The SRC can also be incorporated phenomenologically through Jastrow-type correlations with the Miller-Spencer parametrization [45]. Further, it has been shown in the self-consistent CCM [27] that the SRC effects of the Argonne and CD-Bonn two-nucleon potentials are weak and that it is possible to parametrize them by Jastrow-type correlations within a few percent accuracy. Explicitly,

f(r) = 1 − c e^{−ar^2}(1 − br^2),    (13)

where a = 1.1, 1.59 and 1.52 fm^{-2}, b = 0.68, 1.45 and 1.88 fm^{-2} and c = 1.0, 0.92 and 0.46 for the Miller-Spencer parametrization, CD-Bonn and Argonne V18 NN potentials, respectively. The peaks of the potentials due to FNS+SRC1 and FNS+SRC2 are at r ≈ 0.6 fm and r ≈ 0.5 fm, respectively. The shapes of these functions have a definite influence on the radial evolution of the NTMEs M^{(0ν)}_N for (β−β−)0ν decay due to the exchange of a heavy Majorana neutrino, as discussed in Sec. III.
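The Jastrow function of Eq. (13) is simple enough to evaluate directly. The sketch below uses the three parameter sets quoted above; note how f(0) = 0 for the Miller-Spencer set (SRC1) but f(0) = 0.54 for Argonne V18 (SRC3), which is why the FNS+SRC3 potential can remain peaked at the origin while SRC1 cuts it away.

```python
import math

# Jastrow-type SRC, Eq. (13): f(r) = 1 - c*exp(-a r^2)*(1 - b r^2).
# (a, b) in fm^-2, c dimensionless; values as quoted in the text.
SRC_PARAMS = {
    "SRC1": (1.10, 0.68, 1.00),  # Miller-Spencer
    "SRC2": (1.59, 1.45, 0.92),  # CD-Bonn
    "SRC3": (1.52, 1.88, 0.46),  # Argonne V18
}

def jastrow(r, a, b, c):
    # Correlation function multiplying the neutrino potential; -> 1 at large r.
    return 1.0 - c * math.exp(-a * r * r) * (1.0 - b * r * r)
```

At r = 5 fm the Gaussian factor is negligible for all three sets, so each f(r) has already returned to unity there.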
The calculation of M^{(0ν)}_N in the PHFB model has been discussed in our earlier work [22,37], and one obtains the following expression for the NTMEs M^{(0ν)}_α of (β−β−)0ν decay [37]:

M^{(0ν)}_α = \left[ n^{J_i=0} n^{J_f=0} \right]^{-1/2} \int_0^π n_{(Z,N),(Z+2,N−2)}(θ) \sum_{αβγδ} \langle αβ | O_α | γδ \rangle \sum_{εη} \left( f^{(π)*}_{Z+2,N−2} \right)_{εβ} \left[ \left( 1 + F^{(π)}_{Z,N}(θ) f^{(π)*}_{Z+2,N−2} \right)^{-1} \right]_{εα} \left( F^{(ν)*}_{Z,N} \right)_{ηδ} \left[ \left( 1 + F^{(ν)}_{Z,N}(θ) f^{(ν)*}_{Z+2,N−2} \right)^{-1} \right]_{γη} \sin θ \, dθ,    (14)

and the expressions for calculating n^J, n_{(Z,N),(Z+2,N−2)}(θ), f_{Z,N} and F_{Z,N}(θ) are given in Refs. [22,37]. The calculation of the matrices f_{Z,N} and F_{Z,N}(θ) requires the amplitudes (u_{im}, v_{im}) and expansion coefficients C_{ij,m}, which specify the axially symmetric HFB intrinsic state |Φ_0⟩ with K = 0. Presently, they are obtained by carrying out the HFB calculations through the minimization of the expectation value of the effective Hamiltonian [23]

H = H_{sp} + V(P) + V(QQ) + V(HH),    (15)

where H_{sp}, V(P), V(QQ) and V(HH) denote the single-particle Hamiltonian and the pairing, quadrupole-quadrupole and hexadecapole-hexadecapole parts of the effective two-body interaction, respectively. The HH part of the effective interaction, V(HH), is written as [23]

V(HH) = −\frac{χ_4}{2} \sum_{αβγδ} \sum_ν (−1)^ν \langle α | r^4 Y_{4,ν}(θ,φ) | γ \rangle \langle β | r^4 Y_{4,−ν}(θ,φ) | δ \rangle a^†_α a^†_β a_δ a_γ,    (16)

with χ_4 = 0.2442 χ_2 A^{−2/3} b^{−4} for T = 1, and twice this value for the T = 0 case, following Bohr and Mottelson [46]. In Refs. [22,35,36], the strengths of the like-particle components χ_{pp} and χ_{nn} of the QQ interaction were kept fixed. The strength of the proton-neutron (pn) component χ_{pn} was varied so as to reproduce the excitation energy of the 2^+ state, E_{2^+}, for the considered nuclei, namely 94,96Zr, 94,96,98,100Mo, 98,100,104Ru, 104,110Pd, 110Cd, 128,130Te, 128,130Xe, 150Nd and 150Sm, as closely as possible to the experimental values. This is denoted as the PQQ1 parametrization.
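The angular-momentum projection in Eq. (14) ultimately reduces to a one-dimensional quadrature of the form ∫_0^π g(θ) sin θ dθ. A minimal sketch of that quadrature step follows; `g` is a placeholder for the full integrand built from the n(θ), f and F(θ) matrices of the HFB solution, which is not reproduced here.

```python
import math

def project_integral(g, n=200):
    # Simpson's-rule evaluation of the theta integral
    # \int_0^pi g(theta) sin(theta) dtheta appearing in Eq. (14).
    # n (number of subintervals) must be even.
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        theta = i * h
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * g(theta) * math.sin(theta)
    return total * h / 3.0
```

A quick consistency check: for g ≡ 1 the integral is ∫_0^π sin θ dθ = 2, and for g(θ) = cos θ it vanishes by antisymmetry about θ = π/2.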
Alternatively, one can employ a different parametrization of χ_{2pn}, namely PQQ2, by taking χ_{2pp} = χ_{2nn} = χ_{2pn}/2, where the excitation energy E_{2^+} is reproduced by varying χ_{2pp}. Adding the HH part of the two-body interaction to PQQ1 and PQQ2 and repeating the calculations, two more parameterizations of the effective two-body interaction, namely PQQHH1 and PQQHH2, were obtained [37]. The four different parameterizations of the effective pairing plus multipolar correlations provide us with four different sets of wave functions. With three different parameterizations of the Jastrow type of SRC and four sets of wave functions, sets of twelve NTMEs M^{(0ν)}_N are calculated for estimating the associated uncertainties in the present work.

The uncertainties associated with the NTMEs M^{(0ν)}_N for (β−β−)0ν decay are estimated statistically by calculating the mean and the standard deviation defined by

\bar{M}^{(0ν)}_N = \frac{1}{N} \sum_{i=1}^{N} M^{(0ν)}_N(i)    (17)

and

ΔM^{(0ν)}_N = \frac{1}{\sqrt{N−1}} \left[ \sum_{i=1}^{N} \left( \bar{M}^{(0ν)}_N − M^{(0ν)}_N(i) \right)^2 \right]^{1/2}.    (18)

III. RESULTS AND DISCUSSIONS

The model space, SPEs, parameters of the PQQ type of effective two-body interaction and the method used to fix them have already been given in Refs. [22,35,36]. It turns out that with the PQQ1 and PQQ2 parameterizations, the experimental excitation energies of the 2^+ state, E_{2^+} [47], can be reproduced within about 2% accuracy. The electromagnetic properties, namely the reduced B(E2: 0^+ → 2^+) transition probabilities, deformation parameters β_2, static quadrupole moments Q(2^+) and gyromagnetic factors g(2^+), are in overall agreement with the experimental data [48,49]. A.
Short range correlations and radial evolutions of NTMEs

In the approximation of finite size of nucleons with a dipole form factor (F) and finite size plus SRC (F+S), the theoretically calculated twelve NTMEs M^{(0ν)}_N, using the four sets of HFB wave functions generated with the PQQ1, PQQHH1, PQQ2 and PQQHH2 parameterizations of the effective two-body interaction and three different parameterizations of the Jastrow type of SRC for the 94,96Zr, 98,100Mo, 104Ru, 110Pd, 128,130Te and 150Nd isotopes, are presented in Table I. The decomposition of the NTME M^{(0ν)}_N into Fermi, different terms of Gamow-Teller, and tensor matrix elements leads to the following observations:

(ii) The Gamow-Teller matrix element is noticeably modified by the inclusion of the pseudoscalar and weak magnetism terms in the hadronic currents. While M_{GT−PP} increases the absolute value of M_{GT−AA}, M_{GT−AP} has a significant contribution of opposite sign in all cases. The term M_{GT−MM} is smaller than the others, and the introduction of short range correlations changes its sign.

(iii) The tensor matrix elements have a very small contribution, smaller than 2%, to the total transition matrix elements.

(iv) The inclusion of short range correlations changes the nuclear matrix elements significantly; the effects are large for the Gamow-Teller and Fermi matrix elements but small in the case of the tensor ones.

(v) The Miller-Spencer parameterization of the short range correlations, SRC1, cancels out a large part of the radial function H_N, as shown in Fig. 1. This cancellation reduces the calculated matrix elements to about one third of their original value. The other two parameterizations of the short range correlations, namely SRC2 and SRC3, have a sizable effect, which is in all cases much smaller than that of SRC1.

With respect to the point-nucleon case, the change in M^{(0ν)}_N is about 30%-34% due to the FNS.
With the inclusion of the effects due to FNS and SRC, the NTMEs change by about 75%-79%, 58%-62% and 43%-47% for F+SRC1, F+SRC2 and F+SRC3, respectively. It is noteworthy that SRC3 has a practically negligible effect relative to the finite size case. Further, the maximum variations in M^{(0ν)}_N due to the PQQHH1, PQQ2 and PQQHH2 parameterizations with respect to the PQQ1 interaction are about 24%, 18% and 26%, respectively.

In the QRPA [26,27], ISM [16] and PHFB [28,37], the radial evolution of M^{(0ν)} due to the exchange of a light Majorana neutrino has already been studied. In both the QRPA and ISM calculations, it has been established that the contributions of decaying pairs coupled to J = 0 and J > 0 almost cancel beyond r ≈ 3 fm, and the magnitude of C^{(0ν)} for all nuclei undergoing (β−β−)0ν decay has its maximum at about the internucleon distance r ≈ 1 fm. These observations were also made in the PHFB model [28,37]. Similarly, the radial evolution of M^{(0ν)}_N can be studied by defining

M^{(0ν)}_N = \int C^{(0ν)}_N(r) \, dr.    (19)

The radial evolution of M^{(0ν)}_N has been studied for four cases, namely F, F+SRC1, F+SRC2 and F+SRC3. To make the effects of finite size and SRC more transparent, we plot them for 100Mo in Fig. 2. In the case of finite-sized nucleons, C^{(0ν)}_N is peaked at r ≈ 0.5 fm, and with the addition of SRC1 and SRC2 the peak shifts to about 0.8 fm. However, the position of the peak is shifted to 0.7 fm for SRC3. In Fig. 3, we plot the radial dependence of C^{(0ν)}_N for six nuclei, namely 96Zr, 100Mo, 110Pd, 128,130Te and 150Nd, and the same observations remain valid. The same features in the radial distribution of C^{(0ν)}_N are also noticed in the cases of the PQQ2, PQQHH1 and PQQHH2 parametrizations.

B. Uncertainties in NTMEs

The uncertainties associated with the NTMEs M^{(0ν)}_N for (β−β−)0ν decay are estimated by performing a statistical analysis using Eqs. (17) and (18).
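The statistics of Eqs. (17) and (18) are an ordinary mean and a sample standard deviation (note the 1/√(N−1) prefactor, not 1/√N). A minimal sketch, applicable to any set of N calculated NTMEs:

```python
import math

def ntme_mean(values):
    # Eq. (17): arithmetic mean of the N calculated NTMEs.
    return sum(values) / len(values)

def ntme_uncertainty(values):
    # Eq. (18): spread about the mean with the 1/sqrt(N-1) prefactor,
    # i.e. the sample standard deviation.
    mean = ntme_mean(values)
    return math.sqrt(sum((mean - v) ** 2 for v in values) / (len(values) - 1))
```

In the present work N = 12 (four effective interactions times three SRC parameterizations) for the full analysis, and N = 8 when the Miller-Spencer set is excluded.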
It turns out that in all cases, the uncertainties ΔM^{(0ν)}, computed from the sets of twelve NTMEs of Table I, are about 35% for g_A = 1.254 and g_A = 1.0. Further, we estimate the uncertainties for the eight NTMEs M^{(0ν)}_N calculated using the SRC2 and SRC3 parameterizations, and the uncertainties in the NTMEs reduce to about 16% to 20% with the exclusion of the Miller-Spencer type of parametrization. In Table IV, the average NTMEs for case II, along with NTMEs calculated in other models, have been presented. It is noteworthy that in the models employed in Refs. [6,8,9], effects due to higher order currents have not been included. We also extract lower limits on the effective mass of the heavy Majorana neutrino M_N from the largest observed limits on the half-lives T^{0ν}_{1/2} of (β−β−)0ν decay. The extracted limits are M_N > (5.67 ± 0.94) × 10^7 GeV and > (4.06 ± 0.64) × 10^7 GeV, from the limit on the half-life T^{0ν}_{1/2} > 3.0 × 10^{24} yr of 130Te [56], for g_A = 1.254 and g_A = 1.0, respectively.

IV. CONCLUSIONS

We have employed the PHFB model, with four different parameterizations of the pairing plus multipole effective two-body interaction, to generate four sets of HFB intrinsic wave functions, which reasonably reproduce the observed spectroscopic properties, namely the yrast spectra, reduced B(E2: 0^+ → 2^+) transition probabilities, static quadrupole moments Q(2^+) and g-factors g(2^+) of the nuclei participating in (β−β−)2ν decay, as well as their M2ν [35,36]. The study of the effects due to the finite size of nucleons and SRC reveals that in the case of heavy Majorana neutrino exchange, the NTMEs change by about 30%-34% due to the finite size of nucleons, and SRC1, SRC2 and SRC3 change them by 75%-79%, 58%-62% and 43%-47%, respectively. Further, it has been noticed through the study of the radial evolution of the NTMEs that the FNS and SRC play a more crucial role in the heavy than in the light Majorana neutrino exchange mechanism.
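The limit extraction follows directly from inverting Eq. (1). The sketch below shows the arithmetic; the G01 phase-space value and the NTME used here are hypothetical placeholders, not the paper's numbers, so only the scaling behavior is meaningful.

```python
import math

M_P_GEV = 0.938272  # proton mass in GeV

def effective_heavy_mass(U2, masses):
    # Eq. (2): M_N^-1 = sum_i U_ei^2 / m_i  (heavy states, m_i > 1 GeV).
    return 1.0 / sum(u / m for u, m in zip(U2, masses))

def mass_limit_from_half_life(T_limit_yr, G01_per_yr, ntme):
    # Inverting Eq. (1), [T_1/2]^-1 = (m_p/M_N)^2 G01 |M|^2, a lower limit
    # on the half-life becomes a lower limit M_N > m_p |M| sqrt(G01 T_1/2).
    return M_P_GEV * abs(ntme) * math.sqrt(G01_per_yr * T_limit_yr)
```

Because the limit scales as the square root of the half-life, doubling an experimental half-life limit tightens the mass limit by only a factor √2.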
Finally, a statistical analysis has been performed by employing the sets of twelve NTMEs M^(0ν)_N to estimate the uncertainties for g_A = 1.254 and g_A = 1.0. It turns out that the uncertainties are about 35% for all the considered nuclei. Exclusion of the Miller-Spencer parametrization of the Jastrow type of SRC reduces the maximum uncertainty to a value smaller than 20%. The best extracted limit on the effective heavy Majorana neutrino mass M_N, obtained from the available limits on experimental half-lives T^0ν_1/2 using average NTMEs M^(0ν)_N in the PHFB model, is > 5.67(+0.94/−0.94) × 10^7 GeV and > 4.06(+0.64/−0.64) × 10^7 GeV for the 130Te isotope.

... GeV, Λ_A = 1.086 GeV, and m_π is the pion mass. Substituting Eqs. (5)-(11) in Eq. (3), there is one term, associated with h_F (Eq. (9)), contributing to M_Fh, while M_GTh has four terms, denoted by M_GT−AA, M_GT−AP, M_GT−PP and M_GT−MM, which correspond to the four terms in h_GT (Eq. (10)). The tensor contribution, M_Th, has three terms, denoted by M_T−AP, M_T−PP and M_T−MM, which correspond to the three terms in h_T (Eq. (11)). Their contributions to the total nuclear matrix element are discussed in Sec. III.

The SRC parameters are a = 1.1, 1.59 and 1.52 fm^−2, b = 0.68, 1.45 and 1.88 fm^−2, and c = 1.0, 0.92 and 0.46 for the Miller-Spencer parametrization, and the CD-Bonn and Argonne V18 NN potentials, respectively. In this work the NTMEs M^(0ν)_N are calculated in the PHFB model for the above-mentioned three sets of SRC parameters, denoted as SRC1, SRC2 and SRC3, respectively. In Fig. 1, we plot the neutrino potential H_N(r, Λ) = H_Fh(r, Λ) f(r) with the three different parametrizations of SRC. It is noticed that the potentials due to FNS and FNS+SRC3 are peaked at the origin, whereas the ...

FIG. 1. Radial dependence of H_N(r, Λ) = H_Fh(r, Λ) f(r) for the three different parametrizations of the SRC. In the case of FNS, f(r) = 1.

The decomposed NTMEs of 100Mo are presented in Table II for the P_QQ1 parametrization. From the inspection of Table II, the following observations emerge. (i) The contribution of the conventional Fermi matrix elements M_Fh = M_F−VV is about 20% of the total matrix element.

FIG. 2. Radial dependence of C^(0ν)_N(r) for the β−β− 0ν decay of the 100Mo isotope.

FIG. 3. Radial dependence of C^(0ν)_N(r) for the β−β− 0ν decay of the 96Zr, 100Mo, 110Pd, 128,130Te and 150Nd isotopes. In this figure, (a), (b), (c) and (d) correspond to F, F+SRC1, F+SRC2 and F+SRC3, respectively.

TABLE I. Calculated NTMEs M^(0ν)_N in the PHFB model with four different parametrizations of the effective two-body interaction and three different parametrizations of the Jastrow type of SRC, for the β−β− 0ν decay of 94,96Zr, 98,100Mo, 104Ru, 110Pd, 128,130Te and 150Nd isotopes due to the exchange of a heavy Majorana neutrino. (a), (b), (c) and (d) denote the P_QQ1, P_QQHH1, P_QQ2 and P_QQHH2 parametrizations, respectively. See the footnote on page 3 of Ref. [37] for further details.

Nuclei      F          F+SRC1     F+SRC2     F+SRC3
94Zr  (a)   236.9498    77.5817   138.2606   191.3897
      (b)   220.3794    72.4285   128.7496   178.0783
      (c)   205.8370    72.9303   124.3248   168.5705
      (d)   211.0437    68.9323   122.9710   170.3572
96Zr  (a)   177.7479    56.4909   102.4434   142.8831
      (b)   185.5251    59.5338   107.2877   149.3117
      (c)   170.8199    54.2382    98.4051   137.2870
      (d)   175.4730    56.0746   101.2963   141.1240
98Mo  (a)   355.1915   117.0804   208.2494   287.5615
      (b)   346.1118   116.4967   204.5667   281.0515
      (c)   358.5109   118.0563   210.1150   290.2080
      (d)   343.4160   115.2077   202.6977   278.7158
100Mo (a)   365.8004   122.2000   215.8882   296.9869
      (b)   361.9877   122.6611   214.7455   294.4297
      (c)   368.4056   123.2364   217.5391   299.1598
      (d)   328.9795   111.4464   195.1601   267.5869
104Ru (a)   274.0700    89.7666   160.7925   222.1151
      (b)   264.9015    88.1515   156.2893   215.1076
      (c)   258.2796    84.6746   151.6002   209.3600
      (d)   247.0603    82.3208   145.8435   200.6645
110Pd (a)   424.6601   140.3359   249.6835   344.3187
      (b)   379.9404   127.4915   224.6563   308.6907
      (c)   407.2163   134.6824   239.4733   330.1888
      (d)   390.3539   130.6314   230.5392   316.9996
128Te (a)   190.5325    62.4373   111.5143   154.1796
      (b)   231.8024    77.4559   136.7936   188.1893
      (c)   220.7156    73.5158   130.0810   179.0960
      (d)   235.4814    78.6367   138.9052   191.1366
130Te (a)   236.0701    81.5493   141.3447   192.7610
      (b)   231.5921    79.3844   138.1901   188.8492
      (c)   233.0024    80.4020   139.4400   190.2194
      (d)   230.5282    78.9888   137.5307   187.9675
150Nd (a)   163.8037    55.8968    97.8169   133.6912
      (b)   130.1364    43.8840    77.3178   105.9993
      (c)   160.2720    54.6713    95.6942   130.8005
      (d)   131.9781    44.6741    78.5433   107.5715

TABLE II. Decomposition of NTMEs for the β−β− 0ν decay of 100Mo, including the finite size effect (F) and SRC (F+S), for the P_QQ1 parametrization.

NTMEs       F          F+SRC1     F+SRC2     F+SRC3
M_F          68.6223    35.8191    54.0101    64.2516
M_GT−AA    −370.5960  −144.5650  −242.3340  −316.3250
M_GT−AP     174.4640    43.3631    93.9737   137.5100
M_GT−PP     −66.3082    −8.3767   −28.5936   −48.0727
M_GT−MM     −41.7693    16.3949     7.6213   −13.3421
M_GT       −304.2095   −93.1837  −169.3326  −240.2298
M_T−AP        9.4369     9.2393    10.0332    10.0610
M_T−PP       −3.6622    −3.5528    −3.9226    −3.9386
M_T−MM        1.2567     1.1163     1.3438     1.3722
M_T           7.0314     6.8028     7.4544     7.4945
M^(0ν)_N    365.8004   122.2000   215.8882   296.9869

Considering three different parametrizations of the Jastrow type of SRC, sets of twelve NTMEs M^(0ν)_N for the study of the (β−β−)_0ν decay of 94,96Zr, 98,100Mo, 104Ru, 110Pd, 128,130Te and 150Nd isotopes in the heavy Majorana neutrino mass mechanism have been calculated. These sets of twelve NTMEs are displayed in Table I, and are employed to calculate the average values M^(0ν)_N tabulated in Table III, for the bare axial-vector coupling constant g_A = 1.254 and the quenched value g_A = 1.0, as well as the uncertainties ΔM^(0ν)_N.

TABLE III. Average NTMEs M^(0ν)_N and uncertainties ΔM^(0ν)_N for the β−β− 0ν decay of 94,96Zr, 98,100Mo, 104Ru, 110Pd, 128,130Te and 150Nd isotopes. Both bare and quenched values of g_A are considered. Case I and Case II denote calculations with and without SRC1, respectively.

β−β− emitter  g_A    M^(0ν)_N (I)  ΔM^(0ν)_N (I)  M^(0ν)_N (II)  ΔM^(0ν)_N (II)
94Zr    1.254   126.2146   44.9489   152.8378   27.1912
        1.0     142.9381   49.1752   172.1620   29.3965
96Zr    1.254   100.5313   36.8858   122.5048   21.9209
        1.0     114.4851   40.3246   138.6328   23.5263
98Mo    1.254   202.5006   71.6345   245.3957   41.8882
        1.0     230.1520   78.3244   277.2795   44.9878
100Mo   1.254   206.7533   73.0792   250.1870   43.7119
        1.0     235.0606   79.9885   282.7964   47.1334
104Ru   1.254   150.5572   53.9389   182.7216   31.9382
        1.0     171.8075   59.0467   207.1750   34.3939
110Pd   1.254   231.4743   82.4924   280.5688   49.1588
        1.0     263.4339   90.3033   317.3947   53.0150
128Te   1.254   126.8285   46.3381   153.7370   29.4676
        1.0     143.9772   50.6942   173.5263   31.8554
130Te   1.254   136.3856   46.9164   164.5378   27.2226
        1.0     154.3797   51.2511   185.2849   29.1907
150Nd   1.254    85.5467   31.4473   103.4294   20.9802
        1.0      97.3640   34.5024   117.0160   22.8729

TABLE IV. Average NTMEs M′^(0ν)_N = (g_A/1.254)² M^(0ν)_N for the β−β− 0ν decay of 94,96Zr, 98,100Mo, 110Pd, 128,130Te and 150Nd isotopes. Both bare and quenched values of g_A are considered. The superscripts a and b denote the Argonne and CD-Bonn potentials.

. H V Klapdor-Kleingrothaus, I V Krivosheina, I V Titkova, Int. J. Mod. Phys. A 21, 1159 (2006).
F. T. Avignone, S. R. Elliott, and J. Engel, Rev. Mod. Phys. 80, 481 (2008).
F. Šimkovic, J. D. Vergados, and A. Faessler, Phys. Rev. D 82, 113015 (2010); A. Faessler, A. Meroni, S. T. Petcov, F. Šimkovic, and J. Vergados, Phys. Rev.
D 83, 113003 (2011).
V. Tello, M. Nemevsek, F. Nesti, G. Senjanovic, and F. Vissani, Phys. Rev. Lett. 106, 151801 (2011).
J. D. Vergados, Phys. Rep. 133, 1 (1986).
T. Tomoda, Rep. Prog. Phys. 54, 53 (1991).
M. Doi and T. Kotani, Prog. Theor. Phys. 89, 139 (1993).
M. Hirsch, H. V. Klapdor-Kleingrothaus, and O. Panella, Phys. Lett. B374, 7 (1996).
G. Pantis and J. D. Vergados, Phys. Rep. 242, 285 (1994).
G. Pantis, F. Šimkovic, J. D. Vergados, and A. Faessler, Phys. Rev. C 53, 695 (1996).
F. Šimkovic, G. V. Efimov, M. A. Ivanov, and V. E. Lyubovitskij, Z. Phys. A 341, 193 (1992).
F. Šimkovic, M. Nowak, W. A. Kaminski, A. A. Raduta, and A. Faessler, Phys. Rev. C 64, 035501 (2001).
E. Caurier, A. Poves, and A. P. Zuker, Phys. Lett. B252, 13 (1990); E. Caurier, F. Nowacki, A. Poves, and J. Retamosa, Phys. Rev. Lett. 77, 1954 (1996); E. Caurier, F. Nowacki, A. Poves, and J. Retamosa, Nucl. Phys. A654, 973c (1999).
P. Vogel and M. R. Zirnbauer, Phys. Rev. Lett. 57, 3148 (1986).
O. Civitarese, A. Faessler, and T. Tomoda, Phys. Lett. B194, 11 (1987).
E. Caurier, J. Menéndez, F. Nowacki, and A. Poves, Phys. Rev. Lett. 100, 052503 (2008).
J. Menéndez, A. Poves, E. Caurier, and F. Nowacki, arXiv:0809.2183v1 [nucl-th].
L. Pacearescu, A. Faessler, and F. Šimkovic, Phys. At. Nucl. 67, 1210 (2004).
R. Álvarez-Rodríguez, P. Sarriguren, E. Moya de Guerra, L. Pacearescu, A. Faessler, and F. Šimkovic, Phys. Rev. C 70, 064309 (2004).
M. S. Yousef, V. Rodin, A. Faessler, and F. Šimkovic, Phys. Rev. C 79, 014314 (2009).
D. Fang, A. Faessler, V. Rodin, and F. Šimkovic, Phys. Rev. C 82, 051301(R) (2010).
K. Chaturvedi, R. Chandra, P. K. Rath, P. K. Raina, and J. G. Hirsch, Phys. Rev. C 78, 054302 (2008).
R. Chandra, K. Chaturvedi, P. K. Rath, P. K. Raina, and J. G. Hirsch, Europhys. Lett. 86, 32001 (2009).
F. Šimkovic, G. Pantis, J. D. Vergados, and A. Faessler, Phys. Rev. C 60, 055502 (1999).
J. D. Vergados, Phys. Rep. 361, 1 (2002).
F. Šimkovic, A. Faessler, V. Rodin, P. Vogel, and J. Engel, Phys. Rev. C 77, 045503 (2008).
F. Šimkovic, A. Faessler, H. Müther, V. Rodin, and M. Stauf, Phys. Rev. C 79, 055501 (2009).
P. K. Rath, R. Chandra, K. Chaturvedi, P. K. Raina, and J. G. Hirsch, Phys. Rev. C 80, 044303 (2009).
P. Vogel, in Current Aspects of Neutrino Physics, edited by D. O. Caldwell (Springer, 2001), Chap. 8, p. 177; arXiv:nucl-th/0005020.
S. M. Bilenky and J. A. Grifols, Phys. Lett. B550, 154 (2002).
John N. Bahcall, Hitoshi Murayama, and C. Peña-Garay, Phys. Rev. D 70, 033012 (2004).
F. T. Avignone III, G. S. King III, and Yu. G. Zdesenko, New Journal of Physics 7, 6 (2005).
V. A. Rodin, A. Faessler, F. Šimkovic, and P. Vogel, Phys. Rev. C 68, 044302 (2003).
M. Baranger and K. Kumar, Nucl. Phys. A110, 490 (1968).
R. Chandra, J. Singh, P. K. Rath, P. K. Raina, and J. G. Hirsch, Eur. Phys. J. A 23, 223 (2005).
S. Singh, R. Chandra, P. K. Rath, P. K. Raina, and J. G. Hirsch, Eur. Phys. J. A 33, 375 (2007).
P. K. Rath, R. Chandra, K. Chaturvedi, P. K. Raina, and J. G. Hirsch, Phys. Rev. C 82, 064310 (2010).
M. Horoi and S. Stoica, Phys. Rev. C 81, 024321 (2010).
J. Barea and F. Iachello, Phys. Rev. C 79, 044301 (2009).
T. R. Rodríguez and G. Martínez-Pinedo, Phys. Rev. Lett. 105, 252503 (2010).
J. Suhonen and O. Civitarese, Phys. Lett. B668, 277 (2008).
H. F. Wu, H. Q. Song, T. T. S. Kuo, W. K. Cheng, and D. Strottman, Phys. Lett. B162, 227 (1985).
J. G. Hirsch, O. Castaños, and P. O. Hess, Nucl. Phys. A582, 124 (1995).
M. Kortelainen and J. Suhonen, Phys. Rev. C 76, 024315 (2007); M. Kortelainen, O. Civitarese, J. Suhonen, and J. Toivanen, Phys. Lett. B647, 128 (2007).
G. A. Miller and J. E. Spencer, Ann. Phys. (NY) 100, 562 (1976).
A. Bohr and B. R. Mottelson, Nuclear Structure Vol. I (World Scientific, Singapore, 1998).
M. Sakai, At. Data Nucl. Data Tables 31, 399 (1984).
P. Raghavan, At. Data Nucl. Data Tables 42, 189 (1989).
S. Raman, C. W. Nestor Jr., and P. Tikkanen, At. Data Nucl. Data Tables 78, 1 (2001).
R. Arnold et al., Nucl. Phys. A658, 299 (1999).
J. Argyriades et al., Nucl. Phys. A847, 168 (2010).
J. H. Fremlin and M. C. Walters, Proc. Phys. Soc. Lond. A 65, 911 (1952).
R. Arnold et al., Phys. Rev. Lett. 95, 182302 (2005).
R. G. Winter, Phys. Rev. 85, 687 (1952).
C. Arnaboldi et al., Phys. Lett. B557, 167 (2003).
C. Arnaboldi et al., Phys. Rev. C 78, 035502 (2008).
J. Argyriades et al., Phys. Rev. C 80, 032501(R) (2009).
[]
[ "Regime change thresholds in flute-like instruments: influence of the mouth pressure dynamics", "Regime change thresholds in flute-like instruments: influence of the mouth pressure dynamics" ]
[ "Soizic Terrien \nLMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance\n", "Rémi Blandin \nLMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance\n\n) currently at: Gipsa-lab\nUMR 5216\nCNRS\nGrenoble INP\nUniversité Joseph Fourier\nUniversit Stendhal\n11 rue des Mathématiques, BP 4638402Grenoble Campus, Saint Martin d'Héres CedexFrance\n", "Christophe Vergez \nLMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance\n", "Benoît Fabre \nLAM\nUMR 7190\nSorbonne Universités\nUPMC Univ Paris 06\nCNRS\nInstitut Jean Le Rond d'Alembert\n11 rue de LourmelF-75015ParisFrance\n" ]
[ "LMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance", "LMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance", ") currently at: Gipsa-lab\nUMR 5216\nCNRS\nGrenoble INP\nUniversité Joseph Fourier\nUniversit Stendhal\n11 rue des Mathématiques, BP 4638402Grenoble Campus, Saint Martin d'Héres CedexFrance", "LMA\nCNRS\nAix-Marseille Univ\nCentrale Marseille7051, F-13402Marseille Cedex20UPRFrance", "LAM\nUMR 7190\nSorbonne Universités\nUPMC Univ Paris 06\nCNRS\nInstitut Jean Le Rond d'Alembert\n11 rue de LourmelF-75015ParisFrance" ]
[]
Since they correspond to a jump from a given note to another one, the mouth pressure thresholds leading to regime changes are particularly important quantities in flute-like instruments. In this paper, a comparison of such thresholds between an artificial mouth, an experienced flutist and a non-player is provided. It highlights the ability of the experienced player to considerably shift regime change thresholds, and thus to enlarge his or her control in terms of nuances and spectrum. Based on recent works on other wind instruments and on the theory of dynamic bifurcations, the hypothesis that the dynamics of the blowing pressure influences regime change thresholds is tested experimentally and numerically. The results highlight the strong influence of this parameter on thresholds, suggesting its wide use by experienced musicians. Starting from these observations and from an analysis of a physical model of flute-like instruments, involving numerical continuation methods and Floquet stability analysis, a phenomenological modelling of regime change is proposed and validated. It allows prediction of the regime change thresholds in the dynamic case, in which time variations of the blowing pressure are taken into account.
10.3813/aaa.918828
[ "https://arxiv.org/pdf/1403.7487v1.pdf" ]
106,397,320
1403.7487
471ef3bcec6f0ad1a9e167111f3458f5f4c76610
Regime change thresholds in flute-like instruments: influence of the mouth pressure dynamics

28 Mar 2014

Soizic Terrien, LMA, CNRS, Aix-Marseille Univ, Centrale Marseille, UPR 7051, F-13402 Marseille Cedex 20, France
Rémi Blandin, LMA, CNRS, Aix-Marseille Univ, Centrale Marseille, UPR 7051, F-13402 Marseille Cedex 20, France (currently at: Gipsa-lab, UMR 5216, CNRS, Grenoble INP, Université Joseph Fourier, Université Stendhal, 11 rue des Mathématiques, BP 46, 38402 Grenoble Campus, Saint Martin d'Hères Cedex, France)
Christophe Vergez, LMA, CNRS, Aix-Marseille Univ, Centrale Marseille, UPR 7051, F-13402 Marseille Cedex 20, France
Benoît Fabre, LAM, UMR 7190, Sorbonne Universités, UPMC Univ Paris 06, CNRS, Institut Jean Le Rond d'Alembert, 11 rue de Lourmel, F-75015 Paris, France

arXiv:1403.7487v1 [physics.class-ph]

Since they correspond to a jump from a given note to another one, the mouth pressure thresholds leading to regime changes are particularly important quantities in flute-like instruments. In this paper, a comparison of such thresholds between an artificial mouth, an experienced flutist and a non-player is provided. It highlights the ability of the experienced player to considerably shift regime change thresholds, and thus to enlarge his or her control in terms of nuances and spectrum. Based on recent works on other wind instruments and on the theory of dynamic bifurcations, the hypothesis that the dynamics of the blowing pressure influences regime change thresholds is tested experimentally and numerically. The results highlight the strong influence of this parameter on thresholds, suggesting its wide use by experienced musicians. Starting from these observations and from an analysis of a physical model of flute-like instruments, involving numerical continuation methods and Floquet stability analysis, a phenomenological modelling of regime change is proposed and validated.
It allows prediction of the regime change thresholds in the dynamic case, in which time variations of the blowing pressure are taken into account.

Introduction and problem statement

In flute playing, the phenomenon of regime change is particularly important, both because it corresponds to a jump from a given note to another one (in most cases an octave higher or lower) and because it is related to the blowing pressure, directly controlled by the musician. As an example, a regime change from the first register to the second register (periodic oscillation regimes synchronized on the first and the second resonance mode of the instrument, respectively) occurs when the musician blows hard enough into the instrument, and is characterized by a frequency leap approximately an octave higher (see for example [1]). It is well known that register change is accompanied by hysteresis (see for example [1,2,3]): the mouth pressure at which the jump between two registers occurs (the so-called regime change threshold) is larger for rising pressures than for diminishing pressures. For musicians, the hysteresis allows a greater freedom in terms of musical performance. Indeed, it allows them both to play forte on the first register and piano on the second register, leading to a wider control in terms of nuance and timbre. Numerous studies have focused on both the prediction and the experimental detection of such thresholds [1,2]. Other studies have focused on the influence of different parameters on regime change thresholds, such as the geometrical dimensions of the channel, chamfers and excitation window of recorders or organ flue pipes [4,5,6], the importance of nonlinear losses [2], or the convection velocity of perturbations on the jet [2]. However, it seems that few studies have focused, in terms of regime change thresholds, on control parameters (i.e. related to the musician) other than the slowly varying blowing pressure.
Since it has important musical consequences, one can wonder if flute players develop strategies to change the values of regime change thresholds and to maximize the hysteresis. To test this hypothesis, increasing and decreasing profiles of blowing pressure (crescendo and decrescendo) were performed on the same alto recorder and for a given fingering (corresponding to the note F4) by an experienced flutist, a non-player, and an artificial mouth [7]. Both the experienced musician and the non-musician were instructed to stay as long as possible on the first register and on the second register, for the crescendo and the decrescendo respectively. The different experimental setups will be described in section 2. The representation of the fundamental frequency of the sound with respect to the blowing pressure, displayed in figure 1, highlights that the musician obtained an increasing threshold 213 % higher and a decreasing threshold 214 % higher than the artificial mouth, whereas the differences between the non-musician and the artificial mouth are 9 % for the increasing threshold and 32 % for the decreasing threshold. As highlighted in figure 2, similar comparisons on other fingerings (G4, A4, Bb4 and B4) show that the thresholds reached by the musician are at least 95 % higher and up to 240 % higher than the thresholds observed on the artificial mouth. On the other hand, the thresholds obtained by the non-musician are at most 13.3 % lower and 29 % higher than the thresholds of the artificial mouth. Figure 3 presents the comparison between the experienced flutist, the non-musician and the artificial mouth in terms of hysteresis. For the three cases, the differences between the thresholds obtained while performing increasing and decreasing blowing pressure ramps are represented for the five fingerings studied.
One can observe that the musician reaches hysteresis between 169 % and 380 % wider than the artificial mouth for the F4, G4, A4 and Bb4 fingerings, and up to 515 % wider than the artificial mouth for the B4 fingering. The hysteresis observed for the non-musician is between 27 % and 233 % wider than the hysteresis obtained with the artificial mouth. One can note that the maximum relative difference of 233 % is obtained for the B4 fingering. For all the other fingerings, the relative differences with the artificial mouth remain between 27 % and 65 %. In all cases, the hysteresis obtained by the experienced flutist is at least 84 % wider than that observed for the non-player. As a first conclusion, one can consider that the behaviour of a given instrument played by the artificial mouth and by a non-musician is not significantly different in terms of increasing regime change thresholds. In terms of hysteresis, if the results are not significantly different for the F4, A4 and Bb4 fingerings, more important differences are observed for both the G4 and B4 fingerings. However, the values measured for the experienced flutist remain significantly higher, both in terms of thresholds and hysteresis, than those obtained for the non-player and the artificial mouth. An experienced flutist is thus able to significantly and systematically modify these thresholds, and thus to enlarge the hysteresis, which presents an obvious musical interest. Which parameters does the musician use to control the regime change thresholds? If the influence of the blowing pressure has been widely studied under the hypothesis of quasi-static variations [1,2,3,4,5,6,8,9] (called hereafter the static case), and if studies have focused on the measurement of various control parameters [10,11,12], to the authors' knowledge, no study has ever focused on the influence of the blowing pressure dynamics on the behaviour of flute-like instruments.
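The percentage comparisons above are plain relative differences between thresholds. As a small worked example with invented pressure values (the numbers below are hypothetical, not the measured data):

```python
def relative_increase(p, p_ref):
    """Relative difference of a threshold with respect to a reference, in %."""
    return 100.0 * (p - p_ref) / p_ref

# Hypothetical rising/falling regime-change thresholds, in Pa.
musician = {"rising": 960.0, "falling": 680.0}
artificial_mouth = {"rising": 450.0, "falling": 320.0}

shift = relative_increase(musician["rising"], artificial_mouth["rising"])
hyst_musician = musician["rising"] - musician["falling"]
hyst_artificial = artificial_mouth["rising"] - artificial_mouth["falling"]
print(round(shift, 1), hyst_musician, hyst_artificial)
```

With these invented values the musician's rising threshold is about 113 % above the artificial-mouth reference, and the hysteresis widths (rising minus falling threshold) are compared in the same way.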
Moreover, recent works have shown the strong influence of this parameter on the oscillation thresholds of reed instruments [13,14], and thus suggest that it could be a control parameter for musicians. In the same way, as recent studies [3,9] have highlighted that the phenomenon of register change in flute-like instruments is related to a bifurcation of the system, corresponding to a loss of stability of a periodic solution branch, it suggests considering the results of the theory of dynamic bifurcations [15]. This theory takes into account the time evolution of the bifurcation parameters. This paper focuses on the influence of the dynamics of linearly increasing and decreasing ramps of the blowing pressure on the regime change thresholds between the two first registers of flute-like instruments. In section 2, the state-of-the-art physical model for flutes is briefly presented, as well as the instrument used for experiments, and the numerical and experimental tools involved in this study. Experimental and numerical results are presented in section 3, highlighting the strong influence of the slope of a linear ramp of the blowing pressure on the thresholds. Finally, a phenomenological modelling of regime change is proposed and validated in section 4, which could lead to a prediction of regime change thresholds and associated hysteresis.

Experimental and numerical tools

In this section, the experimental and numerical tools used throughout the article are introduced.

Measurements on musicians

For the present study, an alto Bressan Zen-On recorder, adapted for different measurements and whose geometry is described in [16], has been played by the professional recorder player Marine Sablonnière.
As illustrated in figure 4, two holes were drilled to allow measurement of both the mouth pressure P_m, through a capillary tube connected to a pressure sensor (Honeywell ASCX01DN), and the acoustic pressure in the resonator (under the labium), through a differential pressure sensor (Endevco 8507C-2).

Pressure controlled artificial mouth

Such experiments with musicians do not allow a systematic and repeatable exploration of the instrument behaviour. To play the instrument without any instrumentalist, a pressure controlled artificial mouth is used [7]. This setup allows precise control of the blowing pressure, and freezes different parameters (such as the configuration of the vocal tract or the distance between the holes and the fingers) which vary continuously when a musician is playing. As described in figure 5, a servo valve connected to compressed air controls the flow injected into the instrument through a cavity representing the mouth of the musician. Every 40 µs, the desired pressure (the target) is compared to the pressure measured in the mouth through a differential pressure sensor (Endevco 8507C-1). The electric current sent to the servo valve, directly controlling its opening and thus the flow injected into the mouth, is then adjusted using a Proportional Integral Derivative (PID) controller scheme, implemented on a DSP card (dSpace 1006) [7].

Physical model of the instrument

In parallel with the experiments, the behaviour of the state-of-the-art model for flute-like instruments is studied through time-domain simulations and numerical continuation, and qualitatively compared below to experimental results, giving rise to a better understanding of the different phenomena involved. As for other wind instruments, the mechanism of sound production in flute-like instruments can be described as a coupling between a nonlinear exciter and a linear, passive resonator, the latter being constituted by the air column contained in the pipe [17,18].
However, they differ from other wind instruments in the nature of their exciter: whereas it involves the vibration of a solid element in reed and brass instruments (a cane reed or the musician's lips), it is constituted here by the nonlinear interaction between an air jet and a sharp edge called the labium (see for example [19]), as illustrated in figure 6. More precisely, the auto-oscillation process is modeled as follows: when the musician blows into the instrument, a jet with velocity U_j and half-width b is created at the channel exit. As the jet is naturally unstable, any perturbation is amplified while being convected along the jet, from the channel exit to the labium. The convection velocity c_v of these perturbations on the jet is related to the jet velocity itself through c_v ≈ 0.4 U_j [20,21,22]. The duration of convection introduces a delay τ in the system, related both to the distance W between the channel exit and the labium (see figure 6) and to the convection velocity c_v through τ = W/c_v. Due to its instability, the jet oscillates around the labium with a deflection amplitude η(t), leading to an alternate flow injection inside and outside the instrument. These two flow sources Q_in and Q_out in phase opposition (separated by a small distance δ_d, whose value is evaluated by Verge in [23]) act as a dipolar pressure source difference Δp_src(t) on the resonator [1,23,24], represented through its admittance Y. The acoustic velocity v_ac(t) of the waves created in the resonator perturbs the air jet back at the channel exit. As described above, this perturbation is convected and amplified along the jet, toward the labium. The instability is amplified through this feedback loop, leading to self-sustained oscillations. This mechanism of sound production can be represented by a feedback loop system, as shown in figure 7.
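The delay τ = W/c_v can be put into numbers directly. Assuming the standard Bernoulli relation U_j = sqrt(2 P_m / ρ) between mouth pressure and jet velocity (this relation is not stated in the excerpt above, and the numerical values of W and P_m below are purely illustrative):

```python
import math

rho = 1.2      # air density (kg/m^3)
W = 4e-3       # channel-exit-to-labium distance (m), illustrative value
P_m = 300.0    # mouth pressure (Pa), illustrative value

U_j = math.sqrt(2.0 * P_m / rho)  # jet velocity (Bernoulli assumption)
c_v = 0.4 * U_j                   # convection velocity of jet perturbations
tau = W / c_v                     # feedback-loop delay tau = W / c_v
print(U_j, tau)
```

For these values the jet velocity is about 22 m/s and the delay a fraction of a millisecond, i.e. comparable to the oscillation period of the played notes, which is why the delay matters for regime selection.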
According to various studies describing the different physical phenomena involved ([5,20,21,22,25] for the jet, [1,23,24] for the aeroacoustic source), the state-of-the-art model for flute-like instruments [19] is described through system (1), in which each equation is related to a given element of the feedback loop system of figure 7:

η(t) = (h / U_j) · e^(α_i W) · v_ac(t − τ)

Δp(t) = Δp_src(t) + Δp_los(t) = (ρ δ_d b U_j / W) · d/dt [ tanh( (η(t) − y_0) / b ) ] − (ρ/2) · (v_ac(t) / α_vc)² · sgn(v_ac(t))

V_ac(ω) = Y(ω) · P(ω) = [ a_0 / (b_0 jω + c_0) + Σ_{k=1}^{p−1} a_k jω / (ω_k² − ω² + jω ω_k / Q_k) ] · P(ω)    (1)

Figure 7: Basic modeling of the sound production mechanism in flute-like instruments, as a system with a feedback loop [26,19].

In these equations, α_i is an empirical coefficient characterizing the amplification of the jet perturbations [20,25], ρ is the air density, and y_0 is the offset between the labium position and the jet centerline (see figure 6). V_ac and P are respectively the frequency-domain expressions of the acoustic velocity at the pipe inlet v_ac(t) and of the pressure source Δp(t). In the second equation, the additional term Δp_los = −(ρ/2)(v_ac(t)/α_vc)² sgn(v_ac(t)) models nonlinear losses due to vortex shedding at the labium [27]. α_vc is a vena contracta factor (estimated at 0.6 in the case of a sharp edge), and sgn represents the sign function. The admittance Y(ω) is represented in the frequency domain as a sum of resonance modes, including a mode at zero frequency (the so-called uniform mode [26]). The coefficients a_k, ω_k and Q_k are respectively the modal amplitude, the resonance pulsation and the quality factor of the k-th resonance mode, ω is the pulsation, and a_0, b_0 and c_0 are the coefficients of the uniform mode. For the different fingerings of the recorder used for experiments, these coefficients are estimated through a fit of the admittance.
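The modal representation of Y(ω) in system (1) can be evaluated directly with complex arithmetic. The modal coefficients below are placeholders chosen for illustration, not the fitted values for the recorder used in the study:

```python
import math

# Placeholder uniform-mode coefficients a0, b0, c0 and modal data
# (a_k, omega_k, Q_k) for two resonance modes.
a0, b0, c0 = 10.0, 1.0, 100.0
modes = [(50.0, 2 * math.pi * 440.0, 40.0),
         (30.0, 2 * math.pi * 880.0, 35.0)]

def Y(omega):
    """Admittance as a uniform (zero-frequency) mode plus resonance modes."""
    jw = 1j * omega
    y = a0 / (b0 * jw + c0)
    for a_k, w_k, Q_k in modes:
        y += a_k * jw / (w_k**2 - omega**2 + jw * w_k / Q_k)
    return y

w_res = 2 * math.pi * 440.0   # at the first resonance pulsation
w_off = 2 * math.pi * 600.0   # between the two resonances
print(abs(Y(w_res)) > abs(Y(w_off)))  # the resonance peak dominates
```

At ω = ω_k the k-th modal term reduces to a_k Q_k / ω_k, so |Y| peaks sharply at each resonance, which is the feature the fit of the measured admittance has to capture.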
These admittances are estimated through the measurement of the geometrical dimensions of the bore of the recorder and the use of the software WIAT [28]. The length corrections related to the excitation window of the recorder (see figure 6) are subsequently taken into account using the analytical formulas detailed in chapter 7 of [26].

Numerical resolution methods

Time-domain simulations of this model are carried out through a classical Runge-Kutta method of order 3, implemented in Simulink [29]. A high sampling frequency f_s = 23 × 44100 Hz is used. This value is chosen both because the solution is not significantly different for higher sampling frequencies, and because it allows an easy resampling at a frequency suitable for audio production systems. In parallel, equilibrium and periodic steady-state solutions of the model are computed using orthogonal collocation (see for example [30]) and numerical continuation [31]. Starting from a given equilibrium or periodic solution, continuation methods, which rely on the implicit function theorem [32], compute the neighbouring solution, i.e. the solution for a slightly different value of the parameter of interest (the so-called continuation parameter), using a prediction-correction method. This iterative process is schematically represented in figure 8. It thus aims at following the corresponding branch (that is to say, "family") of solutions when the continuation parameter varies. For more details on these methods and their adaptation to the state-of-the-art flute model, the reader is referred to [33,34] and [9]. The stability properties of the different parts of the branches are subsequently determined using Floquet theory (see for example [35]). For a given dynamical system, the computation of both the different branches of equilibrium and periodic solutions and their stability, here achieved with the software DDE-Biftool [36,37,34], leads to bifurcation diagrams.
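The prediction-correction idea behind numerical continuation can be illustrated on a toy algebraic problem. This is only a naive natural-parameter sketch (predictor: reuse the current solution; corrector: Newton iterations), not the pseudo-arclength continuation implemented in DDE-Biftool, and the test function is an arbitrary choice for illustration.

```python
def continuation(f, dfdx, x0, lam0, dlam, n_steps, tol=1e-12):
    """Naive natural-parameter continuation of solutions of f(x, lam) = 0.

    Predictor: the solution at the previous parameter value.
    Corrector: scalar Newton iterations on x at the new parameter value.
    A toy stand-in for the pseudo-arclength methods used in DDE-Biftool.
    """
    branch = [(lam0, x0)]
    x, lam = x0, lam0
    for _ in range(n_steps):
        lam += dlam          # step the continuation parameter
        for _ in range(50):  # Newton corrector
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < tol:
                break
        branch.append((lam, x))
    return branch

# Follow the branch of solutions of f(x, lam) = x^3 - lam = 0, i.e. x = lam^(1/3)
branch = continuation(lambda x, l: x ** 3 - l,
                      lambda x, l: 3 * x ** 2,
                      x0=1.0, lam0=1.0, dlam=0.1, n_steps=10)
```

Pseudo-arclength variants replace the parameter step by a step along the branch tangent, which is what lets the real tools turn around folds such as those visible in bifurcation diagrams.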
Such diagrams ideally represent all the branches of equilibrium and periodic solutions as a function of the continuation parameter, and provide access to specific information that is not possible to access experimentally or in time-domain simulations: unstable parts of the branches, coexistence of different solutions, and bifurcations arising along the different branches.

Figure 8: Schematic representation of the principle of numerical continuation through a prediction-correction algorithm [31,36]. Starting from a known part of the branch, the neighbouring solution (for a slightly different value of the continuation parameter λ) is predicted and corrected. By successive iterations, this leads to the computation of the complete solution branch of equilibrium or periodic solutions. x represents a characteristic of the solution, such as its frequency or its amplitude.

Thereby, a bifurcation diagram provides a more global knowledge of the system dynamics and an easier interpretation of different phenomena observed experimentally and in time-domain simulations, as illustrated for example in [38,39,9]. This will be illustrated by figure 11, provided in section 3, which represents such a diagram of the state-of-the-art model of flute-like instruments, in terms of oscillation frequency of the periodic solutions with respect to the blowing pressure.

Previous studies have demonstrated the strong influence of the dynamics of control parameters on the oscillation threshold of reed instruments. In particular, they have highlighted, in such instruments, the phenomenon of bifurcation delay, corresponding to a shift of the oscillation threshold caused by the dynamics of the control parameter [15]. Although we focus here on transitions between the two first registers (i.e.
between two different oscillation regimes), and although flute-like instruments are mathematically quite different dynamical systems from reed instruments, these former studies suggest that the temporal profile P_m(t) of the pressure dynamics could considerably influence the regime change thresholds. We focus in this section on the comparison of regime change thresholds between the static case and the dynamic case, the latter corresponding to a time-varying blowing pressure. To test this hypothesis, linearly increasing and decreasing blowing pressure ramps P_m(t) = P_ini + a·t, with different slopes a, have been run both through time-domain simulations and experiments with the artificial mouth (the reader is referred to appendix A for a table of notations). Figure 9 represents, for the F_4 fingering, the dynamic pressure thresholds P_dyn corresponding to the jump between the two first registers, with respect to the slope a. The positive and negative values of a correspond to increasing and decreasing ramps of P_m(t), respectively. For each value of a, the experimental threshold is a mean value calculated over three realisations. In this paper, the value of P_dyn is determined through a fundamental frequency detection using the software Yin [40]: P_dyn is defined as the value of P_m at which a jump of the fundamental frequency is observed. The temporal resolution of the detection is 0.0016 s for experimental signals and 0.0007 s for simulation signals, which corresponds to a resolution of 0.8 Pa and 0.36 Pa respectively, in the case of a slope a = 500 Pa/s of the blowing pressure. Despite the dramatic simplifications of the model, these first results highlight that the real instrument and the model present similar qualitative behaviours.
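The threshold definition used above (P_dyn is the value of P_m at the jump of the fundamental frequency) can be sketched as a simple detector on synchronous pressure and frequency tracks. The jump criterion and the synthetic data below are illustrative assumptions; the paper itself relies on the Yin frequency tracker.

```python
def dynamic_threshold(pm, f0, jump_hz=100.0):
    """Return the blowing pressure at which the fundamental frequency
    jumps between two consecutive samples (register change), or None if
    no jump occurs.  pm and f0 are synchronous sample lists; jump_hz is
    an illustrative criterion, not the paper's detection setting."""
    for i in range(1, len(f0)):
        if abs(f0[i] - f0[i - 1]) > jump_hz:
            return pm[i]
    return None

# Synthetic example: first register near 350 Hz jumping to the second near 740 Hz
pm = [200 + i for i in range(100)]
f0 = [350.0] * 60 + [740.0] * 40
p_dyn = dynamic_threshold(pm, f0)  # jump detected at pm[60] = 260
```

With real frequency tracks, the temporal resolution of the tracker translates directly into a pressure resolution through the slope a, as noted in the text.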
Surprisingly enough, the experimental and numerical behaviours are also quantitatively similar, with typical relative differences between 3% and 28% on the thresholds observed for rising pressure (called the increasing pressure threshold P_dyn 1→2). For the decreasing pressure threshold P_dyn 2→1, observed for diminishing pressure, the difference is more important, with a typical relative deviation of about 50%. Moreover, the strong influence of a on both P_dyn 1→2 and P_dyn 2→1 is clearly pointed out: with the artificial mouth, a = 400 Pa/s leads to a value of P_dyn 1→2 45% higher than a = 10 Pa/s, and to a value of P_dyn 2→1 16% lower. Similarly, for time-domain simulations, a = 400 Pa/s leads to a value of P_dyn 1→2 15.5% higher and to a value of P_dyn 2→1 18% lower than a = 10 Pa/s. Increasing a thus enlarges the hysteresis; indeed, P_dyn 1→2 and P_dyn 2→1 are respectively increased and decreased. This can be compared (at least qualitatively) with the phenomena observed on an experienced flutist, presented in section 1. Figure 10 represents, as previously, the mean value of the regime change thresholds P_dyn 1→2 and P_dyn 2→1 obtained over three experiments, with respect to the slope a, for the other fingerings already studied in section 1. It highlights that the behaviour observed in figure 9 for the F_4 fingering is similar for the other fingerings of the recorder. Indeed, depending on the fingering, the increase of a from 20 Pa/s to 400 Pa/s leads to an increase of P_dyn 1→2 between 13% and 43% and to a decrease of P_dyn 2→1 from 3% to 15%. Again, these results can be qualitatively compared with the results presented in section 1 for an experienced musician.

Figure 10: Transition between the two first registers of an alto recorder played by an artificial mouth, for five different fingerings: representation of the dynamic regime change thresholds with respect to the slope a of linear ramps of the blowing pressure.
3.2 Influence of the slope of blowing pressure ramps on oscillation frequency and amplitude

As observed for the oscillation threshold in clarinet-like instruments [13], we show in this section that a modification of the regime change threshold does not imply a strong modification of the characteristic curves, observed in the static case, linking the oscillation amplitude and the oscillation frequency to the blowing pressure. For numerical results, this feature can be easily illustrated through a comparison between the results of time-domain simulations and the bifurcation diagrams obtained through numerical continuation. This is done in figure 11, in terms of frequency with respect to the blowing pressure P_m, for modal coefficients corresponding to the G_4 fingering. In this figure, the two periodic solution branches correspond to the first and the second registers, and solid and dashed lines represent stable and unstable parts of the branches, respectively. As the computation of such a bifurcation diagram relies on the static bifurcation theory, the point where the first register becomes unstable, at P_m = 311.5 Pa, corresponds to the static threshold P_stat 1→2 from the first to the second register. It thus corresponds to the threshold that would be observed by choosing successive constant values of the blowing pressure, and letting the system reach a steady-state solution (here, the first or the second register). In the same way, the point at which the change of stability of the second register is observed corresponds to the static threshold from the second to the first register, P_stat 2→1 = 259 Pa. Figure 11 shows that for high values of a, the system follows the unstable part of the branch corresponding to the first register: the maximum relative difference between the frequency predicted by the bifurcation diagram and the results of time-domain simulations is 9 cents.
In the dynamic case, the system thus remains on the periodic solution branch corresponding to the "starting" regime (the first register in figure 11), after it has become unstable. Providing, for the A fingering, the oscillation amplitude as a function of P_m for different values of a, figure 12 highlights that the same property is observed experimentally. In both cases, the value of a considerably affects the register change thresholds. However, far enough from the jump between the two registers, the oscillation amplitude only depends on the value of P_m, and does not appear significantly affected by the value of a. In figure 12, the comparison between the two slowest ramps (20 Pa/s and 100 Pa/s) and the two others is particularly interesting. Indeed, for the two slowest ramps, an additional oscillation regime, corresponding to a quasiperiodic sound (called a multiphonic sound in a musical context) [6,41,42,3,43], is observed for blowing pressures between 300 Pa and 400 Pa for a = 20 Pa/s, and between 340 Pa and 400 Pa for a = 100 Pa/s. As this regime does not appear for higher slopes, it highlights that a modification of the blowing pressure dynamics can allow the musician to avoid (or conversely to favor) a given oscillation regime.

Influence of the pressure dynamics before the static threshold

To better understand the mechanisms involved in the case of a dynamic bifurcation between two registers, this section focuses on the influence, on the regime change thresholds, of the evolution of P_m(t) before the static threshold P_stat has been reached. In other words, the aim is to determine whether the way P_m(t) evolves before the static threshold is reached impacts the dynamic regime change threshold. To investigate this issue, different piecewise linear ramps have been achieved both with the artificial mouth and in time-domain simulation.
For rising pressures, these profiles are defined such that dP_m/dt = a_1 for P_m < P_knee and dP_m/dt = a_2 for P_m > P_knee (where a_1 and a_2 are constants), and P_knee is a constant that may be adjusted. For diminishing pressures, they are such that dP_m/dt = a_1 for P_m > P_knee and dP_m/dt = a_2 for P_m < P_knee.

Experimental results

Experimentally, blowing pressure profiles constituted by two linear ramps with different slopes (a_1 = 350 Pa/s, a_2 = 40 Pa/s) have been achieved for the G_4 fingering. The pressure P_knee at which the slope break occurs varies between the different realisations. Figure 13 presents these experimental results in terms of P_dyn 1→2 and P_dyn 2→1, with respect to P_knee − P_stat. Thereby, a zero abscissa corresponds to a change of slope from a_1 to a_2 at a pressure equal to P_stat 1→2 for rising pressure, and equal to P_stat 2→1 for diminishing pressure. It highlights that for rising pressure, P_dyn 1→2 remains constant as long as P_knee < P_stat 1→2 (i.e. for negative values of the abscissa), and that this constant value (about 258 Pa) corresponds to the value of P_dyn 1→2 previously observed for a linear ramp with constant slope a_2 = 40 Pa/s (see figure 10). Conversely, once P_knee > P_stat 1→2, the value of P_dyn 1→2 gradually increases to reach 295 Pa, which corresponds to the value observed for a linear ramp with a constant slope a_1 = 350 Pa/s. The same behaviour is observed for the decreasing threshold: as long as P_knee > P_stat 2→1, the value of P_dyn 2→1 is almost constant and close to that observed previously for a linear ramp of constant slope a_2 = 40 Pa/s (see figure 10). However, for P_knee < P_stat, the value of P_dyn 2→1 progressively decreases to about 142 Pa, which corresponds to that observed for a linear ramp of constant slope a_1 = 350 Pa/s (see figure 10).
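The piecewise linear profiles defined above can be generated as follows, for the rising-pressure case; this is a minimal sketch of the profile itself, not of the control loop of the artificial mouth.

```python
def piecewise_ramp(t, p_ini, a1, a2, p_knee):
    """Rising blowing-pressure profile made of two linear ramps:
    slope a1 from p_ini up to p_knee, then slope a2 afterwards
    (the rising-pressure case of the profiles described in the text)."""
    t_knee = (p_knee - p_ini) / a1  # time at which the slope break occurs
    if t <= t_knee:
        return p_ini + a1 * t
    return p_knee + a2 * (t - t_knee)

# Example: a1 = 350 Pa/s up to p_knee = 240 Pa, then a2 = 40 Pa/s
p = piecewise_ramp(0.9, p_ini=100.0, a1=350.0, a2=40.0, p_knee=240.0)
```

The diminishing-pressure case is symmetric (negative slopes, knee crossed from above) and can be obtained by the same function with negative a1, a2.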
As a conclusion, as long as the slope break occurs before the static threshold has been reached, the dynamic threshold is driven by the slope of the second part of the blowing pressure profile. If it occurs just after the static threshold has been reached, the dynamic threshold lies between the dynamic thresholds corresponding to the two slopes of the blowing pressure profile. Finally, if the slope break occurs, for rising pressure, at a pressure sufficiently higher (respectively lower for diminishing pressure) than the static threshold, the dynamic threshold is driven, as expected, by the slope of the first part of the blowing pressure profile.

Figure 13: Abscissa: difference between the pressure P_knee at which the knee occurs and the static regime change threshold P_stat. Dashed lines represent the dynamic regime change thresholds observed previously for linear ramps of constant slope a_1 and a_2 respectively.

Results of time-domain simulations

These experimentally observed behaviours are also observed in numerical simulations of the model. For modal coefficients corresponding to the G_4 fingering, the comparison has been made between the dynamic thresholds obtained for three different cases:

• linear increasing ramps of P_m(t), with slope a_2;
• a first piecewise linear increasing ramp, with a slope change at P_knee = 250 Pa, and a fixed value of a_1 = 500 Pa/s;
• a second piecewise linear increasing ramp, with a slope change at P_knee = 250 Pa, and a fixed value of a_1 = 200 Pa/s.

It is worth noting that for the two kinds of piecewise linear ramps, P_knee is lower than P_stat 1→2, predicted by a bifurcation diagram at 311.5 Pa (see figure 11). For each case, various simulations were achieved for different values of a_2. Figure 14 provides the comparison of the values of P_dyn 1→2 obtained for these three kinds of blowing pressure profiles as a function of a_2.
With a maximum relative difference of 3.5%, the thresholds obtained for the piecewise linear profiles are strongly similar to those obtained with linear ramps. As for the experimental results, if P_knee < P_stat 1→2, the dynamic threshold P_dyn 1→2 is thus driven by the second slope a_2 of the profile. For the particular profile in which a_1 = 500 Pa/s and a_2 = 830 Pa/s, the influence of the value of P_knee on P_dyn 1→2 has been studied. The results are represented in figure 15 in the same way as the experimental results in figure 13. As in the experiments, if P_knee < P_stat 1→2, the value of P_dyn 1→2 is driven by a_2, and a constant threshold of about 385 Pa is observed, corresponding to the value obtained for a linear ramp with a slope equal to a_2 = 830 Pa/s (see figure 14). Conversely, when P_knee > P_stat 1→2, P_dyn 1→2 gradually shifts to finally reach the value of 369 Pa, equal to that observed for a linear ramp with a slope equal to a_1 = 500 Pa/s. The dynamic threshold is then driven by a_1.

Figure 15: Ordinate: dynamic threshold P_dyn 1→2. Abscissa: difference between the pressure P_knee at which the knee occurs and the static regime change threshold P_stat 1→2. Dashed lines represent the values of P_dyn 1→2 previously observed for linear ramps of slope a_1 and a_2.

Comparison with the results of an experienced musician

The strong influence of the dynamics of P_m(t) on thresholds and hysteresis suggests, by comparison with the results presented in section 1, that musicians use this property to access a wider control in terms of nuances and timbre. However, the comparison between the musician and the artificial mouth (see figures 2, 3 and 10) shows that the values of P_dyn 1→2 obtained by the musician remain, for the different fingerings studied, between 61% and 134% higher than the maximal thresholds obtained with the artificial mouth for high values of the slope a.
In the same way, the hysteresis obtained by the musician remains between 26% and 102% wider than the maximal hysteresis observed with the artificial mouth for the F_4, G_4, A_4 and B♭_4 fingerings, and up to 404% wider for the B_4 fingering.

Discussion

These results bring out the strong influence of the dynamics of the blowing pressure on the oscillation regime thresholds in flute-like instruments. Comparisons between experimental and numerical results show that the substantial simplifications involved in the state-of-the-art physical model of the instrument do not prevent it from faithfully reproducing the phenomena observed experimentally. Surprisingly enough, the different results show good agreement not only qualitatively but also quantitatively. Moreover, both the experimental and numerical results show that the dynamic threshold does not depend on the dynamics of the blowing pressure before the static threshold has been reached. Although the system studied here is mathematically very different from one that models reed instruments (see for example [2,44,9]), and although the focus is set here on bifurcations of periodic solutions, the results can be compared with some phenomena highlighted by Bergeot et al. on the dynamic oscillation threshold of reed instruments [14,13]. As in the work of Bergeot, the phenomena highlighted are not predicted by the static bifurcation theory, often invoked in the study of musical instruments. Moreover, the comparison between the results obtained with the artificial mouth and with an experienced flutist suggests that musicians combine the dynamics of the blowing pressure with other control parameters in order to enlarge the hysteresis associated with the regime change. Indeed, other works on flute-like instruments [45,46], together with different studies on other wind instruments [47,48,49], suggest that the vocal tract can also influence the regime change thresholds.
Toward a phenomenological model of register change

The different properties of the register change phenomenon, observed both experimentally and in simulations in the previous part, allow us to propose a preliminary phenomenological modelling of this phenomenon.

Proposed model

Starting from the results presented in figures 13, 14, and 15, which lead to the conclusion that P_dyn only depends on the dynamics of the blowing pressure after the static threshold has been reached, this modelling is based on the following hypotheses:

• The regime change starts when P_m(t) = P_stat.
• The regime change is not instantaneous, and has a duration t_dyn during which the blowing pressure evolves from P_stat to P_dyn.

We thus write P_dyn as the sum of the static threshold P_stat and a correction term P_corr related to the dynamics of the blowing pressure:

P_dyn = P_stat + P_corr.    (2)

Based on the two hypotheses cited above, we introduce a new dimensionless quantity, the fraction of regime change ζ(t). By definition, ζ = 0 when the regime change has not started (i.e. when P_m(t) < P_stat for rising pressure and when P_m(t) > P_stat for diminishing pressure), and ζ = 1 when the regime change is completed (i.e. when P_m(t) = P_dyn, which corresponds to the change of fundamental frequency, as defined in the previous section). ζ is consequently defined as:

ζ(t) = ∫_{t_stat}^{t} (∂ζ/∂t) dt    (3)

where t_stat is the instant at which P_m(t) = P_stat. Defining the origin of time at t_stat leads to t̃ = t − t_stat, and thus gives:

ζ(t̃) = ∫_{0}^{t̃} (∂ζ/∂t̃) dt̃    (4)

As a simplifying assumption, we consider that the rate of change ∂ζ/∂t̃ of the variable ζ(t̃) only depends on the gap ∆P(t̃) = P_m(t̃) − P_stat between the mouth pressure P_m(t̃) and the static regime change threshold:

∂ζ/∂t̃ = f(∆P),    (5)

where f is an unknown monotonic and continuous function.
According to the latter hypothesis, the function f can be estimated at different points through the realisation of "step" profiles of P_m(t), from a value lower than P_stat 1→2 to a value larger than P_stat 1→2 (see figure 16). Indeed, in such a case, for a step occurring at t̃ = 0, ∆P(t̃) corresponds to the difference between the pressure at the top of the step and P_stat 1→2, and is thus constant for t̃ > 0. Consequently, f(∆P) is constant with respect to time. From equations 4 and 5, one thus obtains for blowing pressure steps:

ζ(t̃) = ∫_{0}^{t̃} f(∆P) dt̃ = f(∆P) ∫_{0}^{t̃} dt̃ = f(∆P) · t̃    (6)

Recalling that t_dyn is the instant at which P_m(t̃) = P_dyn, we have by definition ζ(t_dyn) = 1, and finally obtain for blowing pressure steps:

f(∆P) = 1 / t_dyn    (7)

For each value of the step amplitude, a different value of t_dyn is measured through a frequency detection: t_dyn is defined as the time after which the oscillation frequency varies by no more than two times the frequency resolution. Therefore, successive time-domain simulations of P_m steps (see figure 16) with different amplitudes are carried out, to determine the function f(∆P) through equation 7. Such simulations have been achieved for the two fingerings F_4 and G_4, in both cases for transitions from the first to the second register. The results are represented in figure 17 with respect to ∆P. They are well fitted by a function of the form:

f(∆P) = α √(∆P)    (8)

where the coefficient α depends on the considered fingering.

Assessment of the model

To check the validity of this modelling, the case of the linear pressure ramps studied in the previous section is now examined. In such a case, the difference between the blowing pressure and the static threshold is defined through ∆P(t̃) = a · t̃, where a is the slope of the ramp in Pa/s.
Recalling that ζ(t_dyn) = 1 and injecting equations 5 and 8 into equation 4 leads to:

∫_{0}^{t_dyn} f(∆P(t̃)) dt̃ = 1
∫_{0}^{t_dyn} α √(∆P(t̃)) dt̃ = 1
∫_{0}^{t_dyn} α √(a t̃) dt̃ = 1
α √a ∫_{0}^{t_dyn} √t̃ dt̃ = 1
t_dyn = ( 3 / (2 α √a) )^{2/3}    (9)

Moreover, due to the expression of ∆P(t̃) in the case of linear ramps, one can write from equations 2 and 9:

P_corr = P_dyn − P_stat = ∆P(t_dyn) = a · t_dyn = ( 3 / (2α) )^{2/3} · a^{2/3}    (10)

According to this modelling, the value of P_corr obtained with linear ramps should be proportional to the slope a to the power 2/3. Time-domain simulations of linear ramps of P_m(t) with slope a are performed for two fingerings (F_4 and G_4). Figure 18 represents the threshold P_dyn corresponding to the end of the transition from the first to the second register with respect to the slope a to the power 2/3. The results are correctly fitted by straight lines, with correlation coefficients higher than 0.99.

Figure 18: Time-domain simulations of linear increasing ramps of the blowing pressure, for both the F_4 fingering (+) and the G_4 fingering (x): representation of the dynamic regime change threshold P_dyn 1→2 with respect to the power 2/3 of the slope a. Solid and dashed lines represent linear fits of the data, which both present linear correlation coefficients higher than 0.99.

This good agreement with the model prediction (equation 10) thus allows us to validate the proposed modelling of the phenomenon of regime change. Moreover, in such a representation, the intercept of the fit with the y-axis provides a prediction of the static regime change threshold, which cannot be exactly determined, strictly speaking, with linear ramps of the blowing pressure. The static thresholds thereby obtained are 294 Pa and 314 Pa for the F_4 and G_4 fingerings respectively.
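The closed-form results of equations 9 and 10 can be cross-checked numerically by integrating the fraction of regime change ζ directly; this is a quick sketch in which the value of α is arbitrary (the fitted values for real fingerings are not reproduced here).

```python
import math

def t_dyn(alpha, a):
    """Regime change duration for a linear ramp dP = a*t (equation 9)."""
    return (3.0 / (2.0 * alpha * math.sqrt(a))) ** (2.0 / 3.0)

def p_corr(alpha, a):
    """Pressure correction of equation 10, proportional to a**(2/3)."""
    return (3.0 / (2.0 * alpha)) ** (2.0 / 3.0) * a ** (2.0 / 3.0)

def p_dyn_numeric(p_stat, alpha, a, dt=1e-6):
    """Euler integration of zeta (equations 4, 5 and 8) for a linear ramp
    starting at the static threshold; stops when zeta reaches 1."""
    t, zeta = 0.0, 0.0
    while zeta < 1.0:
        zeta += alpha * math.sqrt(a * t) * dt  # dzeta/dt = alpha*sqrt(a*t)
        t += dt
    return p_stat + a * t

# Arbitrary illustrative values: alpha = 1.0, slope a = 500 Pa/s, P_stat = 300 Pa
alpha, a, p_stat = 1.0, 500.0, 300.0
analytic = p_stat + p_corr(alpha, a)
numeric = p_dyn_numeric(p_stat, alpha, a)
```

The numerical integration recovers the analytic threshold, and doubling a by a factor of 8 multiplies P_corr by exactly 4, the signature of the 2/3 power law checked in figures 18 and 19.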
These values present relative differences of 0.1% and 0.8% with the thresholds of 294.3 Pa and 311.5 Pa predicted by the bifurcation diagrams computed through numerical continuation (see figure 11 for the bifurcation diagram of the G_4 fingering), which supports the validity of the proposed modelling.

Case of experimental data

The experimental thresholds displayed in figure 10 for the five fingerings studied are represented in figure 19 with respect to a^{2/3}. Similarly to figure 18, the different curves are correctly fitted by straight lines, with linear correlation coefficients between 0.88 and 0.99. The fact that these coefficients are, in some cases, lower than those of the simulations can be explained by the presence of noise and of small fluctuations of the mouth pressure during the experiment, which sometimes prevents a threshold detection as accurate and systematic as in the case of the numerical results. However, the good agreement of the experimental results with equation 10 also allows us to validate the proposed phenomenological modelling of regime change.

Figure 19: Same data as in figure 10: representation of the dynamic thresholds P_dyn 1→2 and P_dyn 2→1, for five fingerings of an alto recorder played by an artificial mouth, with respect to the power 2/3 of the slope a of linear ramps of the blowing pressure. Solid lines represent linear fits of the data. The data present linear correlation coefficients between 0.88 and 0.99.

Influence of the regime of arrival

In the case of time-domain simulations, for the G_4 fingering, starting from the second register and achieving linear decreasing ramps of P_m(t) leads to a particular behaviour. As shown in figure 20, P_dyn does not appear, at least at first sight, to be proportional to the power 2/3 of the slope. However, this case is particular in the sense that different oscillation regimes are reached, depending on the slope a of the ramp.
Thereby, as highlighted with circles in figure 20, low values of the slope (|a| < 20 Pa/s) lead to a transition from the second to the first register, whereas higher values of the slope lead to a transition from the second register to an aeolian regime, as represented with crosses in figure 20. In flute-like instruments, aeolian regimes correspond to particular sounds, occurring at low values of the blowing pressure, and originating from the coupling between a mode of the resonator (here the 5th) and a hydrodynamic mode of the jet of order higher than 1 [6,50,3]. As highlighted in the same figure, considering the two transitions separately allows us to recover, as previously, the linear dependence between P_dyn and a^{2/3}. Indeed, linear correlation coefficients of 0.98 for |a| < 20 Pa/s, and of 0.95 for |a| > 20 Pa/s, are found. Since the corresponding slope is the inverse of (2α/3)^{2/3} (see equation 10), such results suggest that α does not only depend on the fingering, but also on the oscillation regimes involved in the transition. The study of the Floquet exponents ρ_m of the system supports this hypothesis. The Floquet exponents, computed for the system linearised around one of its periodic solutions, allow the estimation of the (local) stability properties of the considered periodic solution [51,35]. More precisely, they provide information on whether a small perturbation superimposed on the solution will be amplified or attenuated with time. If all the Floquet exponents have negative real parts, any perturbation will be attenuated with time, and the considered solution is thus stable. Conversely, if at least one of the Floquet exponents has a positive real part, any perturbation will be amplified in the "direction" of the phase space corresponding to the eigenvector associated with this exponent, and the solution is thus unstable.
The real parts of the Floquet exponents of the considered system, linearised around the periodic solution corresponding to the second register (i.e. the "starting" regime of the decreasing blowing pressure ramps considered here), are represented in figure 21 with respect to the blowing pressure P_m. It highlights that the second register is stable for all values of P_m between 300 Pa and 259 Pa. A first Floquet exponent introduces an instability at P_m = 259 Pa, which corresponds to the destabilisation of the second register (see figure 11). As highlighted in [9], such a destabilisation, corresponding to a bifurcation of the second register, causes the regime change. This point is thus the static threshold P_stat 2→1, already highlighted in figure 11. A second Floquet exponent reaches a positive real part at P_m = 229 Pa. Moreover, the real part of the latter exponent becomes higher than that of the first one for P_m < P_cross, with P_cross = 224 Pa. Comparison of the results presented in figure 21 with those of figure 20 suggests that, in the case of a regime change, the "arrival" regime is driven by the Floquet exponent of the starting regime with the highest real part: indeed, as highlighted in figures 20 and 21, as long as the dynamics of P_m(t) induces a regime change threshold higher than the pressure P_cross at which the Floquet exponents intersect, one observes a transition to the first register. On the other hand, once the dynamics of P_m(t) induces a threshold lower than P_cross, the transition leads to the aeolian regime. This interpretation furthermore seems to be consistent with the slope change observed in figure 20 and with the physical meaning of the real part of the Floquet exponents.
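The selection rule suggested above (the arrival regime is driven by the Floquet exponent of the starting regime with the largest real part) can be stated compactly; the exponent values in the example are made-up illustrations, and the association of each exponent with an arrival regime is an assumption for the sake of the demo.

```python
def dominant_exponent(exponents):
    """Return (index, exponent) of the Floquet exponent with the largest
    real part.  The periodic solution is unstable as soon as this real
    part is positive, and the associated eigendirection is the one along
    which a small perturbation grows fastest."""
    idx = max(range(len(exponents)), key=lambda i: exponents[i].real)
    return idx, exponents[idx]

# Illustrative values only: say exponent 0 is associated with a transition
# to the first register and exponent 1 with the aeolian regime.
exps = [0.8 + 0.0j, 1.5 + 2.0j]
idx, rho = dominant_exponent(exps)  # here exponent 1 dominates
```

With the exponents of figure 21 evaluated at the dynamic threshold, the same one-liner would select the first register above P_cross and the aeolian regime below it.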
Indeed, as the value of the real part of a Floquet exponent is related to the amplification of a perturbation with time, a high value of ℜ(ρ_m) should correspond to a small duration t_dyn of the regime change, whereas a small value of ℜ(ρ_m) should correspond to a high value of t_dyn. Therefore, by analogy with equations 7 and 8, the coefficient α can be related to the evolution of ℜ(ρ_m) with P_m. Thereby, a greater evolution of ℜ(ρ_m) with respect to ∆P should correspond to a higher value of α. Due to equation 10, valid for linear ramps of P_m(t), this finally corresponds to a smaller slope of the straight line linking P_dyn and a^{2/3}. This property is verified here by the comparison between figures 20 and 21: the real part of the "second" Floquet exponent (bold black dashed line in figure 21), related to a regime change toward the aeolian regime, presents a greater evolution with ∆P = P_m − P_stat than that of the Floquet exponent inducing a transition to the first register (bold blue line in figure 21). In the same way, the slope of the straight line related in figure 20 to the regime change toward the aeolian regime (dashed line) is smaller than that of the straight line related to the transition from the second to the first register (solid line). Surprisingly enough, these results thus highlight that bifurcation diagrams and the associated Floquet stability analysis provide valuable information in the dynamic case, despite the fact that they involve the static bifurcation theory and a linearisation of the studied system around the "starting" periodic solution. In the dynamic case, they remain instructive on the following characteristics:

• the arrival regime resulting from the regime change;
• a qualitative indication of the duration of the regime change, through an estimation of the parameter α.
It thus informs on both the dynamic threshold and its evolution with respect to the difference ∆P between the mouth pressure and the static regime change threshold;

• as highlighted in the previous section, the evolution of the oscillation amplitude and frequency with respect to the mouth pressure, even after the static threshold has been crossed.

Conclusion

Recent studies in the field of musical acoustics have demonstrated that musicians are able to strongly modify the behaviour of the instrument considered alone (see for example [47,49]), and thus argue for a wider consideration of the different control parameters. A comparison between an experienced flutist, a non-musician and an artificial mouth, in terms of regime change thresholds between the two first registers and the associated hysteresis, shows that the experienced musician seems to have developed strategies allowing him to significantly shift the regime change thresholds, and thus to enlarge the hysteresis, which presents an obvious musical interest. Conversely, for most fingerings studied, the behaviours observed when the recorder is played by a non-musician and by an artificial mouth do not present significant differences in terms of regime change thresholds. The experimental and numerical results presented in this article highlight that the slope of linear increasing and decreasing ramps of the blowing pressure strongly affects the pressure regime change thresholds, and thus the hysteresis. Moreover, it appears that the important criterion lies only in the dynamics of the blowing pressure after the static regime change threshold has been reached. The modification of the dynamics of the blowing pressure can thus allow, in some cases, to avoid or conversely to favor a given oscillation regime, and thereby to select the "arrival" regime resulting from a regime change.
The phenomenological model proposed according to these observations allows to predict the dynamic regime change threshold from the knowledge of the temporal evolution of the blowing pressure. It highlights that the bifurcation diagrams and the associated Floquet stability analysis provide valuable information in the dynamic case, despite the fact that they involve a static hypothesis and a linearisation of the studied system. However, taking into account the dynamics of the mouth pressure does not allow to shift the thresholds and to enlarge the hysteresis as much as the experienced flutist does. It thus suggests that flutists develop strategies to combine the effects of the dynamics with those of other control parameters, such as for example the vocal tract, whose influence on regime change thresholds has been recently studied [46]. Moreover, the study presented here focuses on linear profiles of the mouth pressure. As such a temporal evolution does not seem realistic in a musical context (see for example [52,11]), it would be interesting to consider the effect of more complex temporal evolutions of the blowing pressure. Finally, it would be interesting to study more widely step profiles of the mouth pressure, whose importance is crucial in the playing of winf instruments. Aknowledgment The authors would like to thank Marine Sablonnière and Etienne Thoret for their participation in experiments. 
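The scaling behind the phenomenological prediction can be illustrated numerically. Assuming, consistently with the step-response fits, that the fraction of regime change ζ obeys dζ/dt = f(∆P(t)) with f(∆P) = α√(∆P), and that the transition completes when ζ = 1, a linear ramp ∆P(t) = a·t gives P_dyn − P_stat = (3a/(2α))^(2/3), i.e. the linear dependence of P_dyn on a^(2/3) observed in figure 20. The following Python sketch checks this relation under those assumptions; the values of α, P_stat and the slopes are arbitrary illustrative numbers, not measured data:

```python
import numpy as np

def dynamic_threshold(alpha, slope, p_stat, dt=1e-5):
    """Integrate d(zeta)/dt = alpha * sqrt(dP(t)) for a linear ramp
    dP(t) = slope * t starting when the static threshold p_stat is
    crossed, and return the blowing pressure reached when the regime
    change is complete (zeta = 1)."""
    zeta, t = 0.0, 0.0
    while zeta < 1.0:
        t += dt
        zeta += alpha * np.sqrt(slope * t) * dt
    return p_stat + slope * t  # dynamic threshold P_dyn

# Illustrative (not measured) values: alpha in 1/(s*sqrt(Pa)),
# static threshold in Pa, ramp slopes a in Pa/s.
alpha, p_stat = 5.0, 300.0
slopes = np.array([50.0, 100.0, 200.0, 400.0])

p_dyn = np.array([dynamic_threshold(alpha, a, p_stat) for a in slopes])

# Closed form under the same assumptions:
# P_dyn - P_stat = (3a / (2*alpha))**(2/3), i.e. linear in a**(2/3).
predicted = (3.0 * slopes / (2.0 * alpha)) ** (2.0 / 3.0)
print(np.allclose(p_dyn - p_stat, predicted, rtol=1e-2))
```

With these assumptions, doubling the ramp slope a shifts the dynamic threshold above the static one by a factor 2^(2/3) ≈ 1.59, and the slope of the P_dyn versus a^(2/3) line decreases as α increases, consistent with the discussion of the Floquet exponents above.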
Figure 1: Oscillation frequency with respect to the blowing pressure, for the F4 fingering of an alto Zen recorder, played by an experienced flutist, a non-musician and an artificial mouth. Oscillations around 350 Hz and 740 Hz correspond to the first and second register, respectively.

Figure 2: Increasing pressure thresholds corresponding to the jump from the first to the second register of an alto recorder played by an experienced flutist, a non-musician and an artificial mouth, for five fingerings.

Figure 3: Hysteresis on the jump between the two first registers of an alto recorder played by an experienced flutist, a non-musician and an artificial mouth, for five fingerings.

Figure 4: Experimental setup with the adapted recorder, allowing measurement of both the pressure in the mouth of the flutist and the acoustic pressure under the labium.

Figure 5: Schematic representation of the principle of the artificial mouth. The opening of the servo valve, controlling the flow injected in the mouth, is adapted every 40 µs in order to minimize the difference between the measured and the desired values of the pressure in the mouth.

Figure 6: Schematic representation of the jet behaviour, based on Fabre in [19]. (a) Perturbation of the jet at the channel exit by the acoustic field present in the resonator. (b) Convection and amplification of the perturbation, due to the unstable nature of the jet. (c) Jet-labium interaction: oscillation of the jet around the labium, which sustains the acoustic field.

Figure 8: Schematic representation of the principle of numerical continuation through a prediction-correction algorithm [31, 36]. Starting from a known part of the branch, the neighbouring solution (for a slightly different value of the continuation parameter λ) is predicted and corrected. By successive iterations, it leads to the computation of the complete solution branch of equilibrium or periodic solutions. x represents a characteristic of the solution, such as its frequency or its amplitude.

Figure 9: Dynamic regime change threshold between the two first registers of the F4 fingering, with respect to the slope a of linear ramps: artificial mouth and time-domain simulation.

Figure 11: Bifurcation diagram of the G fingering, superimposed with time-domain simulations of increasing linear ramps of the blowing pressure, for different values of the slope a: representation of the oscillation frequency with respect to the blowing pressure P_m. For the bifurcation diagram, the two branches correspond to the first and the second register; solid and dashed lines represent stable and unstable parts of the branches, respectively. The vertical dotted lines highlight the static regime change thresholds P_stat,1→2 and P_stat,2→1.

Figure 12: Increasing linear ramps of the blowing pressure, with different slopes a, achieved with an artificial mouth: oscillation amplitude of the A4 fingering of an alto recorder, with respect to the blowing pressure.

Figure 13: Piecewise linear ramps of the blowing pressure (a_1 = 350 Pa/s and a_2 = 40 Pa/s), achieved on the G4 fingering of an alto recorder played by an artificial mouth. Ordinate: dynamic threshold P_dyn,1→2 (up) and P_dyn,2→1 (down).

Figure 14: Time-domain simulations of piecewise linear ramps of the blowing pressure with P_knee = 270 Pa (a_1 = 500 Pa/s for squares and a_1 = 200 Pa/s for circles) and of linear ramps of the blowing pressure (crosses). Representation of the increasing dynamic regime change threshold P_dyn,1→2 for the G4 fingering, as a function of a_2 (slope of the second part of the blowing pressure profile for piecewise linear ramps, and slope of the linear ramps).

Figure 15: Time-domain simulations of piecewise linear ramps of the blowing pressure (a_1 = 500 Pa/s and a_2 = 830 Pa/s), for the G4 fingering.

Figure 16: Illustration of the step profiles of the blowing pressure (up) achieved in time-domain simulations, and of the detection of the transient duration t_dyn (down).

Figure 17: Estimation of the function f(∆P): representation of the inverse of the transient duration for step profiles of the blowing pressure, for both the F4 and G4 fingerings (left and right respectively), with respect to the difference ∆P between the target pressure of the steps and the static threshold P_stat,1→2. Dashed lines represent fits of the data with square root functions.

In the two cases, the results follow a square root function: the linear correlation coefficients between ∆P and 1/t_dyn are … for the F4 fingering and 0.97 for the G4 fingering. Such results thus suggest to approximate the function f through f(∆P) = α√(∆P).

Figure 20: Time-domain simulations of linear decreasing ramps of the blowing pressure, for the G4 fingering: representation of the dynamic regime change threshold P_dyn,2→1 with respect to the power 2/3 of the slope a. Circles and crosses represent transitions from the second register to the first register and to an aeolian regime, respectively. Solid and dashed lines represent linear fits of the data, which present linear correlation coefficients of 0.98 and 0.95, respectively. The dot-dashed line indicates the pressure at which the Floquet exponents of the starting regime cross in figure 21.

Figure 21: G4 fingering: real parts of the Floquet exponents of the system linearised around the periodic solution corresponding to the second register, with respect to the blowing pressure P_m. Floquet exponents provide information on the stability properties of the considered regime.

Table 1: Table of notations used throughout the article.

Symbol              Associated variable
P_m (Pa)            Blowing pressure
a (Pa/s)            Slope of linear ramps of the blowing pressure
P_stat,1→2 (Pa)     Static pressure threshold from the first to the second register (case of rising blowing pressure)
P_stat,2→1 (Pa)     Static pressure threshold from the second to the first register (case of diminishing blowing pressure)
P_dyn,1→2 (Pa)      Dynamic pressure threshold from the first to the second register (case of rising blowing pressure)
P_dyn,2→1 (Pa)      Dynamic pressure threshold from the second to the first register (case of diminishing blowing pressure)
a_1 (Pa/s)          Slope of the first part of piecewise linear ramps of the blowing pressure
a_2 (Pa/s)          Slope of the second part of piecewise linear ramps of the blowing pressure
P_knee (Pa)         Pressure at which the slope break occurs in the case of piecewise linear ramps of the blowing pressure
P_corr (Pa)         Difference between the static regime change threshold and the dynamic regime change threshold
ζ (dimensionless)   Fraction of regime change
t_stat (s)          Time at which P_m = P_stat
t (s)               Time variable, whose origin is defined at t_stat
t_dyn (s)           Time at which P_m = P_dyn (end of the regime change)
∆P(t) (Pa)          Difference between the blowing pressure P_m(t) and the static regime change threshold P_stat
ρ_m                 Floquet exponents

3 Linear ramps of the blowing pressure: experimental and numerical results

3.1 Influence of the slope of blowing pressure ramps on thresholds

As highlighted in section 1, important differences arise, in terms of regime change thresholds and hysteresis, between the experienced flutist and the artificial mouth or the non-musician, which remain unexplained. Recent works [13, 14] have demonstrated …

References

[1] J W Coltman. Jet drive mechanisms in edge tones and organ pipes. Journal of the Acoustical Society of America, 60(3):725-733, 1976.
[2] R Auvray, B Fabre, and P Y Lagrée. Regime change and oscillation thresholds in recorder-like instruments. Journal of the Acoustical Society of America, 131(4):1574-1585, 2012.
[3] S Terrien, Ch Vergez, and B Fabre. Flute-like musical instruments: A toy model investigated through numerical continuation. Journal of Sound and Vibration, 332(1):3833-3848, 2013.
[4] C Ségoufin, B Fabre, and L de Lacombe. Experimental investigation of the flue channel geometry influence on edge-tone oscillations. Acta Acustica united with Acustica, 90(5):966-975, 2004.
[5] C Ségoufin, B Fabre, M P Verge, A Hirschberg, and A P J Wijnands. Experimental study of the influence of the mouth geometry on sound production in a recorder-like instrument: Windway length and chamfers. Acta Acustica united with Acustica, 86(4):649-661, 2000.
[6] N H Fletcher. Sound production by organ flue pipes. Journal of the Acoustical Society of America, 60(4):926-936, 1976.
[7] D Ferrand, Ch Vergez, B Fabre, and F Blanc. High-precision regulation of a pressure controlled artificial mouth: The case of recorder-like musical instruments. Acta Acustica united with Acustica, 96(4):701-712, 2010.
[8] J W Coltman. Time-domain simulation of the flute. Journal of the Acoustical Society of America, 92:69-73, 1992.
[9] S Terrien, Ch Vergez, B Fabre, and D Barton. Calculation of the steady-state oscillations of a flute model using the orthogonal collocation method. Accepted for publication in Acta Acustica united with Acustica, 2014.
[10] N H Fletcher. Acoustical correlates of flute performance technique. The Journal of the Acoustical Society of America, 57(1):233-237, 1975.
[11] F Garcia, L Vinceslas, J Tubau, and E Maestre. Acquisition and study of blowing pressure profiles in recorder playing. In Proceedings of the International Conference on New Interfaces for Musical Expression, Oslo, Norway, 2011.
[12] P de la Cuadra, B Fabre, N Montgermont, and L de Ryck. Analysis of flute control parameters: A comparison between a novice and an experienced flautist. In Forum Acusticum, Budapest, 2005.
[13] B Bergeot, A Almeida, Ch Vergez, and B Gazengel. Prediction of the dynamic oscillation threshold in a clarinet model with a linearly increasing blowing pressure. Nonlinear Dynamics, 73(1-2):521-534, 2013.
[14] B Bergeot, A Almeida, B Gazengel, Ch Vergez, and D Ferrand. Response of an artificially blown clarinet to different blowing pressure profiles. The Journal of the Acoustical Society of America, 135(1):479-490, 2014.
[15] E Benoît. Dynamic bifurcations: proceedings of a conference held in Luminy, France, March 5-10, 1990. Springer, 1991.
[16] D H Lyons. Resonance frequencies of the recorder (english flute). The Journal of the Acoustical Society of America, 70(5):1239-1247, 1981.
[17] H von Helmholtz. On the Sensation of Tones. Dover, New York, 1954.
[18] M E McIntyre, R T Schumacher, and J Woodhouse. On the oscillations of musical instruments. Journal of the Acoustical Society of America, 74(5):1325-1345, 1983.
[19] B Fabre and A Hirschberg. Physical modeling of flue instruments: A review of lumped models. Acta Acustica united with Acustica, 86:599-610, 2000.
[20] J W S Rayleigh. The Theory of Sound, second edition. Dover, New York, 1894.
[21] P de la Cuadra, Ch Vergez, and B Fabre. Visualization and analysis of jet oscillation under transverse acoustic perturbation. Journal of Flow Visualization and Image Processing, 14(4):355-374, 2007.
[22] A Nolle. Sinuous instability of a planar jet: propagation parameters and acoustic excitation. Journal of the Acoustical Society of America, 103:3690-3705, 1998.
[23] M P Verge, R Caussé, B Fabre, A Hirschberg, A P J Wijnands, and A van Steenbergen. Jet oscillations and jet drive in recorder-like instruments. Acta Acustica united with Acustica, 2:403-419, 1994.
[24] M P Verge, A Hirschberg, and R Caussé. Sound production in recorder-like instruments. II. A simulation model. Journal of the Acoustical Society of America, 101(5):2925-2939, 1997.
[25] P de la Cuadra. The sound of oscillating air jets: Physics, modeling and simulation in flute-like instruments. PhD thesis, Stanford University, 2005.
[26] A Chaigne and J Kergomard. Acoustique des instruments de musique (Acoustics of musical instruments). Belin (Echelles), 2008.
[27] B Fabre, A Hirschberg, and A P J Wijnands. Vortex shedding in steady oscillation of a flue organ pipe. Acta Acustica united with Acustica, 82(6):863-877, 1996.
[28] A Lefebvre. The wind instrument acoustic toolkit.
[29] P Bogacki and L F Shampine. A 3(2) pair of Runge-Kutta formulas. Applied Mathematics Letters, 2(4):321-325, 1989.
[30] K Engelborghs, T Luzyanina, K J In't Hout, and D Roose. Collocation methods for the computation of periodic solutions of delay differential equations. SIAM Journal on Scientific Computing, 22(5):1593-1609, 2000.
[31] B Krauskopf, H M Osinga, and J Galan-Vioque. Numerical Continuation Methods for Dynamical Systems. Springer, 2007.
[32] E J Doedel. Lecture notes on numerical analysis of nonlinear equations. In Numerical Continuation Methods for Dynamical Systems, pages 1-49. Springer, 2007.
[33] D Barton, B Krauskopf, and R E Wilson. Collocation schemes for periodic solutions of neutral delay differential equations. Journal of Difference Equations and Applications, 12(11):1087-1101, 2006.
[34] D Barton, B Krauskopf, and R E Wilson. Bifurcation analysis tools for neutral delay equations: a case study. In 6th IFAC Conference on Time-Delay Systems, 2006.
[35] A H Nayfeh and B Balachandran. Applied Nonlinear Dynamics. Wiley, 1995.
[36] K Engelborghs. DDE-Biftool: a Matlab package for bifurcation analysis of delay differential equations. Technical report, Katholieke Universiteit Leuven, 2000.
[37] K Engelborghs, T Luzyanina, and D Roose. Numerical bifurcation analysis of delay differential equations using DDE-Biftool. ACM Transactions on Mathematical Software, 28(1):1-21, 2002.
[38] S Karkar, Ch Vergez, and B Cochelin. Oscillation threshold of a clarinet model: a numerical continuation approach. Journal of the Acoustical Society of America, 131(1):698-707, 2012.
[39] S Karkar, Ch Vergez, and B Cochelin. Toward the systematic investigation of periodic solutions in single reed woodwind instruments. In Proceedings of the 20th International Symposium on Music Acoustics, Sydney, Australia, 2010.
[40] A de Cheveigné and H Kawahara. YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4):1917-1930, 2002.
[41] J W Coltman. Jet offset, harmonic content, and warble in the flute. Journal of the Acoustical Society of America, 120(4):2312-2319, 2006.
[42] J P Dalmont, B Gazengel, J Gilbert, and J Kergomard. Some aspects of tuning and clean intonation in reed instruments. Applied Acoustics, 46:19-60, 1995.
[43] S Terrien, Ch Vergez, P de la Cuadra, and B Fabre. Is the jet-drive flute model able to produce modulated sounds like flautas de chinos? In Proceedings of the Stockholm Music Acoustics Conference, Stockholm, Sweden, 2013.
[44] S Terrien, R Auvray, B Fabre, P Y Lagrée, and Ch Vergez. Numerical resolution of a physical model of flute-like instruments: comparison between different approaches. In Proceedings of Acoustics 2012, Nantes, France, 2012.
[45] J W Coltman. Mouth resonance effects in the flute. The Journal of the Acoustical Society of America, 54(2):417-420, 1973.
[46] R Auvray, B Fabre, P Y Lagrée, S Terrien, and Ch Vergez. Influence of the fluctuations of the control pressure on the sound production in flute-like instruments. In Proceedings of Acoustics 2012, Nantes, France, 2012.
[47] V Fréour and G P Scavone. Vocal-tract influence in trombone performance. In Proceedings of the International Symposium on Music Acoustics, Sydney & Katoomba, 2010.
[48] G P Scavone, A Lefebvre, and A R da Silva. Measurement of vocal-tract influence during saxophone performance. The Journal of the Acoustical Society of America, 123(4):2391-2400, 2008.
[49] J M Chen, J Smith, and J Wolfe. Pitch bending and glissandi on the clarinet: roles of the vocal tract and partial tone hole closure. The Journal of the Acoustical Society of America, 126(3):1511-1520, 2009.
[50] M Meissner. Aerodynamically excited acoustic oscillations in cavity resonator exposed to an air jet. Acta Acustica united with Acustica, 88:170-180, 2001.
[51] G Floquet. Sur les équations différentielles linéaires. Ann. ENS (2), 12:47-88, 1883.
[52] B Fabre, F Guillard, M Solomon, F Blanc, and V Sidorenkov. Structuring music in recorder playing: a hydrodynamical analysis of blowing control parameters. In Proceedings of the International Symposium on Music Acoustics, 2010.
Magnetic field-assisted manipulation and entanglement of Si spin qubits

M. J. Calderón (Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, MD 20742-4111), Belita Koiller (Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, MD 20742-4111 and Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, 21941-972 Rio de Janeiro, Brazil), and S. Das Sarma (Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, MD 20742-4111)

doi:10.1103/PhysRevB.74.081302; arXiv:cond-mat/0602597 (https://arxiv.org/pdf/cond-mat/0602597v2.pdf)

Abstract: Architectures of donor-electron based qubits in silicon near an oxide interface are considered theoretically. We find that the precondition for reliable logic and read-out operations, namely the individual identification of each donor-bound electron near the interface, may be accomplished by fine-tuning electric and magnetic fields, both applied perpendicularly to the interface. We argue that such magnetic fields may also be valuable in controlling two-qubit entanglement via donor electron pairs near the interface.

PACS numbers: 03.67.Lx, 73.20.Hb, 85.35.Gv, 71.55.Cn

Spin qubits in semiconductors (e.g. GaAs, Si) are among the most promising physical systems for the eventual fabrication of a working quantum computer (QC). There are two compelling reasons for this perceived importance of semiconductor spin qubits: (i) electron spin has very long coherence times, making quantum error correction schemes feasible as a matter of principle; (ii) semiconductor structures provide inherently scalable solid state architectures, as exemplified by the astonishing success of the microelectronics technology in increasing the speed and efficiency of logic and memory operations over the last fifty years (i.e. 'Moore's Law'). These advantages of semiconductor quantum computation apply much more to Si than to GaAs, because the electron spin coherence time can be increased indefinitely (up to 100 ms or even longer in the bulk) in Si through isotopic purification [1, 2, 3], whereas in GaAs the electron spin coherence time is restricted to only about 10 µs [1, 2, 4]. It is thus quite ironic that there has been much more substantial experimental progress [4] in electron spin and charge qubit manipulation in the III-V semiconductor quantum dot systems [5] than in the Si:P Kane computer architecture [6]. In addition to the long coherence time, the Si:P architecture has the highly desirable property of microscopically identical qubits which are scalable using the Si microelectronic technology.

The main reason for the slow experimental progress in the Kane architecture is the singular lack of qubit-specific quantum control over an electron which is localized around a substitutional P atom in the bulk in a relatively unknown location. This control has turned out to be an impossibly difficult experimental task in spite of impressive developments in materials fabrication and growth in the Si:P architecture using both the 'top-down' and the 'bottom-up' techniques [7]. It is becoming manifestly clear that new ideas are needed in developing quantum control over single qubits in the Si:P QC architecture.

In this Letter we suggest such a new idea, establishing convincingly that the use of a magnetic field, along with an electric field, would enable precise identification, manipulation and entanglement of donor qubits in the Si:P quantum computer architecture by allowing control over the spatial location of the electron as it is pulled from its shallow hydrogenic donor state to the Si/SiO2 interface by an electric field. Additionally, the magnetic field could be used to control the spatial overlap of the electronic wavefunctions in the 2D plane parallel to the interface (i.e. similar to a MOSFET geometry), thus enabling external manipulation of the inter-qubit entanglement through the magnetic field tuning of the exchange coupling between neighboring spin qubits [5].

Fig. 1 highlights the basic physical effects explored in our theoretical study: Substitutional P donors in Si, separated by R, are a distance d from an ideally flat Si/SiO2 (001) interface. Uniform electric F and magnetic B fields are both applied along z. When a single P atom is considered (the R → ∞ limit), and in the absence of external fields, the active electron is bound to the donor potential well (W_D), forming a hydrogenic atom. If a uniform electric field is also present, an additional well is formed which tends to draw the donor electron toward the interface. The interface well (W_I) is triangular-shaped along z, while in the xy 2D plane the confinement is still provided by the 'distant' donor Coulomb attraction. The ground-state wavefunctions for each well are given in Fig. 1(b). This corresponds to a particular value of the applied field, the critical field which we denote by F = F_c(d), such that the expectation values of the energy for the W_I and W_D ground eigenstates are degenerate. Fig. 1(c) shows how the system in (b) changes due to a magnetic field applied along z. The interface ground-state wavefunction undergoes the usual 'shrinking' perpendicular to B, and the energy at W_I is raised, while no significant changes occur at W_D. As a consequence, we find that by properly tuning B it is possible to control the electron location along the applied magnetic field direction, partially or completely reversing the effect of the electric field. We base our quantitative description of this problem …
Magnetic field-assisted manipulation and entanglement of Si spin qubits 4 Aug 2006 M J Calderón Condensed Matter Theory Center Department of Physics University of Maryland 20742-4111College ParkMD Belita Koiller Condensed Matter Theory Center Department of Physics University of Maryland 20742-4111College ParkMD Instituto de Física Universidade Federal do Rio de Janeiro Caixa Postal 6852821941-972Rio de JaneiroBrazil S Das Sarma Condensed Matter Theory Center Department of Physics University of Maryland 20742-4111College ParkMD Magnetic field-assisted manipulation and entanglement of Si spin qubits 4 Aug 2006(Dated: July 18, 2021) Architectures of donor-electron based qubits in silicon near an oxide interface are considered theoretically. We find that the precondition for reliable logic and read-out operations, namely the individual identification of each donor-bound electron near the interface, may be accomplished by fine-tuning electric and magnetic fields, both applied perpendicularly to the interface. We argue that such magnetic fields may also be valuable in controlling two-qubit entanglement via donor electron pairs near the interface.PACS numbers: 03.67. Lx, 73.20.Hb, 85.35.Gv, 71.55.Cn Spin qubits in semiconductors (e.g. GaAs, Si) are among the most promising physical systems for the eventual fabrication of a working quantum computer (QC). There are two compelling reasons for this perceived importance of semiconductor spin qubits: (i) electron spin has very long coherence times, making quantum error correction schemes feasible as a matter of principle; (ii) semiconductor structures provide inherently scalable solid state architectures, as exemplified by the astonishing success of the microelectronics technology in increasing the speed and efficiency of logic and memory operations over the last fifty years (i.e. 'Moore's Law'). 
These advantages of semiconductor quantum computation apply much more to Si than to GaAs, because the electron spin coherence time can be increased indefinitely (up to 100 ms or even longer in the bulk) in Si through isotopic purification [1, 2, 3] whereas in GaAs the electron spin coherence time is restricted only to about 10 µs [1, 2, 4]. It is thus quite ironic that there has been much more substantial experimental progress [4] in electron spin and charge qubit manipulation in the III-V semiconductor quantum dot systems [5] than in the Si:P Kane computer architecture [6]. In addition to the long coherence time, the Si:P architecture has the highly desirable property of microscopically identical qubits which are scalable using the Si microelectronic technology. The main reason for the slow experimental progress in the Kane architecture is the singular lack of qubit-specific quantum control over an electron which is localized around a substitutional P atom in the bulk in a relatively unknown location. This control has turned out to be an impossibly difficult experimental task in spite of impressive developments in materials fabrication and growth in the Si:P architecture using both the 'top-down' and the 'bottom-up' techniques [7]. It is becoming manifestly clear that new ideas are needed in developing quantum control over single qubits in the Si:P QC architecture. In this Letter we suggest such a new idea, establishing convincingly that the use of a magnetic field, along with an electric field, would enable precise identification, manipulation and entanglement of donor qubits in the Si:P quantum computer architecture by allowing control over the spatial location of the electron as it is pulled from its shallow hydrogenic donor state to the Si/SiO 2 interface by an electric field. Additionally, the magnetic field could be used to control the spatial overlap of the electronic wavefunctions in the 2D plane parallel to the interface (i.e. 
similar to a MOSFET geometry), thus enabling external manipulation of the inter-qubit entanglement through the magnetic field tuning of the exchange coupling between neighboring spin qubits [5]. Fig. 1 highlights the basic physical effects explored in our theoretical study: Substitutional P donors in Si, separated by R, are a distance d from an ideally flat Si/SiO 2 (001) interface. Uniform electric F and magnetic B fields are both applied along z. When a single P atom is considered (R → ∞ limit), and in the absence of external fields, the active electron is bound to the donor potential well (W_D) forming a hydrogenic atom. If a uniform electric field is also present, an additional well is formed which tends to draw the donor electron toward the interface. The interface well (W_I) is triangular-shaped along z while in the xy 2D plane the confinement is still provided by the 'distant' donor Coulomb attraction. The ground-state wavefunctions for each well are given in Fig. 1(b). This corresponds to a particular value of the applied field, the critical field which we denote by F = F_c(d), such that the expectation values of the energy for the W_I and W_D ground-eigenstates are degenerate. Fig. 1(c) shows how the system in (b) changes due to a magnetic field applied along z. The interface ground state wavefunction undergoes the usual 'shrinking' perpendicular to B, and the energy at W_I is raised, while no significant changes occur at W_D. As a consequence, we find that by properly tuning B it is possible to control the electron location along the applied magnetic field direction, partially or completely reversing the effect of the electric field. We base our quantitative description of this problem on the single-valley effective-mass approximation, leading to the model-Hamiltonian [8,9]

$$H = T + eFz - \frac{e^2}{\epsilon_1 r} + \frac{e^2 Q}{\sqrt{\rho^2 + (z+2d)^2}} - \frac{e^2 Q}{4(z+d)}. \quad (1)$$

The magnetic field vector potential, A = B(y, −x, 0)/2, is included in the kinetic energy term, $T = \sum_{\eta=x,y,z} \frac{\hbar^2}{2m_\eta}\left(i\frac{\partial}{\partial\eta} + \frac{eA_\eta}{\hbar c}\right)^2$, where the effective masses m_x = m_y = m_⊥ = 0.191 m and m_z = m_∥ = 0.916 m account for the Si conduction band valley's anisotropy. The electric field defines the second term in H, the third and fourth terms describe the donor and its image charge potentials, while the last term is the electron image potential. The fourth term involves the lateral coordinate ρ = (x, y). The image-related terms are proportional to Q = (ǫ_2 − ǫ_1)/[ǫ_1(ǫ_2 + ǫ_1)], where ǫ_1 = 11.4 and ǫ_2 = 3.8 are the Si and SiO 2 static dielectric constants. It is convenient to rewrite the kinetic energy term as

$$T = T_0 + \frac{1}{8}\,m_\perp\omega_c^2\rho^2 + \frac{i\hbar\omega_c}{2}\left(y\frac{\partial}{\partial x} - x\frac{\partial}{\partial y}\right), \quad (2)$$

where $T_0$ is the kinetic energy for B = 0 and $\omega_c = eB/(m_\perp c)$. 
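As a quick consistency sketch (not part of the original text), expanding the kinetic term in the symmetric gauge A = (B/2)(y, −x, 0) shows where the two field-dependent pieces of the rewritten kinetic energy come from, with the Gaussian-units constants kept explicit:

```latex
T \;=\; \sum_{\eta=x,y,z}\frac{\hbar^{2}}{2m_{\eta}}
      \left(i\frac{\partial}{\partial\eta}+\frac{eA_{\eta}}{\hbar c}\right)^{2}
  \;=\; T_{0}
  \;+\; \underbrace{\frac{e^{2}B^{2}}{8\,m_{\perp}c^{2}}\,\rho^{2}}_{\tfrac{1}{8}m_{\perp}\omega_{c}^{2}\rho^{2}}
  \;+\; \underbrace{\frac{i\hbar e B}{2\,m_{\perp}c}
        \left(y\frac{\partial}{\partial x}-x\frac{\partial}{\partial y}\right)}_{\tfrac{i\hbar\omega_{c}}{2}\,(y\partial_{x}-x\partial_{y})}
```

The diamagnetic (middle) piece is the parabolic lateral confinement discussed next, and the paramagnetic cross term vanishes on the cylindrically symmetric low-energy states considered, consistent with the text's statement that it can be neglected.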
The second term allows interpreting the effect of the magnetic field as providing an additional parabolic potential in the xy plane, which increases the kinetic energy and enhances the lateral confinement of the electron wave-function. The last term in T gives zero or negligible contribution for the donor electron low-energy states being investigated, and will be neglected here. We solve the double-well problem described by the model Hamiltonian in Eq. (1) in the basis of the uncoupled solutions to the individual wells W_I and W_D, obtained variationally from the ansatz [9]:

$$\Psi_I(\rho, z) = f_\alpha(z)\times g_\beta(\rho) = \frac{\alpha^{5/2}}{\sqrt{24}}\,(z+d)^2\, e^{-\alpha(z+d)/2} \times \frac{\beta}{\sqrt{\pi}}\, e^{-\beta^2\rho^2/2}, \quad (3)$$

$$\Psi_D(\rho, z) \propto (z+d)\, e^{-\sqrt{\rho^2/a^2 + z^2/b^2}}, \quad (4)$$

where α, β, a and b are the variational parameters. The (z + d) factors guarantee that the wavefunctions are zero at the interface (z = −d), as we assume that the insulator provides an infinite barrier potential. For d > 6 nm and B = 0, we find that a and b coincide with the Kohn-Luttinger variational Bohr radii for the isolated impurity (d → ∞), where a = 2.365 nm and b = 1.36 nm. As an illustration of the uncoupled wells variational solutions, we present in Fig. 1 the calculated wavefunctions for d = 30 nm. The dash-dot lines correspond to f_α(z) and Ψ_D(0, z), while the dashed lines on the left represent g_β(ρ) for ρ = (x, 0) [or equivalently (0, y)]. It is clear from this figure that the four variational parameters define relevant length scales involved in the problem. In the presence of a magnetic field, there is an additional length scale (the magnetic length) given by $\lambda_B = \sqrt{\hbar c/(eB)}$, which defines the typical lateral confinement produced by the magnetic field alone, regardless of other potentials in the problem. The magnetic field effect is significant only for values of B ≳ B_c such that λ_B ≲ c_ρ, where c_ρ is the lateral confinement length in the absence of the field. 
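A quick numerical check (not from the paper) that the prefactors in the ansatz normalize $f_\alpha$ and $g_\beta$ to unity; the values of α and β below are illustrative stand-ins, not the fitted variational parameters:

```python
import math

def trapz(f, a, b, n=100_000):
    """Plain trapezoidal quadrature on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Illustrative (not fitted) parameters, in nm^-1:
alpha = 0.4
beta = 1.0 / 18.5

# |f_alpha(z)|^2 integrated over z in (-d, inf); substituting u = z + d turns
# the amplitude prefactor alpha^{5/2}/sqrt(24) into alpha^5/24 for the density.
def f2(u):
    return (alpha ** 5 / 24.0) * u ** 4 * math.exp(-alpha * u)

# |g_beta(rho)|^2 over the plane, polar coordinates (2*pi*rho measure).
def g2(rho):
    return 2.0 * math.pi * rho * (beta ** 2 / math.pi) * math.exp(-(beta * rho) ** 2)

norm_z = trapz(f2, 0.0, 400.0)
norm_rho = trapz(g2, 0.0, 400.0)
print(norm_z, norm_rho)  # both ≈ 1
```

Both integrals reduce analytically to 1 (the z-integral is a Gamma function, the planar one a Gaussian), so this only confirms the quoted prefactors are consistent normalization constants.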
For the donor potential well, c_ρ ≈ a = 2.365 nm, so that the field required to appreciably affect the W_D ground-state eigenenergy or wavefunction is B_c^D ∼ 120 T! On the other hand, for the W_I well, the confinement length at the interface for B = 0 is 1/β ∼ 18.5 nm, so that a much smaller B_c^I ∼ 2 T is sufficient to affect the interface ground state. The magnetic field strengths we consider here (up to 10 T) are not large enough to affect a or b, but important effects are obtained in the lateral confinement at the interface, as shown in Fig. 1 by comparison of B = 0 in (b) with B = 10 T in (c). The expectation value of the energy for each well ground state, $E_j = \langle\Psi_j|H|\Psi_j\rangle$ (j = I, D), is given by the horizontal lines in Fig. 1. The critical field condition F = F_c corresponds to E_I = E_D and is illustrated in Fig. 1(b). In Fig. 1(c) we also indicate the value of the B = 0 energies by a thin dotted line, and we note that under a 10 T magnetic field E_I is raised by 2 meV, while E_D undergoes a comparatively negligible shift (by 0.13 meV). Fig. 2(a) shows the energy shifts, ∆E(B) = E_I(B) − E_I(0), for three particular values of d: as expected, the effect of the magnetic field is stronger for larger d. Consequences of this shift in energy are shown in Figs. 2(b) and (c), where we give the expectation value of the z-coordinate of the electronic ground state Ψ_0 = C_I Ψ_I + C_D Ψ_D. We start with an electric field slightly above F_c(d), hence C_I ≈ 1 and C_D ≈ 0. Under an increasing magnetic field, the energy shift at the interface eventually detunes E_D and E_I in such a way that C_I ≈ 0 and C_D ≈ 1, i.e. the electron is taken back to the donor moving parallel to the magnetic field B and against the electric field F. How big a magnetic field is needed for this process to be completed depends on d and on the value of the static electric field. For those donors further from the interface, the passage of the electronic ground state from W_I to W_D occurs more abruptly as a function of B. The effect of the value of the electric field is also relevant, as illustrated in Fig. 2(b) and (c). In Fig. 2(b), F = F_c + 60 V/cm and the passage occurs around B ∼ 2.2 T, while in Fig. 2(c), F = F_c + 120 V/cm and the needed field is B ∼ 3.2 T. Therefore, fine tuning of the electric field is required to observe this phenomenon at reasonably small magnetic fields. We propose to use this result as a means of differentiating donor electrons from other charges that may be detected [10] in the architecture shown in Fig. 1(a). Tunneling times are not significantly affected by the magnetic fields considered here and remain a function of d alone [9]. The other significant effect of the magnetic field on the interface states is the transverse shrinkage of the wave function. Fig. 3(a) shows the lateral confinement length 1/β as a function of the magnetic field. For d = 15.8 nm, the wave-function width is reduced by a factor of 0.8 when a 10 T magnetic field is applied, while for d = 30 nm a stronger reduction factor of 0.6 is obtained. The significance of this enhanced confinement may be assessed through the overlap between interface electrons coming from neighboring donors separated a distance R. The overlap quantifies whether we can consider these electrons as separate entities or they will form a 2-dimensional electron gas (2DEG). One obtains

$$S = \langle g_\beta(\vec{\rho})\,|\,g_\beta(\vec{\rho}+\vec{R})\rangle = \exp(-\beta^2 R^2/4)\,\exp[-R^2/(16\lambda_B^4\beta^2)],$$

where we note that, due to gauge invariance, $g_\beta(x+R,y) = \pi^{-1/2}\beta\,\exp\{-\beta^2[(x+R)^2+y^2]/2\}\,\exp\{iyR/(4\lambda_B^2)\}$. The results for the particular inter-donor distance R = 30 nm, which corresponds to a planar density n ∼ 10^11 cm^−2, are shown in Fig. 3(b). We estimate that an overlap S < 0.1 would guarantee that the electrons do not form a 2DEG. Fig. 4(a) presents the overlap versus inter-donor distance for B = 0. 
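The quoted field scales and overlaps can be reproduced numerically from the magnetic length $\lambda_B = \sqrt{\hbar c/(eB)}$ and the overlap formula above. A sketch (SI constants; β is held fixed at its B = 0 value, so its field-induced shrinkage is ignored):

```python
import math

HBAR = 1.054571817e-34   # J s
E_CHG = 1.602176634e-19  # C

def magnetic_length_nm(b_tesla):
    """lambda_B = sqrt(hbar/(e B)) in nm (SI form of the Gaussian sqrt(hbar c/(e B)))."""
    return math.sqrt(HBAR / (E_CHG * b_tesla)) * 1e9

def critical_field_t(conf_nm):
    """Field at which lambda_B matches a given lateral confinement length c_rho."""
    return HBAR / (E_CHG * (conf_nm * 1e-9) ** 2)

def overlap(beta_inv_nm, r_nm, b_tesla):
    """S = exp(-beta^2 R^2 / 4) * exp(-R^2 / (16 lambda_B^4 beta^2))."""
    beta = 1.0 / beta_inv_nm
    s = math.exp(-((beta * r_nm) ** 2) / 4.0)
    if b_tesla > 0:
        lam = magnetic_length_nm(b_tesla)
        s *= math.exp(-r_nm ** 2 / (16.0 * lam ** 4 * beta ** 2))
    return s

print(magnetic_length_nm(1.0))       # ≈ 25.66 nm
print(critical_field_t(2.365))       # ≈ 118 T: donor well, a = 2.365 nm (~120 T quoted)
print(critical_field_t(18.5))        # ≈ 1.9 T: interface well, 1/beta ~ 18.5 nm (~2 T quoted)
print(overlap(18.5, 30.0, 0.0))      # ≈ 0.52 at B = 0, R = 30 nm
print(overlap(18.5, 30.0, 6.0))      # ≈ 0.10: a ~6 T field reaches the S ~ 0.1 criterion
```

Even with β frozen, the ~6 T field quoted for reaching S ≈ 0.1 at d = 30 nm, R = 30 nm comes out of the second exponential factor alone.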
The calculated S is generally very large for the experimentally reasonable distances d and R considered here. For the donors further from the interface (d = 30 nm), the condition S < 0.1 requires either the application of a magnetic field (around 6 T for R = 30 nm) or a smaller planar density n ≲ 3 × 10^10 cm^−2 (which corresponds to R ≳ 55 nm). The magnetic fields required to get a reduction of the overlap to S = 0.1 and 0.2 are shown in Fig. 4(b) and (c), respectively, as a function of the inter-donor distance. For the donors closer to the interface (d = 15.8 nm), the effect of the magnetic field is not as dramatic: The lateral confinement provided by the donor potential is strong enough to give S = 0.1 for B = 0 and R ∼ 37 nm (n ∼ 7.4 × 10^10 cm^−2). Note that in the case of neighboring donors placed at different distances from the interface d, S(R) would present oscillations due to valley interference effects [11]. For qubits defined at the donor sites it is certainly more reliable to perform operations involving two-qubit entanglement at the more directly accessible interface region. In this case, depending on R and d, the confining potential provided by the donors may need to be complemented by additional surface gates [12] and/or static magnetic fields. Time-dependent magnetic fields can be invoked in switching the exchange gates for two-qubit operations [5]. In summary, we demonstrate here that, in addition to the control of donor charges by electric fields, as usual in conventional semiconductor devices, relatively moderate magnetic fields (up to 10 T) may provide relevant information and manipulation capabilities as the building blocks of donor-based QC architectures in Si are being developed. Uniform magnetic fields may displace donor-bound electrons from the interface to the donor nucleus region, as illustrated in Fig. 2. 
This effect, which could be monitored via surface charge detectors, allows differentiating donor electrons from spurious surface or oxide-region bound electrons, which would not exhibit this type of behavior. For electrons originating from neighboring donors and drawn to the surface by an electric field, the magnetic field lateral confinement effect may provide isolation between electrons (see Fig. 3), allowing smaller inter-donor distances to be exploited (see Fig. 4), as well as additional control in two-qubit operations via surface exchange gates similar to the spin manipulation in quantum dots in GaAs. Defects such as dangling bonds at the Si/SiO 2 interface are important sources of spin decoherence, thus spin coherence times near the interface are expected to be significantly reduced as compared to the bulk. Performing two-qubit (spin-spin) entanglement near the SiO 2 interface, as suggested here, requires extremely careful interface optimization. We believe that the use of an external magnetic field along with the FET geometry near a Si-SiO 2 interface should allow in the near future significant experimental progress in the currently stalled, but potentially important, donor qubit based Kane spin QC architecture. This work is supported by LPS and NSA. BK acknowledges support by CNPq and FAPERJ.

PACS numbers: 03.67.Lx, 85.30.-z, 73.20.Hb, 85.35.Gv, 71.55.Cn

FIG. 1: (Color online) (a) Schematic configuration of donors in Si near the interface with an oxide barrier under applied electric F and magnetic B uniform fields. In (b) and (c), the thick lines show the electronic confining double-well potential: The interface well is labeled W_I and the donor well, W_D. The dashed lines give the decoupled ground-eigenfunctions in the wells, and the thin horizontal lines indicate the expectation value of the energy in each well. Results for d = 30 nm and F = 13.5 kV/cm ≈ F_c(d), are given in (b) for B = 0 and in (c) for B = 10 T. Parameters defining typical wavefunction widths (see text) are indicated in (c). Magnetic field effects tend to be stronger for larger d, as the interface state is less affected by the strongly confining Coulomb potential at W_D.

FIG. 2: (Color online) (a) Shift in the energy E_I, associated with the interface state, versus magnetic field for three values of the donor distance to the barrier (d). The shift in energy of the donor state (E_D) is negligible compared to these values and is not shown. (b) Expectation value of the z-position for the electronic ground state versus magnetic field. For each d the electric field is tuned to a value just above the critical field: F = F_c + 60 V/cm. The electron starts at a position close to the interface and moves parallel to the magnetic field and against the electric field, ending up at the donor well. The values of the critical electric field at each distance are F_c(15.8 nm) = 27.44 kV/cm, F_c(22 nm) = 18.85 kV/cm, F_c(30 nm) = 13.44 kV/cm. (c) Same as (b) for F = F_c + 120 V/cm.

FIG. 3: (Color online) (a) Reduction of the lateral confinement length of the interface state, 1/β, when a perpendicular magnetic field is applied. The effect is stronger for donors that are further away from the interface. (b) Overlap S versus magnetic field for inter-donor separation R = 30 nm (which corresponds to a planar density n = 10^11 cm^−2).

FIG. 4: (Color online) (a) Overlap versus inter-donor distance. (b) and (c) show the magnetic field required to reduce the overlap to 0.1 and 0.2, respectively. R = 20 nm corresponds to a planar density n = 2.5 × 10^11 cm^−2 and R = 40 nm to n = 6.25 × 10^10 cm^−2.

[1] R. de Sousa and S. Das Sarma, Phys. Rev. B 68, 115322 (2003).
[2] W. Witzel, R. de Sousa, and S. Das Sarma, Phys. Rev. B 72, 161306 (2005).
[3] E. Abe, K. Itoh, J. Isoya, and S. Yamasaki, Phys. Rev. B 70, 033204 (2004); A. M. Tyryshkin, J. J. L. Morton, S. C. Benjamin, A. Ardavan, G. A. D. Briggs, J. W. Ager, and S. A. Lyon, cond-mat/0512705 (2005).
[4] J. R. Petta, A. C. Johnson, J. M. Taylor, E. A. Laird, A. Yacoby, M. D. Lukin, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Science 309, 2180 (2005); A. C. Johnson, J. R. Petta, J. M. Taylor, A. Yacoby, M. D. Lukin, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Nature 435, 925 (2005); T. Hatano, M. Stopa, and S. Tarucha, Science 309, 268 (2005).
[5] D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998); G. Burkard, D. Loss, and D. P. DiVincenzo, Phys. Rev. B 59, 2070 (1999); X. Hu and S. Das Sarma, Phys. Rev. A 61, 062301 (2000).
[6] B. E. Kane, Nature 393, 133 (1998).
[7] T. Schenkel, A. Persaud, S. J. Park, J. Nilsson, J. Bokor, J. A. Liddle, R. Keller, D. H. Schneider, D. W. Cheng, and D. E. Humphries, J. Appl. Phys. 94, 7017 (2003); S. R. Schofield, N. J. Curson, M. Y. Simmons, F. J. Rueß, T. Hallam, L. Oberbeck, and R. G. Clark, Phys. Rev. Lett. 91, 136104 (2003); D. Jamieson, C. Yang, T. Hopf, S. Hearne, C. Pakes, S. Prawer, M. Mitic, E. Gauja, S. Andresen, F. Hudson, A. Dzurak, and R. Clark, Appl. Phys. Lett. 86, 202101 (2005).
[8] D. MacMillen and U. Landman, Phys. Rev. B 29, 4524 (1984); G. Kramer and R. Wallis, Phys. Rev. B 32, 3772 (1985).
[9] M. Calderón, B. Koiller, X. Hu, and S. Das Sarma, Phys. Rev. Lett. 96, 096802 (2006).
[10] B. E. Kane, MRS Bulletin 30, 105 (2005); K. Brown, L. Sun, and B. Kane, cond-mat/0601553 (2006).
[11] B. Koiller, X. Hu, and S. Das Sarma, Phys. Rev. B 66, 115201 (2002).
[12] For typical values of R and d, the double-well potential from the donor pair alone usually provides a relatively shallow barrier, resulting in negligibly small values of J. Surface gates between the donors could raise the barrier and allow closer donor (smaller R) geometries, leading to values of J compatible with the required gating times (h/J ≪ ∼ ms), similar to the situation in GaAs quantum dot quantum computing architectures (Ref. 4).
[]
[ "Quantum-Reduced Loop-Gravity: Relation with the Full Theory", "Quantum-Reduced Loop-Gravity: Relation with the Full Theory" ]
[ "Emanuele Alesci \nInstytut Fizyki Teoretycznej\nUniwersytet Warszawski\nul. Hoża 6900-681WarszawaEUPoland\n", "Francesco Cianfrani \nInstitute for Theoretical Physics\nUniversity of Wroc law\nPl. Maksa Borna 9, Pl-50-204 Wroc lawPoland\n", "Carlo Rovelli \nUMR 7332\nAix Marseille Université\nCNRS\n13288MarseilleCPTFrance\n\nUMR 7332\nUniversité de Toulon\nCNRS\n83957La GardeCPTFrance\n" ]
[ "Instytut Fizyki Teoretycznej\nUniwersytet Warszawski\nul. Hoża 6900-681WarszawaEUPoland", "Institute for Theoretical Physics\nUniversity of Wroc law\nPl. Maksa Borna 9, Pl-50-204 Wroc lawPoland", "UMR 7332\nAix Marseille Université\nCNRS\n13288MarseilleCPTFrance", "UMR 7332\nUniversité de Toulon\nCNRS\n83957La GardeCPTFrance" ]
[]
The quantum-reduced loop-gravity technique has been introduced for dealing with cosmological models. We show that it can be applied rather generically: anytime the spatial metric can be gauge-fixed to a diagonal form. The technique selects states based on reduced graphs with Livine-Speziale coherent intertwiners and could simplify the analysis of the dynamics in the full theory.
10.1103/physrevd.88.104001
[ "https://arxiv.org/pdf/1309.6304v1.pdf" ]
118,790,690
1309.6304
ca895afa6ab15ec7b16812dff1c69bdf94bc60f1
Quantum-Reduced Loop-Gravity: Relation with the Full Theory 24 Sep 2013 Emanuele Alesci Instytut Fizyki Teoretycznej Uniwersytet Warszawski ul. Hoża 69 00-681 Warszawa EU Poland Francesco Cianfrani Institute for Theoretical Physics University of Wrocław Pl. Maksa Borna 9, Pl-50-204 Wrocław Poland Carlo Rovelli UMR 7332 Aix Marseille Université CNRS 13288 Marseille CPT France UMR 7332 Université de Toulon CNRS 83957 La Garde CPT France Quantum-Reduced Loop-Gravity: Relation with the Full Theory 24 Sep 2013 (Dated: May 22, 2014) arXiv:1309.6304v1 [gr-qc] 
A basis of states is obtained from the Peter-Weyl theorem, and is labelled by an irreducible representation $j_l$ of SU(2) on each link $l$ and an SU(2) intertwiner $x_n$ at each node $n$. The corresponding state reads

$$\langle h_l | \Gamma, j_l, x_n \rangle = \prod_{n\in\Gamma} x_n \cdot \prod_{l\in\Gamma} D^{j_l}(h_l). \tag{1}$$

$D^{j}(h)$ and $x$ are Wigner matrices in the representation $j$ and intertwiners, respectively; the products extend over all the links and the nodes in $\Gamma$; the dot means the contraction between indices of intertwiners and Wigner matrices. The flux operator $E_i(S)$ associated to the oriented surface $S$ acts as the left (right) invariant vector field on the group element based at links $l$ beginning (ending) on $S$.
For instance, given a surface $S$ having a single intersection with a link $l$ at a point $x \in l$, such that $l = l' l''$ and $l' \cap l'' = x$, the operator $\hat E_i(S)$ is given by

$$\hat E_i(S)\, D^{j_l}(h_l) = 8\pi\gamma l_P^2\, o(l,S)\, D^{j_l}(h_{l'})\, \tau_i\, D^{j_l}(h_{l''}), \tag{2}$$

$\gamma$ and $l_P$ being the Immirzi parameter and the Planck length, respectively, while $o(l,S)$ is equal to $0, 1, -1$ according to the relative sign of $l$ and the normal to $S$. $\tau_i$ denotes the SU(2) generators in the $j_l$ representation. The equivalence class $s$ of graphs $\Gamma$ under diffeomorphisms can be used to implement background independence in the dual of the SU(2)-invariant kinematical Hilbert space as follows:

$$\langle h | s, j_l, x_n \rangle = \sum_{\Gamma\in s} \langle h | \Gamma, j_l, x_n \rangle^*. \tag{3}$$

The scalar constraint can be regularized in the space of SU(2)- and diffeo-invariant states.

Quantum Reduced Loop Gravity. The Bianchi I model is endowed with a diagonal metric tensor

$$dl^2 = a_1^2\, (dx^1)^2 + a_2^2\, (dx^2)^2 + a_3^2\, (dx^3)^2, \tag{4}$$

where $a_i$ ($i = 1, 2, 3$) are three time-dependent scale factors. In the inhomogeneous extension of Bianchi I, the $a_i$ are assumed to be functions of time and of the spatial coordinates $x^i$, which are the Cartesian coordinates of a fiducial flat metric. The associated densitized triads can be chosen to be diagonal, i.e.

$$E^a_i = p_i\, \delta^a_i, \qquad |p_i| = \frac{a_1 a_2 a_3}{a_i}, \tag{5}$$

by the gauge-fixing condition [12, 13]

$$\chi_i = \epsilon_{ij}{}^k E^a_k\, \delta^j_a = 0. \tag{6}$$

The connections are generically given by

$$A^i_a = c_i\, u^i_a + \ldots, \qquad c_i = \frac{\gamma}{N}\, \dot a_i, \tag{7}$$

where $u^i_a = \delta^i_a$ are the components of three unit vectors $u_a$ oriented along the fiducial orthogonal axes, and the dots indicate terms due to the spin connection, which are generically non-diagonal.
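To make the algebraic objects in Eq. (2) concrete, here is a small numerical sketch (our own illustration, not part of the paper) that builds the SU(2) generators $\tau_i$ in an arbitrary spin-$j$ representation from the standard ladder-operator matrix elements, and checks the su(2) commutation relations together with the Casimir eigenvalue $j(j+1)$ that reappears in the flux-flux formulas below.

```python
import numpy as np

def su2_generators(j):
    """Spin-j angular momentum matrices J_x, J_y, J_z (hbar = 1),
    built from <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)              # basis ordering m = j, j-1, ..., -j
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jm = jp.T                           # J- is the adjoint of J+ (real here)
    jx = 0.5 * (jp + jm)
    jy = -0.5j * (jp - jm)
    jz = np.diag(m)
    return jx, jy, jz

j = 3.5
jx, jy, jz = su2_generators(j)

# su(2) algebra: [J_x, J_y] = i J_z (and cyclic permutations)
assert np.allclose(jx @ jy - jy @ jx, 1j * jz)

# Casimir J^2 = j(j+1) * identity -- the eigenvalue appearing in Eqs. (19)-(20)
casimir = jx @ jx + jy @ jy + jz @ jz
assert np.allclose(casimir, j * (j + 1) * np.eye(int(2 * j) + 1))
print("su(2) checks passed for j =", j)
```

The same construction works for integer and half-integer spins alike; only the dimension $2j+1$ of the representation changes.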
These terms were disregarded in [1, 2] by considering two cases: the reparametrized Bianchi I model, in which each $a_i$ is a function of the single corresponding coordinate $x^i$; and the Kasner epoch inside a generic cosmological solution, for which spatial gradients of the scale factors are negligible with respect to time derivatives. The kinematical symmetries in this reduced phase space are generated by two sets of constraints: the Gauss constraints associated with three U(1) gauges, each acting on a single spatial direction $x^i$ and having $\{c_i, p_i\}$ as the couple of connections and momenta; and the vector constraints associated with a subgroup of the diffeomorphism group, made of those transformations (reduced diffeomorphisms) which can be seen as the product of a generic diffeomorphism along a given direction $x^i$ and rigid translations along the other ones. The description of such a system in QRLG is obtained by truncating the LQG kinematical Hilbert space. First, the Hilbert space of the full theory is restricted to that based on a reduced set of cubic graphs, with links parallel to three fiducial vectors $\omega_i = \delta^a_i \partial_a$ ($i = 1, 2, 3$). We call $i_l$ the direction of the link $l$ in the cubic graph. Then, the gauge fixing leading to diagonal momenta and connections is implemented weakly, following the procedure used to impose the simplicity constraints in spin foams [14]. The condition (6) is first rewritten in terms of fluxes across surfaces $S^j$ normal to the $j$ direction, as

$$\chi_i(S) = \epsilon_{ij}{}^k E_k(S^j) = 0, \tag{8}$$

and then implemented by solving strongly the master constraint condition $\hat\chi^2(S) = \sum_i \hat\chi_i^2(S) = 0$. Since the holonomy along the link $l$ is generated by $\tau_{i_l}$ only,

$$h_l = P\, e^{\left(\int_l c_i\, dx^i\right)\, \tau_{i_l}}, \tag{9}$$

a solution of $\hat\chi^2(S)\, \psi_l = 0$ can be obtained by working with projected U(1) states, obtained by stabilizing the SU(2) group element based at each link $l$ around the internal direction $\vec u_l$, where $\vec u_l = \vec u_{i_l}$ and the components of $\vec u_i$ are given by $u^i_a = \delta^i_a$ as above.
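The statement below that $m = j$ solves the master constraint only in the large-$j$ limit can be checked directly: in the basis $|j, m\rangle_i$ the master-constraint eigenvalue (cf. Eq. (27) later in the paper) is proportional to $j(j+1) - m^2$, which is minimized by $|m| = j$ and is then of order $j$, i.e. it vanishes relative to the $j^2$ scale as $j \to \infty$. A quick self-contained sketch (our own illustration, not from the paper):

```python
# Master-constraint eigenvalue, up to the 8*pi*gamma*l_P^2 prefactor of Eq. (27)
def chi2_eigenvalue(j, m):
    return j * (j + 1) - m ** 2

for j in (1, 5, 50, 500):
    ms = range(-j, j + 1)
    best_m = min(ms, key=lambda m: chi2_eigenvalue(j, m))
    assert abs(best_m) == j                 # minimum is at m = +/- j
    rel = chi2_eigenvalue(j, j) / (j * (j + 1))
    # relative eigenvalue = j / (j(j+1)) = 1/(j+1) -> 0 as j -> infinity
    assert abs(rel - 1 / (j + 1)) < 1e-12
    print(f"j={j:4d}  minimum at |m| = j, relative eigenvalue = {rel:.4f}")
```

This is the sense in which the identification $j(n) = |n|$ of Eq. (11) becomes exact in the classical (large-$j$) limit.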
In terms of Wigner matrices, the resulting projected state on a link with direction $i = 1, 2, 3$ reads

$$\psi_i(h) = \sum_{n=-\infty}^{+\infty} \psi^n\, {}^{i}D^{j(n)}_{\,n\,n}(h), \tag{10}$$

where ${}^iD^{j}_{mr}$ are the Wigner matrices in the spin basis $|j, m\rangle_i$ that diagonalizes the operators $J^2$ and $J_i$, and the $\psi^n$ are the coefficients of the expansion. The condition $\hat\chi^2\, \psi_{e_i} = 0$ fixes the degree of the representation, i.e. the U(1) quantum number $n$, in terms of the SU(2) quantum number $j$. An approximate solution, which becomes exact for $j \to \infty$, is given by

$$j(n) = |n|. \tag{11}$$

This is good enough for assuring the classical limit. Here we restrict to positive values of $n$ for simplicity. Let $\mathcal{H}^R$ be the space spanned by the states (10), with $j$ given by (11). The gauge-fixing condition $\chi_i = 0$ holds weakly on this space. A reduced recoupling theory adapted to such states follows from SU(2) recoupling theory. Consider the SU(2) coherent states

$$|j, \vec u\rangle = D^j(u)\, |j, j\rangle = \sum_m |j, m\rangle\, D^j_{mj}(u), \tag{12}$$

where $\vec u$ is a unit vector and $u$ is a group element that rotates the $z$ axis into $\vec u$. Using these, define the projectors

$$P_l = |j_l, \vec u_l\rangle \langle j_l, \vec u_l|, \tag{13}$$

one for each link of the graph. The projector $P^\chi$ that maps $\mathcal{H}_{kin}$ into $\mathcal{H}^R$ acts on each Wigner-matrix state as

$$P^\chi : D^{j_l}(h_l) \to P_l\, D^{j_l}(h_l)\, P_l, \tag{14}$$

and its image has the form (10). So far we have considered states on single links. Now let us consider states of the full theory, invariant under SU(2) gauge transformations. The projection of the invariant basis states can be written in the form

$$\langle h | \Gamma, j_l, x_n \rangle^{R} = \prod_{n\in\Gamma} \langle j_l, x_n | j_l, \vec u_l \rangle \cdot \prod_{l\in\Gamma} {}^{i_l}D^{j_l}_{\,j_l\,j_l}(h_l). \tag{15}$$

The coefficients $\langle j_l, x_n | j_l, \vec u_l\rangle$ are the reduced intertwiners, and they take the following expression in terms of the SU(2) intertwiner basis:

$$\langle j_l, x | j_l, \vec u_l \rangle = x^*_{m_1 \ldots m_O,\, m'_1 \ldots m'_I}\, \prod_{o=1}^{O} \big(D^{j_o}(u_o)\big)^{-1}_{\,j_o m_o} \prod_{i=1}^{I} D^{j_i}_{\,m'_i j_i}(u_i), \tag{16}$$

The reduced intertwiners $\langle j_l, x_n | j_l, \vec u_l\rangle$ provide a nontrivial node structure.
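Eq. (12) defines the SU(2) coherent state $|j, \vec u\rangle$ as a rotation of the highest-weight state $|j, j\rangle$; its defining property is that the angular momentum is maximally aligned along $\vec u$, i.e. $\langle j, \vec u |\, \vec J \cdot \vec u\, | j, \vec u \rangle = j$. A self-contained numerical sketch (our own illustration; it rebuilds the spin matrices inline and takes $\vec u$ along the $x$ axis):

```python
import numpy as np

def spin_matrices(j):
    """J_x, J_y, J_z in the spin-j representation, basis ordered m = j..-j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return 0.5 * (jp + jp.T), -0.5j * (jp - jp.T), np.diag(m)

j = 6
jx, jy, jz = spin_matrices(j)

# |j, u> = exp(-i*theta*J_y) |j, j> rotates the z axis into u;
# theta = pi/2 sends z -> x, so here u is the x direction.
w, V = np.linalg.eigh(jy)                       # J_y is Hermitian
rot = V @ np.diag(np.exp(-1j * (np.pi / 2) * w)) @ V.conj().T
highest = np.zeros(int(2 * j) + 1, dtype=complex)
highest[0] = 1.0                                # |j, m = j> in this ordering
coh = rot @ highest

# maximal alignment: <J . u> = j, and <J_z> = 0 since u lies in the xy plane
assert abs(coh.conj() @ jx @ coh - j) < 1e-9
assert abs(coh.conj() @ jz @ coh) < 1e-9
print("coherent-state alignment <J_x> =", (coh.conj() @ jx @ coh).real)
```

These are exactly the states out of which the Livine-Speziale coherent intertwiners used later are built, one per link at each node.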
It is the presence of such a node structure and of reduced-diffeomorphism invariance which provides a well-defined regularized expression for the scalar constraint, mimicking the techniques of quantum spin dynamics [10]. We now re-interpret the restriction to reduced graphs as a gauge fixing at the quantum level, to a gauge where the metric tensor takes the form (4). This turns out to be simpler than the Bianchi I case considered previously, because it does not require choosing a priori the projected form for the states; this form comes automatically from the gauge fixing.

Fixing the frame. Given a point $x$ and three vectors $\omega_i = \delta^a_i \partial_a$ at the point, let $S^i_x$ be three surfaces intersecting at $x$, dual to these vectors. The vanishing of the off-diagonal components of the metric tensor can be written in terms of fluxes as follows:

$$\eta^{km}_x = \delta^{ij}\, E_i(S^k_x)\, E_j(S^m_x) = 0, \qquad k \neq m, \ \forall x \in \Sigma. \tag{17}$$

Consider now this equation as a gauge-fixing constraint in the quantum theory. We want to solve $\hat\eta^{kl}_x = 0$ weakly. That is, we look for a subspace of the full Hilbert space where

$$\langle \psi |\, \hat\eta^{km}_x\, | \phi \rangle = 0, \qquad k \neq m, \ \forall x \in \Sigma. \tag{18}$$

There are two cases in which the action of the operator $\hat\eta^{kl}_x$ on a state based on $\Gamma$ is non-trivial, depending on the intersections between $\Gamma$ and the surfaces $S^i_x$:

1. there is a link $l_x \in \Gamma$ containing $x$ as an internal point;
2. $x$ is a node of $\Gamma$.

In the first case, the action of $\hat\eta^{km}_x$ is non-trivial on $D^{j_{l_x}}(h_{l_x})$ and reads

$$\hat\eta^{km}_x\, D^{j_{l_x}}(h_{l_x}) = (8\pi\gamma l_P^2)^2\, o(S^k, l_x)\, o(S^m, l_x)\, j_{l_x}(j_{l_x}+1)\, D^{j_{l_x}}(h_{l_x}), \tag{19}$$

where $o(S, l)$ is the intersection number between the link and the surface. Hence, spin networks containing the link $l_x$ are eigenfunctions of the operator $\hat\eta^{kl}_x$. Therefore, the scalar product with other spin networks with a link $l_x$ gives

$$\langle l_x, \tilde j_{l_x} |\, \hat\eta^{km}_x\, | l_x, j_{l_x} \rangle = (8\pi\gamma l_P^2)^2\, o(S^k, l_x)\, o(S^m, l_x)\, j_{l_x}(j_{l_x}+1)\, \delta_{\tilde j_{l_x},\, j_{l_x}}, \tag{20}$$

which in general does not vanish for $\tilde j_{l_x} = j_{l_x}$.
However, a proper subspace exists where all these matrix elements vanish. It is formed by states based on the links of the cubic graph, i.e. on links parallel to the vectors $\omega_i$. In fact, if $l_x$ is in the direction $i = 1, 2, 3$, then $o(S^k, l_x) = \delta^k_i$ and

$$\langle l_x, \tilde j_{l_x} |\, \hat\eta^{km}_x\, | l_x, j_{l_x} \rangle = (8\pi\gamma l_P^2)^2\, \delta^k_i\, \delta^m_i\, j_{l_x}(j_{l_x}+1)\, \delta_{\tilde j_{l_x},\, j_{l_x}},$$

which vanishes for $k \neq m$. Hence, the restriction to reduced graphs satisfies (17) in case 1. We denote reduced graphs by $\Gamma_P$ and the Hilbert space based on $\Gamma_P$ by $\mathcal{H}_P$. We can then follow [1, 2] and define a projector $P$ selecting the states based on reduced graphs, projecting to $\mathcal{H}_P$ the diffeomorphism-invariant states (3). This gives

$$\langle h | P | s, j_l, x_v \rangle = \sum_{\Gamma_P \in s} \langle h | \Gamma_P, j_l, x_n \rangle^*, \tag{21}$$

where the sum is over all the reduced graphs contained in the s-knot $s$. Reduced graphs within each $s$ are mapped into each other by the action of reduced diffeomorphisms, times all possible exchanges between the fiducial vectors $\{\omega_i, -\omega_i\}$. Hence, s-knots are projected to sums of reduced s-knots $s^A_P$:

$$\langle h | P | s, j_l, x_n \rangle = \sum_A \sum_{\Gamma_P \in s^A_P} \langle h | \Gamma_P, j_l, x_n \rangle^*, \tag{22}$$

with the index $A$ labeling all permutations of $\{\omega_i\}$ times inversions. The sum over $A$ implies that no special meaning must be given to a fiducial direction. This solution to the gauge-fixing condition (17) defines the same Hilbert space as in QRLG, with the only difference that we have to sum over all permutations and inversions of the fiducial directions. Let us now move to case 2. Here a solution in the large-$j$ limit is obtained by restricting the admissible intertwiner states to the Livine-Speziale coherent intertwiners [15] with normals $\vec u_l$. Livine-Speziale coherent states adapted to the reduced graphs are given by inserting a resolution of the identity:

$$\langle h | \Gamma, j_l, \vec u_l \rangle = \sum_{x_n} \langle h | \Gamma, j_l, x_n \rangle\, \langle j_l, x_n | j_l, \vec u_l \rangle. \tag{23}$$
The matrix elements of the product of two fluxes intersecting $\Gamma$ at a node $n$, for $j \to \infty$, are (see [16])

$$\langle \Gamma, j_l, \vec u_l |\, \vec E(S^k_n) \cdot \vec E(S^m_n)\, | \Gamma, j_l, \vec u_l \rangle \approx (8\pi\gamma l_P^2)^2\, \sum_{l_k} j_{l_k} \vec u_k \cdot \sum_{l_m} j_{l_m} \vec u_m, \tag{24}$$

where the sums extend over the links emanating from $n$ in the directions $\vec u_k$ and $\vec u_m$. Since the vectors $\vec u_i$ are orthogonal, the expression above vanishes for $k \neq m$. We have assumed for simplicity that all the links are outgoing. Therefore, the condition $\eta^{km}_n = 0$ can be solved in the large-$j$ limit, and it provides the restriction to states of the form

$$\langle \Gamma, j_l, x_n | \psi \rangle = \prod_{n\in\Gamma} \langle j_l, \vec u_l | j_l, x_n \rangle\, \prod_l \psi^{j_l, \vec u_l}_l, \tag{25}$$

in which $\psi^{j_l, \vec u_l}_l$ denotes the coefficients of the expansion of the SU(2) group elements in the basis of coherent states. By the identification

$$\psi^{j_l, \vec u_l}_l = \psi^{n_l}_l, \qquad \text{for } n_l = j_l, \tag{26}$$

the expression (25) formally coincides with the one found in (16), giving the expansion of the states of QRLG in the basis elements of $\mathcal{H}^R$. However, now we have an actual expansion in the basis elements of $\mathcal{H}_P$, i.e. of the full theory restricted to reduced graphs. The SU(2) gauge-fixing condition can also be imposed without using projected U(1) networks. As pointed out in [1, 2], it is convenient to write the Wigner matrices based on links in the direction $i$ in the basis $|j, m\rangle_i$ diagonalizing $J^2$ and $J_i$, so that the action of the master constraint condition $\hat\chi^2(S_x) = 0$ at the node reads

$$\hat\chi^2(S_x)\, {}^iD^j_{mn}(h_l) = (8\pi\gamma l_P^2)\, \big(j(j+1) - m^2\big)\, {}^iD^j_{mn}(h_l). \tag{27}$$

A solution for $j \to \infty$ is given by $m = j$ and can be implemented by inserting the projector $P_l$ at the node. The general reduced basis element is obtained from (1) by replacing $D^{j_l}(h_l)$ with $P_l\, D^{j_l}(h_l)\, P_l$, and this gives

$$\langle h | \Gamma, j_l, x_n \rangle = \prod_{n\in\Gamma} \langle j_l, x_n | j_l, \vec u_l \rangle \prod_{l\in\Gamma} {}^{l}D^{j_l}_{\,n_l\,n_l}(h_l),$$

which coincides with Eq. (10). Hence, the quantum states adapted to the gauge-fixing condition (6) coincide with the ones defined in [1, 2], even if the connection is not diagonal.

Conclusions.
We have discussed how to fix a gauge where the triad is diagonal in the kinematical Hilbert space of LQG. We have shown that the gauge-fixing condition is solved weakly by states based on reduced links connected by Livine-Speziale coherent intertwiners. This leads to the same state space as the one defined in Quantum Reduced Loop Gravity (QRLG) in [1, 2]. Therefore, QRLG can be regarded as a framework useful beyond the cosmological context, possibly for full quantum gravity. The fact that the analytical expression for the Hamiltonian constraint simplifies substantially in the QRLG language [1] (essentially because the volume is diagonal in the reduced basis elements) makes this result intriguing. The limits of the construction lie in the approximate solution to the gauge-fixing conditions, which holds only for $j \gg 1$, possibly in the limitations of the applicability of the gauge condition, and perhaps in the complication of the dynamics that one might expect in a gauge-fixed context like this. We expect the semiclassical analysis to indicate whether any interesting quantum-gravity effects can be captured in this regime. The framework can also in principle simplify other issues, such as the coupling between quantum geometry and matter [17-19] and the relation between the canonical and covariant approaches [20, 21].

In Eq. (16) we have split the links $l = \{i, o\}$, $i = 1, \ldots, I$, $o = 1, \ldots, O$, at the node $n$ into $I$ incoming and $O$ outgoing links. A generic state can thus be expanded as follows:

$${}^{R}\langle \Gamma, j_l, x_n | \psi \rangle = \prod_{n\in\Gamma} \langle j_l, \vec u_l | j_l, x_n \rangle \cdot \prod_{l\in\Gamma} \psi^{j_l}_l.$$

Acknowledgment. FC is supported by funds provided by the National Science Center under the agreement DEC-2011/02/A/ST2/00294. The work of E.A. was supported by the grant of Polish Narodowe Centrum Nauki nr DEC-2011/02/A/ST2/00300. This work has been partially realized in the framework of the CGW collaboration (www.cgwcollaboration.it).
[1] E. Alesci and F. Cianfrani, (2012), arXiv:1210.4504.
[2] E. Alesci and F. Cianfrani, Phys. Rev. D87, 083521 (2013), arXiv:1301.2245.
[3] M. Bojowald, Lect. Notes Phys. 835, 1 (2011).
[4] A. Ashtekar and P. Singh, Class. Quant. Grav. 28, 213001 (2011), arXiv:1108.0893.
[5] R. Gambini and J. Pullin, Phys. Rev. Lett. 110, 211301 (2013), arXiv:1302.5265.
[6] C. Rovelli and F. Vidotto, Class. Quant. Grav. 25, 225024 (2008), arXiv:0805.4585.
[7] E. Bianchi, C. Rovelli, and F. Vidotto, Phys. Rev. D82, 084035 (2010), arXiv:1003.3483.
[8] F. Vidotto, J. Phys.: Conf. Ser. 314 (2010), arXiv:1011.4705.
[9] E. F. Borja, I. Garay, and F. Vidotto, SIGMA 8, 015 (2012), arXiv:1110.3020.
[10] T. Thiemann, Class. Quantum Grav. 15, 839 (1998).
[11] J. Grant and J. Vickers, Class. Quant. Grav. 26, 235014 (2009).
[12] F. Cianfrani and G. Montani, Phys. Rev. D85, 024027 (2012), arXiv:1104.4546.
[13] F. Cianfrani, A. Marchini, and G. Montani, Europhys. Lett. 99, 10003 (2012), arXiv:1201.2588.
[14] J. Engle, E. Livine, R. Pereira, and C. Rovelli, Nucl. Phys. B799, 136 (2008), arXiv:0711.0146.
[15] E. R. Livine and S. Speziale, Phys. Rev. D76, 084028 (2007), arXiv:0705.0674.
[16] E. Bianchi, E. Magliaro, and C. Perini, Nucl. Phys. B822, 245 (2009), arXiv:0905.4082.
[17] T. Thiemann, Modern Canonical Quantum General Relativity (Cambridge University Press, Cambridge, U.K., 2007).
[18] E. Bianchi, M. Han, E. Magliaro, C. Perini, and W. Wieland, (2010), arXiv:1012.4719.
[19] C. Rovelli and F. Vidotto, Phys. Rev. D81, 044038 (2010), arXiv:0905.2983.
[20] E. Alesci, T. Thiemann, and A. Zipfel, Phys. Rev. D 86, 024017 (2012), arXiv:1109.1290.
[21] E. Alesci, K. Liegener, and A. Zipfel, arXiv:1306.0861.
[]
[ "Randomly stopped sums with consistently varying distributions" ]
[ "Edita Kizinevič [email protected] \nFaculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania", "Jonas Sprindys [email protected] \nFaculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania", "Jonas Šiaulys [email protected] \nFaculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania" ]
[ "Faculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania", "Faculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania", "Faculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania" ]
[ "Modern Stochastics: Theory and Applications" ]
Let $\{\xi_1, \xi_2, \ldots\}$ be a sequence of independent random variables, and $\eta$ be a counting random variable independent of this sequence. We consider conditions for $\{\xi_1, \xi_2, \ldots\}$ and $\eta$ under which the distribution function of the random sum $S_\eta = \xi_1 + \xi_2 + \cdots + \xi_\eta$ belongs to the class of consistently varying distributions. In our consideration, the random variables $\{\xi_1, \xi_2, \ldots\}$ are not necessarily identically distributed.
10.15559/16-vmsta60
[ "https://www.vmsta.org/journal/VMSTA/article/65/file/pdf" ]
55,298,484
1607.03619
a7c779e230977cb64560f9489c5d2c2bb269bc62
Randomly stopped sums with consistently varying distributions

Edita Kizinevič, Jonas Sprindys, Jonas Šiaulys
Faculty of Mathematics and Informatics, Vilnius University, Naugarduko 24, LT-03225 Vilnius, Lithuania

Modern Stochastics: Theory and Applications 3 (2016), doi:10.15559/16-VMSTA60. Received: 13 May 2016, Accepted: 23 June 2016, Published online: 4 July 2016.

Keywords: heavy tail, consistently varying tail, randomly stopped sum, inhomogeneous distributions, convolution closure, random convolution closure. 2010 MSC: 62E20, 60E05, 60F10, 44A35.

Abstract. Let $\{\xi_1, \xi_2, \ldots\}$ be a sequence of independent random variables, and $\eta$ be a counting random variable independent of this sequence. We consider conditions for $\{\xi_1, \xi_2, \ldots\}$ and $\eta$ under which the distribution function of the random sum $S_\eta = \xi_1 + \xi_2 + \cdots + \xi_\eta$ belongs to the class of consistently varying distributions. In our consideration, the random variables $\{\xi_1, \xi_2, \ldots\}$ are not necessarily identically distributed.

Introduction

Let $\{\xi_1, \xi_2, \ldots\}$ be a sequence of independent random variables (r.v.s) with distribution functions (d.f.s) $\{F_{\xi_1}, F_{\xi_2}, \ldots\}$, and let $\eta$ be a counting r.v., that is, an integer-valued, nonnegative r.v., nondegenerate at zero. In addition, suppose that the r.v. $\eta$ and the r.v.s $\{\xi_1, \xi_2, \ldots\}$ are independent. Let $S_0 = 0$, $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$ for $n \in \mathbb{N}$, and let

$$S_\eta = \sum_{k=1}^{\eta} \xi_k$$

be the randomly stopped sum of the r.v.s $\{\xi_1, \xi_2, \ldots\}$. We are interested in conditions under which the d.f. of $S_\eta$,

$$F_{S_\eta}(x) = \mathbb{P}(S_\eta \le x) = \sum_{n=0}^{\infty} \mathbb{P}(\eta = n)\, \mathbb{P}(S_n \le x), \tag{1}$$

belongs to the class of consistently varying distributions.
Throughout this paper, $f(x) = o(g(x))$ means that $\lim_{x\to\infty} f(x)/g(x) = 0$, and $f(x) \sim g(x)$ means that $\lim_{x\to\infty} f(x)/g(x) = 1$ for two vanishing (at infinity) functions $f$ and $g$. Also, we denote the support of a counting r.v. $\eta$ by

$$\mathrm{supp}(\eta) := \{ n \in \mathbb{N}_0 : \mathbb{P}(\eta = n) > 0 \}.$$

Before discussing the properties of $F_{S_\eta}$, we recall the definitions of some classes of heavy-tailed d.f.s, where $\overline F(x) = 1 - F(x)$ for all real $x$ and a d.f. $F$.

• A d.f. $F$ is heavy-tailed ($F \in \mathcal{H}$) if for every fixed $\delta > 0$, $\lim_{x\to\infty} \overline F(x)\, e^{\delta x} = \infty$.

• A d.f. $F$ is long-tailed ($F \in \mathcal{L}$) if for every $y$ (equivalently, for some $y > 0$), $\overline F(x+y) \sim \overline F(x)$.

• A d.f. $F$ has a dominatedly varying tail ($F \in \mathcal{D}$) if for every fixed $y \in (0,1)$ (equivalently, for some $y \in (0,1)$), $\limsup_{x\to\infty} \overline F(xy)/\overline F(x) < \infty$.

• A d.f. $F$ has a consistently varying tail ($F \in \mathcal{C}$) if $\lim_{y\uparrow 1} \limsup_{x\to\infty} \overline F(xy)/\overline F(x) = 1$.

• A d.f. $F$ has a regularly varying tail ($F \in \mathcal{R}$) if $\overline F(xy)/\overline F(x) \to y^{-\alpha}$ as $x\to\infty$ for some $\alpha \ge 0$ and every $y > 0$.

• A d.f. $F$ supported on the interval $[0, \infty)$ is subexponential ($F \in \mathcal{S}$) if

$$\lim_{x\to\infty} \frac{\overline{F * F}(x)}{\overline F(x)} = 2. \tag{2}$$

If a d.f. $G$ is supported on $\mathbb{R}$, then we say that $G$ is subexponential ($G \in \mathcal{S}$) if the d.f. $F(x) = G(x)\, \mathbf{1}_{[0,\infty)}(x)$ satisfies relation (2). It is known (see, e.g., [4, 11, 13], and Chapters 1.4 and A3 in [8]) that these classes satisfy the following inclusions:

$$\mathcal{R} \subset \mathcal{C} \subset \mathcal{L} \cap \mathcal{D} \subset \mathcal{S} \subset \mathcal{L} \subset \mathcal{H}, \qquad \mathcal{D} \subset \mathcal{H}.$$

These inclusions are depicted in Fig. 1, with the class $\mathcal{C}$ highlighted. There exist many results on sufficient, or necessary and sufficient, conditions for the d.f. of the randomly stopped sum (1) to belong to some heavy-tailed distribution class. Here we present a few known results concerning the membership of the d.f. $F_{S_\eta}$ in such classes. The first result on subexponential distributions was proved by Embrechts and Goldie (see Theorem 4.2 in [9]) and Cline (see Theorem 2.13 in [5]).

Theorem 1. Let $\{\xi_1, \xi_2, \ldots\}$ be independent copies of a nonnegative r.v. $\xi$ with subexponential d.f. $F_\xi$. Let $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$. If $\mathbb{E}(1+\delta)^\eta < \infty$ for some $\delta > 0$, then the d.f. $F_{S_\eta} \in \mathcal{S}$.

Similar results for the class $\mathcal{D}$ can be found in Leipus and Šiaulys [14]. We present the statement of Theorem 5 from this work.
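Several results below are stated in terms of the upper Matuszewska index $J^+_F$, defined in the next paragraph; for a pure power tail $x^{-\alpha}$ it equals $\alpha$, since then $\liminf_x \overline F(xy)/\overline F(x) = y^{-\alpha}$ for every $y > 1$. The following self-contained sketch (our own illustration, not from the paper) evaluates the defining expression on a grid of large $x$:

```python
import math

def matuszewska_upper(tail, y, xs):
    """Finite-sample proxy for J^+_F: replace liminf over x by a minimum
    over a grid of large x, then apply -log(.)/log(y) as in the definition."""
    ratio = min(tail(x * y) / tail(x) for x in xs)
    return -math.log(ratio) / math.log(y)

alpha = 2.0
tail = lambda x: x ** (-alpha)                 # Pareto(alpha) tail
xs = [10.0 ** k for k in range(2, 8)]

for y in (10.0, 100.0, 1000.0):                # the definition sends y -> infinity
    est = matuszewska_upper(tail, y, xs)
    assert abs(est - alpha) < 1e-9             # exact for a pure power tail
print("estimated J+ for Pareto(2):", matuszewska_upper(tail, 1000.0, xs))
```

For non-regularly-varying tails the grid-based proxy only gives an approximation, but for the Pareto laws used in the examples below it is exact for every $y > 1$.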
For a d.f. $F$, the upper Matuszewska index $J^+_F$ is defined by

$$J^+_F = -\lim_{y\to\infty} \frac{1}{\log y}\, \log \liminf_{x\to\infty} \frac{\overline F(xy)}{\overline F(x)}.$$

The random convolution closure for the class $\mathcal{L}$ was considered, for instance, in [1, 14, 16, 17]. We now present a particular statement of Theorem 1.1 from [17].

Theorem 3 (Theorem 1.1 in [17]). The d.f. $F_{S_\eta}$ belongs to the class $\mathcal{L}$ if the following conditions are satisfied:

(i) $\mathbb{P}(\eta \ge \kappa) > 0$ for some $\kappa \in \mathbb{N}$;
(ii) for all $k \ge \kappa$, the d.f. $F_{S_k}$ of the sum $S_k$ is long-tailed;
(iii) $\sup_{k\ge 1}\ \sup_{x\in\mathbb{R}} \big(F_{S_k}(x) - F_{S_k}(x-1)\big)\sqrt{k} < \infty$;
(iv) $\limsup_{z\to\infty}\ \sup_{k\ge\kappa}\ \sup_{x \ge k(z-1)+z} \dfrac{\overline F_{S_k}(x-1)}{\overline F_{S_k}(x)} = 1$;
(v) $\overline F_\eta(ax) = o\big(\sqrt{x}\, \overline F_{S_\kappa}(x)\big)$ for each $a > 0$.

We observe that the case of identically distributed r.v.s is considered in Theorems 1 and 2, whereas in Theorem 3 the r.v.s $\{\xi_1, \xi_2, \ldots\}$ are independent but not necessarily identically distributed. A similar result for r.v.s having d.f.s with dominatedly varying tails can be found in [6].

Theorem 4 (see [6]). The d.f. $F_{S_\eta}$ belongs to the class $\mathcal{D}$ if the following conditions are satisfied:

(i) $F_{\xi_\kappa} \in \mathcal{D}$ for some $\kappa \in \mathrm{supp}(\eta)$;
(ii) $\limsup_{x\to\infty}\ \sup_{n\ge\kappa} \dfrac{1}{n \overline F_{\xi_\kappa}(x)} \displaystyle\sum_{i=1}^{n} \overline F_{\xi_i}(x) < \infty$;
(iii) $\mathbb{E}\eta^{p+1} < \infty$ for some $p > J^+_{F_{\xi_\kappa}}$.

In this work, we consider randomly stopped sums of independent and not necessarily identically distributed r.v.s. As noted before, we restrict ourselves to the class $\mathcal{C}$. If the r.v.s $\{\xi_1, \xi_2, \ldots\}$ are not identically distributed, then different collections of conditions on $\{\xi_1, \xi_2, \ldots\}$ and $\eta$ imply that $F_{S_\eta} \in \mathcal{C}$. We suppose that some r.v.s from $\{\xi_1, \xi_2, \ldots\}$ have distributions belonging to the class $\mathcal{C}$, and we find minimal conditions on $\{\xi_1, \xi_2, \ldots\}$ and $\eta$ for the distribution of the randomly stopped sum $S_\eta$ to remain in the same class. It should be noted that we use the methods developed in [6] and [7]. The rest of the paper is organized as follows. In Section 2, we present our main results together with two examples of randomly stopped sums $S_\eta$ with d.f.s having consistently varying tails. Section 3 is a collection of auxiliary lemmas, and the proofs of the main results are presented in Section 4.

Main results

In this section, we present three statements describing the membership of a randomly stopped sum in the class $\mathcal{C}$.
In the conditions of Theorem 5, the counting r.v. $\eta$ has a finite support. Theorem 6 describes the situation where no moment conditions on the r.v.s $\{\xi_1, \xi_2, \ldots\}$ are required, but there is a strict requirement on $\eta$. Theorem 7 deals with the opposite case: the r.v.s $\{\xi_1, \xi_2, \ldots\}$ should have finite means, whereas the requirement on $\eta$ is weaker. It should be noted that the case of real-valued r.v.s $\{\xi_1, \xi_2, \ldots\}$ is considered in Theorems 5 and 6, whereas Theorem 7 deals with nonnegative r.v.s.

Theorem 5. Let $\{\xi_1, \xi_2, \ldots, \xi_D\}$, $D \in \mathbb{N}$, be independent real-valued r.v.s, and $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots, \xi_D\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{C}$ if the following conditions are satisfied:

(a) $\mathbb{P}(\eta \le D) = 1$,
(b) $F_{\xi_1} \in \mathcal{C}$,
(c) for each $k = 2, \ldots, D$, either $F_{\xi_k} \in \mathcal{C}$ or $\overline F_{\xi_k}(x) = o(\overline F_{\xi_1}(x))$.

Theorem 6. Let $\{\xi_1, \xi_2, \ldots\}$ be independent real-valued r.v.s, and $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{C}$ if the following conditions are satisfied:

(a) $F_{\xi_1} \in \mathcal{C}$,
(b) for each $k \ge 2$, either $F_{\xi_k} \in \mathcal{C}$ or $\overline F_{\xi_k}(x) = o(\overline F_{\xi_1}(x))$,
(c) $\limsup_{x\to\infty}\ \sup_{n\ge 1} \dfrac{1}{n \overline F_{\xi_1}(x)} \displaystyle\sum_{i=1}^{n} \overline F_{\xi_i}(x) < \infty$,
(d) $\mathbb{E}\eta^{p+1} < \infty$ for some $p > J^+_{F_{\xi_1}}$.

When $\{\xi_1, \xi_2, \ldots\}$ are identically distributed with common d.f. $F_\xi \in \mathcal{C}$, conditions (a), (b), and (c) of Theorem 6 are satisfied obviously. Hence, we have the following corollary.

Corollary 1 (see also Theorem 3.4 in [3]). Let $\{\xi_1, \xi_2, \ldots\}$ be i.i.d. real-valued r.v.s with d.f. $F_\xi \in \mathcal{C}$, and $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{C}$ if $\mathbb{E}\eta^{p+1} < \infty$ for some $p > J^+_{F_\xi}$.

Theorem 7. Let $\{\xi_1, \xi_2, \ldots\}$ be independent nonnegative r.v.s, and $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{C}$ if the following conditions are satisfied:

(a) $F_{\xi_1} \in \mathcal{C}$,
(b) for each $k \ge 2$, either $F_{\xi_k} \in \mathcal{C}$ or $\overline F_{\xi_k}(x) = o(\overline F_{\xi_1}(x))$,
(c) $\mathbb{E}\xi_1 < \infty$,
(d) $\overline F_\eta(x) = o(\overline F_{\xi_1}(x))$,
(e) $\limsup_{x\to\infty}\ \sup_{n\ge 1} \dfrac{1}{n \overline F_{\xi_1}(x)} \displaystyle\sum_{i=1}^{n} \overline F_{\xi_i}(x) < \infty$,
(f) $\limsup_{u\to\infty}\ \sup_{n\ge 1} \dfrac{1}{n} \displaystyle\sum_{k=1}^{n} \mathbb{E}\big(\xi_k\, \mathbf{1}_{\{\xi_k > u\}}\big) = 0$.
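In the setting of Corollary 1 — i.i.d. Pareto summands (which are in $\mathcal{C}$) and a Poisson counting variable (all moments finite, so condition (d) of Theorem 6 holds for every $p$) — the randomly stopped sum stays heavy-tailed with the same index. A Monte Carlo sketch (our own illustration; the "single big jump" approximation $\mathbb{P}(S_\eta > x) \approx \mathbb{E}\eta \cdot \overline F_\xi(x)$ invoked in the comments is the classical subexponential asymptotic, not a statement of this paper):

```python
import math
import random

random.seed(12345)

ALPHA = 2.0          # Pareto tail index: P(xi > t) = t**(-ALPHA), t >= 1
LAM = 2.0            # Poisson parameter of the counting r.v. eta
N = 200_000          # number of simulated randomly stopped sums

def pareto():
    return random.random() ** (-1.0 / ALPHA)   # inverse-transform sampling

def poisson(lam):
    # Knuth's multiplication method; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

sums = [sum(pareto() for _ in range(poisson(LAM))) for _ in range(N)]

def tail(x):
    return sum(s > x for s in sums) / N

p20, p40 = tail(20.0), tail(40.0)
# single-big-jump heuristic: P(S_eta > x) is of order E(eta) * x**(-ALPHA)
print("P(S>40) MC:", p40, " heuristic:", LAM * 40.0 ** (-ALPHA))
# doubling x should shrink the tail by roughly 2**ALPHA = 4
print("tail ratio P(S>20)/P(S>40):", p20 / p40)
```

The empirical ratio is only approximately $2^\alpha$ at these moderate thresholds (the finite mean of the sum biases it somewhat), but it stabilizes as the thresholds grow, in line with $F_{S_\eta}$ inheriting the consistently varying tail.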
Similarly to Corollary 1, we can formulate the following statement. We note that, in the i.i.d. case, conditions (a), (b), (e), and (f) of Theorem 7 are satisfied.

Corollary 2. Let $\{\xi_1, \xi_2, \ldots\}$ be i.i.d. nonnegative r.v.s with common d.f. $F_\xi \in \mathcal{C}$, and $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{C}$ under the following two conditions: $\mathbb{E}\xi < \infty$ and $\overline F_\eta(x) = o(\overline F_\xi(x))$.

Further in this section, we present two examples of r.v.s $\{\xi_1, \xi_2, \ldots\}$ and $\eta$ for which the random sum $S_\eta$ has a consistently varying tail.

Example 1. Let $\{\xi_1, \xi_2, \ldots\}$ be independent r.v.s such that the $\xi_k$ are exponentially distributed for all even $k$, that is,

$$\overline F_{\xi_k}(x) = e^{-x}, \quad x \ge 0, \ k \in \{2, 4, 6, \ldots\},$$

whereas, for each odd $k$, $\xi_k$ is a copy of the r.v. $(1+U)\, 2^G$, where $U$ and $G$ are independent r.v.s, $U$ is uniformly distributed on the interval $[0,1]$, and $G$ is geometrically distributed with parameter $q \in (0,1)$, that is,

$$\mathbb{P}(G = l) = (1-q)\, q^l, \quad l = 0, 1, \ldots.$$

In addition, let $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$ and distributed according to the Poisson law. Theorem 6 implies that the d.f. of the randomly stopped sum $S_\eta$ belongs to the class $\mathcal{C}$ because:

(a) $F_{\xi_1} \in \mathcal{C}$, due to the considerations in pp. 122-123 of [2];
(b) $F_{\xi_k} \in \mathcal{C}$ for $k \in \{3, 5, \ldots\}$, and $\overline F_{\xi_k}(x) = o(\overline F_{\xi_1}(x))$ for $k \in \{2, 4, 6, \ldots\}$;
(c) $\limsup_{x\to\infty}\ \sup_{n\ge 1} \dfrac{1}{n \overline F_{\xi_1}(x)} \displaystyle\sum_{i=1}^{n} \overline F_{\xi_i}(x) \le 1$;
(d) all moments of the r.v. $\eta$ are finite.

Note that $\xi_1$ does not satisfy condition (c) of Theorem 7 in the case $q \ge 1/2$. Hence, Example 1 describes a situation where Theorem 6 should be used instead of Theorem 7.

Example 2. Let $\{\xi_1, \xi_2, \ldots\}$ be independent r.v.s such that the $\xi_k$ are distributed according to the Pareto law (with tail index $\alpha = 2$) for all odd $k$, and exponentially distributed (with parameter equal to 1) for all even $k$, that is,

$$\overline F_{\xi_k}(x) = \frac{1}{x^2}, \quad x \ge 1, \ k \in \{1, 3, 5, \ldots\}, \qquad \overline F_{\xi_k}(x) = e^{-x}, \quad x \ge 0, \ k \in \{2, 4, 6, \ldots\}.$$
In addition, let $\eta$ be a counting r.v. independent of $\{\xi_1, \xi_2, \ldots\}$ that has the Zeta distribution with parameter 4, that is,

$$\mathbb{P}(\eta = m) = \frac{1}{\zeta(4)}\, \frac{1}{(m+1)^4}, \quad m \in \mathbb{N}_0,$$

where $\zeta$ denotes the Riemann zeta function. Theorem 7 implies that the d.f. of the randomly stopped sum $S_\eta$ belongs to the class $\mathcal{C}$ because:

(a) $F_{\xi_1} \in \mathcal{C}$;
(b) $F_{\xi_k} \in \mathcal{C}$ for $k \in \{3, 5, \ldots\}$, and $\overline F_{\xi_k}(x) = o(\overline F_{\xi_1}(x))$ for $k \in \{2, 4, 6, \ldots\}$;
(c) $\mathbb{E}\xi_1 = 2$;
(d) $\overline F_\eta(x) = o(\overline F_{\xi_1}(x))$;
(e) $\limsup_{x\to\infty}\ \sup_{n\ge 1} \dfrac{1}{n \overline F_{\xi_1}(x)} \displaystyle\sum_{i=1}^{n} \overline F_{\xi_i}(x) \le 1$;
(f) $\max_{k\in\mathbb{N}} \mathbb{E}\xi_k = 2$.

Regarding condition (d), it should be noted that the Zeta distribution with parameter 4 is a discrete version of the Pareto distribution with tail index 3. Note that $\eta$ does not satisfy condition (d) of Theorem 6, because $J^+_{F_{\xi_1}} = 2$ while $\mathbb{E}\eta^{p+1} = \infty$ for every $p \ge 2$.

Auxiliary lemmas

This section deals with several auxiliary lemmas. The first lemma is Theorem 3.1 in [3] (see also Theorem 2.1 in [15]); under its conditions,

$$\mathbb{P}\Big(\sum_{i=1}^{n} X_i > x\Big) \sim \sum_{i=1}^{n} \overline F_{X_i}(x).$$

The following statement about nonnegative subexponential distributions was proved in Proposition 1 of [10] and later generalized to a wider distribution class in Corollary 3.19 of [12].

Lemma 2. Let $\{X_1, X_2, \ldots, X_n\}$ be independent real-valued r.v.s. Assume that $\overline F_{X_i}(x)/\overline F(x) \to b_i$ as $x\to\infty$ for some subexponential d.f. $F$ and some constants $b_i \ge 0$, $i \in \{1, 2, \ldots, n\}$. Then

$$\frac{\overline{F_{X_1} * F_{X_2} * \cdots * F_{X_n}}(x)}{\overline F(x)} \xrightarrow[x\to\infty]{} \sum_{i=1}^{n} b_i.$$

In the next lemma, we show in which cases the convolution $F_{X_1} * F_{X_2} * \cdots * F_{X_n}$ belongs to the class $\mathcal{C}$.

Lemma 3. Let $\{X_1, X_2, \ldots, X_n\}$, $n \in \mathbb{N}$, be independent real-valued r.v.s. Then the d.f. $F_{\Sigma_n}$ of the sum $\Sigma_n = X_1 + X_2 + \cdots + X_n$ belongs to the class $\mathcal{C}$ if the following conditions are satisfied:

(a) $F_{X_1} \in \mathcal{C}$,
(b) for each $k = 2, \ldots, n$, either $F_{X_k} \in \mathcal{C}$ or $\overline F_{X_k}(x) = o(\overline F_{X_1}(x))$.

Proof. Evidently, we can suppose that $n \ge 2$. We split our proof into two parts.

First part. Suppose that $F_{X_k} \in \mathcal{C}$ for all $k \in \{1, 2, \ldots, n\}$.
In such a case, the lemma follows from Lemma 1 and the inequality
$$\frac{a_1+a_2+\cdots+a_m}{b_1+b_2+\cdots+b_m}\ \le\ \max\Big\{\frac{a_1}{b_1},\frac{a_2}{b_2},\ldots,\frac{a_m}{b_m}\Big\}\qquad(3)$$
for $a_i\ge 0$ and $b_i>0$, $i=1,2,\ldots,m$. Namely, using the relation of Lemma 1 and estimate (3), we obtain
$$\limsup_{x\to\infty}\frac{\overline F_{\Sigma_n}(xy)}{\overline F_{\Sigma_n}(x)}\ \le\ \max_{1\le k\le n}\limsup_{x\to\infty}\frac{\overline F_{X_k}(xy)}{\overline F_{X_k}(x)}$$
for arbitrary $y\in(0,1)$. Since $F_{X_k}\in\mathcal{C}$ for each $k$, the last estimate implies that the d.f. $F_{\Sigma_n}$ has a consistently varying tail, as desired.

Second part. Now suppose that $F_{X_k}\notin\mathcal{C}$ for some of the indexes $k\in\{2,3,\ldots,n\}$. By the conditions of the lemma we have that $\overline F_{X_k}(x)=o(\overline F_{X_1}(x))$ for such $k$. Let $\mathcal{K}\subset\{2,3,\ldots,n\}$ be the subset of indexes $k$ such that $F_{X_k}\notin\mathcal{C}$ and $\overline F_{X_k}(x)=o(\overline F_{X_1}(x))$. By Lemma 2, $\overline F_{\Sigma_n'}(x)\sim\overline F_{X_1}(x)$, where $\Sigma_n'=X_1+\sum_{k\in\mathcal{K}}X_k$. Hence,
$$\limsup_{x\to\infty}\frac{\overline F_{\Sigma_n'}(xy)}{\overline F_{\Sigma_n'}(x)}=\limsup_{x\to\infty}\frac{\overline F_{X_1}(xy)}{\overline F_{X_1}(x)}\qquad(4)$$
for every $y\in(0,1)$. Equality (4) implies immediately that the d.f. $F_{\Sigma_n'}$ belongs to the class $\mathcal{C}$. Therefore, the d.f. $F_{\Sigma_n}$ also belongs to the class $\mathcal{C}$ according to the first part of the proof because $\Sigma_n=\Sigma_n'+\sum_{k\notin\mathcal{K}}X_k$ and $F_{X_k}\in\mathcal{C}$ for each $k\notin\mathcal{K}$. The lemma is proved.

The following two statements about dominatedly varying distributions are Lemma 3.2 and Lemma 3.3 in [6]. Since any consistently varying distribution is also dominatedly varying, these statements will be useful in the proofs of our main results concerning the class $\mathcal{C}$.

Lemma 4. Let $\{X_1,X_2,\ldots\}$ be independent real-valued r.v.s, and $F_{X_\nu}\in\mathcal{D}$ for some $\nu\ge 1$. Suppose, in addition, that
$$\limsup_{x\to\infty}\sup_{n\ge\nu}\frac{1}{n\overline F_{X_\nu}(x)}\sum_{i=1}^{n}\overline F_{X_i}(x)<\infty.$$
Then, for each $p>J^{+}_{F_{X_\nu}}$, there exists a positive constant $c_1$ such that
$$\overline F_{S_n}(x)\ \le\ c_1\,n^{p+1}\,\overline F_{X_\nu}(x)\qquad(5)$$
for all $n\ge\nu$ and $x\ge 0$. In fact, Lemma 4 is proved in [6] for nonnegative r.v.s. However, the lemma remains valid for real-valued r.v.s. To see this, it suffices to observe that
$$P(X_1+X_2+\cdots+X_n>x)\le P(X_1^{+}+X_2^{+}+\cdots+X_n^{+}>x)\quad\text{and}\quad P(X_k>x)=P(X_k^{+}>x),$$
where $n\in\mathbb{N}$, $k\in\{1,2,\ldots,n\}$, $x\ge 0$, and $a^{+}$ denotes the positive part of $a$.

Lemma 5. Let $\{X_1,X_2,\ldots\}$ be independent real-valued r.v.s, and $F_{X_\nu}\in\mathcal{D}$ for some $\nu\ge 1$.
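The elementary inequality (3) used in the first part of the proof (a mediant-type inequality) is easy to sanity-check on random data; a minimal Python sketch with a fixed seed for reproducibility:

```python
import random

random.seed(0)  # fixed seed: the check is reproducible
violations = 0
for _ in range(1000):
    m = random.randint(1, 10)
    a = [random.random() for _ in range(m)]        # a_i >= 0
    b = [random.random() + 0.1 for _ in range(m)]  # b_i > 0
    lhs = sum(a) / sum(b)
    rhs = max(ai / bi for ai, bi in zip(a, b))
    if lhs > rhs + 1e-15:  # tiny guard against floating-point rounding
        violations += 1
print(violations)  # 0: the combined ratio never exceeds the largest ratio
```

The inequality is exact (it follows from $a_i\le (a_i/b_i)\,b_i\le \max_j(a_j/b_j)\,b_i$ and summation), so no violations should occur for any choice of data.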
Let, in addition,
$$\lim_{u\to\infty}\sup_{n\ge\nu}\frac{1}{n}\sum_{k=1}^{n}E\big(|X_k|\mathbb{1}_{\{X_k\le -u\}}\big)=0,\qquad \limsup_{x\to\infty}\sup_{n\ge\nu}\frac{1}{n\overline F_{X_\nu}(x)}\sum_{i=1}^{n}\overline F_{X_i}(x)<\infty,$$
and $EX_k=EX_k^{+}-EX_k^{-}=0$ for $k\in\mathbb{N}$. Then, for each $\gamma>0$, there exists a positive constant $c_2=c_2(\gamma)$ such that
$$P(S_n>x)\ \le\ c_2\,n\,\overline F_{X_\nu}(x)$$
for all $x\ge\gamma n$ and all $n\ge\nu$.

Proofs of the main results

Proof of Theorem 5. It suffices to prove that
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{\overline F_{S_\eta}(xy)}{\overline F_{S_\eta}(x)}\le 1.\qquad(6)$$
According to estimate (3), for $x>0$ and $y\in(0,1)$, we have
$$\frac{\overline F_{S_\eta}(xy)}{\overline F_{S_\eta}(x)}=\frac{\sum_{n=1,\,n\in\mathrm{supp}(\eta)}^{D}P(S_n>xy)P(\eta=n)}{\sum_{n=1,\,n\in\mathrm{supp}(\eta)}^{D}P(S_n>x)P(\eta=n)}\ \le\ \max_{1\le n\le D,\ n\in\mathrm{supp}(\eta)}\frac{P(S_n>xy)}{P(S_n>x)}.$$
Hence, by Lemma 3,
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{\overline F_{S_\eta}(xy)}{\overline F_{S_\eta}(x)}\le\limsup_{y\uparrow 1}\limsup_{x\to\infty}\max_{1\le n\le D,\ n\in\mathrm{supp}(\eta)}\frac{\overline F_{S_n}(xy)}{\overline F_{S_n}(x)}\le\max_{1\le n\le D,\ n\in\mathrm{supp}(\eta)}\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{\overline F_{S_n}(xy)}{\overline F_{S_n}(x)}=1,$$
which implies the desired estimate (6). The theorem is proved.

Proof of Theorem 6. As in Theorem 5, it suffices to prove inequality (6). For all $K\in\mathbb{N}$ and $x>0$, we have
$$P(S_\eta>x)=\Big(\sum_{n=1}^{K}+\sum_{n=K+1}^{\infty}\Big)P(S_n>x)P(\eta=n).$$
Therefore, for $x>0$ and $y\in(0,1)$, we have
$$\frac{P(S_\eta>xy)}{P(S_\eta>x)}=\frac{\sum_{n=1}^{K}P(S_n>xy)P(\eta=n)}{P(S_\eta>x)}+\frac{\sum_{n=K+1}^{\infty}P(S_n>xy)P(\eta=n)}{P(S_\eta>x)}=:J_1+J_2.\qquad(7)$$
The random variable $\eta$ is not degenerate at zero, so there exists $a\in\mathbb{N}$ such that $P(\eta=a)>0$. If $K\ge a$, then using inequality (3), we get
$$J_1\ \le\ \frac{\sum_{n=1,\,n\in\mathrm{supp}(\eta)}^{K}P(S_n>xy)P(\eta=n)}{\sum_{n=1,\,n\in\mathrm{supp}(\eta)}^{K}P(S_n>x)P(\eta=n)}\ \le\ \max_{1\le n\le K,\ n\in\mathrm{supp}(\eta)}\frac{P(S_n>xy)}{P(S_n>x)}.$$
Similarly as in the proof of Theorem 5,
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}\max_{1\le n\le K,\ n\in\mathrm{supp}(\eta)}\frac{\overline F_{S_n}(xy)}{\overline F_{S_n}(x)}=1.\qquad(8)$$
Since $\mathcal{C}\subset\mathcal{D}$, we can use Lemma 4 for the numerator of $J_2$ to obtain
$$\sum_{n=K+1}^{\infty}P(S_n>xy)P(\eta=n)\ \le\ c_3\,\overline F_{\xi_1}(xy)\sum_{n=K+1}^{\infty}n^{p+1}P(\eta=n)$$
with some positive constant $c_3$. For the denominator of $J_2$, we have that
$$P(S_\eta>x)=\sum_{n=1}^{\infty}P(S_n>x)P(\eta=n)\ \ge\ P(S_a>x)P(\eta=a).$$
The conditions of the theorem imply that
$$S_a=\xi_1+\sum_{k\in\mathcal{K}_a}\xi_k+\sum_{k\notin\mathcal{K}_a}\xi_k,\quad\text{where }\ \mathcal{K}_a=\big\{k\in\{2,\ldots,a\}:F_{\xi_k}\notin\mathcal{C},\ \overline F_{\xi_k}(x)=o(\overline F_{\xi_1}(x))\big\}.$$
By Lemma 2, $\overline F_{S_a'}(x)/\overline F_{\xi_1}(x)\to_{x\to\infty}1$, where $F_{S_a'}$ is the d.f.
of the sum $S_a'=\xi_1+\sum_{k\in\mathcal{K}_a}\xi_k$. In addition, by Lemma 3 we have that the d.f. $F_{S_a'}$ belongs to the class $\mathcal{C}$. If $k\notin\mathcal{K}_a$, then $F_{\xi_k}\in\mathcal{C}$ by the conditions of the theorem. This fact and Lemma 1 imply that
$$\liminf_{x\to\infty}\frac{P(S_a>x)}{\overline F_{\xi_1}(x)}\ \ge\ 1+\sum_{k\in\{2,\ldots,a\}\setminus\mathcal{K}_a}\liminf_{x\to\infty}\frac{\overline F_{\xi_k}(x)}{\overline F_{\xi_1}(x)}.$$
Hence,
$$P(S_\eta>x)\ \ge\ \frac{1}{2}\,\overline F_{\xi_1}(x)\,P(\eta=a)\qquad(9)$$
for $x$ sufficiently large. Therefore,
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}J_2\ \le\ \frac{2c_3}{P(\eta=a)}\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{\overline F_{\xi_1}(xy)}{\overline F_{\xi_1}(x)}\sum_{n=K+1}^{\infty}n^{p+1}P(\eta=n).\qquad(10)$$
Estimates (7), (8), and (10) imply that
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{P(S_\eta>xy)}{P(S_\eta>x)}\ \le\ 1+\frac{2c_3}{P(\eta=a)}\,E\big(\eta^{p+1}\mathbb{1}_{\{\eta>K\}}\big)$$
for arbitrary $K\ge a$. Letting $K$ tend to infinity, we get the desired estimate (6) due to condition (d). The theorem is proved.

Proof of Theorem 7. Once again, it suffices to prove inequality (6). By condition (e) we have that there exist two positive constants $c_4$ and $c_5$ such that
$$\sum_{i=1}^{n}\overline F_{\xi_i}(x)\ \le\ c_5\,n\,\overline F_{\xi_1}(x),\quad x\ge c_4,\ n\in\mathbb{N}.$$
Therefore,
$$ES_n=\sum_{j=1}^{n}E\xi_j=\sum_{j=1}^{n}\Big(\int_0^{c_4}+\int_{c_4}^{\infty}\Big)\overline F_{\xi_j}(u)\,du\ \le\ c_4 n+c_5 nE\xi_1=:c_6 n\qquad(11)$$
for a positive constant $c_6$ and all $n\in\mathbb{N}$. If $K\in\mathbb{N}$ and $x>4Kc_6$, then we have
$$P(S_\eta>x)=P(S_\eta>x,\ \eta\le K)+P\Big(S_\eta>x,\ K<\eta\le\frac{x}{4c_6}\Big)+P\Big(S_\eta>x,\ \eta>\frac{x}{4c_6}\Big).$$
Therefore,
$$\frac{P(S_\eta>xy)}{P(S_\eta>x)}=\frac{P(S_\eta>xy,\ \eta\le K)}{P(S_\eta>x)}+\frac{P\big(S_\eta>xy,\ K<\eta\le\frac{xy}{4c_6}\big)}{P(S_\eta>x)}+\frac{P\big(S_\eta>xy,\ \eta>\frac{xy}{4c_6}\big)}{P(S_\eta>x)}=:I_1+I_2+I_3\qquad(12)$$
if $xy>4Kc_6$, $x>0$, and $y\in(0,1)$. The random variable $\eta$ is not degenerate at zero, so $P(\eta=a)>0$ for some $a\in\mathbb{N}$. If $K\ge a$, then
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}I_1\le 1\qquad(13)$$
similarly to estimate (8) in Theorem 6. Moreover, for the numerator $I_{2,1}$ of $I_2$,
$$I_{2,1}\ \le\ \sum_{K<n\le\frac{xy}{4c_6}}P\Big(\sum_{i=1}^{n}(\xi_i-E\xi_i)>\frac{3}{4}\,xy\Big)P(\eta=n)\qquad(14)$$
by inequality (11). The random variables $\xi_1-E\xi_1,\ \xi_2-E\xi_2,\ \ldots$ satisfy the conditions of Lemma 5. Namely, $E(\xi_k-E\xi_k)=0$ for $k\in\mathbb{N}$ and $F_{\xi_1-E\xi_1}\in\mathcal{C}\subset\mathcal{D}$ obviously. In addition,
$$E\big(|\xi_k-E\xi_k|\mathbb{1}_{\{\xi_k-E\xi_k\le -u\}}\big)=E\big((E\xi_k-\xi_k)\mathbb{1}_{\{\xi_k-E\xi_k\le -u\}}\big),$$
and an application of Lemma 5 to the numerator of $I_2$ produces the factor $E(\eta\,\mathbb{1}_{\{\eta>K\}})$ with a positive constant $c_7$. For the denominator of $I_2$, we can use the inequality
$$P(S_\eta>x)=\sum_{n=1}^{\infty}P(S_n>x)P(\eta=n)\ \ge\ \sum_{n=1}^{\infty}P(\xi_1>x)P(\eta=n)\ \ge\ \overline F_{\xi_1}(x)\,P(\eta=a).\qquad(15)$$
Because $F_{\xi_1}\in\mathcal{C}\subset\mathcal{D}$, this leads to the bound (16) with some positive constant $c_8$. Using inequality (15), for $K\ge a$, and letting $K$ tend to infinity, we get the desired estimate (6) because $E\eta<\infty$ by conditions (c) and (d). The theorem is proved.

A d.f. $F$ has a consistently varying tail ($F\in\mathcal{C}$) if $\lim_{y\uparrow 1}\limsup_{x\to\infty}\overline F(xy)/\overline F(x)=1$. A d.f. $F$ has a regularly varying tail ($F\in\mathcal{R}$) if
$$\lim_{x\to\infty}\frac{\overline F(xy)}{\overline F(x)}=y^{-\alpha}$$
for some $\alpha\ge 0$ and any fixed $y>0$.

Fig. 1. Classes of heavy-tailed distributions.

Theorem 2. Let $\{\xi_1,\xi_2,\ldots\}$ be i.i.d. nonnegative r.v.s with common d.f. $F_\xi\in\mathcal{D}$ and finite mean $E\xi$. Let $\eta$ be a counting r.v. independent of $\{\xi_1,\xi_2,\ldots\}$ with d.f. $F_\eta$ and finite mean $E\eta$. Then the d.f. $F_{S_\eta}\in\mathcal{D}$ iff $\min\{F_\xi,F_\eta\}\in\mathcal{D}$. We recall only that the d.f. $F$ belongs to the class $\mathcal{D}$ if and only if the upper Matuszewska index $J^{+}_F<\infty$.

Theorem 3. Let $\{\xi_1,\xi_2,\ldots\}$ be independent r.v.s, and $\eta$ be a counting r.v. independent of $\{\xi_1,\xi_2,\ldots\}$ with d.f. $F_\eta$. Then the d.f. $F_{S_\eta}\in\mathcal{L}$ if the following five conditions are satisfied:

Theorem 4 ([6], Theorem 2.1). Let the r.v.s $\{\xi_1,\xi_2,\ldots\}$ be nonnegative, independent, not necessarily identically distributed, and $\eta$ be a counting r.v. independent of $\{\xi_1,\xi_2,\ldots\}$. Then the d.f. $F_{S_\eta}$ belongs to the class $\mathcal{D}$ if the following three conditions are satisfied:
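The class definitions recalled above can be illustrated numerically: for a Pareto tail with index $\alpha=2$ the ratio $\overline F(xy)/\overline F(x)$ equals $y^{-\alpha}$ at every scale, while for an exponential tail the same ratio blows up, so the exponential law is not even dominatedly varying. A short Python sketch:

```python
import math

# Pareto tail with index alpha = 2: bar F(x) = x^{-2} for x >= 1.
barF = lambda x: x ** -2.0
y = 0.5
# bar F(xy) / bar F(x) = y^{-alpha} = 4 at every scale x, so F is in R (alpha = 2).
pareto_ratios = [barF(x * y) / barF(x) for x in (10.0, 100.0, 1000.0)]
print(pareto_ratios)

# Exponential tail: bar G(x) = e^{-x}.  The same ratio equals e^{x(1-y)},
# which grows without bound as x grows, so G is not in D (hence not in C or R).
barG = lambda x: math.exp(-x)
exp_ratios = [barG(x * y) / barG(x) for x in (10.0, 20.0, 40.0)]
print(exp_ratios)
```

The first list is constant in $x$ (the hallmark of regular variation), whereas the second grows rapidly, which is exactly the dichotomy that the classes $\mathcal{R}\subset\mathcal{C}\subset\mathcal{D}$ formalize.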
For the numerator of $I_2$, we have
$$I_{2,1}:=P\Big(S_\eta>xy,\ K<\eta\le\frac{xy}{4c_6}\Big)=\sum_{K<n\le\frac{xy}{4c_6}}P(S_n>xy)P(\eta=n)\ \le\ \sum_{K<n\le\frac{xy}{4c_6}}P\Big(\sum_{i=1}^{n}(\xi_i-E\xi_i)>xy-\sum_{j=1}^{n}E\xi_j\Big)P(\eta=n),$$
since the r.v.s $\{\xi_1,\xi_2,\ldots\}$ are nonnegative by assumption; this leads to estimate (14). Hence,
$$I_2\ \le\ \frac{c_7}{P(\eta=a)}\,E\big(\eta\,\mathbb{1}_{\{\eta>K\}}\big)\,\frac{\overline F_{\xi_1}\big(\frac34 xy\big)}{\overline F_{\xi_1}(x)}.$$
If $y\in(1/2,1)$, then the last estimate implies that
$$\limsup_{x\to\infty}I_2\ \le\ \frac{c_7}{P(\eta=a)}\,E\big(\eta\,\mathbb{1}_{\{\eta>K\}}\big)\limsup_{x\to\infty}\frac{\overline F_{\xi_1}\big(\frac38 x\big)}{\overline F_{\xi_1}(x)}\ \le\ c_8\,E\big(\eta\,\mathbb{1}_{\{\eta>K\}}\big).\qquad(16)$$
Using inequality (15) again, we obtain
$$I_3\ \le\ \frac{P\big(\eta>\frac{xy}{4c_6}\big)}{P(S_\eta>x)}\ \le\ \frac{1}{P(\eta=a)}\,\frac{\overline F_{\eta}\big(\frac{xy}{4c_6}\big)}{\overline F_{\xi_1}\big(\frac{xy}{4c_6}\big)}\cdot\frac{\overline F_{\xi_1}\big(\frac{xy}{4c_6}\big)}{\overline F_{\xi_1}(x)}.$$
Therefore, for $y\in(1/2,1)$, we get
$$\limsup_{x\to\infty}I_3\ \le\ \frac{1}{P(\eta=a)}\limsup_{x\to\infty}\frac{\overline F_{\eta}\big(\frac{xy}{4c_6}\big)}{\overline F_{\xi_1}\big(\frac{xy}{4c_6}\big)}\cdot\limsup_{x\to\infty}\frac{\overline F_{\xi_1}\big(\frac{xy}{4c_6}\big)}{\overline F_{\xi_1}(x)}=0\qquad(17)$$
by condition (d). Estimates (12), (13), (16), and (17) imply that
$$\limsup_{y\uparrow 1}\limsup_{x\to\infty}\frac{P(S_\eta>xy)}{P(S_\eta>x)}\ \le\ 1+c_8\,E\big(\eta\,\mathbb{1}_{\{\eta>K\}}\big).$$

References

[1] Albin, J.M.P.: A note on the closure of convolution power mixtures (random sums) of exponential distributions. J. Aust. Math. Soc. 84, 1-7 (2008). doi:10.1017/S1446788708000104
[2] Cai, J., Tang, Q.: On max-sum equivalence and convolution closure of heavy-tailed distributions and their applications. J. Appl. Probab. 41, 117-130 (2004). doi:10.1239/jap/1077134672
[3] Chen, Y., Yuen, K.C.: Sums of pairwise quasi-asymptotically independent random variables with consistent variation. Stoch. Models 25, 76-89 (2009). doi:10.1080/15326340802641006
[4] Chistyakov, V.P.: A theorem on sums of independent positive random variables and its application to branching processes. Theory Probab. Appl. 9, 640-648 (1964). doi:10.1137/1109088
[5] Cline, D.B.H.: Convolutions of distributions with exponential and subexponential tails. J. Aust. Math. Soc. A 43, 347-365 (1987). doi:10.1017/S1446788700029633
[6] Danilenko, S., Šiaulys, J.: Randomly stopped sums of not identically distributed heavy tailed random variables. Stat. Probab. Lett. 113, 84-93 (2016). doi:10.1016/j.spl.2016.03.001
[7] Danilenko, S., Paškauskaitė, S., Šiaulys, J.: Random convolution of inhomogeneous distributions with O-exponential tail. Mod. Stoch. Theory Appl. 3, 79-94 (2016). doi:10.15559/16-VMSTA52
[8] Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, New York (1997). doi:10.1007/978-3-642-33483-2
[9] Embrechts, P., Goldie, C.M.: On convolution tails. Stoch. Process. Appl. 13, 263-278 (1982). doi:10.1016/0304-4149(82)90013-8
[10] Embrechts, P., Goldie, C.M., Veraverbeke, N.: Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheor. Verw. Geb. 49, 335-347 (1979). doi:10.1007/BF00535504
[11] Embrechts, P., Omey, E.: A property of longtailed distributions. J. Appl. Probab. 21, 80-87 (1984). doi:10.2307/3213666
[12] Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York (2011). doi:10.1007/978-1-4419-9473-8
[13] Klüppelberg, C.: Subexponential distributions and integrated tails. J. Appl. Probab. 25, 132-141 (1988). doi:10.2307/3214240
[14] Leipus, R., Šiaulys, J.: Closure of some heavy-tailed distribution classes under random convolution. Lith. Math. J. 52, 249-258 (2012). doi:10.1007/s10986-012-9171-7
[15] Wang, D., Tang, Q.: Tail probabilities of randomly weighted sums of random variables with dominated variation. Stoch. Models 22, 253-272 (2006). doi:10.1080/15326340600649029
[16] Watanabe, T., Yamamuro, K.: Ratio of the tail of an infinitely divisible distribution on the line to that of its Lévy measure. Electron. J. Probab. 15, 44-75 (2010). doi:10.1214/EJP.v15-732
[17] Xu, H., Foss, S., Wang, Y.: On closedness under convolution and convolution roots of the class of long-tailed distributions. Extremes 18, 605-628 (2015). doi:10.1007/s10687-015-0224-2
WIGNER ANALYSIS OF OPERATORS. PART I: PSEUDODIFFERENTIAL OPERATORS AND WAVE FRONTS

Elena Cordero, Luigi Rodino

Abstract. We perform Wigner analysis of linear operators. Namely, the standard time-frequency representation Short-time Fourier Transform (STFT) is replaced by the $\mathcal{A}$-Wigner distribution defined by $W_{\mathcal{A}}(f)=\mu(\mathcal{A})(f\otimes\bar f)$, where $\mathcal{A}$ is a $4d\times 4d$ symplectic matrix and $\mu(\mathcal{A})$ is an associated metaplectic operator. Basic examples are given by the so-called $\tau$-Wigner distributions. Such representations provide a new characterization for modulation spaces when $\tau\in(0,1)$. Furthermore, they can be efficiently employed in the study of the off-diagonal decay for pseudodifferential operators with symbols in the Sjöstrand class (in particular, in the Hörmander class $S^{0}_{0,0}$). The novelty relies on defining time-frequency representations via metaplectic operators, developing a conceptual framework and paving the way for a new understanding of quantization procedures. We deduce micro-local properties for pseudodifferential operators in terms of the Wigner wave front set. Finally, we compare the Wigner with the global Hörmander wave front set and identify the possible presence of a ghost region in the Wigner wave front. In the second part of the paper applications to Fourier integral operators and Schrödinger equations will be given.

2010 Mathematics Subject Classification: 42A38, 42B35, 47G30, 81S30, 46F12.
doi: 10.1016/j.acha.2022.01.003
arXiv: 2106.02722
9 Aug 2021

Introduction

The Wigner distribution was introduced by E. Wigner in 1932 [51] in Quantum Mechanics and fifteen years later employed by J. Ville [50] in Signal Analysis. Since then the Wigner distribution has been applied in many different frameworks by mathematicians, engineers and physicists: it is one of the most popular time-frequency representations, cf. [4,7,8].

Definition 1.1. Consider $f,g\in L^2(\mathbb{R}^d)$.
The Wigner distribution $Wf$ is defined as
$$Wf(x,\xi)=W(f,f)(x,\xi)=\int_{\mathbb{R}^d}f\Big(x+\frac t2\Big)\overline{f\Big(x-\frac t2\Big)}\,e^{-2\pi it\xi}\,dt;\qquad(1)$$
the cross-Wigner distribution $W(f,g)$ is
$$W(f,g)(x,\xi)=\int_{\mathbb{R}^d}f\Big(x+\frac t2\Big)\overline{g\Big(x-\frac t2\Big)}\,e^{-2\pi it\xi}\,dt.\qquad(2)$$
$Wf$ and $W(f,g)$ turn out to be well defined in $L^2(\mathbb{R}^{2d})$, with
$$\|W(f,g)\|_{L^2(\mathbb{R}^{2d})}=\|f\|_{L^2(\mathbb{R}^d)}\,\|g\|_{L^2(\mathbb{R}^d)}.$$
A strictly related time-frequency representation is given by the Short-time Fourier Transform (STFT):
$$V_gf(x,\xi)=\langle f,M_\xi T_xg\rangle=\int_{\mathbb{R}^d}f(t)\,\overline{g(t-x)}\,e^{-2\pi it\xi}\,dt,\quad x,\xi\in\mathbb{R}^d,\qquad(3)$$
where $f$ is in $L^2(\mathbb{R}^d)$ and the window function $g$ is the Gaussian, as in the original definition of Gabor 1946 [27], or belongs to some space of regular functions. Advantages and drawbacks of the use of the Wigner transform with respect to the STFT are well described in the textbook of Gröchenig [30, Chapter 4], where we read about quadratic representations $G(f,f)$ of Wigner type: "the non-linearity of these time-frequency representations makes the numerical treatment of signals difficult and often impractical" and further "On the positive side, a genuinely quadratic time-frequency representation of the form $G(f,f)$ does not depend on an auxiliary window $g$. Thus it should display the time-frequency behaviour of $f$ in a pure, unobstructed form."

In the last ten years, time-frequency analysis methods have been applied to the study of partial differential equations, with the emphasis on pseudodifferential and Fourier integral operator theory. As a technical tool, preference was given to the STFT, with numerical applications in terms of Gabor frames. Let us refer to Cordero and Rodino 2020 [16] and the corresponding bibliography. For a given linear operator $P$, with action on $L^2(\mathbb{R}^d)$ or on more general functional spaces, one considers there the STFT kernel $h$, defined as the distributional kernel of an operator $H$ satisfying
$$V_g(Pf)=HV_gf,\qquad(4)$$
that is, with formal integral notation:
$$V_g(Pf)(x,\xi)=\int_{\mathbb{R}^{2d}}h(x,\xi,y,\eta)\,V_gf(y,\eta)\,dy\,d\eta.\qquad(5)$$
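Definitions (1) and (3) can be sampled directly. For the $L^2$-normalized Gaussian $\varphi(t)=2^{1/4}e^{-\pi t^2}$ both transforms have well-known closed forms, $W\varphi(x,\xi)=2e^{-2\pi(x^2+\xi^2)}$ and $|V_\varphi\varphi(x,\xi)|=e^{-\pi(x^2+\xi^2)/2}$, which a truncated Riemann sum reproduces. A minimal Python sketch (grid sizes and evaluation points are arbitrary choices):

```python
import cmath, math

phi = lambda t: 2 ** 0.25 * math.exp(-math.pi * t * t)  # ||phi||_{L^2} = 1
dt = 0.01
ts = [-8 + i * dt for i in range(1600)]  # truncation of the real line

def wigner(x, xi):
    # Riemann sum for (1): int phi(x+t/2) conj(phi(x-t/2)) e^{-2 pi i t xi} dt
    return sum(phi(x + t / 2) * phi(x - t / 2) * cmath.exp(-2j * math.pi * t * xi)
               for t in ts) * dt

def stft(x, xi):
    # Riemann sum for (3) with window g = phi
    return sum(phi(t) * phi(t - x) * cmath.exp(-2j * math.pi * t * xi)
               for t in ts) * dt

# Closed forms for this Gaussian:
#   W phi(x, xi)        = 2 exp(-2 pi (x^2 + xi^2))
#   |V_phi phi(x, xi)|  = exp(-pi (x^2 + xi^2) / 2)
print(abs(wigner(0.5, 0.3) - 2 * math.exp(-2 * math.pi * (0.5 ** 2 + 0.3 ** 2))))
print(abs(abs(stft(1.0, 0.7)) - math.exp(-math.pi * (1.0 ** 2 + 0.7 ** 2) / 2)))
```

Both printed discrepancies are at the level of the truncation error of the Riemann sum, illustrating that the quadratic representation (1) needs no auxiliary window while the STFT (3) does.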
Attention is then fixed on the properties of almost-diagonalization for $h$, in the case when $P$ is a pseudodifferential operator, with generalizations to Fourier integral operators appearing in the study of Schrödinger equations. In our present work, addressing again linear operators, we abandon the STFT in favour of the Wigner transform. Namely, using the original Wigner approach [51], later developed by Cohen and many other authors (see e.g. [7,8]), we consider $K$ such that
$$W(Pf)=KW(f)\qquad(6)$$
and its kernel $k$:
$$W(Pf)(x,\xi)=\int_{\mathbb{R}^{2d}}k(x,\xi,y,\eta)\,Wf(y,\eta)\,dy\,d\eta.\qquad(7)$$
Part I of the paper concerns the case of pseudodifferential operators. The results will be applied in Part II to Schrödinger equations and the corresponding propagators. We shall argue in terms of the $\tau$-Wigner representations, $0<\tau<1$, see the definition in the sequel. In these last years they have become popular for their use in Signal Theory and Quantum Mechanics, see for example [4,11,12,36,37]. We shall begin, in this introduction, by describing a circle of ideas for $Wf$, $W(f,g)$ in the elementary $L^2(\mathbb{R}^d)$ setting, general results being left to the next sections. As for the class of pseudodifferential operators, the most natural choice is given by symbols in the Hörmander class $S^{0}_{0,0}(\mathbb{R}^{2d})$, consisting of smooth functions $a$ on $\mathbb{R}^{2d}$ such that
$$|\partial_x^\alpha\partial_\xi^\beta a(x,\xi)|\le c_{\alpha,\beta},\quad \alpha,\beta\in\mathbb{N}^d,\ x,\xi\in\mathbb{R}^d.\qquad(8)$$
The corresponding pseudodifferential operator is defined by the Weyl quantization
$$\mathrm{Op}_w(a)f(x)=\int_{\mathbb{R}^{2d}}e^{2\pi i(x-y)\xi}\,a\Big(\frac{x+y}{2},\xi\Big)f(y)\,dy\,d\xi.\qquad(9)$$
If $a\in S^{0}_{0,0}(\mathbb{R}^{2d})$ then $\mathrm{Op}_w(a)$ is bounded on $L^2(\mathbb{R}^d)$, according to the celebrated result of Calderón and Vaillancourt [5]. Note that the Wigner distribution can provide an alternative definition of $\mathrm{Op}_w(a)$ by the identity
$$\langle\mathrm{Op}_w(a)f,g\rangle=\langle a,W(g,f)\rangle,\quad f,g\in L^2(\mathbb{R}^d).\qquad(10)$$
Our preliminary result will be the following.

Proposition 1.2. Assume $a\in S^{0}_{0,0}(\mathbb{R}^{2d})$.
Then
$$W(\mathrm{Op}_w(a)f,g)=\mathrm{Op}_w(b)\,W(f,g),\qquad(11)$$
with $b\in S^{0}_{0,0}(\mathbb{R}^{4d})$ given by
$$b(x,\xi,u,v)=a(x-v/2,\ \xi+u/2),\qquad(12)$$
where $u$ and $v$ are the dual variables of $x$ and $\xi$, respectively. Using the notation $a(x,D)$ for $\mathrm{Op}_w(a)$, with $D=-i\partial$, we may write
$$\mathrm{Op}_w(b)=a\Big(x-\frac{1}{4\pi}D_\xi,\ \xi+\frac{1}{4\pi}D_x\Big),\qquad(13)$$
acting on $W(f,g)(x,\xi)$. By using the identity $W(g,f)=\overline{W(f,g)}$, we easily deduce:

Theorem 1.3. For $a\in S^{0}_{0,0}(\mathbb{R}^{2d})$ we have
$$W(\mathrm{Op}_w(a)f)=KWf\qquad(14)$$
with
$$K=a\Big(x-\frac{1}{4\pi}D_\xi,\ \xi+\frac{1}{4\pi}D_x\Big)\,\bar a\Big(x+\frac{1}{4\pi}D_\xi,\ \xi-\frac{1}{4\pi}D_x\Big).\qquad(15)$$
Note that $K$ is a pseudodifferential operator with symbol in $S^{0}_{0,0}(\mathbb{R}^{4d})$, hence if $f\in L^2(\mathbb{R}^d)$ we have $Wf\in L^2(\mathbb{R}^{2d})$ and $KWf\in L^2(\mathbb{R}^{2d})$, consistently with the left-hand side of (14), where $W(\mathrm{Op}_w(a)f)\in L^2(\mathbb{R}^{2d})$. Proposition 1.2 and Theorem 1.3 are not new in the literature, at least at the formal level. If we fix $a=x_j$, $a=\xi_j$, we obtain respectively, after extension to higher order operators:
$$W(x_jf,g)=\Big(x_j-\frac{1}{4\pi}D_{\xi_j}\Big)W(f,g),\qquad(16)$$
$$W(D_{x_j}f,g)=\Big(2\pi\xi_j+\frac12 D_{x_j}\Big)W(f,g),\qquad(17)$$
and similar formulas hold for $W(f,x_jg)$, $W(f,D_{x_j}g)$, recapturing the so-called Moyal operators. General identities of the type (14), (15) appeared in Quantum Mechanics and Signal Theory, see the papers of L. Cohen, for example Cohen and Galleani [26], and the contribution of de Gosson concerning Bopp quantization, cf. Dias, de Gosson, Prata [24]. We may now present the counterpart for the Wigner kernel $k$ in (7) of the STFT almost-diagonalization.

Theorem 1.4. Let $a\in S^{0}_{0,0}(\mathbb{R}^{2d})$ and let $k$ be the corresponding Wigner kernel, that is the kernel of the operator $K$ in (14). Write for short $z=(x,\xi)$, $w=(y,\eta)$, so that
$$W(\mathrm{Op}_w(a)f)(z)=\int_{\mathbb{R}^{2d}}k(z,w)\,Wf(w)\,dw.\qquad(18)$$
Then, for every integer $N\ge 0$,
$$k(z,w)\,\langle z-w\rangle^{N}\qquad(19)$$
is the kernel of an operator $K_N$ bounded on $L^2(\mathbb{R}^{2d})$, namely of a pseudodifferential operator $K_N=\mathrm{Op}_w(a_N)$ with symbol $a_N\in S^{0}_{0,0}(\mathbb{R}^{4d})$. In (19) we set, as standard, $\langle t\rangle=(1+|t|^2)^{1/2}$.
This off-diagonal algebraic decay looks promising for sparsity properties of $K$. However, the computational methods available in the literature for the Wigner transform seem not adapted to such applications. We shall therefore limit ourselves to a qualitative result, based on the following definition. As standard in the study of partial differential equations, we address high frequencies.

Definition 1.5. Let $f\in L^2(\mathbb{R}^d)$. We define $WF(f)$, the Wigner wave front set of $f$, as follows: $z_0=(x_0,\xi_0)\notin WF(f)$, $z_0\ne 0$, if there exists a conic open neighbourhood $\Gamma_{z_0}\subset\mathbb{R}^{2d}$ of $z_0$ such that for every integer $N\ge 0$
$$\int_{\Gamma_{z_0}}|z|^{2N}\,|Wf(z)|^{2}\,dz<\infty.\qquad(20)$$
Hence $WF(f)$ is a closed cone in $\mathbb{R}^{2d}\setminus\{0\}$. Note that $WF(f)$ is the natural version in the Wigner context of the global wave front set of Hörmander 1989 [34]. From Theorem 1.4 we may now deduce the following micro-local property for pseudodifferential operators.

Theorem 1.6. Consider $a\in S^{0}_{0,0}(\mathbb{R}^{2d})$. Then for every $f\in L^2(\mathbb{R}^d)$:
$$WF(\mathrm{Op}_w(a)f)\subset WF(f).\qquad(21)$$
Finally, we describe in short the generalizations of the next sections. After the preliminary Section 2, where we recall basic facts and known results to be used in the sequel, in Section 3 we extend the definition of the Wigner distribution by considering $\tau$-Wigner distributions, which play a crucial role in the sequel.

Definition 1.7. For $\tau\in[0,1]$, $f,g\in L^2(\mathbb{R}^d)$, we define the (cross-)$\tau$-Wigner distribution by
$$W_\tau(f,g)(x,\xi)=\int_{\mathbb{R}^d}e^{-2\pi it\xi}\,f(x+\tau t)\,\overline{g(x-(1-\tau)t)}\,dt=\mathcal{F}_2\mathfrak{T}_\tau(f\otimes\bar g)(x,\xi),\qquad(22)$$
where $\mathcal{F}_2$ is the partial Fourier transform with respect to the second variable $y$ and the change of coordinates $\mathfrak{T}_\tau$ is given by $\mathfrak{T}_\tau F(x,y)=F(x+\tau y,\ x-(1-\tau)y)$. For $f=g$ we obtain the $\tau$-Wigner distribution $W_\tau f:=W_\tau(f,f)$. $Wf$ is the particular case of the $\tau$-Wigner distribution corresponding to the value $\tau=1/2$.
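One structural property shared by every $\tau$-Wigner distribution of $f=g$ is that integrating out $\xi$ returns $|f(x)|^2$ (setting $t=0$ via Fourier inversion in (22)). This marginal property can be illustrated numerically; a minimal Python sketch with $f$ the normalized Gaussian (the choices of $\tau$, the evaluation point, and the grids are arbitrary):

```python
import cmath, math

phi = lambda t: 2 ** 0.25 * math.exp(-math.pi * t * t)  # ||phi||_{L^2} = 1
tau, x = 0.3, 0.3       # sample value of tau in (0, 1) and sample point x
dt, dxi = 0.02, 0.02
ts = [-10 + i * dt for i in range(1000)]
xis = [-6 + j * dxi for j in range(600)]

def W_tau(x, xi):
    # Riemann sum for (22) with f = g = phi
    return sum(phi(x + tau * t) * phi(x - (1 - tau) * t) *
               cmath.exp(-2j * math.pi * t * xi) for t in ts) * dt

# Integrating out xi should return |phi(x)|^2, for every tau in [0, 1].
marginal = sum(W_tau(x, xi) for xi in xis) * dxi
print(abs(marginal - phi(x) ** 2))  # small: the xi-marginal recovers |f(x)|^2
```

Repeating the computation with other values of $\tau$ leaves the marginal unchanged, which is one reason the whole family $W_\tau$, $\tau\in[0,1]$, is a reasonable pool of time-frequency representations.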
The cases $\tau=0$ and $\tau=1$ correspond to the (cross-)Rihaczek and conjugate-(cross-)Rihaczek distribution, respectively; they will remain out of consideration in the main part of the statements, because of their peculiarities. The $\tau$-quantization $\mathrm{Op}_\tau(a)$, which we may define by extending (10) as
$$\langle\mathrm{Op}_\tau(a)f,g\rangle=\langle a,W_\tau(g,f)\rangle,\quad f,g\in L^2(\mathbb{R}^d),\qquad(23)$$
was already in Shubin [44] and is largely used in the modern literature. Let us also emphasize the connection with L. Cohen [7,8]. We recall that a time-frequency representation belongs to the Cohen class if it is of the form $Qf=Wf*\sigma$, where $\sigma$, the so-called kernel of $Q$, is a fixed function or element of $\mathcal{S}'(\mathbb{R}^{2d})$. The $\tau$-Wigner distribution belongs to the Cohen class, with kernel
$$\sigma_\tau(x,\xi)=\begin{cases}\dfrac{2^{d}}{|2\tau-1|^{d}}\,e^{2\pi i\frac{2}{2\tau-1}x\xi},&\tau\ne\frac12,\\[1ex] \delta,&\tau=\frac12,\end{cases}\qquad(24)$$
cf. Proposition 1.3.27 in [16]. In Section 3 we use the $\tau$-Wigner distribution to characterize the function spaces involved in this study: the modulation spaces (see Subsection 2.1 for their definition and main properties). An interesting result for the time-frequency community is that modulation spaces can be defined by replacing the STFT with $\tau$-Wigner distributions. Namely, given a fixed non-zero window function $g\in\mathcal{S}(\mathbb{R}^d)$, we characterize the modulation spaces $M^{p,q}_{v_s}(\mathbb{R}^d)$, $1\le p,q\le\infty$, $s\in\mathbb{R}$, as the subspace of functions/distributions $f\in\mathcal{S}'(\mathbb{R}^d)$ satisfying, for a fixed $\tau\in(0,1)$,
$$\|W_\tau(f,g)\|_{L^{p,q}_{v_s}}=\left(\int_{\mathbb{R}^d}\left(\int_{\mathbb{R}^d}|W_\tau(f,g)(x,\xi)|^{p}\,\langle(x,\xi)\rangle^{ps}\,dx\right)^{\frac qp}d\xi\right)^{\frac1q}<\infty,\qquad(25)$$
where $\langle z\rangle=(1+|z|^2)^{1/2}$, with $\|f\|_{M^{p,q}_{v_s}}\asymp\|W_\tau(f,g)\|_{L^{p,q}_{v_s}}$. For $p\ne 2$, the previous characterization does not hold when $\tau=0$ or $\tau=1$. A fundamental step to develop our theory is contained in Corollary 3.17 below: Assume $\tau\in[0,1]$, $1\le p\le\infty$ and $s\ge 0$. If $f,g\in M^{p}_{v_s}(\mathbb{R}^d)$ then $W_\tau(f,g)\in M^{p}_{v_s}(\mathbb{R}^{2d})$ with
$$\|W_\tau(f,g)\|_{M^{p}_{v_s}}\lesssim\|f\|_{M^{p}_{v_s}}\,\|g\|_{M^{p}_{v_s}}.$$
When $p=2$ the Hilbert space $M^{2}_{s}(\mathbb{R}^d)$ is the so-called Shubin space, cf.
Remark 4.4.3, (iii) in [16], so that its norm can be computed by means of $\tau$-Wigner distributions as well. In Section 4 we generalize further the notion of Wigner distribution. Namely, fixing a $4d\times 4d$ symplectic matrix $\mathcal{A}\in Sp(2d,\mathbb{R})$, we define the (cross-)$\mathcal{A}$-Wigner distribution of $f,g\in L^2(\mathbb{R}^d)$ by
$$W_{\mathcal{A}}(f,g)=\mu(\mathcal{A})(f\otimes\bar g),\qquad(26)$$
and $W_{\mathcal{A}}f=W_{\mathcal{A}}(f,f)$, where $\mu(\mathcal{A})$ is the corresponding metaplectic operator (cf. [23], [24], and the contributions related to Convolutional Neural Networks [18,19]). In this paper, by using $\mathcal{A}$-Wigner distributions, we prove a general version of Proposition 1.2, expressed in terms of $W_\tau$ and $\tau$-pseudodifferential operators in the frame of the Schwartz spaces $\mathcal{S}$, $\mathcal{S}'$. Beside the use of (26), a basic tool for the proof is the covariance property
$$\mu(\mathcal{A})\,\mathrm{Op}_w(a)=\mathrm{Op}_w(a\circ\mathcal{A}^{-1})\,\mu(\mathcal{A}),\qquad(27)$$
valid for general symbols $a\in\mathcal{S}'(\mathbb{R}^{2d})$. A further study of the $\mathcal{A}$-Wigner distributions would be interesting per se, we believe. In Part II we shall apply them to Schrödinger equations with quadratic Hamiltonians. In short: starting from the standard $\tau$-Wigner representation of the initial datum, the evolved solution will require a representation in terms of $\mathcal{A}$-Wigner distributions, and their use will then be natural in related problems of Quantum Mechanics. Section 5 is devoted to almost-diagonalization and wave front sets. We begin by proving Proposition 1.2 and Theorem 1.6 in full generality. Namely, by using the results of Sections 3 and 4, we extend the functional frame to the modulation spaces in the context of the $\tau$-Wigner representations. Moreover, the symbols of the pseudodifferential operators will be considered in the Sjöstrand class, i.e., the modulation space $M^{\infty,1}(\mathbb{R}^d)$ with weights, containing $S^{0}_{0,0}$ as a subspace. We shall then prove a generalized version of Theorem 1.4, by extending the statement to modulation spaces, and of Theorem 1.6, by considering the $\tau$-version of Definition 1.5.
Finally, in Subsection 5.3 we compare the Wigner with the global Hörmander wave front set, and we identify the possible presence of a ghost part in the Wigner wave front. Concerning the Bibliography, we observe that several references are certainly missing, in particular in the field of Mathematical Physics. To our excuse, we may say that the literature on the Wigner distribution is enormous, and it seems impossible to list even the more relevant contributions.

Preliminaries and Function spaces

Notations. The reflection operator $\mathcal{I}$ is given by $\mathcal{I}f(t)=f(-t)$. Translations, modulations and time-frequency shifts are defined as standard, for $z=(x,\xi)$:
$$\pi(z)f(t)=M_\xi T_xf(t)=e^{2\pi i\xi t}f(t-x).$$
We also write $f^{*}(t)=\overline{f(-t)}$. Recall the Fourier transform
$$\hat f(\xi)=\mathcal{F}f(\xi)=\int_{\mathbb{R}^d}f(t)\,e^{-2\pi it\xi}\,dt$$
and the symplectic Fourier transform
$$\mathcal{F}_\sigma a(z)=\int_{\mathbb{R}^{2d}}e^{-2\pi i\sigma(z,z')}\,a(z')\,dz',\qquad(28)$$
with $\sigma$ the standard symplectic form $\sigma(z,z')=Jz\cdot z'$, where the symplectic matrix $J$ is defined as
$$J=\begin{pmatrix}0_{d\times d}&I_{d\times d}\\-I_{d\times d}&0_{d\times d}\end{pmatrix}$$
(here $I_{d\times d}$, $0_{d\times d}$ are the $d\times d$ identity matrix and null matrix, respectively). The Fourier transform and the symplectic Fourier transform are related by
$$\mathcal{F}_\sigma a(z)=\mathcal{F}a(Jz)=\mathcal{F}(a\circ J)(z),\quad a\in\mathcal{S}(\mathbb{R}^{2d}).\qquad(29)$$
We denote by $GL(2d,\mathbb{R})$ the group of $2d\times 2d$ invertible matrices; for a complex-valued function $F$ on $\mathbb{R}^{2d}$ and $L\in GL(2d,\mathbb{R})$ we define
$$\mathfrak{T}_LF(x,y)=|\det L|\,F(L(x,y)),\quad(x,y)\in\mathbb{R}^{2d},\qquad(30)$$
with the convention $L(x,y)=L\begin{pmatrix}x\\ y\end{pmatrix}$, $(x,y)\in\mathbb{R}^{2d}$.
We focus on weights on R 2d of the type (31) v s (z) = z s = (1 + |z| 2 ) s/2 , z ∈ R 2d , and weight functions on R 4d : (32) (v s ⊗ 1)(z, ζ) = (1 + |z| 2 ) s/2 , (1 ⊗ v s )(z, ζ) = (1 + |ζ| 2 ) s/2 , z, ζ ∈ R 2d . For s < 0, v s is v |s| -moderate. Given two weight functions m 1 , m 2 on R d , we write (m 1 ⊗ m 2 )(x, ξ) = m 1 (x)m 2 (ξ), x, ξ ∈ R d , and similarly for weights m 1 , m 2 on R 2d . The modulation spaces were introduced by Feichtinger in [20] (see also Galperin and Samarah [22] for the quasi-Banach setting). They are now available in many textbooks, see e.g. [3,16,30]. Fix a non-zero window g ∈ S(R d ) and consider a weight m ∈ M v and indices 1 ≤ p, q ≤ ∞. The modulation space M p,q m (R d ) consists of all tempered distributions f ∈ S ′ (R d ) such that (33) f M p,q m = V g f L p,q m = R d R d |V g f (x, ξ)| p m(x, ξ) p dx q p dξ 1 q < ∞ (obvious modifications with p = ∞ or q = ∞). The STFT V g f was defined in (3). For simplicity, we write M p m (R d ) for M p,p m (R d ) and M p,q (R d ) if m ≡ 1. The space M p,q (R d ) is a Banach space whose definition is independent of the choice of the window g, in the sense that different non-zero window functions yield equivalent norms and the window class can be extended to the modulation space M 1 v (R d ) (also known as Feichtinger algebra). The modulation space M ∞,1 (R d ) is also called Sjöstrand's class [45]. Modulation spaces enjoy the following inclusion properties: (34) S(R d ) ⊆ M p 1 ,q 1 m (R d ) ⊆ M p 2 ,q 2 m (R d ) ⊆ S ′ (R d ), p 1 ≤ p 2 , q 1 ≤ q 2 . The closure of S(R d ) in the M p,q m -norm is denoted M p,q m (R d ). Then M p,q m (R d ) ⊆ M p,q m (R d ), and M p,q m (R d ) = M p,q m (R d ), provided p < ∞ and q < ∞. For technical purposes we recall the spaces that can be viewed as images under Fourier transform of the modulation spaces. 
For $p, q \in [1,\infty]$, the Wiener amalgam spaces $W(\mathcal{F}L^p, L^q)(\mathbb{R}^d)$ consist of the distributions $f \in \mathcal{S}'(\mathbb{R}^d)$ such that

$\|f\|_{W(\mathcal{F}L^p,L^q)(\mathbb{R}^d)} := \left( \int_{\mathbb{R}^d} \left( \int_{\mathbb{R}^d} |V_g f(x,\xi)|^p\, d\xi \right)^{q/p} dx \right)^{1/q} < \infty$

(obvious changes for $p = \infty$ or $q = \infty$). Using the Parseval identity in (3), we can write $V_g f(x,\xi) = e^{-2\pi i x\xi} V_{\hat g}\hat f(\xi,-x)$, hence

$|V_g f(x,\xi)| = |V_{\hat g}\hat f(\xi,-x)| = |\mathcal{F}(\hat f\, \overline{T_\xi \hat g})(-x)|,$

so that

$\|f\|_{M^{p,q}} = \left( \int_{\mathbb{R}^d} \|\hat f\, \overline{T_\xi \hat g}\|^q_{\mathcal{F}L^p}\, d\xi \right)^{1/q} = \|\hat f\|_{W(\mathcal{F}L^p,L^q)}.$

This proves the claim that Wiener amalgam spaces are the image under Fourier transform of modulation spaces:

(35) $\mathcal{F}(M^{p,q}) = W(\mathcal{F}L^p, L^q).$

2.2. The metaplectic representation and miscellaneous results. The symplectic group is defined by

(36) $Sp(d,\mathbb{R}) = \{ g \in GL(2d,\mathbb{R}) : {}^t g J g = J \}.$

The metaplectic or Shale-Weil representation $\mu$ is a unitary representation of the (double cover of the) symplectic group $Sp(d,\mathbb{R})$ on $L^2(\mathbb{R}^d)$. It arises as intertwining operator between the standard Schrödinger representation $\rho$ of the Heisenberg group $\mathbb{H}^d$ and the representation that is obtained from it by composing $\rho$ with the action of $Sp(d,\mathbb{R})$ by automorphisms on $\mathbb{H}^d$ (see, e.g., [21]). For elements of $Sp(d,\mathbb{R})$ in special form, the metaplectic representation can be computed explicitly, with definition up to a phase factor. In particular,

(37) $\mu(J)f = \hat f,$

(38) $\mu\!\begin{pmatrix} I_{d\times d} & 0_{d\times d} \\ C & I_{d\times d} \end{pmatrix} f(x) = e^{i\pi x\cdot Cx} f(x).$

Proposition 2.1. Let $\mathcal{A} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in Sp(d,\mathbb{R})$ with $\det A \neq 0$; then

(39) $\mu(\mathcal{A})f(x) = (\det A)^{-1/2} \int e^{-\pi i x\cdot CA^{-1}x + 2\pi i \xi\cdot A^{-1}x + \pi i \xi\cdot A^{-1}B\xi}\, \hat f(\xi)\, d\xi.$

We shall need the following computation of Fourier transforms.

Proposition 2.2. Let $B$ be a real, invertible, symmetric $d\times d$ matrix, and let $F_B(z) = e^{i\pi z\cdot Bz}$. Then the distributional Fourier transform of $F_B$ is given by

(40) $\widehat{F_B}(\zeta) = e^{\pi i \sharp(B)/4}\, |\det B|^{-1/2}\, e^{-\pi i \zeta\cdot B^{-1}\zeta},$

where $\sharp(B)$ is the number of positive eigenvalues of $B$ minus the number of negative eigenvalues.

In the sequel we shall use the $2d\times 2d$ matrix

(41) $T_\tau = \begin{pmatrix} 0_{d\times d} & (1-\tau) I_{d\times d} \\ -\tau I_{d\times d} & 0_{d\times d} \end{pmatrix}$

and its properties [15, Lemma 2.2]:

Lemma 2.1.
For $\tau \in [0,1]$ we have $(T_\tau)^\top = -T_{1-\tau}$ and $T_\tau + T_{1-\tau} = J$.

Let us recall the convolution relation for the STFT (see, e.g., [16]).

Lemma 2.2. Consider $g, h, \gamma \in \mathcal{S}(\mathbb{R}^d)\setminus\{0\}$ such that $\langle h,\gamma\rangle \neq 0$ and $f \in \mathcal{S}'(\mathbb{R}^d)$. Then

(42) $|V_g f(x,\xi)| \le \frac{1}{|\langle h,\gamma\rangle|}\, (|V_h f| * |V_g \gamma|)(x,\xi), \quad \forall (x,\xi) \in \mathbb{R}^{2d}.$

Properties of the τ-Wigner distributions

The main goal of this section relies on the characterization of modulation spaces by τ-Wigner distributions, when $\tau \in (0,1)$. We note that if $\tau = 0$ or $\tau = 1$ the characterization does not hold.

Preliminaries on τ-Wigner distributions. For $\tau \in [0,1]$ consider the matrix

(43) $L_\tau = \begin{pmatrix} I_{d\times d} & \tau I_{d\times d} \\ I_{d\times d} & -(1-\tau) I_{d\times d} \end{pmatrix}.$

Define the change of coordinates $T_\tau := T_{L_\tau}$ according to (30), and $\mathcal{F}_2$ to be the partial Fourier transform with respect to the second variables $y$:

(44) $\mathcal{F}_2 F(x,\xi) = \int_{\mathbb{R}^d} e^{-2\pi i y\cdot\xi} F(x,y)\, dy, \quad F \in L^2(\mathbb{R}^{2d}).$

For $\tau \in [0,1]$, the (cross-)τ-Wigner distribution is defined in (22): $W_\tau(f,g) = \mathcal{F}_2 T_\tau (f \otimes \bar g)$. The case $\tau = 1/2$ is the cross-Wigner distribution $W(f,g)$. For $\tau = 0$, $W_0(f,g)$ is the (cross-)Rihaczek distribution (also denoted by $R(f,g)$). In detail,

(45) $W_0(f,g)(x,\xi) = R(f,g)(x,\xi) = e^{-2\pi i x\xi} f(x)\,\overline{\hat g(\xi)}, \quad f,g \in L^2(\mathbb{R}^d).$

For $\tau = 1$, $W_1(f,g)$ is the conjugate-(cross-)Rihaczek distribution $R^*(f,g)$:

(46) $W_1(f,g)(x,\xi) = R^*(f,g)(x,\xi) = \overline{R(g,f)} = e^{2\pi i x\xi}\, \overline{g(x)}\, \hat f(\xi), \quad f,g \in L^2(\mathbb{R}^d).$

In terms of the symplectic Fourier transform (cf. (29)),

(47) $\mathcal{F}_\sigma V_g f(z) = W_0(f,g)(z), \quad z \in \mathbb{R}^{2d},\ f,g \in L^2(\mathbb{R}^d),$

or, in terms of the conjugate-(cross-)Rihaczek distribution,

(48) $\mathcal{F}_\sigma V_g f(z) = W_1(g,f)(z), \quad z \in \mathbb{R}^{2d},\ f,g \in L^2(\mathbb{R}^d).$

In what follows we shall use the following formula for the STFT of the τ-Wigner distribution $W_\tau(f,g)$ (cf. [ ]).

Lemma 3.1. Consider $\tau \in [0,1]$, $\varphi_1, \varphi_2 \in \mathcal{S}(\mathbb{R}^d)$, $f, g \in \mathcal{S}(\mathbb{R}^d)$ and set $\Phi_\tau = W_\tau(\varphi_1,\varphi_2) \in \mathcal{S}(\mathbb{R}^{2d})$.
Then, V Φτ W τ (f, g) (z, ζ) = e −2πiz 2 ζ 2 V ϕ 1 f (z − T 1−τ ζ) V ϕ 2 g (z + T τ ζ) , where z = (z 1 , z 2 ) , ζ = (ζ 1 , ζ 2 ) ∈ R 2d and the matrix T τ is defined in (41). In particular, |V Φτ W τ (f, g)| = |V ϕ 1 f (z − T 1−τ ζ)| · |V ϕ 2 g (z + T τ ζ)| . We remark that the preceding result can be easily extended to the case of distributions f, g ∈ S ′ (R d ) by standard approximation arguments. Definition 3.1. For τ ∈ (0, 1), we define the operator Q τ : (49) Q τ : f (t) −→ If 1 − τ τ t . In other words, Q τ = D 1−τ τ I, where we set D 1−τ τ f (t) := f ( 1−τ τ t). For τ ∈ (0, 1) and f ∈ L p (R d ), 1 ≤ p ≤ ∞, we obtain (50) Q τ f L p (R d ) = τ d p (1 − τ ) d p f L p (R d ) , with the convention d/∞ = 0. We can express the τ -Wigner in terms of the STFT as follows [16,Prop. 1.3.30]. We need the following matrix B τ and its inverse: (51) B τ = 1 1−τ I d×d 0 d×d 0 d×d 1 τ I d×d B −1 τ = (1 − τ )I d 0 d 0 d τ I d . Proposition 3.2. For τ ∈ (0, 1), f, g ∈ L 2 (R d ), and B τ defined in (51), we have (52) W τ (f, g)(x, ξ) = 1 τ d e 2πi 1 τ xξ V Qτ g f 1 1 − τ x, 1 τ ξ = 1 τ d e 2πi 1 τ xξ V Qτ g f (B τ (x, ξ)) . Corollary 3.3. Under the assumptions of the proposition above, V g f (x, ξ) = τ d e −2πi(1−τ )xξ W τ (f, Q −1 τ g) ((1 − τ )x, τ ξ) = τ d e −2πi(1−τ )xξ W τ (f, Q −1 τ g) B −1 τ (x, ξ) (53) where (54) Q −1 τ g(t) = Ig τ 1 − τ t , t ∈ R d . Proof. It is a straightforward consequence of formula (52). Lemma 3.2 (Convolution inequalities for τ -Wigner distributions). Consider g, h, γ ∈ S(R d ) \ {0} such that h, γ = 0, τ ∈ (0, 1), the operator Q τ defined in (49). Then, for any f ∈ S ′ (R d ) we have (55) |W τ (f, g)(x, ξ)| ≤ 1 (1 − τ ) d 1 | Q τ h, γ | |W τ (f, h)| * |W τ (γ, g)|(x, ξ), (x, ξ) ∈ R 2d . Proof. We use formula (52) and write |V Qτ g f | (B τ (x, ξ)) = τ d |W τ (f, g)|(x, ξ), so that τ d |W τ (f, g)|(B −1 τ (x, ξ)) = |V Qτ g f |(x, ξ). 
Observe that the representations above are well-defined continuous functions on R 2d (cf., e.g., [16,Corollary 1.2.19]). The convolution relations for the STFT (42) let us infer (56) |W τ (f, g)|(B −1 τ (x, ξ)) ≤ t d | Q τ h, γ | (|W τ (f, h)(B −1 τ ·)| * |W τ (γ, g)(B −1 τ ·)|)(x, ξ). Using the change of variables (56), we obtain the estimate in (55). B −1 τ (u, v) = (u ′ , v ′ ), dudv = 1 τ d (1−τ ) d du ′ dv ′ , we work out |W τ (f, h)(B −1 τ ·)| * |W τ (γ, g)(B −1 τ ·)|(x, ξ) = R 2d |W τ (f, h)(B −1 τ (x, ξ) − B −1 τ (u, v))| |W τ (γ, g)(B −1 τ (u, v))| dudv = 1 τ d (1 − τ ) d |W τ (f, h)| * |W τ (γ, g)|(B −1 τ (x, ξ)) for every (x, ξ) ∈ R 2d . Replacing B −1 τ (x, ξ) by (x, ξ) in(x, ξ) ∈ R 2d (57) |W τ (f, Q −1 τ g)|(x, ξ) ≤ 1 (1 − τ ) d 1 | h, γ | |W τ (f, Q −1 τ h)| * |W τ (γ, Q −1 τ g)|(x, ξ). Proof. Replace g by Q −1 τ g and h by Q −1 τ h. Characterization of modulation spaces via τ -Wigner distributions. For τ ∈ (0, 1), we can now provide a new characterization for modulation spaces in terms of the τ -Wigner distribution. We refer to [14] for other characterizations. Proposition 3.5. Fix a window g in M 1 v (R d ) and a weight function m ∈ M v (R 2d ). For τ ∈ (0, 1) define (58) m τ (x, ξ) = m(B τ (x, ξ)), (x, ξ) ∈ R 2d , v τ (x, ξ) = v(B τ (x, ξ)) . For 1 ≤ p, q ≤ ∞ and f ∈ M p,q m (R d ) we have the norm equivalence (59) f M p,q m ≍ V g f L p,q m = 1 (1 − τ ) d W τ (f, Q −1 τ g) L p,q mτ , where the operator Q τ is defined in (49). In particular, (60) f M p,q m ≍ τ W τ (f, Q −1 τ g) L p,q mτ . Proof. Let us start with a window g in S(R d ) \ {0}. Then the characterization in (59) follows by using the definition of modulation spaces and the Corollary 3.3. In detail, V g f L p,q m = τ d W τ (f, Q −1 τ g)(B −1 τ ·) L p,q m = 1 (1 − τ ) d W τ (f, Q −1 τ g) L p,q mτ , where we performed the change of variables B −1 τ (x, ξ) = (x ′ , ξ ′ ). For a more general window g in M 1 v (R d ),(61) f M p,q m ≍ τ W τ (f, g) L p,q mτ . Proof. 
The claim is a straightforward application of the change-window property for the STFT and the τ -Wigner distribution in Lemmas 2.2 and 3.2, respectively. Remark 3.7. (i) For τ = 0, by (47), f M p,q m ≍ V g f L p,q m = F σ W 0 (f, g) L p,q m = F W 0 (f, g)(J·) L p,q m = F W 0 (f, g) L p,q m•J −1 . (ii) For τ = 1, by (46), f M p,q m ≍ V g f L p,q m = F σ W 1 (g, f ) L p,q m = F W 1 (g, f ) L p,q m•J −1 . Notice that for τ = 0 or τ = 1 we do not get a characterization similar to the case τ ∈ (0, 1) in Corollary 3.6. Let us study for simplicity the unweighted case m = 1. Then by (45) for a fixed window g W 0 (f, g) L p,q ≍ f L p ĝ L q = f L p g F L q ≍ f L p so we are reduced to the L p norm of the function f ; whereas by (46) W 1 (f, g) L p,q ≍ g L p f L q = g L p f F L q ≍ f F L q , that is the F L q norm of the function f . The above norms are different from the M p,q norm in general, the equality being satisfied only in the case p = q = 2: f M 2 = f 2 = F f 2 .(62) W τ (f 1 , g 1 ), W τ (f 2 , g 2 ) = f 1 , f 2 g 1 , g 2 , f 1 , f 2 , g 1 , g 2 ∈ L 2 (R d ). Theorem 3.8. Assume τ ∈ (0, 1) and g 1 , g 2 ∈ L 2 (R d ) with g 1 , g 2 = 0. Then, for any f ∈ L 2 (R d ), (63) f = 1 τ d 1 g 2 , g 1 R 2d e − 2πi τ xξ W τ (f, g 1 )M ξ τ T x 1−τ Q τ g 2 dxdξ, where Q τ is defined in (49). Proof. The formula can be inferred from [17,Corollary 3.17]. For sake of clarity we exhibit a direct proof for the τ -Wigner distribution, following the pattern of (50); moreover translations and modulations are isometries on [30, Corollary 3.2.3]. From the Moyal's formula (62) we infer W τ (f, g 1 ) ∈ L 2 (R 2d ). Moreover, for g 2 ∈ L 2 (R d ) also the function Q τ g 2 ∈ L 2 (R d ), cf.L 2 (R d ) so that M ξ τ T x 1−τ Q τ g 2 ∈ L 2 (R d ) for every x, ξ ∈ R d . Hence the vector-valued integral f = 1 τ d 1 g 2 , g 1 R 2d e − 2πi τ xξ W τ (f, g 1 )M ξ τ T x 1−τ Q τ g 2 dxdξ, is a well-defined functionf ∈ L 2 (R d ) (cf.,e.g., [16, Section 1.2.4]). 
Choose F ∈ L 2 (R 2d ) and consider the conjugate-linear functional: l(h) = 1 τ d R 2d F (x, ξ)e − 2πi τ xξ h, M ξ τ T x 1−τ Q τ g 2 dxdξ = R 2d F (x, ξ)e − 2πi τ xξ 1 τ d e 2πi τ xξ V Qτ g 2 h(B τ (x, ξ)) dxdξ = R 2d F (x, ξ)W τ (h, g 2 )(x, ξ)dx dξ, where we used the connection between STFT and τ -Wigner in (52). Such functional is bounded on L 2 (R d ): |l(h)| ≤ F 2 W τ (h, g 2 ) 2 = F 2 h 2 g 2 2 , by Moyal's formula (62). So l(h) ∈ L 2 (R d ), for every F ∈ L 2 (R 2d ). It remains to provef = f . Using Moyal's formula again, f , h = 1 g 2 , g 1 R 2d W τ (f, g 1 )(x, ξ)W τ (h, g 2 )(x, ξ) dxdξ = f, h , for every h ∈ L 2 (R d ), that yieldsf = f in L 2 (R d ). Corollary 3.3 (Inversion formula for the Wigner distribution). Fix g 1 , g 2 ∈ L 2 (R d ), with g 1 , g 2 = 0. Then for every f ∈ L 2 (R d ), (64) f = 2 d g 2 , g 1 R 2d e −4πixξ W (f, g 1 )(x, ξ)M 2ξ T 2x Ig 2 dxdξ. Proof. For τ = 1/2 we have Q τ = I, the reflection operator, and this gives the claim. In particular, if we consider the Grossmann-Royer operator T GR defined by T GR ψ(t) = e 4πiξ(t−x) ψ(2x − t), for any ψ ∈ L 2 (R d ), we can rewrite (64) as f = 2 d g 2 , g 1 R 2d e −4πixξ W (f, g 1 )(x, ξ) T GR g 2 dxdξ, and we recapture the inversion formula for the Wigner distribution proved in [28,Prop. 184]. (61), then among all possible windows we can choose f = g. Hence, Modulation spaces and τ -Wigner distributions. Willing to study W τ f = W τ (f, f ) in modulation spaces we are led to consider W τ (f, g) with g ∈ M p v (R d ), for p ≥ 1. Remark 3.9. (i) If we consider f ∈ M 1 v (R d ) inf ∈ M 1 v (R d ) ⇔ W τ (f, f ) ∈ L 1 vτ (R 2d ). (ii) We observe that, as a consequence of Moyal's identity (62), W τ (f, g) 2 = f 2 g 2 , f, g ∈ L 2 (R d ). Since we proved f M 2 ≍ W τ (f, g) 2 , we infer that for M 2 (R d ) the space of admissible windows can be enlarged from M 1 (R d ) to M 2 (R d ) = L 2 (R d ). 
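Moyal's identity just recalled, $\|W_\tau(f,g)\|_2 = \|f\|_2\|g\|_2$, can be sanity-checked numerically for a fixed $\tau$. The sketch below (all grids and the choice $\tau = 0.3$, $d = 1$, and the two Gaussian test functions are ours) discretizes $W_\tau(f,g)(x,\xi) = \int f(x+\tau y)\,\overline{g(x-(1-\tau)y)}\,e^{-2\pi i y\xi}\,dy$ by quadrature and compares the $L^2$ norms.

```python
import numpy as np

tau = 0.3
f = lambda t: np.exp(-np.pi * (t - 0.5)**2)   # shifted Gaussian
g = lambda t: np.exp(-np.pi * t**2)           # centered Gaussian

y = np.arange(-8, 8, 0.01)
xs = np.arange(-4, 4, 0.125)
xis = np.arange(-4, 4, 0.125)
E = np.exp(-2j * np.pi * np.outer(y, xis))    # Fourier kernel y -> xi

# W_tau(f,g)(x,xi) = int f(x + tau y) conj(g(x - (1-tau) y)) e^{-2 pi i y xi} dy
W = np.array([(f(x + tau * y) * np.conj(g(x - (1 - tau) * y))) @ E * 0.01
              for x in xs])

l2W = np.sqrt(np.sum(np.abs(W)**2) * 0.125 * 0.125)
l2f = 2.0**(-0.25)                            # ||e^{-pi t^2}||_2 in closed form
print(l2W, l2f * l2f)                         # Moyal: ||W_tau(f,g)||_2 = ||f||_2 ||g||_2
```

Both printed numbers should agree up to discretization error (here $\approx 2^{-1/2}$), independently of the value of $\tau$.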
For p ∈ [1, 2], we denote by p ′ the conjugate exponent of p (1/p + 1/p ′ = 1) and set (65) γ = 1 p − 1 p ′ ∈ [0, 1]. Theorem 3.10. Consider 1 ≤ p ≤ 2, τ ∈ (0, 1) and a submultiplicative weight v on R 2d such that there exists 0 < C 1 (τ ) ≤ C 2 (τ ) with (66) C 1 (τ )v(x, ξ) ≤ v(B τ (x, ξ)) ≤ C 2 (τ )v(x, ξ), (x, ξ) ∈ R 2d . Fix g ∈ M p v γ (R d ). Then (67) f ∈ M p v γ (R d ) ⇔ W τ (f, g) ∈ L p v γ (R 2d ). Proof. Fix g 0 ∈ S(R d ) such that g 0 , g = 0 and Q τ g 0 , g = 0 (for example take the Gaussian g 0 (t) = e −πt 2 ). If g ∈ M 1 v we can use the convolution inequalities in Lemma 3.2 (which still hold for f, g ∈ M 1 v (R d ) by density argument) and we can write (68) W τ (f, g) L 1 vτ ≍ W τ (f, g) L 1 v ≤ C τ W τ (f, g 0 ) L 1 v W τ (g 0 , g) L 1 v ≍ τ f M 1 v g M 1 v . Now, fix g ∈ L 2 (R d ). By Moyal's formula (62) we infer (69) W τ (f, g) 2 = f M 2 g M 2 . for every f ∈ L 2 (R d 1 v (R d ) × M 1 v (R d ) into L 1 v (R 2d ) and from M 2 (R d ) × M 2 (R d ) into L 2 (R 2d[M 1 v , M 2 ] θ = M p v 1−θ , [L 1 v , L 2 ] θ = L p v 1−θ , where 1 p = 1 − θ 2 ⇔ θ 2 = 1 p ′ and 1 − θ = 1 p − θ 2 = 1 p − 1 p ′ = γ. (observe p ∈ [1, 2]) we infer that the linear mapping W τ : M p v γ (R d ) × M p v γ (R d ) → L p v γ (R 2d ) is well defined and bounded. Vice versa, using the convolution inequalities in Lemma 3.2 we obtain f M p v γ ≍ τ W τ (f, g 0 ) L p v γ ≤ 1 (1 − τ ) d 1 | Q τ g 0 , g | W τ (f, g) L p v γ W τ g 0 L 1 v γ ≤ C(τ, g, g 0 ) W τ (f, g) L p v γ . Hence we obtain the claim, since, for g 0 ∈ S(R d ), the τ -Wigner W τ g 0 is in S(R 2d ) ⊂ L 1 v γ (R 2d ) (τ min = min{1 − τ, τ } ∈ (0, 1), τ max = max{1 − τ, τ } ∈ (0, 1) 1 τ d max |(x, ξ)| ≤ |B τ (x, ξ)| ≤ 1 τ d min |(x, ξ)|, so that (70) v s ≍ τ v s (B τ ·). Observe that if we fix the window g ∈ M 1 v (R d ) we have Theorem 3.11. Consider 1 ≤ p ≤ ∞, τ ∈ (0, 1) and a submultiplicative weight v on R 2d satisfying (66). Fix g ∈ M 1 v (R d ). Then (71) f ∈ M p v (R d ) ⇔ W τ (f, g) ∈ L p v (R 2d ). Proof. 
It immediately follows from Corollary 3.6. The following theorems extend the result for the Wigner distribution [16, Theorem 4.4.1] (cf. [9,Theorem 4]) to any τ ∈ (0, 1). They estimate the modulation norm of W τ (f, g) in terms of the modulation norms of f, g and play a crucial role in the final Section 5. Theorem 3.12. Assume τ ∈ [0, 1] and indices p 1 , q 1 , p 2 , q 2 , p, q ∈ [1, ∞] such that (72) p i , q i ≤ q, i = 1, 2 and (73) 1 p 1 + 1 p 2 ≥ 1 p + 1 q , 1 q 1 + 1 q 2 ≥ 1 p + 1 q . Consider s ∈ R, the weight functions v s , 1 ⊗v s defined in (31), (32), respectively. If f ∈ M p 1 ,q 1 v |s| (R d ) and g ∈ M p 2 ,q 2 vs (R d ), then W τ (f, g) ∈ M p,q 1⊗vs (R 2d ), and the following estimate holds (74) W τ (f, g) M p,q 1⊗vs f M p 1 ,q 1 v |s| g M p 2 ,q 2 vs . Proof. The proof uses the formula of the STFT of the τ -Wigner distribution recalled in Lemma 3.1. Namely, W τ (f, g) M p,q 1⊗vs = R 2d R 2d |V Φτ W τ (f, g) (z, ζ)| p dz q p v s (ζ) q dζ 1 q = R 2d R 2d |V ϕ 1 f (z − T 1−τ ζ)| p · |V ϕ 2 g (z + T τ ζ)| p dz q p v s (ζ) q dζ 1 q . The substitution z ′ = z + T τ ζ, the properties of T τ provided in Lemma 2.1 (in particular T τ + T 1−τ = J) yield W τ (f, g) M p,q 1⊗vs = R 2d R 2d |V ϕ 1 f (z ′ − Jζ)| p · |V ϕ 2 g (z ′ )| p dz ′ q p v s (ζ) q dζ 1 q = R 2d [|V ϕ 2 g| p * |(V ϕ 1 f ) * | p (Jζ)] q p v s (ζ) q dζ 1 p = |V ϕ 2 g| * |(V ϕ 1 f ) * | p 1 p L q p vps , since v s (Jζ) = v s (ζ, v s ⊗ 1 defined in (31), (32), respectively. If f, g ∈ M p vs (R d ), then W τ (f, g) ∈ M p vs⊗1 (R 2d ), with (75) W τ (f, g) M p vs⊗1 f M p vs g M p vs Proof. W τ (f, g) = F [W (f, g) * σ τ ], where the kernel σ τ ∈ S ′ (R 2d ) is given by (24). Observe that the convolution [13,Prop. 4.1]), and the convolution relations for modulation spaces give M p 1⊗vs (R 2d ) * M 1,∞ (R 2d ) ֒→ M p (R 2d ). 
Let us first study the case $\tau = 1/2$, where $W(f,g) * \sigma_\tau$ is well defined for $f, g \in M^p_{v_s}(\mathbb{R}^d)$ since $W(f,g) \in M^p_{1\otimes v_s}(\mathbb{R}^{2d})$ by Theorem 3.12 and $\sigma_\tau \in M^{1,\infty}(\mathbb{R}^{2d})$ (see [13, Prop. 4.1]). Then $W_\tau(f,g)(x,\xi) = \mathcal{F}W(f,g)(x,\xi)$. Now

$\mathcal{F}W(f,g)(x,\xi) = \mathcal{F}_\sigma W(f,g)(-J(x,\xi)) = A(f,g)(-J(x,\xi))$

(where in the last equality we used [16, Lemma 1.3.11] for the ambiguity function $A(f,g)$) and

$A(f,g)(-J(x,\xi)) = 2^{-d}\, W(f, Ig)\!\left(-\tfrac{J}{2}(x,\xi)\right).$

Using the easily verified formula for the STFT of a dilated function $f_D(t) := f(Dt)$, $D$ an invertible $d\times d$ matrix,

(76) $V_\varphi f_D(x,\xi) = |\det D|^{-1}\, V_{\varphi_{D^{-1}}} f(Dx, (D^*)^{-1}\xi),$

and taking in our context $\Phi(z) = e^{-2\pi z^2}$, so that for $D = -\tfrac{J}{2}$ we get $\Phi_{D^{-1}}(z) = e^{-\frac{1}{2}\pi z^2}$, we obtain

$\|W_\tau(f,g)\|_{M^p_{1\otimes v_s}} = 2^{-d}\, \|W(f,Ig)(-\tfrac{J}{2}\,\cdot)\|_{M^p_{1\otimes v_s}} \asymp \left( \int_{\mathbb{R}^{4d}} |V_{\Phi_{D^{-1}}} W(f,Ig)(-\tfrac{J}{2}z, -2J\zeta)|^p\, v_s^p(\zeta)\, dz\, d\zeta \right)^{1/p} \asymp \left( \int_{\mathbb{R}^{4d}} |V_{\Phi_{D^{-1}}} W(f,Ig)(z,\zeta)|^p\, v_s^p(\zeta)\, dz\, d\zeta \right)^{1/p} = \|W(f,Ig)\|_{M^p_{1\otimes v_s}},$

since $v_s(-2J\zeta) = v_s(2\zeta) \asymp v_s(\zeta)$. Finally, the conclusion follows from Theorem 3.12, observing that, by relation (76), $\|Ig\|_{M^p_{v_s}} = \|g\|_{M^p_{v_s}}$.

Case $\tau \neq 1/2$. Here we can write

$W_\tau(f,g)(x,\xi) = [\mathcal{F}W(f,g) \cdot \mathcal{F}\sigma_\tau](x,\xi)$

with $\mathcal{F}\sigma_\tau(x,\xi) = e^{-\pi i(2\tau-1)x\xi}$. Since $\mathcal{F}\sigma_\tau \in W(\mathcal{F}L^1, L^\infty)(\mathbb{R}^{2d})$, and using $M^p_{1\otimes v_s} = W(\mathcal{F}L^p, L^p_{v_s})$ and the pointwise product for Wiener amalgam spaces

$W(\mathcal{F}L^1, L^\infty) \cdot W(\mathcal{F}L^p, L^p_{v_s}) \subset W(\mathcal{F}L^p, L^p_{v_s}),$

we get

$\|\mathcal{F}W(f,g) \cdot \mathcal{F}\sigma_\tau\|_{M^p_{1\otimes v_s}} \lesssim \|\mathcal{F}W(f,g)\|_{M^p_{1\otimes v_s}}\, \|\mathcal{F}\sigma_\tau\|_{W(\mathcal{F}L^1,L^\infty)} \asymp \|\mathcal{F}W(f,g)\|_{M^p_{1\otimes v_s}}\, \|\sigma_\tau\|_{M^{1,\infty}},$

and the result follows from the case $\tau = 1/2$.

Modulation spaces and τ-Wigner distributions (conclusions). Let us summarise the results of this section. Observe that for the weight $v_s$ we infer $v_s^\gamma(z) = (1+|z|^2)^{\frac{s\gamma}{2}} = v_{s\gamma}(z)$, $z \in \mathbb{R}^{2d}$, so that we are reduced to the same type of polynomial weight $v_{s'}$, with $0 \le s' = s\gamma \le s$. The following conditions are equivalent:

(i) $f \in M^p_{v_{s\gamma}}(\mathbb{R}^d)$; (ii) $W_\tau(f,g) \in L^p_{v_{s\gamma}}(\mathbb{R}^{2d})$; (iii) $W_\tau(f,g) \in M^p_{v_{s\gamma}\otimes 1}(\mathbb{R}^{2d})$, where the exponent $\gamma$ is defined in (65).

Proof. (i) ⇔ (ii).
It immediately follows from Theorem 3.10 and the weight equivalence in (70). (i) ⇒ (iii). It is proved in Theorem 3.14. (iii) ⇒ (ii). It follows by the inclusion relations M p vsγ ⊗1 (R 2d ) ⊂ L p vsγ (R 2d ), for 1 ≤ p ≤ 2, cf. [48, Proposition 2.9]. Since the previous result holds true for every s ≥ 0, if we avoid the case γ = 0 that corresponds to p = 2 we can state the simpler characterization: Corollary 3.16. Consider τ ∈ (0, 1), 1 ≤ p < 2, s ≥ 0 and the weights v s , v s ⊗ 1 in (31), (32), respectively. Fix g ∈ M p vs (R d ). Then the following conditions are equivalent: ( i) f ∈ M p vs (R d ) (ii) W τ (f, g) ∈ L p vs (R 2d ) (iii) W τ (f, g) ∈ M p vs⊗1 (R 2d ) . Note that the result above holds true for any fixed window g ∈ M p vs (R d ). One could be tempted to put f = g in the characterization above, and be misled by thinking that f M p vs ≍ W τ f L p vs or f M p vs ≍ |W τ f | L p vs . This is not even the case when s = 0. As an example, consider g(t) = ϕ(t) = e −πt 2 and its rescaled version f (t) = ϕ √ λ (t) = e −πλt 2 , and τ = 1/2. Then (see, e.g. [ (78) ϕ √ λ M p ≍ W (ϕ √ λ , ϕ) L p ≍ W (ϕ, ϕ √ λ ) L p ≍ (λ + 1) d p − d 2 λ d 2p whereas an easy computation shows (see [30, (4.20)]) W (ϕ √ λ , ϕ √ λ ) = 2 d 2 λ − d 4 e −2πλx 2 e − 2π λ ξ 2 (79) W (ϕ √ λ , ϕ √ λ ) L p ≍ λ − d 2 . Hence it is clear that the norms in (78) and (79) behave differently as the parameter λ goes to 0 or to +∞. In particular, the norm in (79) does not even depend on the exponent p. v s defined in (31). If f, g ∈ M p vs (R d ) then W τ (f, g) ∈ M p vs (R 2d ) with (80) W τ (f, g) M p vs f M p vs g M p vs . Proof. If f, g ∈ M p vs (R d ) from Theorems 3.12 and 3.14 we infer that W τ f, W τ (f, g) ∈ M p vs⊗1 (R 2d ) ∩ M p 1⊗vs (R 2d ). For s ≥ 0 we have the equivalence v s (z 1 , z 2 ) ≍ v s (z 1 ) + v s (z 2 ), ∀z 1 , z 2 ∈ R 2d , hence, for 1 ≤ p < ∞, v s (z 1 , z 2 ) p ≍ v s (z 1 ) p + v s (z 2 ) p , for every z 1 , z 2 ∈ R 2d . 
For every fixed Φ ∈ S(R 2d ), we can write, for p < ∞, W τ (f, g) p M p vs ≍ R 4d |V Φ W τ (f, g)(z 1 , z z )| p v s (z 1 , z 2 ) p dz 1 dz 2 R 4d |V Φ W τ (f, g)(z 1 , z z )| p v s (z 1 ) p dz 1 dz 2 + R 4d |V Φ W τ (f, g)(z 1 , z z )| p v s (z 2 ) p dz 1 dz 2 ≍ W τ (f, g) p M p vs⊗1 + W τ (f, g) p M p 1⊗vs f p M p vs g p M p vs . The case p = ∞ is similar. This concludes the proof. Metaplectic Operators and A-Wigner representations In this section we highlight a new viewpoint for time-frequency representations: they can be defined as images of metaplectic operators. This approach might shed more light on the roots of Time-frequency Analysis and Quantum Harmonic Analysis. Definition 4.1. Consider a 4d × 4d symplectic matrix A ∈ Sp(2d, R) and define the time-frequency representation A-Wigner of f, g ∈ L 2 (R d ) by (81) W A (f, g) = µ(A)(f ⊗ḡ), f, g ∈ L 2 (R d ). Observe that for f, g ∈ L 2 (R d ), the tensor product f ⊗ḡ acts continuously from L 2 (R d ) × L 2 (R d ) to L 2 (R 2d ) and µ(A) is a unitary operator on L 2 (R 2d ), hence W A is a well-defined mapping from L 2 (R d ) × L 2 (R d ) into L 2 (R 2d ). We set W A f := W A (f, f ). Note that the metaplectic operator µ(A) is defined up to a multiplicative factor, and W A in (81) depends on its choice. By abuse, in the notation we omit to specify the choice and limit to the dependence on A. When appropriate we shall detail the phase factor, see in particular the following class of examples. We are interested in matrices A ∈ Sp(2d, R) such that (82) µ(A) = F 2 T L where F 2 is the partial Fourier transform with respect to the second variables y defined in (44) and the change of coordinates T L is defined in (30). If we introduce the symplectic matrix (83) A F T 2 = A 11 A 12 A 21 A 22 ∈ Sp(2d, R),D L = L −1 0 d×d 0 d×d L T ∈ Sp(2d, R) and with a choice of the phase factor (87) µ(D L )F (x, y) = | det L|F (L(x, y)) = T L F (x, y), F ∈ L 2 (R 2d ). 
An easy computation shows (88) A := A F T 2 D L ∈ Sp(2d, R), where A = A 11 L −1 A 12 L T A 21 L −1 A 11 L T , A −1 = L A 11 L A 21 L −T A 12 L −T A 11 . Hence the symplectic matrix A = A satisfies the equality in (82). Remark 4.2. (i) Consider the linear operator T a defined by (89) T a F (x, y) = F (y, y − x) x, y ∈ R d . Observe that T a = T L , with L = 0 d×d I d×d −I d×d I d×d . We can then regard the STFT as A-Wigner representation according to (81) in Definition 4.1. Namely, for f, g ∈ L 2 (R d ), (90) V g f = F 2 T a (f ⊗ḡ) = µ(A ST )(f ⊗ḡ), where A ST := A F T 2 D L is explicitly computed as (91) A ST =     I d×d −I d×d 0 d×d 0 d×d 0 d×d 0 d×d I d×d I d×d 0 d×d 0 d×d 0 d×d −I d×d −I d×d 0 d×d 0 d×d 0 d×d     . (ii) For τ ∈ [0, 1], L = L τ in (43), the symplectic matrix A τ := A F T 2 D L in (88) can be explicitly computed as (92) A τ =     (1 − τ )I d×d τ I d×d 0 d×d 0 d×d 0 d×d 0 d×d τ I d×d −(1 − τ )I d×d 0 d×d 0 d×d I d×d I d×d −I d×d I d×d 0 d×d 0 d×d     . This allows the representation of the τ -Wigner as A-Wigner distribution with matrix A = A τ : (93) W τ (f, g)(x, ξ) = µ(A τ )(f ⊗ḡ)(x, ξ), f, g ∈ L 2 (R d ), (x, ξ) ∈ R 2d . In particular W (f, g) = µ(A 1/2 )(f ⊗ḡ). A-pseudodifferetial operators. Schwartz' kernel theorem states, in the framework of tempered distributions, that every linear continuous operator T : S(R d ) → S ′ (R d ) can be regarded as an integral operator in a generalized sense, namely T f, g = K, g ⊗ f , f, g ∈ S(R d ), in the context of distributions, for some kernel K ∈ S ′ (R 2d ), and vice versa [35]. For A ∈ Sp(2d, R), using µ(A)µ(A) −1 = I, the identity operator, we can write T f, g = µ(A) −1 µ(A)K, g ⊗ f = µ(A)K, µ(A)(g ⊗ f ) = µ(A)K, W A (g ⊗ f ) The equalities above provide a new definition of a pseudodifferential operator with symbol σ A related to the symplectic matrix A. 
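The explicit matrices $A_{ST}$ in (91) and $A_\tau$ in (92) are claimed to be symplectic; this can be checked numerically. The sketch below (numpy, $d = 1$; a sanity check, not part of the text) verifies the defining relation $A^\top J A = J$, where $J$ is the $4d\times 4d$ standard symplectic matrix.

```python
import numpy as np

d = 1
I = np.eye(d); Z = np.zeros((d, d))
# Standard symplectic matrix in dimension 2d (here a 4x4 matrix)
J = np.block([[np.zeros((2*d, 2*d)), np.eye(2*d)],
              [-np.eye(2*d), np.zeros((2*d, 2*d))]])

# A_ST from (91)
A_ST = np.block([[ I, -I,  Z,  Z],
                 [ Z,  Z,  I,  I],
                 [ Z,  Z,  Z, -I],
                 [-I,  Z,  Z,  Z]])

def A_tau(tau):
    """A_tau from (92)."""
    return np.block([[(1-tau)*I, tau*I,  Z,              Z],
                     [ Z,         Z,     tau*I, -(1-tau)*I],
                     [ Z,         Z,     I,              I],
                     [-I,         I,     Z,              Z]])

def is_symplectic(A):
    # Defining relation of Sp: A^T J A = J
    return np.allclose(A.T @ J @ A, J)

assert is_symplectic(A_ST)
for tau in (0.0, 0.3, 0.5, 1.0):
    assert is_symplectic(A_tau(tau))
print("A_ST and A_tau are symplectic")
```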
We define a A-pseudodifferential operator related to the A-Wigner representation the mapping Op(σ A ) : S(R d ) → S ′ (R d ) given by (94) Op(σ A )f, g = σ A , W A (g ⊗ f ) , where the A-Wigner is defined in (81) and the symbol σ A is given by (95) σ A = µ(A)K, with K integral kernel of the operator. In this paper we limit ourselves to the case A = A τ . In the sequel we shall also write for short (96) W τ (F ) := µ(A τ )F, F ∈ L 2 (R 2d ). For F = f ⊗ḡ, f, g ∈ L 2 (R d ) we come back to the cross-τ -Wigner distribution W τ (f ⊗ḡ) = W τ (f, g). Note that the definition of τ -operators in (23) of the Introduction is now a particular case of (94), in view of (93). In detail we have (97) Op τ (σ)f (x) := Op(σ Aτ )f (x) = R 2d e 2πi(x−y)ξ σ((1 − τ )x + τ y, ξ)f (y)dydξ. Lemma 4.1. For τ ∈ [0, 1], a ∈ S ′ (R 2d ), f, g ∈ S(R d ), we have (98) (Op τ (a)f ) ⊗ḡ = Op τ (σ)(f ⊗ḡ), with (99) σ(r, y, ρ, η) = a(r, ρ) ⊗ 1 (y,η) , r, ρ, y, η ∈ R d , and 1 (y,η) ≡ 1, for every (y, η) ∈ R 2d . Besides, the result is still valid if we replace S ′ , S by the modulation spaces M ∞ , M 1 . Proof. The operators are well defined by the Schwartz' kernel theorem and the equality in (98) is a straightforward computation. The case of modulation spaces is analogous, one has to use the kernel theorem for modulation spaces, cf. [16,Sec. 3.3]. In what follows we need the inverse matrix A −1 τ of A τ , that can be easily computed as (100) A −1 τ =     I d×d 0 d×d 0 d×d −τ I d×d I d×d 0 d×d 0 d×d (1 − τ )I d×d 0 d×d I d×d (1 − τ )I d×d 0 d×d 0 d×d −I d×d τ I d×d 0 d×d     . Lemma 4.2. For τ ∈ [0, 1] consider the matrix (101) N τ = I 2d×2d 0 2d×2d C τ I 2d×2d where C τ = (τ − 1/2) 0 d×d I d×d I d×d 0 d×d . Observe that C T τ = C τ and N τ ∈ Sp(2d, R). Moreover, (i) N −1 τ = N 1−τ (ii) For f ∈ L 2 (R 2d ), µ(N τ )f = e −2πi(τ −1/2)Φ f , with Φ(x, ξ) = xξ, x, ξ ∈ R d . (iii) For f ∈ L 2 (R 2d ), µ(N −T τ )f = F −1 e −2πi(τ −1/2)Φ F f . Proof. Item (i) is a simple computation. 
Item (ii) follows by formula (38). Let us prove Item (iii). From the definition of a symplectic matrix we get $N_\tau^{-T} = J^{-1} N_\tau J$. Applying the metaplectic representation and using (37), we obtain $\mu(N_\tau^{-T}) = \mathcal{F}^{-1}\mu(N_\tau)\mathcal{F}$, which gives the claim.

Theorem 4.3. Consider $a \in \mathcal{S}'(\mathbb{R}^d)$. Then for every $f, g \in \mathcal{S}(\mathbb{R}^d)$ and $\tau_1, \tau_2 \in [0,1]$ we have

(102) $W_{\tau_1}(Op_{\tau_2}(a)f, g) = Op_{\tau_2}(b)\, W_{\tau_1}(f,g),$

where

(103) $b = \mu(N_{\tau_2}^T)\,[(\mu(N_{\tau_2}^{-T})\sigma) \circ A_{\tau_1}^{-1}],$

where $N_{\tau_2} \in Sp(2d,\mathbb{R})$ is defined in (101) and $\sigma$ in (99). In particular, for $\tau_2 = 1/2$, we write $Op_w := Op_{1/2}$ and the equality in (102) becomes

(104) $W_{\tau_1}(Op_w(a)f, g) = Op_w(b)\, W_{\tau_1}(f,g),$

where

(105) $b(x,\xi,u,v) = \sigma(A_{\tau_1}^{-1}(x,\xi,u,v)) = a(x - \tau_1 v,\, \xi + (1-\tau_1)u), \quad x,\xi,u,v \in \mathbb{R}^d.$

To be definite about the notation for variables in Theorem 4.3 and the subsequent proof: the linear map $A_{\tau_1}$ acts from $(r,y)$, with respective dual variables $(\rho,\eta)$, to $(x,\xi)$ with dual variables $(u,v)$. By standard Weyl quantization, the right-hand side of (104) reads

$Op_w(b)\, W_{\tau_1}(f,g)(x,\xi) = a\!\left(x - \tfrac{1}{2\pi}\tau_1 D_\xi,\ \xi + \tfrac{1}{2\pi}(1-\tau_1) D_x\right) W_{\tau_1}(f,g).$

As a particular case, we obtain the τ-Moyal operators, $\tau \in [0,1]$:

$W_\tau(x_j f, g) = \left(x_j - \tfrac{1}{2\pi}\tau D_{\xi_j}\right) W_\tau(f,g), \qquad W_\tau(D_{x_j} f, g) = \left(2\pi\xi_j + (1-\tau) D_{x_j}\right) W_\tau(f,g),$

cf. (16) and (17) for $\tau = 1/2$.

Proof. We use the metaplectic operator defined in (96) for $\tau = \tau_1$ and Lemma 4.1 for $\tau = \tau_2$ to write

(106) $W_{\tau_1}(Op_{\tau_2}(a)f, g) = W_{\tau_1}((Op_{\tau_2}(a)f) \otimes \bar g) = W_{\tau_1}(Op_{\tau_2}(\sigma)(f \otimes \bar g)) = \mu(A_{\tau_1})\, Op_{\tau_2}(\sigma)(f \otimes \bar g).$

From now on we split into the two cases $\tau_2 = 1/2$ and $\tau_2 \neq 1/2$. In fact, when $\tau_2 = 1/2$ we can use the covariance property for Weyl operators. Such property is well known and enjoyed only by Weyl operators; we remark that it does not hold for the other τ-pseudodifferential operators, see for instance [47] or [52] and the recent contribution [29]. Namely, for any $\mathcal{A} \in Sp(d,\mathbb{R})$,

$\mu(\mathcal{A})\, Op_\tau(\sigma)\, \mu(\mathcal{A})^{-1} = Op_\tau(\sigma \circ \mathcal{A}^{-1}) \iff \tau = 1/2.$
That is why we need to study the two cases above separately. Let us start with the easy one: $\tau_2 = 1/2$. Using relation (106), Lemma 4.1 and the covariance property above,

$W_{\tau_1}(Op_w(a)f, g) = \mu(A_{\tau_1})\, Op_w(\sigma)(f \otimes \bar g) = Op_w(\sigma \circ A_{\tau_1}^{-1})\, \mu(A_{\tau_1})(f \otimes \bar g) = Op_w(\sigma \circ A_{\tau_1}^{-1})\, W_{\tau_1}(f,g).$

Now, by (99) and using the inverse matrix (100) for $\tau = \tau_1$,

$\sigma \circ A_{\tau_1}^{-1}(x,\xi,u,v) = \sigma(A_{\tau_1}^{-1}(x,\xi,u,v)) = a(x - \tau_1 v,\, \xi + (1-\tau_1)u).$

For $\tau_2 \neq 1/2$, using (106) and the relation (see, e.g., [16, (4.37)]), written for arbitrary $\tau_1, \tau_2$:

$Op_{\tau_1}(a_1) = Op_{\tau_2}(a_2) \iff \hat a_2(\zeta_1,\zeta_2) = e^{-2\pi i(\tau_2-\tau_1)\zeta_1\zeta_2}\, \hat a_1(\zeta_1,\zeta_2),$

we infer

(107) $Op_w(\sigma_{1/2}) = Op_\tau(\sigma_\tau) \iff \sigma_{1/2} = \mu(N_\tau^{-T})\sigma_\tau,$

where the symplectic matrix $N_\tau$ is defined in (101). Hence

$W_{\tau_1}(Op_{\tau_2}(a)f, g) = \mu(A_{\tau_1})\, Op_{\tau_2}(\sigma)(f \otimes \bar g) = \mu(A_{\tau_1})\, Op_w(\mu(N_{\tau_2}^{-T})\sigma)(f \otimes \bar g) = Op_w((\mu(N_{\tau_2}^{-T})\sigma) \circ A_{\tau_1}^{-1})\, \mu(A_{\tau_1})(f \otimes \bar g) = Op_w((\mu(N_{\tau_2}^{-T})\sigma) \circ A_{\tau_1}^{-1})\, W_{\tau_1}(f,g) = Op_{\tau_2}(\mu(N_{\tau_2}^T)[(\mu(N_{\tau_2}^{-T})\sigma) \circ A_{\tau_1}^{-1}])\, W_{\tau_1}(f,g),$

where in the last row we used (107) for $\tau = \tau_2$. Note that for $\tau_2 = 1/2$ we obtain $N_{1/2} = I_{2d\times 2d}$, the identity matrix, and $\mu(N_{1/2}) = \mu(N_{1/2}^{-T}) = I$, the identity operator, so that the Weyl symbol $b$ in (105) can be inferred from (103) when $\tau_2 = 1/2$.

Observe that the representation $W_A$ in (81) is a sesquilinear form $L^2(\mathbb{R}^d) \times L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^{2d})$.

Proposition 4.3. Consider $A \in Sp(2d,\mathbb{R})$; then the representation $W_A$ in (81) is a sesquilinear form from $\mathcal{S}(\mathbb{R}^d) \times \mathcal{S}(\mathbb{R}^d)$ to $\mathcal{S}(\mathbb{R}^{2d})$.

Proof. We recall that the symplectic group is generated by the so-called free symplectic matrices [28], and thus every metaplectic operator is the product of metaplectic operators associated to free symplectic matrices, which reduce to Fourier transforms, multiplications by chirps, and linear changes of variables. All the operators aforementioned are bounded operators on the Schwartz class. This gives the claim.

Proposition 4.4 (Covariance Property).
Consider A ∈ Sp(2d, R) having block decomposition A =     A 11 A 12 A 13 A 14 A 21 A 22 A 23 A 24 A 31 A 32 A 33 A 34 A 41 A 42 A 43 A 44     with A ij , i, j = 1, . . . , 4, d × d real matrices. Then the representation W A in (81) is covariant, namely (108) W A (π(z)f ) = T z W A f, f ∈ S(R d ), z ∈ R 2d , if and only if A is of the form (109) A =     A 11 I d×d − A 11 A 13 A 13 A 21 −A 21 I d×d − A T 11 A T 11 A 31 −A 31 A 33 A 33 A 41 −A 41 A 43 A 43     . The result does not depend on the choice of the phase factor in the definition of µ(A) and W A in (81). Proof. For z = (z 1 , z 2 ) ∈ R 2d , we use the intertwining property (see e.g. Formula (1.10) in [16]) π(Az) = c A µ(A)π(z)µ(A) −1 , z ∈ R 2d where c A is a phase factor: |c A | = 1. For z = (z 1 , z 2 ) ∈ R 2d we can write W A (π(z 1 , z 2 )f ) = µ(A)[π(z 1 , z 1 , z 2 , −z 2 )(f ⊗f )] = c −1 A π(A(z 1 , z 1 , z 2 , −z 2 ))W A f. Hence W A is covariant if and only if (110) π(A(z 1 , z 1 , z 2 , −z 2 )) = c A π(z 1 , z 2 , 0, 0), ∀ (z 1 , z 1 ) ∈ R 2d , where we used T (z 1 ,z 2 ) = π(z 1 , z 2 , 0, 0). The equality in (110) yields A(z 1 , z 1 , z 2 , −z 2 ) = (z 1 , z 2 , 0, 0) for every z 1 , z 2 ∈ R d . The last equality and the symplectic properties of A (see, e.g. [16, (1.4)-(1.9)]) give the claim. The special form of the symplectic matrix A in (109) guarantees the membership of W A in the Cohen's class. To compute the corresponding kernel, we begin to write (92) and (100) for τ = 1/2, so that according to A = AA −1 1/2 A 1/2 where A 1/2 , A −1 1/2 are defined as in(93) we have µ(A 1/2 )(f ⊗f ) = W f . Then (111) W A f = µ(A)(f ⊗f ) = µ(AA −1 1/2 )µ(A 1/2 )(f ⊗f ) = µ( A)W f where A = AA −1 1/2 is given by (112) A = I 2d×2d B 0 I 2d×2d with (113) B = A 13 1 2 I d×d − A 11 1 2 I d×d − A T 11 −A 21 . In the computation of A we took advantage from the fact that A is symplectic, hence preserving the symplectic form. 
Applying Proposition 2.1 to A ∈ Sp(2d, R) with B as in (113) and variables z = (x, ξ), ζ = (u, v), we obtain (modulo phase factors) (114) µ( A)F (z) = R 2d e 2πiS(z,ζ) F (ζ)dζ where (115) S(z, ζ) = zζ + 1 2 ζ · Bζ. Hence we conclude from (111): Theorem 4.6. Let A ∈ Sp(2d, R) be of the form (109). Then (116) W A f = W f * σ where (117) σ(z) = F −1 ζ→z (e −πiζ·Bζ ) ∈ S ′ (R 2d ) , and B defined in (113). By using Proposition 2.2 we have: (119) B = 0 d×d (τ − 1 2 )I d×d (τ − 1 2 )I d×d 0 d×d , that provides in (117) (120) σ τ (x, ξ) = F −1 u→x,v→ξ (e −2πi(τ −1/2)uv ), and we recapture the kernel (24) in the Introduction. A more general example is given by W A with µ(A) as in (82). Under the covariance assumption (108), the matrix B is again anti-diagonal and (121) σ(x, ξ) = F −1 u→x,v→ξ (e −2πiu·M v ) for a suitable M ∈ GL(d, R), see [17] and [2] for details. Example 4.9. In Part II of this paper the A-Wigner representations will play a basic role in the study of Schrödinger equations. In fact, starting from the τ -Wigner distribution of the initial datum, the representation of the evolved solution will require general Cohen's classes, outside those in (120). Consider here, as example, the free particle equation (122) i∂ t u + ∆u = 0, u(0, x) = u 0 (x), with (t, x) ∈ R × R d , d ≥ 1. For the solution u(t, x) we have from Wigner [51] (123) W u(t, x, ξ) = W u 0 (x − 4πtξ, ξ). Looking for a generalization of (123) to the case of W τ , τ ∈ (0, 1), we may obtain by a direct computation (124) W τ,t u(t, x, ξ) = W τ,t u 0 (x − 4πtξ, ξ), where the representation W τ,t is of Cohen class: (125) W τ,t f = W f * σ τ,t , (126) σ τ,t (x, ξ) = σ τ (x + 4πtξ, ξ), with σ τ defined in (24). We may write W τ,t in the form of an A-Wigner representation, with A easily computed and (127) µ(A)F (x, ξ) = R d e −2πi(yξ+2πt(1−2τ )y 2 ) F (x+τ y, x−(1−τ )y) dy = F 2 M τ,t T τ F (x, ξ), where M τ,t is the operator of multiplication by the chirp e 2πit(1−2τ )y 2 and T τ as in Definition 1.7. 
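The transport formula (123) for the free particle can be tested numerically in $d = 1$. For the Gaussian datum $u_0(x) = e^{-\pi x^2}$ the solution of (122) has the closed Gaussian form used below (a standard Fourier computation, not taken from the text); the Wigner distributions on both sides of (123) are then evaluated by quadrature at a few sample points.

```python
import numpy as np

# Free particle i u_t + u_xx = 0 with Gaussian datum u0(x) = e^{-pi x^2}.
# Closed-form solution via Fourier multiplier (standard Gaussian integral):
a = lambda t: 1 + 4j * np.pi * t
u = lambda t, x: a(t)**(-0.5) * np.exp(-np.pi * x**2 / a(t))

y = np.arange(-12, 12, 0.005)

def wigner(F, x, xi):
    """W F(x, xi) = int F(x + y/2) conj(F(x - y/2)) e^{-2 pi i y xi} dy."""
    vals = F(x + y/2) * np.conj(F(x - y/2)) * np.exp(-2j * np.pi * y * xi)
    return np.sum(vals) * 0.005

t = 0.1
for (x, xi) in [(0.2, 0.3), (-0.5, 0.1), (1.0, -0.4)]:
    lhs = wigner(lambda s: u(t, s), x, xi)
    rhs = wigner(lambda s: u(0, s), x - 4*np.pi*t*xi, xi)   # right-hand side of (123)
    assert abs(lhs - rhs) < 1e-6
print("Wigner transport W u(t, x, xi) = W u0(x - 4 pi t xi, xi) verified")
```

The agreement confirms the coefficient $4\pi t$ in (123) for this normalization of the Fourier transform.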
Almost-diagonalization and wave front sets In this section we first study the action of τ -Wigner representations on Weyl operators, then we introduce the τ -Wigner wave front set and provide the almost diagonalization results. Finally, we compare this new wave front set with the classical Hörmander's global wave front set. 1⊗vs (R 2d ), containing S 0 0,0 (R 2d ). For a symbol a ∈ M ∞,1 1⊗vs (R 2d ), s ≥ 0, we begin to define, for r, y, ρ, η ∈ R d , (128) σ(r, y, ρ, η) = a(r, ρ) ⊗ 1 (y,η) ,σ(r, y, ρ, η) = 1 (r,ρ) ⊗ a(y, −η), and, for τ ∈ [0, 1], A −1 τ as in (100) b(x, ξ, u, v) = (σ • A −1 τ )(x, ξ, u, v),(129)b (x, ξ, u, v) = (σ • A −1 τ )(x, ξ, u, v),(130)c(x, ξ, u, v) = b(x, ξ, u, v)b(x, ξ, u, v).(131) In particular, for τ = 1/2, we infer b(x, ξ, u, v) = a(x − v 2 , ξ + u 2 ),b(x, ξ, u, v) =ā(x + v 2 , ξ − u 2 ), c(x, ξ, u, v) = a(x − v 2 , ξ + u 2 )ā(x + v 2 , ξ − u 2 ).(132) We need to show that the symbols above are in M ∞,1 1⊗vs (R 4d ). This is done in the lemma below. Lemma 5.1. Assume a ∈ M ∞,1 1⊗vs (R 2d ), s ≥ 0, τ ∈ [0, 1]. Then, (i) The symbol b = σ • A −1 τ in (129) belongs to M ∞,1 1⊗vs (R 4d ). (ii)The symbolb =σ • A −1 τ in (130) belongs to M ∞,1 1⊗vs (R 4d ). (iii) The product symbol c = bb in (131) is in M ∞,1 1⊗vs (R 4d ). Proof. (i) Let us show that σ ∈ M ∞,1 1⊗vs (R 4d ). First, the constant function 1 (y,η) is in M ∞,1 1⊗vs (R 2d ). In fact, taking a non-zero window G ∈ S(R 2d ), V G 1 (y,η) (u 1 , u 2 ) = F (T u 1 G)(u 2 ) = M −u 1 G(u 2 ) and 1 (y,η) M ∞,1 1⊗vs ≍ V G 1 (y,η) L ∞,1 1⊗vs = G L 1 vs < ∞ since S(R 2d ) ֒→ L 1 vs (R 2d ) for every s ≥ 0. Second, observe that a ⊗ 1 (y,η) ∈ M ∞,1 1⊗vs (R 4d ), as can be easily checked by taking the window function G 1 (r, ρ) ⊗ G 2 (y, η) ∈ S(R 4d ) for any G 1 , G 2 ∈ S(R 2d ) \ {0}, noting that V G 1 ⊗G 2 (a ⊗ 1 (y,η) )(z 1 , z 2 , ζ 1 , ζ 2 ) = V G 1 a(z 1 , ζ 1 )V G 2 1 (y,η) (z 2 , ζ 2 ), z 1 , z 2 , ζ 1 , ζ 2 ∈ R 2d and v s (ζ 1 , ζ 2 ) ≤ v s (ζ 1 )v s (ζ 2 ). 
We now focus on b = σ • A −1 τ . For any τ ∈ [0, 1] we have b ∈ M ∞,1 1⊗vs (R 4d ). In fact, it is well known that affine transformations leave the modulation spaces M p,q invariant. The first result for the Sjöstrand class M ∞,1 (R 2d ) was shown by Sjöstrand himself in [45]. More general contributions involving all modulation spaces are contained in [40] and [43]. However, there is nothing in the framework of weighted modulation spaces. That is why we will prove the previous statement. For any fixed non-zero Φ in S(R 4d ), an easy computation shows that, for every z, ζ ∈ R 4d , writing Φ Aτ := Φ • A τ , V Φ b(z, ζ) = V Φ (σ • A −1 τ )(z, ζ) = | det A τ | V Φ Aτ σ(A −1 τ z, A −1 τ ζ) = V Φ Aτ σ(A −1 τ z, A −1 τ ζ) since A τ is a symplectic matrix. Now b M ∞,1 1⊗vs ≍ V Φ b L ∞,1 1⊗vs = R 4d sup z∈R 4d |V Φ Aτ σ(A −1 τ z, A −1 τ ζ)|v s (ζ)dζ = R 4d sup z∈R 4d |V Φ Aτ σ(z, ζ)|v s (A τ ζ)dζ ≍ R 4d sup z∈R 4d |V Φ Aτ σ(z, ζ)|v s (ζ)dζ ≍ σ M ∞,1 1⊗vs < ∞ since |A τ ζ| ≍ |ζ|. (ii) One can show thatb ∈ M ∞,1 1⊗v s (R 4d ) by using a similar pattern as in the previous stage (i). (iii) We use the product properties for modulation spaces (see [16,Proposition 2.4.23]) to infer, for c = bb, c M ∞,1 1⊗vs b M ∞,1 1⊗vs b M ∞,1 1⊗vs . This concludes the proof. Let us recall the following boundedness result, see [16,Theorem 4.4.15] (cf. also the early work by Gröchenig and Heil [31]). Lemma 5.2. If σ ∈ M ∞,1 1⊗vs (R 2n ) then Op w (σ) : S(R n ) → S ′ (R n ) extends to a bounded operator on M p vs (R n ), 1 ≤ p ≤ ∞, s ≥ 0. This result is actually valid for much more general modulation spaces, cf. [16]. In the sequel the dimension n will be fixed as d or 2d. It will be also convenient to recall from Corollary 3.17 the implication for 1 ≤ p ≤ ∞, s ≥ 0, (133) f, g ∈ M p vs (R d ) ⇒ W τ f, W τ (f, g) ∈ M p vs (R 2d ). Theorem 5.1. Consider τ ∈ [0, 1], s ≥ 0, a symbol a ∈ M ∞,1 1⊗vs (R 2d ), functions f, g ∈ M p vs (R d ), 1 ≤ p ≤ ∞. 
Then, for b,b, c as in (129), (130), (131) respectively, we have the following identities in M p vs (R 2d ): W τ (Op w (a)f, g) = Op w (b)W τ (f, g),(134)W τ (f, Op w (a)g) = Op w (b)W τ (f, g),(135)W τ (Op w (a)f ) = Op w (c)W τ f.(136) Proof. From Lemma 5.2 we deduce that Op w (a)f ∈ M p vs (R d ) for any f ∈ M p vs (R d ). Then the (cross-)τ -Wigner distributions W τ f , W τ (f, g), W τ (Op w (a)f, g), W τ (f, Op w (a)g), and W τ (Op w (a)f ) are in M p vs (R 2d ) in view of the implication in (133). Furthermore, Op w (b)W τ (f, g), Op w (b)W τ (f, g), Op w (c)W τ f in the right-hand side of (134), (135), (136) belong to M p vs (R 2d ) in view of Lemmas 5.1 and 5.2. Since the Schwartz class S in dense in M p vs , by Lemmas 5.1 and 5.2 it will be sufficient to prove the identities (134), (135) and (136) for f, g ∈ S(R d ). Now, (134) is already proved, cf. (104) and (105). The equality in (135) is obtained arguing as in the proof of Theorem 4.3. Namely, we write (137) W τ (f, Op w (a)g) = W τ (f ⊗ Op w (a)g), where W τ is defined in (96). Observe that (138) Op w (a)g = Op w (a * )ḡ, where a * (y, η) =ā(y, −η). Then by (137) W τ (f ⊗ Op w (a)g) = W τ (f ⊗ Op w (a * )ḡ) = W τ (Op w (σ)(f ⊗ḡ)) (139) = µ(A τ )Op w (σ)(f ⊗ḡ). By the covariance property and using the inverse matrix in (100) we conclude (140) W τ (f, Op w (a)g) = Op w (σ • A −1 τ )W τ (f, g) = Op w (b)W τ (f, g) , whereb is defined in (130). Hence (135) is proved. As for (136), we may apply repeatedly (134) and (135), obtaining W τ (Op w (a)f ) = W τ (Op w (a)f, Op w (a)f ) = Op w (b)Op w (b)W τ (f, g) (141) = Op w (b)Op w (b)W τ (f, g). So Op w (b) and Op w (b) commute. It is not clear whether the symbol of their product is simply given by the product of the respective symbols. Concerning this, we may argue as before to obtain (142) W τ (Op w (a)f ) = W τ (Op w (a)f ⊗ Op w (a)f ) = W τ (Op w (λ)(f ⊗f )), where now (143) λ(r, y, ρ, η) = a(r, ρ) ⊗ā(y, −η). 
Since (144) W_τ(Op^w(λ)(f ⊗ ḡ)) = μ(A_τ) Op^w(λ)(f ⊗ ḡ) = Op^w(λ ∘ A_τ^{-1}) W_τ(f, g) and (145) (λ ∘ A_τ^{-1})(x, ξ, u, v) = c(x, ξ, u, v), with c as in (131), we conclude (136): W_τ(Op^w(a)f) = Op^w(c) W_τ f. The proof is completed.

5.2. τ-Wigner wave front set. We can now prove Theorems 1.4 and 1.6 in the Introduction. As before, we shall argue in the more general setting of the τ-Wigner representations and extend Definition 1.5 as follows.

Definition 5.2. Let f ∈ L²(R^d), 0 < τ < 1. We define W F_τ(f), the τ-Wigner wave front set of f, by setting z₀ ∉ W F_τ(f), z₀ ∈ R^{2d} \ {0}, if there exists a conic open neighbourhood Γ_{z₀} ⊂ R^{2d} of z₀ such that for every integer N ≥ 0
∫_{Γ_{z₀}} |z|^{2N} |W_τ f(z)|² dz < ∞.

We shall limit ourselves to symbols in the smooth Hörmander class S^0_{0,0}(R^{2d}), the intersection of the modulation spaces
S^0_{0,0}(R^{2d}) = ∩_{s≥0} M^∞_{1⊗v_s}(R^{2d}) = ∩_{s≥0} M^{∞,1}_{1⊗v_s}(R^{2d}),
cf. [1, 32]. Observe that Lemmas 5.1 and 5.2 apply obviously to this class. As for the functional frame, in the statement of Theorem 1.4 we may refer to bounded operators on M^p_{v_s}, 1 ≤ p ≤ ∞, s ≥ 0, whereas in Theorem 1.6 we shall use the preceding Definition 5.2. From Theorem 5.1 we obtain
(146) W(Op^w(a)f)(z) = ∫_{R^{2d}} k(z, w) W f(w) dw,
where
(147) k(z, w) = ∫_{R^{2d}} e^{2πi(z−w)ζ} c((z + w)/2, ζ) dζ,
with c defined as in (131), z = (x, ξ), ζ = (u, v), w = (y, η). We shall apply for n = 2d the following results, valid in any dimension n.

Lemma 5.3. Let c(z, ζ) be a symbol in S^0_{0,0}(R^{2n}), z, ζ ∈ R^n. If k denotes the kernel of Op^w(c), then for any integer N ≥ 0,
(148) k_N := ⟨z − w⟩^{2N} k(z, w)
is the kernel of an operator Op^w(c_N) with c_N ∈ S^0_{0,0}(R^{2n}). Moreover, assume χ, φ ∈ C^∞(R^n) with bounded derivatives of any order and support in two disjoint open cones in R^n \ {0} for large z. Then, for every integer L ≥ 0 the operator P_L with kernel
(149) k̃_L(z, w) = χ(z) ⟨z⟩^L k(z, w) φ(w)
is bounded on L²(R^n).

Proof.
We have k N (z, w) = e 2πi(z−w)ζ z − w N c z + w 2 , ζ dζ. Writing z − w 2N e 2πi(z−w)ζ = 1 − 1 (2π) 2 ∆ ζ N e 2πi(z−w)ζ and integrating by parts we obtain (150) k N (z, w) = e 2πi(z−w)ζ c N z + w 2 , ζ dζ with (151) c N = 1 − 1 (2π) 2 ∆ ζ N c. Observe that c N ∈ S 0 0,0 (R 2n ) and k N is the kernel of Op w (c N ). This proves the first part of the lemma. For the second one, using (148) we may writẽ k(z, w) = χ(z) z L z − w −2N k N (z, w)ϕ(w) and therefore from (150) k L (z, w) = e 2πi(z−w)ζ d L,N (z, w, ζ) dζ where (152) d L,N (z, w, ζ) = χ(z) z L z − w −2N c N z + w 2 , ζ ϕ(w), with c N as in (151). The integer N will be chosen later. The operator P L with kernelk L can be regarded as a pseudodifferential operator defined in terms of the amplitude (152): P L f (z) = e 2πi(z−w)ζ d L,N (z, w, ζ)f (w) dwdζ. Now, observe that d L,N ∈ S 0 0,0 (R 3n ), that is (153) |∂ α z ∂ β w ∂ γ ζ d L,N (z, w, ζ)| ≤ c α,β,γ , (z, w, ζ) ∈ R 3n . In fact, for z ∈ suppχ and w ∈ supp ϕ we have (154) z z − w , so that choosing N ≥ L we prove that d L,N in (152) is bounded. Possibly enlarging N and using again (154) we obtain easily the estimates (153) for every α, β, γ. We may apply to P L the generalized version of the Calderón-Vaillancourt theorem in [6] where the L 2 -boundedness was proved for operators defined by amplitudes satisfying the estimates (153) for a suitable finite set of α, β, γ. This concludes the proof. Proof of Theorem 1.4 and Theorem 1.6. Applying the first part of Lemma 5.3 to c ∈ S 0 0,0 (R 4d ) and k(z, w) in (147), we deduce that z − w 2N k(z, w) is the kernel of an operator bounded on M p vs (R 2d ), 1 ≤ p ≤ ∞, s ≥ 0, in particular on L 2 (R 2d ). Hence Theorem 1.4 is proved. To obtain the inclusion (21) in Theorem 1.6, for f ∈ L 2 (R d ), we assume z 0 / ∈ W F τ (f ), z 0 = 0, and prove z 0 / ∈ W F τ (Op w (a)f ). In Theorem 5.3. If f ∈ L 2 (R d ) and W F τ (f ) = ∅, for some τ ∈ (0, 1), then f ∈ S(R d ). 
To prove the above issue we will need the following preliminary result. Lemma 5.4. For τ ∈ [0, 1] and g ∈ S(R d ) \ {0}, define Ψ τ = IW τ g. Then for every f ∈ L 2 (R d ): (157) |V g f | 2 = Ψ τ * W τ f Proof. We apply Lemma 3.1 with g = ϕ 1 = ϕ 2 and ζ = 0. By viewing V Φτ W τ f (z, 0) as the convolution Ψ τ * W τ f (z) we obtain (157). Proof of Theorem 5.3. Assume W F τ (f ) = ∅. Then, from the compactness of the sphere S 2d−1 , we have for every N (158) W τ f 2 L 2 v N = R 2d z 2N |W τ f (z)| 2 dz < ∞. Let us prove that the validity of (158) for every N implies for all M (159) |V g f | z −M , z ∈ R 2d , for every fixed g ∈ S(R d ), say g the Gaussian; that is, in terms of modulation [16, (2.28)]), we shall conclude f ∈ S(R d ). spaces, f ∈ M ∞ v M (R d ). Since ∩ M ≥0 M ∞ v M = S(R d ) (cf. To deduce (159) we apply Lemma 5.4: |V g f | 2 ≤ R 2d |Ψ τ (z − ζ)||W τ f (ζ)| dζ R 2d z − ζ −2M |W τ f (ζ)| dζ, for every M ≥ 0, since Ψ τ ∈ S(R 2d ). By Peetre's inequality z − ζ −2M z −2M ζ 2M , hence by Schwartz' inequality |V g f | 2 z −2M R 2d ζ −2d−1 ζ 2M +2d+1 |W τ f (ζ)| dζ z −2M W τ f L 2 v 2M +2d+1 and (159) follows from (158). 5.3. Comparison with the Hörmander's global wave front set. The Wigner wave front certainly deserves a more detailed study. Here we shall limit to a comparison with the wave front set W F G introduced by Hörmander [34] as global version of the standard microlocal wave front, cf. [35]. Under the name of Gabor wave front set, and other different names, W F G has recently had several applications, see [41] for a survey. Following the notation and equivalent definition in [42], we recall that z 0 = (x 0 , ξ 0 ) / ∈ W F G (f ), for f ∈ L 2 (R d ), z 0 = 0, if there exists an open conic neighbourhood Γ z 0 ⊂ R 2d such that for every N ≥ 0 (160) Γz 0 |z| 2N |V g f (z)| 2 dz < ∞, where V g f is the STFT of f defined in (3). The definition of W F G (f ) does not depend on the choice of the window g ∈ S(R d ) \ {0}. f ⊂ [a, b], with a, b ∈ R, a < b. 
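In the case τ = 1/2 with an even real window g, the kernel Ψ_{1/2} = I W g of Lemma 5.4 equals W g, and the lemma reduces to the classical identity |V_g f|² = W g * W f (the spectrogram as a smoothed Wigner distribution). This can be tested numerically; the sketch below is not from the paper, and the window, test function, grids and tolerances are ad-hoc choices.

```python
import numpy as np

def stft(f, g, x, xi, T=10.0, n=4001):
    # V_g f(x, xi) = int f(t) * conj(g(t - x)) * exp(-2 pi i t xi) dt.
    t = np.linspace(-T, T, n)
    vals = f(t) * np.conj(g(t - x)) * np.exp(-2j * np.pi * t * xi)
    return np.sum(vals) * (t[1] - t[0])

g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)   # even Gaussian window
a = 0.6
f = lambda t: g(t - a)                              # shifted copy of the window

Wg = lambda x, xi: 2 * np.exp(-2 * np.pi * (x ** 2 + xi ** 2))  # W g in closed form
Wf = lambda x, xi: Wg(x - a, xi)                    # covariance of W under translation

# Lemma 5.4 with tau = 1/2 and even window: |V_g f|^2 = (W g) * (W f).
z = (1.0, 0.3)
lhs = abs(stft(f, g, *z)) ** 2

s = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(s, s)
h = s[1] - s[0]
rhs = np.sum(Wg(X, Y) * Wf(z[0] - X, z[1] - Y)) * h * h  # 2d Riemann sum

assert abs(lhs - np.exp(-np.pi * ((z[0] - a) ** 2 + z[1] ** 2))) < 1e-6
assert abs(rhs - lhs) < 1e-3
```

Both sides reduce to e^{−π((x−a)²+ξ²)} for this pair of Gaussians, which the first assertion checks directly.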
From the standard support property for Wigner transform, we have that supp W f is included in the strip {(x, ξ) ∈ R 2 , a ≤ x ≤ b}, hence W F (f ) ⊂ {(x, ξ), x = 0} in view of Definition 1.5. Same inclusion is valid for W F G (f ), see for example Section 6.6.4 in [16]. Similarly, we may consider g ∈ L 2 (R) with compactly supportedĝ and obtain that both wave fronts are included in the x axis {(x, ξ), ξ = 0}. It is easy to prove that we have the identities (164) W F (f ) = W F G (f ), W F (g) = W F G (g). The situation changes drastically if we consider the sum f + g. In fact, by linearity (160) gives (165) W F G (f + g) = W F G (f ) ∪ W F G (g), and W F G (f +g) is the union of the axes in R 2 for suitable f, g ∈ L 2 (R d ), see below. Instead, (166) W (f + g) = W f + W g + 2ReW (f, g), and the cross-Wigner term may produce an additional ghost part of the wave front, according to the presence of the so-called ghost frequencies in Signal Theory [4]. To be definite, fix f ∈ C ∞ (R \ {0}), 0 ≤ f (x) ≤ 1, f (x) = 1 for −1/2 ≤ x < 0 and f (x) = 0 for x ≤ −1 and x > 0. For g ∈ L 2 (R) we take g = −2πf . Let us test (20) for small Γ z 0 with z 0 outside the axes and N = 2. It will be sufficient to consider xξW (f, g)(x, ξ). An easy computation by Moyal operators gives (167) 8πxξW (f, g) = W (xDf, g) + W (f, xDg) + W (xf, Dg) + W (Df, xg). Differentiating f in the distributions' framework gives (168) Df = iδ + if ′ , with f ′ ∈ C ∞ (R) with compact support, hence (169) xg = −2πxf = − Df = −i − ih, with h ∈ S(R). From the definition of f, g and from (168), (169) we deduce that xf, Dg, xDf, xDg belong to L 2 (R), therefore by the Moyal L 2 identity all the terms in the right-hand side of (167) are in L 2 (R 2 ), but the term W (Df, xg). On the other hand, in view of (168), (169), (170) W (Df, xg) = W (δ, 1) + W (δ, h) + W (f ′ , 1) + W (f ′ , h). 
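The ghost contribution of the cross term in (166) can be made explicit for a sum of two separated Gaussian bumps: a direct Gaussian integral (ours, not the paper's; the choice f = g(· − a) + g(· + a) with a Gaussian g is an illustrative assumption) gives, on the midline x = 0, W f(0, ξ) = 4 e^{−2πξ²}(cos(4πaξ) + e^{−2πa²}), so the cross term oscillates with O(1) amplitude while the two quadratic terms are only O(e^{−2πa²}). A numerical sketch confirming this closed form:

```python
import numpy as np

def wigner(f, x, xi, T=12.0, n=4001):
    # Direct quadrature for W f(x, xi) on an equispaced grid.
    t = np.linspace(-T, T, n)
    vals = f(x + t / 2) * np.conj(f(x - t / 2)) * np.exp(-2j * np.pi * t * xi)
    return np.sum(vals) * (t[1] - t[0])

g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)
a = 1.0
f = lambda t: g(t - a) + g(t + a)   # two well-separated bumps

# Closed form on the midline x = 0 (derived by completing squares):
# W f(0, xi) = 4 exp(-2 pi xi^2) * (cos(4 pi a xi) + exp(-2 pi a^2)).
for xi in (0.0, 0.125, 0.25):
    pred = 4 * np.exp(-2 * np.pi * xi ** 2) * (np.cos(4 * np.pi * a * xi)
                                               + np.exp(-2 * np.pi * a ** 2))
    assert abs(wigner(f, 0.0, xi) - pred) < 1e-6

# At xi = 0.25 the cross term makes the Wigner distribution negative: a "ghost".
assert wigner(f, 0.0, 0.25).real < 0
```

The negativity at ξ = 0.25 is produced entirely by the interference term 2 Re W(g_a, g_{−a}); neither bump alone contributes appreciably there.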
Since h, f ′ ∈ S(R), then W (f ′ , h) ∈ S(R 2 ), and W (δ, h), W (f ′ , 1) are of rapid decay in the complement of any conic neighbourhood of the axes in R 2 . It remains to consider (171) W (δ, 1) = R e −2πitξ δ x+ t 2 dt = e 4πixξ , that provides a non-convergent integral in (20), for any z 0 = (x 0 , ξ 0 ), x 0 = 0, ξ 0 = 0 and any neighbourhood Γ z 0 . Hence W F (f + g) = R 2 \ {0}, i.e. the ghost wave front invades the whole R 2 . Remark 5. 5. Similar examples can be given for W τ f . Actually, for τ = 1/2 the τ -wave front is not limited to the convex closure of W F G (f ), in particular W F G (f ) may consist of a single ray and W F τ (f ) be a larger cone. In conclusion, let us suggest, without giving details, an alternative approach to the Wigner microlocal analysis. Namely, we may replace the rapid decay in cones expressed by Definitions 1.5 and 5.2 with the distributional rapid decay characterizing the space (O ′ C ) of Schwartz ([46, Chapter 7, Section 5]). According to Example (V II, 5; 1) in [46] the chirp function belongs to (O ′ C ), hence it is, somehow surprisingly, a distribution of rapid decay. We may extend the argument of Schwartz to all the ghost part of W F τ (f ), as suggested by (171) and Lemma 5.4. In this perspective, ghosts do not exist, so Wigner wave front and Hörmander global wave front coincide. Corollary 3 . 4 . 34Under the assumptions of the lemma above, for the proof follows the same argument as in [16, Theorem 2.3.12(iii)], replacing the convolution inequalities for the STFT with the corresponding ones for the τ -Wigner, cf. Lemma 3.2 and Corollary 3.4. Corollary 3 . 6 . 36Under the assumptions of Proposition 3.5, Corollary 3 . 15 . 315Consider τ ∈ (0, 1), 1 ≤ p ≤ 2, γ defined in (65), s ≥ 0 and the weights v s , v s ⊗ 1 in (31), (32), respectively. Fix g ∈ M p vsγ (R d ). Then the following conditions are equivalent: Corollary 3 . 17 . 317Assume τ ∈ [0, 1], 1 ≤ p ≤ ∞ and s ≥ 0 and the weight functions Remark 4. 5 . 
5(i) An example of covariant matrix is A τ in (92), for every τ ∈ [0, 1] (actually for every τ ∈ R). (ii) The STFT V f f is not covariant, since the metaplectic matrix A ST in (91)defining V f f does not satisfy the block matrix decomposition in (109). Theorem 4 . 7 . 47In the preceding theorem assume det B = 0. Then z) = e πi♯(B)/4 | det B|e −πiζ·B −1 ζ , where ♯(B) is the number of positive eigenvalues of B minus the number of negative eigenvalues.Example 4.8. Applying the preceding arguments to A τ in (92) and W τ in (93) we obtain in (112) the expression of B: 5. 1 . 1Weyl operators and τ -Wigner representations. We first reset Theorem 4.3 in the frame of the spaces M p s . Also, we want to extend Proposition 1.2 to recapture Theorem 1.3 in the more general symbol class M ∞,1 a metapectic operator associated with A (see Subsection 2.2 below). In Section 4 the analysis of W A f is limited to some basic facts, used in the sequel of the paper. In particular, we characterize the A-Wigner distributions which belong to the Cohen class. Note that the STFT can be viewed as A-Wigner distribution, cf. Remark 4.2. We believe that this point of view of defining time-frequency representations via metaplectic operators could find applications in Quantum Mechanics as well as in Quantum Harmonic Analysis (cf. 15, Lemma 2.3] [16, Lemma 1.3.38]): 3.3. Inversion formula for the τ -Wigner distribution. For τ ∈ [0, 1] we recall the Moyal's formula for the τ -Wigner distribution [16, Corollary 1.3.28]: linear with respect to the first component and anti-linear to the second one) is bounded from M). Observe that the inclusion relations for modulation spaces (cf. [16, Theorem 2.4.17]) give g ∈ M 1 v (R d ) ֒→ L 2 (R d ), so that the mapping W τ ( ). By complex interpolation of modulation spaces (cf. [16, Proposition 2.3.16]) and Lebesgue spaces (cf. [49] ) we infer, for θ ∈ [0, 1], Theorem 3.14. Assume τ ∈ [0, 1], 1 ≤ p ≤ ∞ and s ≥ 0 and the weight functions v s). 
The rest of the proof follows the pattern of the corresponding one for the Wigner distribution in [16, Theorem 4.4.2]. Remark 3.13. In this framework, the recent contribution by Guo et al. [33, Theorem 1.1] shows boundedness results for τ -Wigner distributions on modula- tion spaces, where they consider different weights for the functions f, g of the type v t,s (z 1 , z 2 ) = z 1 t z 2 s . Using [16, Proposition 1.3.27], for τ ∈ [0, 1], we can writeConsider f, g ∈ M p vs (R d ). In view of Theorem [16, 2.3.27] we have W τ (f, g) M p vs⊗1 = W τ (f, g) M p 1⊗vs . AcknowledgementsThe first author has been supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).We thank Maurice de Gosson for reading the manuscript and providing useful comments.view of Definition 5.2, the assumption z 0 / ∈ W F τ (f ) means that there exists an open conic neighbourhood Γ z 0 ⊂ R 2d of z 0 such that for every integer N ≥ 0and we want to prove that, possibly shrinking Γ z 0 to Γ ′ z 0 , we have, for every N ≥ 0,To this end, take first an open conic neighbourhood Λ z 0 with Λ z 0 ⊂⊂ Γ z 0 (we mean the closure of Λ z 0 ∩ S 2d−1 is included in Γ z 0 ∩ S 2d−1 ). Then, consider ψ ∈ C ∞ (R 2d ), homogeneous of degree 0, with 0 ≤ ψ(z) ≤ 1, ψ(z) = 1 in Λ z 0 and supp ψ ⊂ Γ z 0 for large |z|. Also, apply to W τ (Op w (a)f ) the identity (136) in Theorem 5.1 and write in (156) W τ (Op w (a)f ) = Op w (c)W τ f, with c as in (131). We may therefore estimate the integral in (156)We estimate I 1 as follows:where we used Lemma 5.2 and the assumption (155).To estimateThen we introduce another cut-off function χ ∈ C ∞ (R 2d ), homogeneous of degree 0, with 0 ≤ χ(z) ≤ 1, χ(z) = 1 in Γ ′ z 0 and with supp χ ⊂ Λ ′ z 0 for large |z|. Note that χ and ϕ = 1 − ψ satisfy the assumptions of the second part of Lemma 5.3. 
Now we estimate I₂. The kernel of the operator acting on W_τ f is of the form (149), and writing the kernel in this form we conclude from Lemma 5.3 that the corresponding operator is bounded on L²(R^{2d}), so that I₂ is finite as well and (156) follows. For τ = 1/2 we obtain in particular Theorem 1.6.

Theorem 5.4. For all f ∈ L²(R^d) and τ ∈ (0, 1) we have
W F_G(f) ⊂ W F_τ(f).

Proof. Assume z₀ = (x₀, ξ₀) ∉ W F_τ(f), that is, the estimates in Definition 5.2 are satisfied for every N in a suitable conic neighbourhood Γ_{z₀}. Let us prove that (160) is valid for every N, by shrinking Γ_{z₀} to Γ'_{z₀} ⊂⊂ Γ_{z₀}, namely
∫_{Γ'_{z₀}} |z|^{2N} |V_g f(z)|² dz < ∞.
It will be sufficient to estimate, for every M ≥ 0, |V_g f(z)| ≲ ⟨z⟩^{−M} for z ∈ Γ'_{z₀}. Applying Lemma 5.4 and arguing as in the proof of Theorem 5.3 we have, for every Q ≥ 0, |V_g f|²(z) ≤ I₁ + I₂ with
I₁ = ∫_{Γ_{z₀}} ⟨z − ζ⟩^{−2Q} |W_τ f(ζ)| dζ, I₂ = ∫_{R^{2d} \ Γ_{z₀}} ⟨z − ζ⟩^{−2Q} |W_τ f(ζ)| dζ.
Since the restriction of W_τ f(ζ) to Γ_{z₀} satisfies the estimates (158) in R^{2d}, for I₁ we may argue as in the proof of Theorem 5.3 and deduce the estimates (163) in R^{2d}. As for I₂, we note that for z ∈ Γ'_{z₀} and ζ ∈ R^{2d} \ Γ_{z₀} we have
⟨ζ⟩ ≲ ⟨z − ζ⟩, ⟨z⟩ ≲ ⟨z − ζ⟩.
Hence, taking Q = M + d, the bound W_τ f ∈ L²(R^{2d}) implies |V_g f|²(z) ≲ ⟨z⟩^{−2M} on Γ'_{z₀}; (159) follows on Γ'_{z₀} and Theorem 5.4 is proved.

The similarity of (160) and (20) leads naturally to ask whether the wave front sets W F in Definition 1.5 and W F_G coincide. This can be easily tested on examples in dimension d = 1. Consider first f ∈ L²(R) with compact support, say supp f ⊂ [a, b], with a, b ∈ R, a < b.

References

[1] F. Bastianoni and E. Cordero. Characterization of smooth symbol classes by Gabor matrix decay. Submitted. arXiv:2102.12437.
[2] D. Bayer, E. Cordero, K. Gröchenig and S. I. Trapasso. Linear perturbations of the Wigner transform and the Weyl quantization. Applied and Numerical Harmonic Analysis, 79-120,
Birkhäuser/Springer, 2020, ISBN: 978-3-030-36137-2, DOI: 10.1007/978-3-030-36138-9-5.
[3] A. Bényi and K. A. Okoudjou. Modulation Spaces: With Applications to Pseudodifferential Operators and Nonlinear Schrödinger Equations. Springer, New York, 2020.
[4] P. Boggiatto, G. De Donno and A. Oliaro. Time-frequency representations of Wigner type and pseudo-differential operators. Trans. Amer. Math. Soc., 362(9):4955-4981, 2010.
[5] A. P. Calderón and R. Vaillancourt. On the boundedness of pseudo-differential operators. J. Math. Soc. Japan, 23:374-378, 1971.
[6] A. P. Calderón and R. Vaillancourt. A class of bounded pseudo-differential operators. Proc. Nat. Acad. Sci. U.S.A., 69:1185-1187, 1972.
[7] L. Cohen. Generalized phase-space distribution functions. J. Math. Phys., 7:781-786, 1966.
[8] L. Cohen. Time Frequency Analysis: Theory and Applications. Prentice Hall, 1995.
[9] E. Cordero. Note on the Wigner distribution and localization operators in the quasi-Banach setting. In: Anomalies in Partial Differential Equations, M. Cicognani et al. (eds.), Springer INdAM Series 43:149-166, 2021.
[10] E. Cordero and L. Rodino. Wigner analysis of operators. Part II: Schrödinger equations. In preparation.
[11] E. Cordero, M. de Gosson, M. Dörfler and F. Nicola. On the symplectic covariance and interferences of time-frequency distributions. SIAM J. Math. Anal., 50(2):2178-2193, 2018.
[12] E. Cordero, M. de Gosson, M. Dörfler and F. Nicola. Generalized Born-Jordan distributions and applications. Adv. Comput. Math., 46(51), 2020.
[13] E. Cordero, M. de Gosson and F. Nicola. Time-frequency analysis of Born-Jordan pseudodifferential operators. J. Funct. Anal., 272(2):577-598, 2017.
[14] E. Cordero, M. de Gosson and F. Nicola. A characterization of modulation spaces by symplectic rotations. J. Funct. Anal., 278(11):108474, 2020.
[15] E. Cordero, F. Nicola and S. I. Trapasso. Almost diagonalization of τ-pseudodifferential operators with symbols in Wiener amalgam and modulation spaces. J. Fourier Anal. Appl., 25(4):1927-1957, 2019.
[16] E. Cordero and L. Rodino. Time-Frequency Analysis of Operators. De Gruyter Studies in Mathematics, 2020.
[17] E. Cordero and S. I. Trapasso. Linear perturbations of the Wigner distribution and the Cohen's class. Anal. Appl. (Singap.), 18(3):385-422, 2020.
[18] M. Dörfler, T. Grill, R. Bammer and A. Flexer. Basic filters for convolutional neural networks applied to music: training or design? Neural Computing and Applications, 2018.
[19] M. Dörfler, F. Luef and E. Skrettingland. Reducing the entropy of data via convolutional neural networks. Work in progress, May 2020.
[20] H. G. Feichtinger. Modulation spaces on locally compact abelian groups. Technical report, University of Vienna, 1983; also in: Wavelets and Their Applications, M. Krishna, R. Radha and S. Thangavelu (eds.), pages 99-140, Allied Publishers, 2003.
[21] G. B. Folland. Harmonic Analysis in Phase Space. Princeton Univ. Press, Princeton, NJ, 1989.
[22] Y. V. Galperin and S. Samarah. Time-frequency analysis on modulation spaces M^{p,q}_m, 0 < p, q ≤ ∞. Appl. Comput. Harmon. Anal., 16(1):1-18, 2004.
[23] N. C. Dias, M. de Gosson, F. Luef and J. N. Prata. A pseudo-differential calculus on non-standard symplectic space; spectral and regularity results in modulation spaces. J. Math. Pures Appl., 96:423-445, 2011.
[24] N. C. Dias, M. de Gosson and J. N. Prata. Metaplectic formulation of the Wigner transform and applications. Rev. Math. Phys., 25(10):1343010, 2013.
[25] N. C. Dias, M. de Gosson and J. N. Prata. A symplectic extension map and a new Shubin class of pseudo-differential operators. J. Funct. Anal., 266(10):3772-3796, 2014.
[26] L. Galleani and L. Cohen. The Wigner distribution for classical systems. Physics Letters A, 302:149-155, 2002.
[27] D. Gabor. Theory of communication. J. IEE, 93(III):429-457, 1946.
[28] M. de Gosson. Symplectic Methods in Harmonic Analysis and in Mathematical Physics. Birkhäuser, 2011.
[29] M. de Gosson. Symplectic covariance properties for Shubin and Born-Jordan pseudo-differential operators. Trans. Amer. Math. Soc., 365(6):3287-3307, 2013.
[30] K. Gröchenig. Foundations of Time-Frequency Analysis. Birkhäuser, Boston, MA, 2001.
[31] K. Gröchenig and C. Heil. Modulation spaces and pseudodifferential operators. Integral Equations Operator Theory, 34(4):439-457, 1999.
[32] K. Gröchenig and Z. Rzeszotnik. Banach algebras of pseudodifferential operators and their almost diagonalization. Ann. Inst. Fourier, 58(7):2279-2314, 2008.
[33] W. Guo, J. Chen, D. Fan and G. Zhao. Characterization of boundedness on weighted modulation spaces of τ-Wigner distributions. 2020. arXiv:2011.04467.
[34] L. Hörmander. Quadratic hyperbolic operators. In: Microlocal Analysis and Applications (Montecatini Terme, 1989), Lecture Notes in Math. 1495:118-160, Springer, Berlin, 1991.
[35] L. Hörmander. The Analysis of Linear Partial Differential Operators. I. Springer-Verlag, Berlin, 1990.
[36] F. Luef and E. Skrettingland. Mixed-state localization operators: Cohen's class and trace class operators. J. Fourier Anal. Appl., 25(4):2064-2108, 2019.
[37] F. Luef and E. Skrettingland. On accumulated Cohen's class distributions and mixed-state localization operators. Constr. Approx., 52:31-64, 2020.
[38] H. Morsche and P. J. Oonincx. On the integral representations for metaplectic operators. J. Fourier Anal. Appl., 8(3):245-257, 2002.
[39] J. E. Moyal and M. S. Bartlett. Quantum mechanics as a statistical theory. Math. Proc. Cambridge Philos. Soc., 45(1):99-124, 1949.
[40] K. A. Okoudjou. A Beurling-Helson type theorem for modulation spaces. J. Funct. Spaces Appl., 7(1):33-41, 2009.
[41] L. Rodino and S. I. Trapasso. An introduction to the Gabor wave front set. In: Anomalies in Partial Differential Equations, M. Cicognani et al. (eds.), Springer INdAM Series 43:369-393, 2021.
[42] L. Rodino and P. Wahlberg. The Gabor wave front set. Monatsh. Math., 173(4):625-655, 2014.
[43] M. Ruzhansky, M. Sugimoto, J. Toft and N. Tomita. Changes of variables in modulation and Wiener amalgam spaces. Math. Nachr., 284(16):2078-2092, 2011.
[44] M. A. Shubin. Pseudodifferential Operators and Spectral Theory. Springer Series in Soviet Mathematics. Springer-Verlag, Berlin, 1987.
[45] J. Sjöstrand. An algebra of pseudodifferential operators. Math. Res. Lett., 1:185-192, 1994.
[46] L. Schwartz. Théorie des distributions. Hermann, Paris, 1966.
[47] E. M. Stein. Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals. Princeton University Press, 1993.
[48] J. Toft. Continuity properties for modulation spaces, with applications to pseudo-differential calculus. II. Ann. Global Anal. Geom., 26(1):73-106, 2004.
[49] H. Triebel. Interpolation Theory, Function Spaces, Differential Operators. North-Holland, 1978.
[50] J. Ville. Théorie et applications de la notion de signal analytique. Câbles et Transmissions, 2:61-74, 1948.
[51] E. Wigner. On the quantum correction for thermodynamic equilibrium. Phys. Rev., 40(5):749-759, 1932.
[52] M. W. Wong. Weyl Transforms. Springer, 1998.
[]
[ "Imaginary geometry III: reversibility of SLE κ for κ ∈ (4, 8)", "Imaginary geometry III: reversibility of SLE κ for κ ∈ (4, 8)" ]
[ "Jason Miller ", "Scott Sheffield " ]
[]
[]
Suppose that D ⊆ C is a Jordan domain and x, y ∈ ∂D are distinct. Fix κ ∈ (4, 8) and let η be an SLE κ process from x to y in D. We prove that the law of the time-reversal of η is, up to reparameterization, an SLE κ process from y to x in D. More generally, we prove that SLE κ (ρ 1 ; ρ 2 ) processes are reversible if and only if both ρ i are at least κ/2 − 4, which is the critical threshold at or below which such curves are boundary filling.Our result supplies the missing ingredient needed to show that for all κ ∈ (4, 8) the so-called conformal loop ensembles CLE κ are canonically defined, with almost surely continuous loops. It also provides an interesting way to couple two Gaussian free fields (with different boundary conditions) so that their difference is piecewise constant and the boundaries between the constant regions are SLE κ curves.
10.4007/annals.2016.184.2.3
[ "https://arxiv.org/pdf/1201.1498v2.pdf" ]
118,763,097
1201.1498
c7c044374715fa5580ee3a6eae042b5cd258c666
Imaginary geometry III: reversibility of SLE κ for κ ∈ (4, 8) Jason Miller Scott Sheffield Introduction Fix κ ∈ (2, 4) and write κ' = 16/κ ∈ (4, 8). Our main result is the following: Theorem 1.1. Suppose that D is a Jordan domain and let x, y ∈ ∂D be distinct. Let η' be a chordal SLE κ' process in D from x to y. Then the law of η' has time-reversal symmetry. That is, if ψ : D → D is an anti-conformal map which swaps x and y, then the time-reversal of ψ ∘ η' is equal in law to η', up to reparameterization. Since chordal SLE κ curves were introduced by Schramm in 1999 [21], they have been widely believed and conjectured to be time-reversible for all κ ≤ 8.
For certain κ values, this follows from the fact that SLE κ is a scaling limit of a discrete model that does not distinguish between paths from x to y and paths from y to x (κ = 2: chordal loop-erased random walk [11], κ = 3: Ising model spin cluster boundaries [30], κ = 4: level lines of the discrete Gaussian free field [24], κ = 16/3: the FK-Ising model cluster boundaries [30], κ = 6: critical percolation [29,2], κ = 8 uniform spanning tree boundary [11]). The reversibility of chordal SLE κ curves for arbitrary κ ∈ (0, 4] was established by Zhan [32] in a landmark work that builds on the commutativity approach proposed by Dubédat [4] and by Schramm [21] in order to show that it is possible to construct a coupling of two SLE κ curves growing at each other in opposite directions so that their ranges are almost surely equal. By expanding on this approach, Dubédat [5] and Zhan [33] extended this result to include one-sided SLE κ (ρ) processes with κ ∈ (0, 4] which do not intersect the boundary (i.e., ρ ≥ κ 2 − 2). The reversibility of the entire class of chordal SLE κ (ρ 1 ; ρ 2 ) processes for ρ 1 , ρ 2 > −2 (even when they intersect the boundary) was proved in [16] using a different approach, based on coupling SLE with the Gaussian free field [15]. This work is a sequel to and makes heavy use of the techniques and results from [15,16]. We summarize these results in Section 2.2, so that this work can be read independently. Of particular importance is a variant of the "light cone" characterization of SLE κ traces given in [15], which is a refinement of so-called Duplantier duality. This gives a description of the outer boundary of an SLE κ process stopped upon hitting the boundary in terms of a certain SLE κ process. We will also employ the almost sure continuity of so-called SLE κ (ρ) and SLE κ (ρ ) traces (see Section 2.2), even when they interact nontrivially with the boundary [15,Theorem 1.3]. 
Theorem 1.1 is a special case of a more general theorem which gives the time-reversal symmetry of SLE κ' (ρ_1; ρ_2) processes provided ρ_1, ρ_2 ≥ κ'/2 − 4. We remark that the value κ'/2 − 4 is the critical threshold at or below which such processes are boundary filling [15]. Theorem 1.2. Suppose that D is a Jordan domain and let x, y ∈ ∂D be distinct. Suppose that η' is a chordal SLE κ' (ρ_1; ρ_2) process in D from x to y, with ρ_1, ρ_2 ≥ κ'/2 − 4, where the force points are located at x^− and x^+. If ψ : D → D is an anti-conformal map which swaps x and y, then the time-reversal of ψ ∘ η' is an SLE κ' (ρ_1; ρ_2) process from x to y, up to reparameterization. Theorem 1.2 has many consequences for SLE. For example, the conformal loop ensembles CLE κ are random collections of loops in a planar domain, defined for all κ ∈ (8/3, 8]. Each loop in a CLE κ looks locally like SLE κ , and the collection of loops can be constructed using a branching form of SLE κ (κ − 6) that traces through all of the loops, as described in [28]. However, there is some arbitrariness in the construction given in [28]: one has to choose a boundary point at which to start this process, and it was not clear in [28] whether the law of the final loop collection was independent of this initial choice; also, each loop is traced from a specific starting/ending point, and it was not clear that the "loops" thus constructed were actually continuous at this point. For κ ∈ (4, 8] the continuity and initial-point independence were proved in [28] as results contingent on the continuity and time-reversal symmetry of SLE κ and SLE κ (κ − 6) processes. As mentioned above, continuity was recently established in [15]; thus Theorem 1.2 implies that the CLE κ defined in [28] are almost surely ensembles of continuous loops and that their laws are indeed canonical (independent of the location at which the branching form of SLE κ (κ − 6) is started). We remark that the analogous fact for CLE κ with κ ∈ (8/3, 4] was only recently proved in [27].
In that case, the continuity and initial-point independence are established by showing that the branching SLE κ (κ − 6) construction of CLE κ is equivalent to the loop-soup cluster-boundary construction proposed by Werner. Our final result is the non-reversibility of SLE κ' (ρ_1; ρ_2) processes when either ρ_1 < κ'/2 − 4 or ρ_2 < κ'/2 − 4: Theorem 1.3. Suppose that D is a Jordan domain and let x, y ∈ ∂D be distinct. Suppose that η' is a chordal SLE κ' (ρ_1; ρ_2) process in D from x to y. Let ψ : D → D be an anti-conformal map which swaps x and y. If either ρ_1 < κ'/2 − 4 or ρ_2 < κ'/2 − 4, then the law of the time-reversal of ψ(η') is not an SLE κ' (ρ) process for any collection of weights ρ. Figure 1.1: It was shown in [16, Theorem 1.1] that the law of the outer boundary of this path (the pair of red curves from x to y on the right) has time-reversal symmetry; thus one can couple an SLE κ' (ρ_1; ρ_2) path η' from x to y with an SLE κ' (ρ_2; ρ_1) path γ from y to x in such a way that their boundaries almost surely agree. Moreover, it was also shown in [15, Proposition 7.30] that given these outer boundaries, the conditional law of the path within each of the white "bubbles" shown on the right (i.e., each of the countably many components of the complement of the boundary that lies between the two boundary paths) is given by an independent SLE κ' (κ'/2 − 4; κ'/2 − 4) process from its first to its last endpoint (illustrated by the black dots on the right). Thus, if SLE κ' (κ'/2 − 4; κ'/2 − 4) has time-reversal symmetry, then we can couple η' and γ so that they agree (up to time-reversal) within each bubble as well. Outline The remainder of this article is structured as follows. In Section 2, we will give a brief overview of both SLE and the so-called imaginary geometry of the Gaussian free field. The latter is a non-technical summary of the results proved in [15] which are needed for this article. This section is similar to [16, Section 2]. In Section 3, we will prove Theorems 1.1-1.3.
Finally, in Section 4 we briefly explain how these theorems can be used to construct couplings of different Gaussian free field instances with different boundary conditions; as an application, we compute a simple formula for the probability that a given point lies to the left of an SLE κ' (κ'/2 − 4; κ'/2 − 4) curve. (The analogous result for SLE κ , computed by Schramm in [22], does not have such a simple form.) As Figure 1.1 illustrates, when κ' ∈ (4, 8) and ρ_1, ρ_2 ≥ κ'/2 − 4, the results obtained in [15,16] reduce the problem of showing time-reversal symmetry to the special case that η' is an SLE κ' (κ'/2 − 4; κ'/2 − 4), which is a random curve that hits every point on the entire boundary almost surely. The second step is to pick some point z on the boundary of D and consider the outer boundaries of the past and future of η' upon hitting z, i.e., the outer boundary of the set of points visited by η' before hitting z and the outer boundary of the set of points visited by η' after hitting z, as illustrated in Figure 1.2. Lemma 3.2 shows that the law of this pair of paths is invariant under the anti-conformal map D → D that swaps x and y while fixing z. The proof of Lemma 3.2 is the heart of the argument. It makes use of Gaussian free field machinery in a rather picturesque way that avoids the need for extensive calculations. Roughly speaking, we will first consider an "infinite volume limit" obtained by "zooming in" near the point z in Figure 1.2. In this limit, we find a coupled pair of SLE κ (ρ_1; ρ_2) paths from z ∈ ∂H to ∞ in H and [16, Theorem 1.1] implies that the law of the pair of paths is invariant under reflection about the vertical axis through z. By employing a second trick (involving a second pair of paths started at a second point in ∂H) we are able to recover the finite volume symmetry from the infinite volume symmetry.
Once we have Lemma 3.2, we can couple forward and reverse SLE κ' (κ'/2 − 4; κ'/2 − 4) processes so that their past and future upon hitting z have the same outer boundaries. Figure 1.2: The left of the two blue curves shown (starting at z, ending at the top of the box) is the outer boundary of the "past of z" (i.e., the set of all points η' disconnects from y before z is hit). The right blue curve with the same endpoints is the outer boundary of the "future of z" (i.e., the set of all points the time-reversal of η' disconnects from x before z is hit). Lemma 3.2 shows that the law of this pair of paths is invariant under the anti-conformal map D → D that swaps x and y while fixing z. Thus we can couple forward and reverse SLE κ' (κ'/2 − 4; κ'/2 − 4) processes so that these boundaries are the same for both of them. The conditional law of η' within each of the white bubbles is given by an independent SLE κ' (κ'/2 − 4; κ'/2 − 4) process [15, Proposition 7.24], so we can iterate this construction. Moreover, given the information in Figure 1.2, the conditional law of the path within each of the white bubbles is that of an independent SLE κ' (κ'/2 − 4; κ'/2 − 4) process. (This is a consequence of the "light cone" characterization of SLE κ' processes established in [15].) Thus we can pick any point on the boundary of a bubble and further couple so that the past and future of that point (within the bubble) have the same boundary. Iterating this procedure a countably infinite number of times allows us to couple two SLE κ' (κ'/2 − 4; κ'/2 − 4) curves so that one is almost surely the time-reversal of the other, thereby proving Theorem 1.2. The non-reversibility when one of ρ_1 or ρ_2 is less than κ'/2 − 4 is shown by checking that the analog of Figure 1.2 is not invariant under anti-conformal maps (fixing z, swapping x and y) in this case. Preliminaries The purpose of this section is to review the basic properties of SLE κ (ρ L ; ρ R ) processes in addition to giving a non-technical overview of the so-called imaginary geometry of the Gaussian free field.
The latter is a mechanism for constructing couplings of many SLE κ (ρ L ; ρ R ) strands in such a way that it is easy to compute the conditional law of one of the curves given the realization of the others [15]. SLE κ (ρ) Processes SLE κ is a one-parameter family of conformally invariant random curves, introduced by Oded Schramm in [21] as a candidate for (and later proved to be) the scaling limit of loop erased random walk [11] and the interfaces in critical percolation [29,2]. Schramm's curves have been shown so far also to arise as the scaling limit of the macroscopic interfaces in several other models from statistical physics: [24,30,3,23,14]. More detailed introductions to SLE can be found in many excellent survey articles of the subject, e.g., [31,10]. An SLE κ in H from 0 to ∞ is defined by the random family of conformal maps g_t obtained by solving the Loewner ODE

∂_t g_t(z) = 2/(g_t(z) − W_t), g_0(z) = z (2.1)

where W = √κ B and B is a standard Brownian motion. Write K_t := {z ∈ H : τ(z) ≤ t}, where τ(z) denotes the swallowing time of z, i.e., the supremum of times t for which the solution of (2.1) started from z exists. Then g_t is a conformal map from H_t := H \ K_t to H satisfying lim_{|z|→∞} |g_t(z) − z| = 0. Rohde and Schramm showed that there almost surely exists a curve η (the so-called SLE trace) such that for each t ≥ 0 the domain H_t is the unbounded connected component of H \ η([0, t]), in which case the (necessarily simply connected and closed) set K_t is called the "filling" of η([0, t]) [19]. An SLE κ connecting boundary points x and y of an arbitrary simply connected Jordan domain can be constructed as the image of an SLE κ on H under a conformal transformation ψ : H → D sending 0 to x and ∞ to y. (The choice of ψ does not affect the law of this image path, since the law of SLE κ on H is scale invariant.) SLE κ is characterized by the fact that it satisfies the domain Markov property and is invariant under conformal transformations.
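The Loewner evolution (2.1) is easy to discretize, which can help build intuition for how the maps g_t act. The sketch below is our own illustration (the function names, step sizes and tolerances are our choices, not the paper's): it integrates (2.1) with a forward Euler step and checks the result against the one driving function for which (2.1) solves in closed form, W ≡ 0, where g_t(z) = √(z² + 4t).

```python
import numpy as np

def sample_driving(kappa, T=1.0, n_steps=20000, seed=0):
    """Sample the SLE_kappa driving function W_t = sqrt(kappa) * B_t on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    increments = np.sqrt(kappa * dt) * rng.standard_normal(n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])

def loewner_evolve(W, dt, z0):
    """Forward Euler for the chordal Loewner ODE (2.1):
    d/dt g_t(z) = 2 / (g_t(z) - W_t), with g_0(z) = z."""
    z = complex(z0)
    for k in range(len(W) - 1):
        z += 2.0 / (z - W[k]) * dt
    return z

# Sanity check against constant driving W = 0, where g_t(z) = sqrt(z^2 + 4t).
T, n = 1.0, 20000
z_num = loewner_evolve(np.zeros(n + 1), T / n, 1 + 1j)
z_exact = np.sqrt((1 + 1j) ** 2 + 4 * T)
```

With `W = sample_driving(kappa)` one obtains approximate samples of the maps g_t for an SLE κ ; tracked points whose Euler trajectory approaches W_t correspond, numerically, to points swallowed by the hull K_t.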
SLE κ (ρ L ; ρ R ) is the stochastic process one obtains by solving (2.1) where the driving function W is taken to be the solution to the SDE

dW_t = √κ dB_t + Σ_{q∈{L,R}} Σ_i ρ^{i,q}/(W_t − V_t^{i,q}) dt, (2.2)
dV_t^{i,q} = 2/(V_t^{i,q} − W_t) dt, V_0^{i,q} = x^{i,q}.

Like SLE κ , the SLE κ (ρ L ; ρ R ) processes arise in a variety of natural contexts. The existence and uniqueness of solutions to (2.2) is discussed in [15, Section 2]. In particular, it is shown that there is a unique solution to (2.2) until the first time t that W_t = V_t^{j,q} where ρ^{1,q} + ... + ρ^{j,q} ≤ −2 for q ∈ {L, R}; we call this time the continuation threshold (see [15, Section 2]). In particular, if ρ^{1,q} + ... + ρ^{j,q} > −2 for all 1 ≤ j ≤ |ρ^q| and q ∈ {L, R}, then (2.2) has a unique solution for all times t. This even holds when one or both of the x^{1,q} are zero. The almost sure continuity of the SLE κ (ρ L ; ρ R ) trace is also proved in [15, Theorem 1.3]. Imaginary Geometry of the Gaussian Free Field We will now give an overview of the so-called imaginary geometry of the Gaussian free field (GFF). In this article, this serves as a tool for constructing couplings of multiple SLE strands and provides a simple calculus for computing the conditional law of one of the strands given the realization of the others [15]. The purpose of this overview is to explain just enough of the theory so that this article may be read and understood independently of [15]; however, we refer the reader interested in proofs of the statements we make here to [15]. We begin by fixing a domain D ⊆ C with smooth boundary and letting C_0^∞(D) denote the space of compactly supported C^∞ functions on D. For f, g ∈ C_0^∞(D), we let

(f, g)_∇ := (1/2π) ∫_D ∇f(x) · ∇g(x) dx

denote their Dirichlet inner product, and we let H(D) be the Hilbert space closure of C_0^∞(D) with respect to (·, ·)_∇. The zero-boundary GFF on D is the random sum

h = Σ_{n=1}^∞ α_n f_n,

where (α_n) are i.i.d. N(0, 1) and (f_n) is an orthonormal basis of H(D). The GFF with non-zero boundary data ψ is given by adding the harmonic extension of ψ to a zero-boundary GFF h. The GFF is a two-dimensional-time analog of Brownian motion. Just as Brownian motion can be realized as the scaling limit of many random lattice walks, the GFF arises as the scaling limit of many random (real or integer valued) functions on two dimensional lattices [1,9,17,18,13]. The GFF can be used to generate various kinds of random geometric structures, in particular the imaginary geometry discussed here [26,15]. This corresponds to considering e^{ih/χ}, for a fixed constant χ > 0. Informally, the "rays" of the imaginary geometry are flow lines of the complex vector field e^{i(h/χ+θ)}, i.e., solutions to the ODE

η'(t) = e^{i(h(η(t))/χ + θ)} for t > 0, (2.4)

for given values of η(0) and θ. A brief overview of imaginary geometry (as defined for general functions h) appears in [26], where the rays are interpreted as geodesics of a variant of the Levi-Civita connection associated with Liouville quantum gravity. One can interpret the e^{ih} direction as "north" and the e^{i(h+π/2)} direction as "west", etc. Then h determines a way of assigning a set of compass directions to every point in the domain, and a ray is determined by an initial point and a direction. When h is constant, the rays correspond to rays in ordinary Euclidean geometry. For more general continuous h, one can still show that when three rays form a triangle, the sum of the angles is always π [26]. Figure 2.1: the conformal coordinate change ψ : D̃ → D, under which the field transforms as h̃ = h ∘ ψ − χ arg ψ'. Figure 2.2: We will often make use of the notation depicted on the left hand side to indicate boundary values for Gaussian free fields. Specifically, we will delineate the boundary ∂D of a Jordan domain D with black dots. On each arc L of ∂D which lies between a pair of black dots, we will draw either a horizontal or vertical segment L_0 and label it with x.
This means that the boundary data on L_0 is given by x. Whenever L makes a quarter turn to the right, the height goes down by (π/2)χ and whenever L makes a quarter turn to the left, the height goes up by (π/2)χ. More generally, if L makes a turn which is not necessarily at a right angle, the boundary data is given by χ times the winding of L relative to L_0. If we just write x next to a horizontal or vertical segment, we mean to indicate the boundary data just at that segment and nowhere else. The right side above has exactly the same meaning as the left side, but the boundary data is spelled out explicitly everywhere. Even when the curve has a fractal, non-smooth structure, the harmonic extension of the boundary values still makes sense, since one can transform the figure via the rule in Figure 2.1 to a half plane with piecewise constant boundary conditions. To build these rays, one begins by constructing explicit couplings of h with variants of SLE and showing that these couplings have certain properties. Namely, if one conditions on part of the curve, then the conditional law of h is that of a GFF in the complement of the curve with certain boundary conditions (see Figure 2.3). Examples of these couplings appear in [25,20,6,26] as well as variants in [12,7,8]. The next step is to show that in these couplings the path is almost surely determined by the field so that we can really interpret the ray as a path-valued function of the field. This step is carried out in some generality in [6,26,15]. If h is a smooth function, η a flow line of e^{ih/χ}, and ψ : D̃ → D a conformal transformation, then by the chain rule, ψ^{−1}(η) is a flow line of h ∘ ψ − χ arg ψ', as in Figure 2.1. With this in mind, we define an imaginary surface to be an equivalence class of pairs (D, h) under the equivalence relation

(D, h) → (ψ^{−1}(D), h ∘ ψ − χ arg ψ') = (D̃, h̃). (2.5)

We interpret ψ as a (conformal) coordinate change of the imaginary surface.
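As a concrete illustration of the series definition of the zero-boundary GFF above (an i.i.d. N(0, 1) expansion in an orthonormal basis of H(D)), one can run the analogous construction in one dimension, where the resulting field is a scaled Brownian bridge. This toy model is entirely our own (names, grid and truncation choices included); it normalizes the Dirichlet eigenfunctions of −d²/dx² on (0, 1) in the inner product (f, g)_∇ = (1/2π)∫ f'g' and sums the truncated series h = Σ α_n f_n.

```python
import numpy as np

def gff_1d_sample(n_modes=200, grid=None, seed=0):
    """1D analogue of h = sum_n alpha_n f_n on D = (0, 1).

    e_n(x) = sqrt(2) sin(n pi x) are the Dirichlet eigenfunctions; dividing by
    sqrt((1/2pi) * int e_n'^2) = sqrt(n^2 pi / 2) makes f_n orthonormal in the
    Dirichlet inner product used in the text."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    rng = np.random.default_rng(seed)
    alpha = rng.standard_normal(n_modes)          # i.i.d. N(0, 1) coefficients
    n = np.arange(1, n_modes + 1)
    f = np.sqrt(2.0) * np.sin(np.outer(n, np.pi * grid))
    f /= np.sqrt(n[:, None] ** 2 * np.pi / 2.0)
    return grid, alpha @ f

def gff_1d_variance(x, n_modes=200):
    """Truncated series for Var h(x) = sum_n f_n(x)^2."""
    n = np.arange(1, n_modes + 1)
    fx = np.sqrt(2.0) * np.sin(n * np.pi * x) / np.sqrt(n ** 2 * np.pi / 2.0)
    return float(np.sum(fx ** 2))
```

With this normalization the covariance of h is 2π times the Green's function of −d²/dx² with Dirichlet boundary conditions, so Var h(x) = 2π x(1 − x), which the truncated series reproduces up to the tail of the sum. The two-dimensional field defined in the text is rougher (it is a distribution rather than a function), but the sampling recipe is the same.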
In what follows, we will generally take D to be the upper half plane, but one can map the flow lines defined there to other domains using (2.5). We assume throughout the rest of this section that κ ∈ (0, 4) so that κ' := 16/κ ∈ (4, ∞). When following the illustrations, it will be useful to keep in mind a few definitions and identities:

λ := π/√κ, λ' := π/√(16/κ) = π√κ/4 = (κ/4)λ < λ, χ := 2/√κ − √κ/2 > 0, (2.6)
2πχ = 4(λ − λ'), λ' = λ − (π/2)χ, (2.7)
2πχ = (4 − κ)λ = (κ' − 4)λ'. (2.8)

Figure 2.4: The conditional law of h given η_{θ_1} and η_{θ_2} is a GFF on H \ (η_{θ_1} ∪ η_{θ_2}) whose boundary data is shown above [15, Proposition 6.1]. By applying a conformal mapping and using the transformation rule (2.5), we can compute the conditional law of η_{θ_1} given the realization of η_{θ_2} and vice-versa. That is, η_{θ_2} given η_{θ_1} is an SLE κ ((a − θ_2 χ)/λ − 1; (θ_2 − θ_1)χ/λ − 2) process independently in each of the connected components of H \ η_{θ_1} which lie to the left of η_{θ_1}. Moreover, η_{θ_1} given η_{θ_2} is an SLE κ ((θ_2 − θ_1)χ/λ − 2; (b + θ_1 χ)/λ − 1) independently in each of the connected components of H \ η_{θ_2} which lie to the right of η_{θ_2} [15, Section 7.1].

The boundary data one associates with the GFF on H so that its flow line from 0 to ∞ is an SLE κ (ρ L ; ρ R ) process with force points located at x = (x L , x R ) is

−λ(1 + ρ^{1,L} + ... + ρ^{j,L}) for x ∈ [x^{j+1,L}, x^{j,L}) and (2.9)
λ(1 + ρ^{1,R} + ... + ρ^{j,R}) for x ∈ [x^{j,R}, x^{j+1,R}). (2.10)

This is depicted in Figure 2.3 in the special case that |ρ L | = |ρ R | = 2. As we explained earlier, for any η stopping time τ, the law of h conditional on η([0, τ]) is a GFF in H \ η([0, τ]). The boundary data of the conditional field agrees with that of h on ∂H.
On the right side of η([0, τ]), it is λ + χ · winding, where the terminology "winding" is explained in Figure 2.2, and to the left it is −λ + χ · winding. This is also depicted in Figure 2.3. By considering several flow lines of the same field, we can construct couplings of multiple SLE κ (ρ) processes. For example, suppose that θ ∈ R. The flow line η_θ of h + θχ should be interpreted as the flow line of the vector field e^{i(h/χ+θ)}. That is, η_θ is the flow line of h with initial angle θ. If h were a continuous function and we had θ_1 < θ_2, then it would be obvious that η_{θ_1} lies to the right of η_{θ_2}. Although non-trivial to prove, this is also true in the setting of the GFF [15, Theorem 1.5] and is depicted in Figure 2.4. For θ_1 < θ_2, we can compute the conditional law of η_{θ_2} given η_{θ_1} [15, Section 7.1]. It is an SLE κ ((a − θ_2 χ)/λ − 1; (θ_2 − θ_1)χ/λ − 2) process independently in each connected component of H \ η_{θ_1} which lies to the left of η_{θ_1} [15, Section 7.1]. Moreover, η_{θ_1} given η_{θ_2} is independently an SLE κ ((θ_2 − θ_1)χ/λ − 2; (b + θ_1 χ)/λ − 1) in each of the connected components of H \ η_{θ_2} which lie to the right of η_{θ_2}. This is depicted in Figure 2.4. It is also possible to determine which segments of the boundary a flow or counterflow line cannot hit. This is described in terms of the boundary data of the field in Figure 2.5 and proved in [15, Lemma 5.2] (this result gives the range of boundary data that η cannot hit, contingent on the almost sure continuity of η; this, in turn, is given in [15, Theorem 1.3]). This can be rephrased in terms of the weights ρ: an SLE κ (ρ) process almost surely does not hit a boundary interval (x^{i,R}, x^{i+1,R}) (resp. (x^{i+1,L}, x^{i,L})) if ρ^{1,R} + ... + ρ^{i,R} ≥ κ/2 − 2 (resp. ρ^{1,L} + ... + ρ^{i,L} ≥ κ/2 − 2). See [15, Remark 5.3]. Recall that κ' = 16/κ ∈ (4, ∞). We refer to SLE κ' processes η' as counterflow lines.
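Both the constants in (2.6)-(2.8) and the boundary-hitting criterion just quoted are elementary arithmetic in √κ, so they can be sanity-checked mechanically. The snippet below is our own check (the function names are ours, not the paper's):

```python
import math

def sle_constants(kappa):
    """The constants of (2.6): lambda = pi/sqrt(kappa), lambda' = pi/sqrt(kappa')
    with kappa' = 16/kappa, and chi = 2/sqrt(kappa) - sqrt(kappa)/2."""
    lam = math.pi / math.sqrt(kappa)
    lam_p = math.pi / math.sqrt(16.0 / kappa)
    chi = 2.0 / math.sqrt(kappa) - math.sqrt(kappa) / 2.0
    return lam, lam_p, chi

def cannot_hit_interval(kappa, rho, i):
    """Criterion of [15, Remark 5.3]: an SLE_kappa(rho) process a.s. avoids the
    boundary interval beyond the i-th force point on one side iff the partial
    sum of the first i weights on that side is at least kappa/2 - 2."""
    return sum(rho[:i]) >= kappa / 2.0 - 2.0

# Numerically confirm the identities (2.6)-(2.8) for several kappa in (0, 4).
for kappa in (0.5, 2.0, 3.0, 3.9):
    lam, lam_p, chi = sle_constants(kappa)
    kappa_p = 16.0 / kappa
    assert lam_p < lam and chi > 0                                 # (2.6)
    assert math.isclose(2 * math.pi * chi, 4 * (lam - lam_p))      # (2.7)
    assert math.isclose(lam_p, lam - (math.pi / 2) * chi)          # (2.7)
    assert math.isclose(2 * math.pi * chi, (4 - kappa) * lam)      # (2.8)
    assert math.isclose(2 * math.pi * chi, (kappa_p - 4) * lam_p)  # (2.8)

# Hitting criterion for kappa = 3 (threshold kappa/2 - 2 = -1/2): a single
# force point of weight -1 allows boundary hitting, while weight 0 forbids it.
```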
The left boundaries of η'([0, τ]), taken over a range of τ values, form a tree structure comprised of SLE κ flow lines which in some sense run orthogonal to η'. The right boundaries form a dual tree structure. We can construct couplings of SLE κ and SLE κ' processes (flow lines and counterflow lines) within the same imaginary geometry [15, Theorem 1.4]. This is depicted in Figure 2.6. If θ = π/2, then η_θ is equal to the left boundary of η'. There is some intuition provided for this in Figure 2.6. Analogously, if θ < (1/χ)(λ' − λ) = −π/2, then η_θ passes to the right of η' [15, Theorem 1.4 and Theorem 1.5] and when θ = −π/2, η_θ is equal to the right boundary of η'. Just as in the setting of multiple flow lines, we can compute the conditional law of a counterflow line given the realization of a flow line (or multiple flow lines) within the same geometry. One case of this which will be particularly important for us is explained in Figure 2.7: the conditional law of η' given its left and right boundaries evolves as an SLE κ' (κ'/2 − 4; κ'/2 − 4) process independently in each of the complementary connected components which lie between its left and right boundaries [15, Proposition 7.30]. This is sometimes referred to as "strong duality" for SLE (see [6, Section 8.2] for related results). We remark that κ'/2 − 4 is the critical value of ρ at which counterflow lines are boundary filling. When ρ > κ'/2 − 4, then SLE κ' (ρ) does not fill the boundary and when ρ ∈ (−2, κ'/2 − 4], then SLE κ' (ρ) does fill the boundary. The situation is analogous for two-sided SLE κ' (ρ_1; ρ_2). Figure 2.6: Flow lines and counterflow lines within the same geometry. This is depicted above for a single counterflow line η' emanating from y and a flow line η_θ with angle θ starting from 0 (we intentionally did not describe the boundary data of h on ∂D). If θ = θ_R := (1/χ)(λ' − λ) = −π/2 so that the boundary data on the right side of η_θ matches that on the right side of η', then η_θ will almost surely hit and then "merge" into the right boundary of η'([0, τ]) for any η' stopping time τ and, more generally, the right boundary of the entire trace of η' is given by η_{θ_R}; this fact is known as SLE duality. Analogously, if θ = θ_L := (1/χ)(λ − λ') = π/2 = −θ_R, then η_θ will almost surely hit and then merge into the left boundary of η'([0, τ]) and is equal to the left boundary of the entire trace of η'. These facts are proved in [15, Theorem 1.4]. There is an important variant of SLE duality which allows us to give the law of the outer boundary of the counterflow line η' upon hitting any point z on the boundary [15, Proposition 7.32]. If z is on the left side of ∂D, it is given by the flow line of h with angle −π/2 and if z is on the right side of ∂D, it is given by the flow line of h with angle π/2. This is explained in Figure 2.8 in the special case of boundary filling SLE κ' (κ'/2 − 4; κ'/2 − 4) processes. This will be particularly important for this article, since it will allow us to describe the geometry of the outer boundary between the set of points that η' visits before and after hitting a given boundary point. Iterating the procedure of decomposing the path into its future and past leads to a new path decomposition of SLE κ' curves. We remark that this result is closely related to a decomposition of SLE κ' paths into a so-called "light cone". We now assume that the boundary data for h is as depicted above and that ρ_1, ρ_2 > κ'/2 − 4. Then η' ∼ SLE κ' (ρ_1; ρ_2). Let η_{θ_L} and η_{θ_R} be the left and right boundaries of the counterflow line η', respectively. One can check that in this case, η_{θ_q} ∼ SLE κ (ρ^q_1; ρ^q_2) with ρ^q_1, ρ^q_2 > −2 for q ∈ {L, R} (see Figure 2.3 and recall the transformation rule (2.5)).
Each connected component C of D \ (η_{θ_L} ∪ η_{θ_R}) which lies between η_{θ_L} and η_{θ_R} has two distinguished points x_C and y_C, the first and last points on ∂C traced by η_{θ_L} (as well as by η_{θ_R}). In each such C, the law of η' is independently an SLE κ' (κ'/2 − 4; κ'/2 − 4) process from y_C to x_C [15, Proposition 7.30]. If we apply a conformal change of coordinates ψ : C → S with ψ(x_C) = −∞ and ψ(y_C) = ∞, then the law of h ∘ ψ^{−1} − χ arg(ψ^{−1})' is a GFF on S whose boundary data is depicted on the right hand side. Moreover, ψ(η') is the counterflow line of this field running from +∞ to −∞ and almost surely hits every point on ∂S. This holds more generally whenever the boundary data is such that η_{θ_L}, η_{θ_R} make sense as flow lines of h until terminating at y (i.e., the continuation threshold is not hit until the process terminates at y). Proofs In this section, we will complete the proofs of Theorems 1.1-1.3. The strategy for the former two is first to reduce the reversibility of SLE κ' (ρ_1; ρ_2) for ρ_1, ρ_2 ≥ κ'/2 − 4 to the reversibility of SLE κ' (κ'/2 − 4; κ'/2 − 4) (Lemma 3.1). The main step to establish the reversibility in this special case is Lemma 3.2, which implies that the law of the geometry of the outer boundary of the set of points visited by such a curve before and after hitting a particular boundary point z is invariant under the anti-conformal map which swaps the seed and terminal point but fixes z (see Figure 2.9). This allows us to construct a coupling of two SLE κ' (κ'/2 − 4; κ'/2 − 4) processes growing in opposite directions whose outer boundary before and after hitting z is the same.
Successively iterating this exploration procedure in the complementary components results in a coupling where one path is almost surely the time-reversal of the other. Reducing Theorems 1.1 and 1.2 to critical case We begin with the reduction of Theorem 1.2 to the critical boundary-filling case, which was mostly explained in Figure 1.1. Lemma 3.1. Fix ρ_1, ρ_2 ≥ κ'/2 − 4. The reversibility of SLE κ' (ρ_1; ρ_2) is equivalent to the reversibility of SLE κ' (κ'/2 − 4; κ'/2 − 4). Proof. Suppose that D is a Jordan domain and x, y ∈ ∂D are distinct. Assume that ρ_1, ρ_2 > κ'/2 − 4 and let η' be an SLE κ' (ρ_1; ρ_2) from y to x. Let ψ : D → D be an anti-conformal map which swaps x and y. Figure 2.7 implies that the left boundary η^L of η' is an SLE κ (ρ^L_1; ρ^L_2) process from x to y for some ρ^L_1, ρ^L_2 > −2. Since the time-reversal of η^L is an SLE κ (ρ^L_2; ρ^L_1) process from y to x [16, Theorem 1.1], it follows that ψ(η^L) has the law of the left boundary of an SLE κ' (ρ_1; ρ_2) process in D from y to x. Combining Figure 2.7 with Figure 2.4, we see that the right boundary η^R of η' conditional on η^L is also an SLE κ (ρ^R_1; ρ^R_2) process for ρ^R_1, ρ^R_2 > −2 from x to y. Thus [16, Theorem 1.1] implies that the time-reversal of η^R given η^L is an SLE κ (ρ^R_2; ρ^R_1) process from y to x. Consequently, we have that ψ({η^L, η^R}) has the law of the outer boundary of an SLE κ' (ρ_1; ρ_2) process in D from y to x. By Figure 2.7 (and [15, Proposition 7.30]), we know that the conditional law of η' given η^L and η^R is an SLE κ' (κ'/2 − 4; κ'/2 − 4) process independently in each of the connected components of D \ (η^L ∪ η^R) which lie between η^L and η^R. This proves the desired equivalence for ρ_1, ρ_2 > κ'/2 − 4. The proof is analogous if either ρ_1 = κ'/2 − 4 or ρ_2 = κ'/2 − 4. Main lemma For the remainder of this section, we shall make use of the following setup.
Let S = R × (0, 1) be the infinite horizontal strip in C and let h be a GFF on S whose boundary data is as indicated in Figure 2.9. For each a ∈ R, we let R_a : C → C be the reflection of C about the vertical line through a. We will now prove that the law of T_z = {η^1_z|_{[0,τ^1_z]}, η^2_z|_{[0,τ^2_z]}}, as in Figure 2.9, is invariant under R_z, up to time-reversal and reparameterization. Lemma 3.2. The law of T_z defined just above is invariant under R_z, up to time-reversal and reparameterization. We note that R_z is the unique anti-conformal map S → S which fixes z and swaps −∞ with +∞. The proof begins with a half-plane version of the construction described in Figure 2.9 (as would be obtained by "zooming in near z") which we explain in Figure 3.1. We then consider a similar construction (using the same instance of the GFF) from a nearby point, as shown in Figure 3.2. The result follows from these constructions in a somewhat indirect but rather interesting way. It builds on time-reversal results for SLE κ (ρ_1; ρ_2) processes [16, Theorem 1.1] (see also [32,5]) while avoiding additional calculation. Proof of Lemma 3.2. Suppose that h is a GFF on H with constant boundary data c = −λ' as depicted in Figure 3.1 and Figure 3.2. The main construction in this proof actually makes sense for any c such that c ≤ −λ' and c + θ_L χ > −λ (and we will make use of this fact later). For each z ∈ R, we let η^1_z be the flow line of h starting at z with angle θ_L. Note that η^1_z is an SLE κ ((−c − θ_L χ)/λ − 1; (c + θ_L χ)/λ − 1) process (see Figure 2.3). Our hypotheses on c imply that both (−c − θ_L χ)/λ − 1 ≥ κ/2 − 2 and (c + θ_L χ)/λ − 1 ≤ −κ/2 < κ/2 − 2. In the latter inequality, we used that κ ∈ (2, 4). Consequently, η^1_z almost surely does not hit (−∞, z) but almost surely does intersect (z, ∞) (see 3; note also that since κ ∈ (2, 4) we have that κ − 4 > −2).
Therefore η^2_z almost surely exits H at z and cannot be continued further (though it may hit R in (−∞, z) before exiting; see Figure 2.5). We let U_0 be the connected component of H \ (η^1_1 ∪ η^2_1) which contains 0 on its boundary. Similarly, we let U_1 be the connected component of H \ (η^1_0 ∪ η^2_0) which contains 1 on its boundary. Let ψ_1 : U_1 → S be the conformal transformation, as indicated in Figure 3.2, which takes the left and rightmost points of R ∩ ∂U_1 to −∞ and +∞, respectively, and 1 to z. Let S_1 be the image of the restrictions of η^1_1 and η^2_1 to U_1. Given U_1, S_1 is equal in law to the bead sequence constructed in Figure 2.9 (see Figure 3.2). The same is also true for U_0 when we define S_0 analogously. Note that R_{1/2} is an anti-conformal automorphism of H which swaps 0 and 1. Thus ψ_1 ∘ R_{1/2} is an anti-conformal map from R_{1/2}(U_1) (which is a neighborhood of 0) to S. Thus, Lemma 3.2 is a consequence of Lemma 3.4, stated and proved just below (and which uses c = −λ′). Before we state and prove Lemma 3.4, we first need the following lemma which gives the reflection invariance of the pair of paths T_z = {η^1_z, η^2_z} for z ∈ ∂H (see Figure 3.1).

We will extract this from the time-reversal symmetry of SLE_κ(ρ_1; ρ_2) [16, Theorem 1.1]. The area between the pair of paths can be understood as a countable sequence of "beads". Some of these beads have boundaries that intersect the negative real axis, some the positive real axis, some neither axis, and some both axes (see Figure 2.5).

The analogous sequence of beads beginning at 1 will at some point merge with the sequence beginning at 0 since their boundaries are given by flow lines with the same angle (see [15, Theorem 1.5]). Let B be the first bead that belongs to both sequences. In general, this bead may intersect any subset of the three intervals (−∞, 0), (0, 1), and (1, ∞) (see Figure 2.5). (In the sketch above, it intersects both (0, 1) and (1, ∞).)
Let U_1 be the connected component of H \ (η^1_0 ∪ η^2_0) which contains 1 and let ψ_1 : U_1 → S be the conformal map which takes 1 to x, x ∈ ∂_L S fixed, and the left- and rightmost points of ∂U_1 ∩ R to −∞ and ∞, respectively. Then h ∘ ψ_1^{−1} − χ arg((ψ_1^{−1})′) is a GFF on S whose boundary data is as depicted on the right side, which is exactly the same as in Figure 2.9.

1. Resample T_0 from its original law (leaving S_1 unchanged) and then resample T_1 from its original law (leaving S_0 unchanged).

2. Resample T_1 from its original law (leaving S_0 unchanged) and then resample T_0 from its original law (leaving S_1 unchanged).

Let X_1 = T_0 ∪ T_1. Clearly, the law of X_1 is invariant under K. Let Y_1 be the image of X_1 under R_{1/2}. Since K is itself symmetric under R_{1/2}, the law of Y_1 is also invariant under K. We inductively define X_n and Y_n by applying K (using the same coin tosses and choices for new T_0 and T_1 values) to X_{n−1} and Y_{n−1}. Note that each X_n (resp. Y_n) has the same law as X_1 (resp. Y_1). Let K be the first time for which, during the rerandomization, we start by resampling T_0 and find that the first bead B which is contained in both T_0 and T_1 intersects both (0, 1) and (1, ∞), as depicted in Figure 3.2, and then we resample T_1. We note that this happens with positive probability in each application of K. Clearly, X_K = Y_K (since they have the same S_0 component after resampling T_0, and this remains true after the T_1 component is resampled for both). Thus X_n = Y_n for all n ≥ K and K is almost surely finite. Thus X_1 and Y_1 must indeed have the same law as desired.

= ρ_2 = κ′/2 − 4. Let (η^{1,−}, η^{1,+}) be independent SLE_κ′(κ′/2 − 4; κ′/2 − 4) curves in a bounded Jordan domain D with η^{1,−} connecting y with x and η^{1,+} connecting x to y, x, y ∈ ∂D distinct.
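The abstract step above — two K-invariant configurations driven by the same coin tosses agree forever once a suitable resampling event occurs, forcing their laws to coincide — has the flavor of a standard Markov-chain coupling argument. The following toy sketch (entirely my own caricature, not from the paper) illustrates that mechanism on a two-state chain.

```python
import random

# Toy caricature (not from the paper) of the coupling step above: two copies
# of the same update rule, driven by the SAME randomness, agree forever after
# the first update that forces a common value.
def run_coupled(n_steps, seed):
    rng = random.Random(seed)
    x, y = 0, 1                   # the two configurations start apart
    merged_at = None
    history = []
    for n in range(n_steps):
        u = rng.random()
        if u < 0.5:               # the shared update forces a common value
            x = y = int(u * 1000) % 2
            if merged_at is None:
                merged_at = n
        history.append((x, y))
    return merged_at, history

merged_at, history = run_coupled(50, seed=1)
assert merged_at is not None                        # the forcing event occurs
assert all(x == y for x, y in history[merged_at:])  # and agreement persists
```

Since each copy is stationary for the update rule, agreement after an almost surely finite time forces the two initial laws to be equal — the same conclusion drawn for X_1 and Y_1 above.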
Lemma 3.2 implies that the law of the set T_{z_1}(η^{1,−}) which consists of the closure of the set of points which lie between the outer boundaries of η^{1,−} before and after hitting z_1 is equal in distribution to the corresponding set T_{z_1}(η^{1,+}) for η^{1,+}. Therefore we can construct a coupling of SLE_κ′(κ′/2 − 4; κ′/2 − 4) processes (η^{2,−}, η^{2,+}) such that T_1 = T_{z_1}(η^{2,−}) = T_{z_1}(η^{2,+}).

which lie between the outer boundaries of η^{1,−} before and after hitting z_1, is equal in law to the corresponding set T_{z_1}(η^{1,+}) for η^{1,+}. Consequently, there exists a coupling (η^{2,−}, η^{2,+}) such that T_1 := T_{z_1}(η^{2,−}) = T_{z_1}(η^{2,+}). Note that the order in which η^{2,−} visits the connected components of D \ T_1 is the reverse of that of η^{2,+} [15, Proposition 7.32]. Moreover, the conditional law of η^{2,−} given T_1 is independently an SLE_κ′(κ′/2 − 4; κ′/2 − 4) process in each of these components and likewise for η^{2,+} given T_1 [15, Proposition 7.32]. We will now explain how to iterate this procedure. Let (d_j) be a sequence that traverses N × N in diagonal order, i.e. d_1 = (1, 1), d_2 = (2, 1), d_3 = (1, 2), etc. Suppose that k ≥ 2. We inductively take D_k = {z_{n,k}} to be a countable, dense subset of ∂T_{k−1} and z_k = z_{d_k}. Applying Lemma 3.2 again, we know that T_{z_k}(η^{k,−}), the closure of the set of points which lie between the outer boundaries of the set of points visited by η^{k,−} before and after hitting z_k, is equal in law to T_{z_k}(η^{k,+}), the corresponding set for η^{k,+}. Thus by resampling η^{k,−} and η^{k,+} in the connected component of D \ ∪_{j=1}^{k−1} T_j with z_k on its boundary (and leaving the curves otherwise fixed), we can construct a coupling (η^{k+1,−}, η^{k+1,+}) such that T_k := T_{z_k}(η^{k+1,−}) = T_{z_k}(η^{k+1,+}) almost surely. Then the conditional law of η^{k+1,−} and η^{k+1,+} in each of the complementary components of D \ ∪_{j=1}^k T_j is independently that of an SLE_κ′(κ′/2 − 4; κ′/2 − 4) process and η^{k+1,−} visits these connected components in the reverse order of η^{k+1,+} [15, Proposition 7.32]. To complete the proof, we will show that, up to reparameterization, the uniform distance of the time-reversal of η^{k,+} to η^{k,−} converges to 0 almost surely. Note that η^{k,−} is equal in distribution to η^{1,−} and η^{k,+} to η^{1,+} for all k. Consequently, the sequence (η^{k,−}, η^{k,+}) is tight (with respect to the topology induced by the uniform distance). Let (η^−, η^+) be any subsequential limit. Let C_k be the collection of connected components of D \ ∪_{j=1}^k T_j. For w ∈ D, let C_k(w) be the element of C_k which contains w (take C_k(w) = ∅ if there is no such connected component). Let d_k = diam(C_k(w)), where we take d_k = 0 if C_k(w) = ∅. It suffices to show that lim_{k→∞} d_k = 0 almost surely (the limit exists since d_k is decreasing). Let σ_k be the time that η^− enters C_k(w) and let τ_k be the time that η^− leaves (by the construction, these are also the same times that η^{j,−} enters and leaves C_k(w) for all j > k). By the continuity of η^−, it suffices to show that lim_{k→∞}(τ_k − σ_k) = 0 almost surely.

Applying Lemma 3.2 again thus implies that the law of the set T_{z_2}(η^{2,−}) which consists of the closure of the set of points which lie between the outer boundaries of η^{2,−} before and after hitting z_2 is equal in distribution to the corresponding set T_{z_2}(η^{2,+}) for η^{2,+}. Therefore we can construct a coupling of (η^{3,−}, η^{3,+}) such that T_2 = T_{z_2}(η^{3,−}) = T_{z_2}(η^{3,+}). The proof proceeds by successively coupling the boundary between the future and past of the two curves until one is almost surely the time-reversal of the other.

[Figure 3.6 boundary annotations: c_1 < −λ′ (lower boundary), c_2 ≥ λ′ − πχ (upper boundary); marked points z and w = η^1_z(τ^1_z); flow-line boundary data −λ′ − θ_Lχ, λ′ − θ_Lχ, −λ′ − (θ_L + π)χ, λ′ − (θ_L + π)χ.] ...in the setting of Figure 2.9 when the constant upper and lower strip boundary data is modified so that the counterflow line η′ from +∞ to −∞ is still boundary filling, but at least one of the ρ_i (say the one corresponding to the lower boundary) is strictly less than the critical value κ′/2 − 4. As in Figure 2.9, the paths η^1_z and η^2_z describe the outer boundary of η′ before and after hitting z. The law of the pair of paths in a neighborhood of z is absolutely continuous with respect to the law that one would obtain if the upper strip boundary were removed, so that both paths go between 0 and ∞ in H (and it is not hard to see that the local picture of the pair of paths converges to the half-plane picture upon properly rescaling). In this case, the "angle" between the right path and (0, ∞) is less than that between the left path and (−∞, 0). Since the opposite is true for the reflected pair of paths (about the vertical line through z), the time-reversal of a boundary-filling SLE_κ′(ρ_1; ρ_2) can only be an SLE_κ′(ρ̃_1; ρ̃_2) (for some ρ̃_1, ρ̃_2) if ρ_1 = ρ_2 = κ′/2 − 4 (recall also Lemma 3.3).

(ρ^L; ρ^R) process with ρ^L = ρ^R = κ/2 − 2. To go from the left figure to the right figure, add the constant function 2λ to the left side of the strip minus the path and −2λ to the right side to obtain a new field h̃ with the boundary conditions shown on the right. By the reversibility of SLE_κ(ρ^L; ρ^R) processes [16, Theorem 1.1], h̃ is a GFF with the boundary data indicated on the right side and the time-reversal of η is a flow line of h̃ from the top to the bottom. Note that h̃ − h is piecewise constant, equal to −2λ to the left of the path and 2λ to the right. (We have not defined the difference on the almost surely zero-Lebesgue-measure path η, but this does not affect the interpretation of this difference as a random distribution.) constant regions given by an appropriate SLE_κ(ρ^L; ρ^R) curve.
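The diagonal enumeration (d_j) of N × N used to choose the points z_k in the proof above can be realized concretely. The generator below is my own sketch of one such enumeration (an assumption: any ordering matching d_1 = (1, 1), d_2 = (2, 1), d_3 = (1, 2) and eventually visiting every pair works for the argument).

```python
from itertools import islice

# Sketch of a diagonal traversal of N x N matching d_1=(1,1), d_2=(2,1),
# d_3=(1,2): list pairs by increasing m + n, and within each anti-diagonal
# by decreasing first coordinate. (The specific choice is mine; the proof
# only needs some enumeration that visits every pair.)
def diagonal_order():
    s = 2                        # smallest value of m + n with m, n >= 1
    while True:
        for m in range(s - 1, 0, -1):
            yield (m, s - m)
        s += 1

d = list(islice(diagonal_order(), 6))
assert d == [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]
```

The point of traversing diagonally is that both coordinates (the generation k of the dense set D_k and the index n within it) grow along the sequence, so every point z_{n,k} is eventually selected.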
We illustrate this principle for flow lines in Figure 4.1, which we recall from [16]. The expectation field E(z) := (E(h̃ − h))(z), where h and h̃ are as in Figure 4.1, is a linear function equal to −2λ on the left boundary of the strip and 2λ on the right boundary. If we write P_L(z) for the probability that z is to the left of the curve and P_R(z) = 1 − P_L(z) for the probability that z is to the right, then E(z) = −2λ P_L(z) + 2λ P_R(z), which implies that P_L(z) is a linear function equal to 1 on the left boundary of the strip and 0 on the right boundary. In light of the results of this paper, one can produce a variant of Figure 4.1 involving counterflow lines when κ ∈ (2, 4) so that κ′ = 16/κ ∈ (4, 8). We illustrate this in Figure 4.2. To see this, we note that 2λ′ − λ = λ′ − (π/2)χ and λ − 2λ′ = −λ′ + (π/2)χ. Consequently, if ψ is the conformal map which rotates the strip 90 degrees in the counterclockwise direction, the coordinate change formula (2.5) implies that the boundary conditions of the GFF h ∘ ψ^{−1} − χ arg((ψ^{−1})′) agree with those of the GFF on the horizontal strip as depicted in Figure 2.8.

This is a path that is boundary filling but not space filling and it divides the strip into countably many regions that lie "left" of the path and countably many that lie "right" of the path. As in Figure 4.1, we can go from the middle to the right figure by adding −2λ to the left side of the path and 2λ to the right side of the path. By the reversibility of SLE_κ(ρ^L; ρ^R) processes, the resulting field h̃ is a GFF with the boundary data equal to −1 times the boundary data in the left figure, and the time-reversal of η′ is the flow line of h̃ from the top to the bottom. Note that h̃ − h is piecewise constant, equal to 2λ on the left of the path and −2λ on the right.

In this setting the expectation field E(z) is a linear function equal to 2(λ − 2λ′) < 0 on the left and 2(2λ′ − λ) > 0 on the right, and if we define P_L(z) and P_R(z) as above, then E(z) = 2λ P_L(z) − 2λ P_R(z).
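The identity 2λ′ − λ = λ′ − (π/2)χ and the boundary value of P_L obtained by solving E(z) = 2λ P_L(z) − 2λ P_R(z) can be replayed numerically. A minimal sketch, assuming the standard normalizations λ = π/√κ, λ′ = π/√κ′, χ = 2/√κ − √κ/2 with κ′ = 16/κ (these conventions come from [15] and are not restated in this excerpt):

```python
import math

# Standard imaginary-geometry normalizations (an assumption of this sketch;
# they are set up in [15] and not restated in this excerpt):
#   lambda = pi/sqrt(kappa), lambda' = pi/sqrt(kappa'),
#   chi = 2/sqrt(kappa) - sqrt(kappa)/2, with kappa' = 16/kappa.
def constants(kappa):
    kappa_p = 16.0 / kappa
    lam = math.pi / math.sqrt(kappa)
    lam_p = math.pi / math.sqrt(kappa_p)
    chi = 2.0 / math.sqrt(kappa) - math.sqrt(kappa) / 2.0
    return lam, lam_p, chi

def p_left_on_left_boundary(kappa):
    # Solve E = 2*lam*P_L - 2*lam*(1 - P_L) on the left boundary, where the
    # text gives E = 2*(lam - 2*lam') there.
    lam, lam_p, _ = constants(kappa)
    E_left = 2.0 * (lam - 2.0 * lam_p)
    return (E_left + 2.0 * lam) / (4.0 * lam)

for kappa in (2.5, 3.0, 3.9):
    lam, lam_p, chi = constants(kappa)
    # the boundary-data identities quoted in the text
    assert math.isclose(2 * lam_p - lam, lam_p - math.pi / 2 * chi)
    assert math.isclose(lam - 2 * lam_p, -lam_p + math.pi / 2 * chi)
    # P_L on the left boundary is (4 - kappa)/4, strictly below 1/2
    p = p_left_on_left_boundary(kappa)
    assert math.isclose(p, (4.0 - kappa) / 4.0)
    assert 0.0 < p < 0.5

# kappa' = 16/kappa close to 8 means kappa close to 2, where P_L -> 1/2
assert math.isclose(p_left_on_left_boundary(2.0), 0.5)
```

For κ = 3, for instance, this gives P_L = 1/4 on the left boundary and hence P_R = 3/4 there, matching the counterintuitive asymmetry discussed in the text.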
This implies that P_L is a linear function equal to [2(λ − 2λ′) + 2λ]/(4λ) = 1 − λ′/λ = (4 − κ)/4 ∈ (0, 1/2) on the left side and 1 minus this value, which is κ/4 ∈ (1/2, 1), on the right side. At first glance it is counterintuitive that points near the left boundary are more likely to be on the right side of the path. This is the opposite of what we saw in Figure 4.1. To get some intuition about this, consider the extreme case that κ and κ′ are very close to 4. In this case, the ρ^L and ρ^R values in Figure 4.2 are very close to −2, which means, intuitively, that when η′(t) is on the left side of the strip, it traces very closely along a long segment of the left boundary before (at some point) switching over to the right side and tracing a long segment of that boundary, etc. Given this intuition, it is not so surprising that points near the left boundary are more likely to be to the right of the path. When κ′ is close to 8 (so that the counterflow line is close to being space filling) P_L and P_R are close to the constant function 1/2. Here the intuition is that the path is likely to get very near to any given point z, and once it gets near it has a roughly equal chance of passing z to the left or right.

Figure 1.1: The curve on the left represents an SLE_κ′(ρ_1; ρ_2) where κ′ ∈ (4, 8) and ρ_1, ρ_2 ≥ κ′/2 − 4. It was shown in [15, Theorem 1.4] and [16, Theorem 1.
1 : 21The set of flow lines in D is the pullback via a conformal map ψ of the set of flow lines in D provided h is transformed to a new function h in the manner shown. Figure 2 . 3 : 23Suppose that h is a GFF on H with the boundary data depicted above. Then the flow line η of h starting from 0 is an SLE κ (ρ L ; ρ R ) curve in H where |ρ L | = |ρ R | = 2. Conditional on η([0, τ ]) for any η stopping time τ , h is equal in distribution to a GFF on H \ η([0, τ ]) with the boundary data on η([0, τ ]) depicted above (the notation a : which appears adjacent to η([0, τ ]) is explained in some detail inFigure 2.2). It is also possible to couple η ∼ SLE κ (ρ L ; ρ R ) for κ > 4 with h and the boundary data takes on the same form. The difference is in the interpretation -η is not a flow line of h, but for each time τ , the left and right outer boundaries of the set η ([0, τ ]), traced starting from η(τ ), are flow lines of h with appropriate angles. Figure 2 . 4 : 24Suppose that h is a GFF on H with the boundary data depicted above. For each θ ∈ R, let η θ be the flow line of the GFF h + θχ. This corresponds to setting the initial angle of η θ to be θ. Just as if h were a smooth function, if θ 1 < θ 2 then η θ 1 lies to the right of η θ 2 [15, Theorem 1.5]. .4. .4. Figure 2 . 5 : 25.6 in the special case of a single flow line η θ with angle θ emanating from x and targeted at y and a single counterflow line η emanating from y. When θ > 1 χ (λ − λ ) = π 2 , η θ almost surely passes to the left of (though may hit the left boundary of) η [15, Theorem 1.Suppose that h is a GFF on the strip S with the boundary data depicted above and η is the flow line of h starting at 0. The interaction of η with the upper boundary ∂ U S of ∂S depends on a, the boundary data of h on ∂ U S. Curves shown represent almost sure behaviors corresponding to the three different regimes of a (indicated by the closed boxes). The path hits ∂ U S almost surely if and only if a ∈ (−λ, λ). 
When a ≥ λ, it tends to −∞ (left end of the strip) and when a ≤ −λ it tends to ∞ (right end of the strip) without hitting ∂ U S. This can be rephrased in terms of the weights ρ: an SLE κ (ρ) process almost surely does not hit a boundary interval ( Figure 2 . 6 : 26We can construct SLE κ flow lines and SLE κ , κ = 16/κ ∈ (4, ∞) Figure 2 2Figure 2.7: (Continuation of Figure 2.6). We now assume that the boundary data for h is as depicted above and that ρ 1 , ρ 2 > κ 2 − 4. Then η ∼ SLE κ (ρ 1 ; ρ 2 ). Let η θ L and η θ R be the left and right boundaries of the counterflow line η , respectively. One can check that in this case, η θq ∼ SLE κ (ρ q 1 ; ρ q 2 ) with ρ q 1 , ρ q 2 > −2 for q ∈ {L, R} (see Figure 2.3 and recall the transformation rule (2.5)). Each connected component C of D\(η θ L ∪η θ R ) which lies between η θ L and η θ R has two distinguished points x C and y Cthe first and last points on ∂C traced by η θ L (as well as by η θ R ). In each such C, the law of η is independently an SLE κ ( κ 2 − 4; κ 2 − 4) process from y C to x C [15, Proposition 7.30]. If we apply a conformal change of coordinates ψ : C → S with ψ(x C ) = −∞ and ψ(y C ) = ∞, then the law of h • ψ −1 − χ arg(ψ −1 ) is a GFF on S whose boundary data is depicted on the right hand side. Moreover, ψ(η ) is the counterflow line of this field running from +∞ to −∞ and almost surely hits every point on ∂S. This holds more generally whenever the boundary data is such that η θ L , η θ R make sense as flow lines of h until terminating at y (i.e., the continuation threshold it not hit until the process terminates at y). Figure 2 . 8 : 28of angle restricted SLE κ trajectories in the same imaginary geometry [Suppose that h is a GFF on S whose boundary data is depicted above and fix z in the lower boundary ∂ L S of S. Then the counterflow lineη of h from ∞ to −∞ is an SLE κ ( κ 2 − 4; κ 2 − 4) process (seeFigure 2.3 and recall the transformation rule (2.5)) and almost surely hits z, say at time τ z . 
The left boundary of η ([0, τ z ]) is almost surely equal to the flow line η 1 z of h starting at z with angle θ L = π 2 stopped at time τ 1 z , the first time it hits the upper boundary ∂ U S of S. The conditional law of h given η 1 z ([0, τ 1 z ]) in each connected component of S \ η 1 z ([0, τ 1 z ]) which lies to the right of η 1 z ([0, τ 1 z ]) is the same as h itself, up to a conformal change of coordinates which preserves the entrance and exit points of η . The conditional law of η within each such component is independently that of an SLE κ ( κ 2 − 4; κ 2 − 4) from the first to last endpoint [15, Proposition 7.32]. . 9 . 9Let ∂ L S and ∂ U S denote the lower and upper boundaries of S, respectively. Fix z ∈ ∂ L S and let η 1 z be the flow line of h starting at z with angle θ L := (λ − λ )/χ = π 2this is the flow line of h + θ L χ starting at z. Due to the choice of boundary data, η 1 z almost surely hits ∂ U S (see Figure 2.5), say at time τ 1 z . Let η 2 z be the flow line of h with angle θ L starting from w = η 1 z (τ 1 z ) in the left connected component of S \ η 1 z ([0, τ 1 z ]). Due to the choice of boundary data, η 2 z almost surely hits ∂ L S at z (though it will hit ∂ L S first in other places, see Figure 2.5), say at time τ 2 z . Figure 2 . 25 as well as [15, Remark 5.3]). Conditionally on η 1 z , we let η 2 z be the flow line of h in the left connected component of H \ η 1 z starting at ∞ with angle θ L . Then η 2 z is an SLE κ (κ − 4; c+θ L χ λ − 1) process in the left connected component of H \ η 1 z from ∞ to z (see Figure 2.4 and Figure 2. Figure 3 . 1 : 31We consider the analog of Figure 2.9 in which S is replaced by the entire upper half plane H. We let h be a GFF on H with constant boundary data −λ as depicted above. For z ∈ ∂H, we let η 1 z be the flow line of h with angle θ L starting at z. Conditional on η 1 z , we let η 2 z be the flow line of h with angle θ L starting at ∞ in the left connected component of H \ η 1 z . 
The particular case z = 0 is depicted above. In this case, the symmetry of the law of the pair of paths under reflection about the vertical line through 0 holds if and only if c = −λ , as in the figure (Lemma 3.3) Figure 3.2: (Continuation of Figure 3.1). The analogous sequence of beads beginning at 1 will at some point merge with the sequence beginning at 0 since their boundaries are given by flow lines with the same angle (see [15, Theorem 1.5]). Let B be the first bead that belongs to both sequences. In general, this bead may intersect any subset of the three intervals (−∞, 0), (0, 1), and (1, ∞) (see Figure 2.5). (In the sketch above, it intersects both (0, 1) and (1, ∞).) Let U 1 be the connected component of H \ (η 1 0 ∪ η 2 0 ) which contains 1 and let ψ 1 : U 1 → S be the conformal map which takes 1 to x, x ∈ ∂ L S fixed, and the left and right most points of ∂U 1 ∩ R to −∞ and ∞, respectively. Then h • ψ −1 1 − χ arg(ψ −1 1 ) is a GFF on S whose boundary data is as depicted on the right side, which is exactly the same as in Figure 2.9. Figure 3 . 4 : 34The first step in the coupling procedure used in the proof of Theorem 1.2 for ρ 1 Figure 3 . 35: (Continuation of Figure 3.4). Suppose that z 2 ∈ ∂D \ T 1 . Then η 2,− and η 2,+ are both SLE κ ( κ 2 − 4; κ 2 − 4) processes in the connected component C of D \ T 1 which contains z 2 on its boundary. Applying Lemma 3.2 Figure 3 . 6 : 36The above figure illustrates what happens in the setting of Figure 4 . 1 : 41Consider a GFF h on the infinite vertical strip [−1, 1] × R whose boundary values are depicted on the left side above. The flow line η of h from the bottom to the top is an SLE κ . 2 . 2In this case, the expectation field E(z) := (E(h − h))(z) Figure 4 . 2 : 42Consider a GFF h on the infinite vertical strip [−1, 1] × R whose boundary values are depicted on the left above. 
The counterflow line η of h from the bottom to the top depicted in the middle above is an SLE κ (ρ L ; ρ R ) process with ρ L = ρ R = κ 2 − 4. ). See [15, Lemma 5.2] and [15, Remark 5.3]. These facts hold for all κ > 0. − 4). This case is treated inFigure 3.6, together with the discussion of the half-plane problem for general constant boundary values c that was given in the proofs of Lemma 3.2 and Lemma 3.3.4 CouplingsOne interesting aspect of time-reversal theory is that it allows us to couple two Gaussian free fields h and h with different boundary conditions in such a way that their difference is almost surely piecewise harmonic. In fact, for special choices of boundary conditions, one can arrange so that this difference is almost surely piecewise constant, with the boundary between Acknowledgments. We thank David Wilson and Dapeng Zhan for helpful discussions.z ∈ ∂H (up to a time-reversal and reparameterization of the paths). Proof. We first suppose that c = −λ . ByFigure 2.3 (see also the beginning of the proof of Lemma 3.2), we know that η 1 z ∼ SLE κ ( κ 2 − 2; − κ 2 ) and, conditionally on η 1 z , η 2 z ∼ SLE κ (κ − 4; − κ 2 ) from ∞ to z (the κ − 4 force point lies between η 2 z and η 1 z ). By the time-reversal symmetry of SLE κ (ρ 1 ; ρ 2 ) processes [16, Theorem 1.1], this in turn implies that the law of the time-reversal R(η 2 z ) of η 2 z given η 1 z is an SLE κ (− κ 2 ; κ − 4) process from z to ∞ in the left connected component of H \ η 1 z . ByFigure 2.4, this in turn implies that Proof. By rescaling and translating, we may assume without loss of generality that z = 0 and w = 1. For a ∈ {0, 1}, we let T a = η 1 a ∪ η 2 a . We first observe that T a is independent of S 1−a for a ∈ {0, 1} (where we recall that S a and U a are defined in the proof of Lemma 3.2) and, by Lemma 3.3, we know that the law of T a is invariant under R a .We now consider the following rerandomization transition kernel K. 
Based on the result of a fair coin toss, we either

Remark 3.5. We can define T_z for z ∈ ∂_U S analogously and we have a reflection invariance result which is analogous to Lemma 3.2. This is described in Figure 3.3.

Iteration procedure exhausts curve

In view of Lemma 3.1 and Lemma 3.2, we can now complete the proof of Theorem 1.1 and Theorem 1.2.

Proof of Theorem 1.2. Let D ⊆ C be a bounded Jordan domain and fix x, y ∈ ∂D distinct. We construct a sequence of couplings (η^{k,−}, η^{k,+}) of SLE_κ′(κ′/2 − 4; κ′/2 − 4) curves on D with η^{k,−} connecting y to x and η^{k,+} connecting x to y as follows. We take η^{1,−} and η^{1,+} to be independent. Let D_1 = {z_{n,1}} be a countable, dense collection of points in ∂D and let z_1 = z_{1,1}. Lemma 3.2 implies that the law of T_{z_1}(η^{1,−}), the closure of the set of points

∩ ∂C_k(w) is dense in ∂C_k(w). This implies that, with ξ_k = inf{t ∈ [σ_k, τ_k] : η^−(t) = z} (recall that η^− almost surely has to hit z since it fills the boundary of C_k(w)), we have either ω(|ξ_k − σ_k|) ≥ (1/4)d_k or ω(|ξ_k − τ_k|) ≥ (1/4)d_k, for some j(k) > k. By the construction, [σ_{j(k)}, τ_{j(k)}] is contained in either [σ_k, ξ_k] or [ξ_k, τ_k]. In the former case, we have |σ_{j(k)} − τ_{j(k)}| ≤ |σ_k − τ_k| − |ξ_k − τ_k| ≤ |σ_k − τ_k| − ω^{−1}((1/4)d_k). Similarly, in the latter case, |σ_{j(k)} − τ_{j(k)}| ≤ |σ_k − τ_k| − |σ_k − ξ_k| ≤ |σ_k − τ_k| − ω^{−1}((1/4)d_k).
This leads to a contradiction if lim_{k→∞} d_k ≠ 0, and therefore lim_{k→∞} d_k = 0 almost surely, as desired.

Non-critical boundary-filling paths

We now prove Theorem 1.3, which states that SLE_κ′(ρ_1

G. Ben Arous and J.-D. Deuschel. The construction of the (d + 1)-dimensional Gaussian droplet. Comm. Math. Phys., 179(2):467-488, 1996.

Federico Camia and Charles M. Newman. Two-dimensional critical percolation: the full scaling limit. Comm. Math. Phys., 268(1):1-38, 2006.

D. Chelkak and S. Smirnov. Universality in the 2D Ising model and conformal invariance of fermionic observables. Invent. Math.

Julien Dubédat. Commutation relations for Schramm-Loewner evolutions. Comm. Pure Appl. Math., 60(12):1792-1847, 2007.

Julien Dubédat. Duality of Schramm-Loewner evolutions. Ann. Sci. Éc. Norm. Supér. (4), 42(5):697-724, 2009.

Julien Dubédat. SLE and the free field: partition functions and couplings. J. Amer. Math. Soc., 22(4):995-1054, 2009.

Christian Hagendorf, Denis Bernard, and Michel Bauer. The Gaussian free field and SLE_4 on doubly connected domains. J. Stat. Phys., 140(1):1-26, 2010.
K. Izyurov and K. Kytölä. Hadamard's formula and couplings of SLEs with free field. ArXiv e-prints, June 2010.

R. Kenyon. Dominos and the Gaussian free field. Annals of Probability, 29:1128-1137, 2001.

Gregory F. Lawler. Conformally invariant processes in the plane, volume 114 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2005.

Gregory F. Lawler, Oded Schramm, and Wendelin Werner. Conformal invariance of planar loop-erased random walks and uniform spanning trees. Ann. Probab., 32(1B):939-995, 2004.

Nikolai Makarov and Stanislav Smirnov. Off-critical lattice models and massive SLEs. pages 362-371, 2010.

J. Miller. Fluctuations for the Ginzburg-Landau model on a bounded domain. 2010.

J. Miller. Universality for SLE(4). 2010.

J. Miller and S. Sheffield. Imaginary Geometry I: interacting SLE paths. 2012.

J. Miller and S. Sheffield. Imaginary Geometry II: reversibility of SLE_κ(ρ_1; ρ_2) for κ ∈ (0, 4). 2012.

Ali Naddaf and Thomas Spencer.
On homogenization and scaling limit of some gradient perturbations of a massless free field. Comm. Math. Phys., 183(1):55-84, 1997.

Brian Rider and Bálint Virág. The noise in the circular law and the Gaussian free field. Int. Math. Res. Not. IMRN, (2):Art. ID rnm006, 33, 2007.

Steffen Rohde and Oded Schramm. Basic properties of SLE. Ann. of Math. (2), 161(2):883-924, 2005.

O. Schramm and S. Sheffield. A contour line of the continuum Gaussian free field. ArXiv e-prints, August 2010.

Oded Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. Israel J. Math., 118:221-288, 2000.

Oded Schramm. A percolation formula. Electron. Comm. Probab., 6:115-120 (electronic), 2001.

Oded Schramm and Scott Sheffield. Harmonic explorer and its convergence to SLE_4. Ann. Probab., 33(6):2127-2148, 2005.

Oded Schramm and Scott Sheffield. Contour lines of the two-dimensional discrete Gaussian free field. Acta Math., 202(1):21-137, 2009.

S. Sheffield. Local sets of the Gaussian free field: slides and audio. www.fields.utoronto.ca/0506/percolationsle/sheffield1, www.fields.utoronto.ca/audio/0506/percolationsle/sheffield2, www.fields.utoronto.ca/audio/0506/percolationsle/sheffield3.
S. Sheffield. Conformal weldings of random surfaces: SLE and the quantum gravity zipper. ArXiv e-prints, December 2010.

S. Sheffield and W. Werner. Conformal Loop Ensembles: The Markovian characterization and the loop-soup construction. ArXiv e-prints, June 2010.

Scott Sheffield. Exploration trees and conformal loop ensembles. Duke Math. J., 147(1):79-129, 2009.

Stanislav Smirnov. Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits. C. R. Acad. Sci. Paris Sér. I Math., 333(3):239-244, 2001.

Stanislav Smirnov. Conformal invariance in random cluster models. I. Holomorphic fermions in the Ising model. Ann. of Math. (2), 172(2):1435-1467, 2010.

Wendelin Werner. Random planar curves and Schramm-Loewner evolutions. In Lectures on probability theory and statistics, volume 1840 of Lecture Notes in Math., pages 107-195. Springer, Berlin, 2004.

Dapeng Zhan. Reversibility of chordal SLE. Ann. Probab., 36(4):1472-1494, 2008.

Dapeng Zhan. Reversibility of some chordal SLE(κ; ρ) traces. J. Stat. Phys., 139(6):1013-1032, 2010.
[ "Hard Scattering Based Luminosity Measurement at Hadron Colliders" ]
[ "Walter T Giele", "Stéphane A Keller" ]
[ "Theory Division, CERN, CH-1211 Geneva 23, Switzerland", "Fermi National Accelerator Laboratory, Batavia, IL 60510" ]
A strategy to determine the luminosity at Hadron Colliders is discussed using the simultaneous W -boson and Z-boson event counts. The emphasis of the study will be on the uncertainty induced by the parton density functions. Understanding this source of uncertainties is crucial for a reliable luminosity determination using the W -boson and Z-boson events. As an example we will use the D0 run 1 results to extract the luminosity using the vector boson events and compare the result with the traditional method. Subsequently we will look at the implications for the top cross section uncertainties using the extracted luminosity.
[ "https://arxiv.org/pdf/hep-ph/0104053v1.pdf" ]
18,577,719
hep-ph/0104053
0eff9d654a355ae589a4c9a0468c2e30085424b8
Hard Scattering Based Luminosity Measurement at Hadron Colliders

4 Apr 2001

Walter T. Giele and Stéphane A. Keller
Theory Division, CERN, CH-1211 Geneva 23, Switzerland
Fermi National Accelerator Laboratory, Batavia, IL 60510

Introduction

A luminosity measurement based on a well understood hard scattering process is desirable. Such a method gives good control over the theoretical uncertainties, and a systematic approach to further reduce the uncertainties is possible. Also, the measured luminosity will be correlated with other hard scattering processes in the same experiment. This leads to a smaller uncertainty in the comparison between experiment and theory, as the correlated luminosity uncertainty partly cancels. Only when comparing results between different experiments is the full luminosity uncertainty relevant.

The method to determine the luminosity outlined in this paper is based on the principle of comparing the theoretical cross section to the measured number of W-boson events [1]. However, because of the presence of the PDF uncertainties the theoretical prediction is a probability density, and a more sophisticated formalism to extract the luminosity is needed.
Furthermore, by looking at the correlated W-boson and Z-boson events simultaneously we not only measure the luminosity but also provide a consistency check, because the ratio of the W-boson over the Z-boson cross sections is independent of the luminosity.

In section 2 we review some of the theoretical considerations needed for the calculations, in particular the use of the optimized PDF sets of ref. [2] together with the physics parameters used in the predictions of the W-boson and Z-boson cross sections. Before extracting the luminosity we first look in section 3 at the published D0 W-boson and Z-boson cross sections [3, 4]. Comparing the measured cross sections with the theory predictions, which now include the PDF uncertainties, gives us a better understanding of some of the issues involved. Section 4 outlines the method and, as an example, uses the D0 run 1a [3] and run 1b [4] results to determine the luminosity. Next, in section 5 we look at the top quark pair predictions in relation to the measured luminosity. First of all we want to predict the measured cross section, which can be compared to other experiments. Secondly, we want to compare the measured number of top-quark pair events to the theory. In the latter comparison the luminosity uncertainty is strongly reduced, potentially challenging the theory further than is currently possible. Section 6 summarizes our findings and the outlook for future hadron collider experiments.

Table 1: The values of the physics parameters used in the theory predictions. Their uncertainties have been neglected with respect to the larger PDF uncertainties in the predictions.

  M_W = 80.419 ± 0.056 GeV
  B(W → l±ν) = (10.56 ± 0.14)%
  M_Z = 91.1882 ± 0.0022 GeV
  B(Z → l+l−) = (3.3688 ± 0.0026)%
  α_QED^{−1}(M_Z) = 128.896 ± 0.090
Theoretical Considerations

The most important aspect of the method is to be able to quantify the dominant source of uncertainty in the theory prediction of the W-boson and Z-boson cross sections. The physics parameters used in the prediction (such as the vector-boson mass and width, the electroweak coupling constants, etc.) are known to a high precision relative to the experimental uncertainties. The values used are listed in table 1. However, the PDF's carry a large uncertainty incurred by the experimental data used to determine the PDF's. We will use the optimized PDF sets of ref. [2]. These PDF sets have been optimized with respect to deep inelastic proton scattering data. As we will see, the employed method of numerical integration over the functional space of all possible PDF's is well suited to handle the uncertainty estimates in the cross section calculations. Important issues such as the correlation between the W-boson and Z-boson cross section predictions induced by the PDF's and the non-gaussian aspect of the predictions can be handled without any effort.

The cross section predictions will be performed at next-to-leading order in the strong coupling constant using the DYRAD Monte Carlo [5]. While the next-to-next-to-leading order matrix elements are known [6], the PDF evolution is not known up to the matching order. Moreover, we want to use the extracted luminosity to predict the number of events for other observables. To be consistent in such a procedure all theoretical predictions should be at the same order. We can use the next-to-next-to-leading order matrix element calculation to get an estimate of the remaining uncertainties due to the truncation of the perturbative expansion. For the W-boson and Z-boson cross sections this uncertainty is around 2% and is well below current experimental uncertainties. We will also make next-to-leading order cross section predictions for top-quark pair production using the HVQ Monte Carlo of ref. [7].
This Monte Carlo is based on the calculations of ref. [8].

A word of caution has to be given about the acceptance corrections needed for the W-boson and Z-boson events, given the incomplete leptonic coverage of the detector. These acceptance corrections have to be calculated using the theory model and hence are dependent on the parton density functions. While this can be easily incorporated in a full analysis, the currently published results for the W-boson, Z-boson and top quark pair production cross sections use a particular parton density function for the calculation of the acceptance corrections. For this paper we have to neglect this effect, which most likely is small compared to other uncertainties in the problem. However, it can introduce a bias, and only the experiments themselves could properly take the correlation of the acceptance correction with the parton density functions into account.

Table 2: The D0 run 1a/1b results [3, 4] for the W-boson and Z-boson cross sections, specifying the statistical, systematic and luminosity uncertainty. Also given are the ratio R with the statistical and systematic uncertainties and the integrated luminosity L measurement.

Cross Section Results and Comparisons

In this section we look at the quoted D0 W-boson and Z-boson cross sections, which use the nondiffractive inelastic pp collision based luminosity measurement employed by D0. We compare the individual W-boson and Z-boson results to the theoretical predictions, now including the PDF uncertainty. All the experimental results needed in this paper are collected in table 2.

Using the DYRAD Monte Carlo with the parameter choice of table 1 we make 100 predictions for the W-boson and Z-boson cross sections for each of the optimized PDF sets. The 100 PDF's, randomly selected out of a set of 100,000 PDF's, are sufficient for the analysis in this paper.
This leaves us with the basic probability density function for the vector boson cross section

  P_pdf(σ_V) = (1/N) Σ_{i=1}^{N} δ(σ_V − σ_V^t(F_i)),    (1)

where the sum runs over the N = 100 optimized PDF's in the set and the theoretical prediction σ_V^t(F_i) depends on the PDF F_i. This is a scatter plot representation of the probability density function. To calculate confidence level intervals based on only the theory prediction we use a histogram representation of the probability density function,

  P_pdf(σ_V) = (1/(NΔ)) Σ_{i=1}^{N} Θ(σ_V + Δ/2 − σ_V^t(F_i)) Θ(σ_V^t(F_i) − σ_V + Δ/2),    (2)

with the bin width Δ chosen to be 0.1 nb for the W-boson cross section and 0.01 nb for the Z-boson cross section. Using the histogram representation we can define the confidence level probability

  CL(σ_V) = ∫ dσ′_V Θ(L²(σ′_V) − L²(σ_V)) P_pdf(σ′_V),    (3)

with the log-likelihood given by

  L²(σ_V) = −2 log(P_pdf(σ_V)/P_max)    (4)

and

  P_max = max_{σ_V} P_pdf(σ_V).    (5)

The results for the 12 optimized PDF's are shown in table 4 together with the MRS99 [9] and CTEQ5 [10] predictions. The predicted one sigma standard deviation uncertainty varies between 1% and 8% depending on the chosen set.

Comparison with the experimental results should be done slightly differently than by looking at overlaps in the confidence level intervals. The reason is that the experimental response function P_exp(σ^e_V | σ^t_V) can be used. This probability density function gives the probability of measuring σ^e_V given a true nature value σ^t_V and is a condensation of the experimental uncertainty analysis. In this case the experimental response function is simply a one-dimensional gaussian with a width equal to the combined statistical and systematic uncertainties as given in table 2. We no longer have to construct the histogram of eq. 2 with an arbitrary parameter Δ.
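The histogram density of eq. 2 and the confidence level of eqs. 3-5 are straightforward to evaluate numerically. Below is a minimal sketch, assuming a synthetic set of N = 100 Monte Carlo cross-section predictions in place of the actual DYRAD outputs; the central value and spread are illustrative placeholders, not the paper's numbers.

```python
import numpy as np

def histogram_density(preds, sigma, delta):
    """Eq. 2: fraction of predictions in a bin of width delta around
    sigma, normalized so the histogram integrates to one."""
    preds = np.asarray(preds)
    in_bin = np.abs(preds - sigma) <= delta / 2
    return in_bin.sum() / (len(preds) * delta)

def confidence_level(preds, sigma, delta):
    """Eqs. 3-5: probability mass of the region where the density is no
    higher than at sigma, i.e. where a repeat 'experiment' agrees worse."""
    p_sigma = histogram_density(preds, sigma, delta)
    dens = np.array([histogram_density(preds, p, delta) for p in preds])
    return float(np.mean(dens <= p_sigma))

rng = np.random.default_rng(0)
preds = rng.normal(2.40, 0.07, 100)   # hypothetical sigma_W predictions (nb)
cl_core = confidence_level(preds, 2.40, 0.1)   # near the bulk of predictions
cl_tail = confidence_level(preds, 2.15, 0.1)   # far out in the tail
print(cl_core, cl_tail)
```

A value near the mode yields a large CL (a repeat is likely to agree worse), while a value in the tail yields a CL near zero.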
Instead, the PDF probability density for measuring σ^e_V given a particular PDF set is

  P_pdf(σ^e_V) = (1/N) Σ_{i=1}^{N} P_exp(σ^e_V | σ^t_V(F_i)).    (6)

Using eq. 3 we calculate the confidence level of the particular D0 measurements, CL(σ^meas_V); that is, the likelihood that a repeat of the experiment renders a worse agreement with the theory. The results for both the D0 run 1a and run 1b are given in table 5. As is clear from the table, the agreement with the theory for the run 1a results is excellent, with the H1-MRST and H1+LEP-MRST sets being the most disagreeing. The comparison with run 1b is more challenging, as the accuracy of the experimental results was increased dramatically. Yet, with the exception of the two H1 set predictions, all other PDF's render excellent agreement.

Luminosity Determinations

We could use the individual W-boson or Z-boson cross section to determine the luminosity. However, such a method would not give us a luminosity independent measure of how well the data describe the theoretical model. This is an important question, as not all the optimized sets might correctly describe the hadron collider data. The ratio R of the W-boson and Z-boson cross sections gives us a luminosity independent quantity and could function as a measure of how well a particular PDF set describes the data. This leads to the obvious method of deriving the luminosity from the correlated W-boson and Z-boson events. Using the D0 results of table 2 we can determine the experimental luminosity response function

  P_exp^luminosity(L | σ_W, σ_Z, N_W, N_Z) = (1/(2π √|C|)) exp(−(1/2) D_i C⁻¹_ij D_j),    (7)

where

  D = (L × σ_W − N_W, L × σ_Z − N_Z)    (8)

and C_ij is the error correlation matrix. The run 1a and run 1b one standard deviation ellipses, together with the 100 predictions of each of the optimized PDF's, are shown in fig. 1. As is obvious from the figure, the increased accuracy of run 1b has a dramatic impact on the experimental result.
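The smeared density of eq. 6 removes the arbitrary bin width of the histogram representation. A minimal sketch of eq. 6 together with the confidence level of eq. 3, again using hypothetical replica predictions and a hypothetical combined experimental uncertainty rather than the D0 numbers:

```python
import numpy as np

SQRT2PI = np.sqrt(2 * np.pi)

def mixture_density(x, centers, width):
    """Eq. 6: the Gaussian experimental response of width `width`,
    averaged over the PDF-replica predictions `centers`."""
    c = np.asarray(centers)
    return float(np.mean(np.exp(-0.5 * ((x - c) / width) ** 2)) / (width * SQRT2PI))

def cl_of_measurement(x_meas, centers, width, grid):
    """Eq. 3 on the smooth density: probability that a repeat of the
    experiment lands where the density is lower (worse agreement)."""
    dens = np.array([mixture_density(x, centers, width) for x in grid])
    w = dens / dens.sum()                 # normalize on the scan grid
    return float(w[dens <= mixture_density(x_meas, centers, width)].sum())

rng = np.random.default_rng(1)
centers = rng.normal(2.40, 0.07, 100)     # hypothetical sigma_W replicas (nb)
grid = np.linspace(2.0, 2.8, 801)
cl_good = cl_of_measurement(2.38, centers, 0.09, grid)  # close to the prediction
cl_bad = cl_of_measurement(2.05, centers, 0.09, grid)   # several sigma away
print(cl_good, cl_bad)
```

A measurement near the predicted value gives a CL close to one, while a strongly discrepant value gives a CL near zero, mirroring the table 5 comparison.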
The D0 one standard deviation ellipses are shown using the measured luminosity, and the luminosity uncertainty itself is incorporated in the contour. Changing the value of the luminosity will move the ellipse along the green line of constant ratio of the W-boson and Z-boson cross sections.

Now we have to convert the results of fig. 1 into a luminosity measurement including the PDF uncertainty. For the remainder, the experimental luminosity uncertainty is excluded from the calculation of the correlation matrix in the experimental luminosity function of eq. 7. The PDF probability function for the combined W-boson and Z-boson cross sections is, in the Monte Carlo approximation, given by the scatter function

  P_pdf(σ_W, σ_Z) = (1/N) Σ_{i=1}^{N} δ(σ_W − σ_W^(i)) δ(σ_Z − σ_Z^(i)),    (9)

where σ_V^(i) = σ_V(F_i). Note that this scatter function is shown in fig. 1.

Figure 1: The optimized PDF scatter predictions (blue) compared to the D0 run 1a (magenta) and run 1b (red) measurements. Also indicated is the D0 ratio measurement (green).

Using this PDF probability function for the W-boson and Z-boson cross sections together with the experimental luminosity response function of eq. 7, we can construct the luminosity probability density function given the number of observed W-boson events, N_W, and Z-boson events, N_Z,

  P_pdf(L | N_W, N_Z) = ∫ dσ_W dσ_Z P_exp^luminosity(L | σ_W, σ_Z, N_W, N_Z) P_pdf(σ_W, σ_Z) = (1/N) Σ_{i=1}^{N} P_exp^luminosity(L | σ_W^(i), σ_Z^(i), N_W, N_Z),    (10)

and the probability measure expressed in the confidence level

  CL(L | N_W, N_Z) = ∫ dL′ Θ(P_pdf(L | N_W, N_Z) − P_pdf(L′ | N_W, N_Z)) P_pdf(L′ | N_W, N_Z).    (11)

Using CL(L | N_W^exp, N_Z^exp) we calculate the 31.73% confidence level interval as an estimator of the luminosity based on the experimental observations. The results are listed in table 6. As can be seen, the derived luminosities are very competitive with the traditional determination used by D0.
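In the Monte Carlo approximation, eqs. 9-11 reduce to averaging the two-dimensional Gaussian response of eq. 7 over the replica cross-section pairs and scanning the luminosity. The sketch below uses invented replica values and event counts (not the D0 data); the 31.73% CL interval is extracted as the highest-density region holding 68.27% of the probability.

```python
import numpy as np

def lum_density(L, sig_w, sig_z, n_w, n_z, cov):
    """Eq. 10: average over PDF replicas of the 2D Gaussian response
    (eq. 7) evaluated at the residuals D of eq. 8."""
    cinv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    d = np.stack([L * sig_w - n_w, L * sig_z - n_z], axis=-1)  # (N, 2)
    chi2 = np.einsum('ni,ij,nj->n', d, cinv, d)
    return float(np.mean(norm * np.exp(-0.5 * chi2)))

def cl_interval(grid, dens, frac=0.6827):
    """Highest-density interval holding `frac` of the probability (eq. 11)."""
    w = dens / dens.sum()
    order = np.argsort(dens)[::-1]
    k = np.searchsorted(np.cumsum(w[order]), frac) + 1
    kept = grid[order[:k]]
    return float(kept.min()), float(kept.max())

rng = np.random.default_rng(2)
sig_w = rng.normal(2400.0, 50.0, 100)             # hypothetical replicas (pb)
sig_z = sig_w * 0.096 + rng.normal(0, 2.0, 100)   # correlated Z replicas (pb)
n_w, n_z = 192_000, 18_400                        # assumed counts, L_true ~ 80 pb^-1
cov = np.diag([float(n_w), float(n_z)])           # Poisson-like count variances
grid = np.linspace(70.0, 90.0, 801)               # luminosity scan (pb^-1)
dens = np.array([lum_density(L, sig_w, sig_z, n_w, n_z, cov) for L in grid])
lo, hi = cl_interval(grid, dens)
print(f"L = [{lo:.1f}, {hi:.1f}] pb^-1")
```

With these placeholder inputs the interval straddles the true luminosity; the PDF spread of the replicas, not the statistical count errors, dominates its width, as in the paper.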
However, the confidence level intervals give no indication of how well the experiment is described by the theory for the preferred luminosity. For this we have to calculate the probability that a repeat of the measurement gives a worse agreement with the theory at the optimum luminosity. To do this we consider all possible outcomes of the experiment and integrate the PDF probability density function over the regions where the agreement with the theory is worse:

  CL(N_W, N_Z) = ∫ dN′_W dN′_Z Θ(max_L P_pdf(L | N_W, N_Z) − max_L P_pdf(L | N′_W, N′_Z)) × max_L P_pdf(L | N′_W, N′_Z).    (12)

Note that the functional dependence is actually one dimensional, as this confidence level is scale invariant, i.e. CL(N_W, N_Z) = CL(κN_W, κN_Z). In this sense the luminosity independent confidence level defined here is equivalent to the more traditional measure of agreement between experiment and theory, the ratio R = N_W/N_Z. The results are listed in table 6. For run 1a all optimized PDF's have a satisfactory agreement at the preferred luminosity. As the experimental uncertainties are strongly reduced for the run 1b results, the agreement between this experimental result and the theory is more challenging. Even so, most optimized PDF's do very well. Also indicated in the table for the run 1b results is the confidence level using the next-to-next-to-leading order matrix elements. It is useful to see the effect of the truncation of the perturbative series for the hard matrix element.

Table 5: The 31.73% confidence level intervals for the luminosity measurements based on the total pp cross sections together with the determination based on the correlated W-boson and Z-boson measurement. Also shown is the confidence level of the data describing the theory at the optimized luminosity, CL_exp ≡ CL(N_W^exp, N_Z^exp). Also indicated in brackets is the confidence level using the next-to-next-to-leading order matrix elements.
As is clear, the inclusion of higher orders seems in general to increase the agreement between experiment and theory. However, for a true estimate of the perturbative component of the uncertainty we need to increase the order of the PDF evolution as well. Currently this is not possible and will have to wait until all calculations have been completed [11].

Using the Measured Luminosity

The luminosity determined in the previous section using next-to-leading order perturbative QCD can be used for the other data in the experiment. We will take as an example top-quark pair production. In the first column of table 7 we show the 31.73% confidence level theory predictions including the PDF uncertainties, using eqs. 2 and 3 with Δ = 0.1 pb. As can be seen, the PDF uncertainties on the top-quark pair cross section are substantial.

Table 6: The collection of top quark pair predictions using the optimized PDF sets. The first column is the 31.73% confidence level interval using eqs. 2 and 3. The second column is the measured D0 cross section using eq. 13. The third column is the expected number of top quark pair events using eq. 15. The last column is the confidence level of the observed number of top quark pair events using the probability density of eq. 16.

The published D0 run 1b top-quark pair event rate is based on an enlarged data set with an integrated luminosity of L = 125 pb⁻¹. To estimate the efficiency corrected number of top-quark pair events for the W-boson and Z-boson sample we simply scale the luminosity down to the quoted D0 run 1b integrated luminosity of 84.5 pb⁻¹. This gives a number of top-quark pair events of N_tt^obs = 465 ± 150. Using this number of observed events we can derive the measured cross section by taking the ratio of the number of observed events over the luminosity measurement. The probability density function is given by eq. 10 with the substitution L = N_tt/σ_tt:

  P_pdf(σ_tt | N_tt, N_W, N_Z) = (1/N) Σ_{i=1}^{N} P_exp^luminosity(N_tt/σ_tt | σ_W^(i), σ_Z^(i), N_W, N_Z) = P_pdf(N_tt/σ_tt | N_W, N_Z).    (13)

The 31.73% confidence level interval is given in the second column of table 7. Note that only the uncertainty induced through the luminosity uncertainty is included. The actual uncertainty on the number of observed top-quark pair events is not, as it would overwhelm the luminosity uncertainty for the current results. To include the experimental uncertainty in the top-quark pair cross section we have to use the experimental response function density P_exp(N_tt^observed | N_tt^nature) (i.e. the detector uncertainty) to get the top-quark pair cross section probability function

  P_obs(σ_tt | N_tt, N_W, N_Z) = ∫ dN_tt^nat P_exp(N_tt | N_tt^nat) P_pdf(N_tt^nat/σ_tt | N_W, N_Z).    (14)

The advantage of this way of using the luminosity is that one can compare the derived top-quark pair cross section with other experiments. When comparing directly with the theory one can use the luminosity correlation between vector-boson production and top-quark pair production to further reduce the luminosity uncertainty. This we do by predicting the expected number of top-quark pairs given the number of W-boson and Z-boson events. The resulting formula is closely related to eq. 13; however, now the luminosity substitution is inside the Monte Carlo summation over PDF's, as σ_tt now depends on the PDF:

  P_pdf(N_tt | N_W, N_Z) = (1/N) Σ_{i=1}^{N} P_exp^luminosity(N_tt/σ_tt^(i) | σ_W^(i), σ_Z^(i), N_W, N_Z),    (15)

where the triplet (σ_W^(i), σ_Z^(i), σ_tt^(i)) are the next-to-leading order predictions using PDF F^(i) out of the optimized set. The results are shown in the third column of table 7.
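The per-replica structure of eq. 15 can be illustrated by a simplification: replace the full response function by its chi-square-optimal luminosity for each replica, then scale each replica's top-pair cross section by it. The spread over replicas then reflects the correlated PDF and luminosity uncertainty, and the correlation between σ_tt and σ_W partially cancels in the predicted event count. All numbers below are invented placeholders, not the paper's values.

```python
import numpy as np

def best_fit_lum(sw, sz, n_w, n_z, var_w, var_z):
    """Luminosity minimizing (L*sw - Nw)^2/var_w + (L*sz - Nz)^2/var_z,
    an uncorrelated-Gaussian simplification of eqs. 7-10."""
    return (sw * n_w / var_w + sz * n_z / var_z) / (sw**2 / var_w + sz**2 / var_z)

rng = np.random.default_rng(3)
sw = rng.normal(2400.0, 50.0, 100)            # hypothetical sigma_W replicas (pb)
sz = sw * 0.096 + rng.normal(0, 2.0, 100)     # correlated sigma_Z replicas (pb)
stt = sw * 2.3e-3 + rng.normal(0, 0.2, 100)   # hypothetical sigma_ttbar replicas (pb)
n_w, n_z = 192_000, 18_400                    # assumed observed event counts
L = best_fit_lum(sw, sz, n_w, n_z, float(n_w), float(n_z))  # pb^-1, per replica
n_tt = L * stt                                # eq. 15 per replica: expected ttbar events
print(f"N_tt = {n_tt.mean():.0f} +/- {n_tt.std():.0f}")
```

Because a replica with a high σ_W implies a low fitted luminosity, and σ_tt is positively correlated with σ_W, the product L·σ_tt varies much less over replicas than σ_tt itself; this is the cancellation the paper exploits.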
Using the experimental response function we get the smooth prediction for the probability density function of observing N_tt top-quark pair events given N_W and N_Z vector-boson events, now including the experimental detector uncertainties:

  P_obs(N_tt | N_W, N_Z) = ∫ dN_tt^nat P_pdf(N_tt^nat | N_W, N_Z) P_exp(N_tt | N_tt^nat).    (16)

By converting this probability density to a confidence level probability one can calculate the likelihood that the observed number of top-quark pair events agrees with the theory predictions. The confidence level for the "observed" number of top-quark pair events is shown in column 4 of table 7. Note that the 32% experimental uncertainty is now included in the estimate.

Conclusions

The luminosity determination using the W-boson and Z-boson event rates can easily compete with the traditional methods with respect to accuracy. An added feature is that when comparing observables to the theory the luminosity uncertainty partly cancels, because the observable's dependence on the PDF's is correlated to the W-boson and Z-boson dependence. This leads to more accurate comparisons between theory and experimental result. The traditional luminosity determination offers no such correlations, as it is not based on perturbatively calculable processes. Also, by including additional measurements in the PDF optimization we can systematically improve the luminosity uncertainty to the level required by the physics of the TEVATRON run 2 or the LHC. Furthermore, the method can be extended to next-to-next-to-leading order once the required calculations are completed, resulting in excellent control of the theoretical uncertainties.

Using the preferred H1+BCDMS+E665-MRST PDF set, which includes data from three mutually consistent experiments, we find a predicted W-boson cross section of 2.44 ± 0.07 nb, where the uncertainty is due to the PDF's. Comparing with the D0 measured cross section in run 1b this leads to a confidence level of 26%.
Similarly, the predicted Z-boson cross section of 0.232 +0.007/−0.006 nb gives a run 1b confidence level of 34%. From this we can conclude that the optimized PDF set describes the collider physics well in the parton fraction range relevant for vector-boson physics. This means we can confidently continue to determine the run 1b D0 integrated luminosity for this sample. We find an integrated luminosity of 80.3 +2.3/−2.8 pb⁻¹ with a maximized confidence level of 67%, reflecting the fact that the correlated W-boson and Z-boson data are well described by the PDF set. Using the measured luminosity we can continue to predict the number of observed top-quark pair events. Including the PDF and luminosity uncertainty we expect 401 +17/−16. Comparing to the measured (but scaled) D0 run 1b measurement one finds a confidence level of 53%.

The method described in this paper can at minimum be used as a check on the traditional luminosity measurement. However, given its potentially better accuracy and the partial cancellation of the correlated luminosity uncertainties, one can contemplate replacing the traditional method in future experiments. The TEVATRON run II results will offer an excellent testing ground for these ideas.

Figure 2: The run 1b luminosity determination is shown. In red is the D0 inelastic pp based determination, while in blue is the luminosity probability density function based on the W-boson and Z-boson event rates.

Table 3: The 31.73% confidence level intervals for the W-boson and Z-boson cross sections.

Table 4: The confidence levels (in percentages) for the measured values of the vector-boson cross sections using all optimized PDF sets.

References

[1] V. A. Khoze, A. D. Martin, R. Orava and M. G. Ryskin, hep-ph/0010163; M. Dittmar, F. Pauss and D. Zurcher, hep-ex/9705004 (Phys. Rev. D56 (1997) 7284).
[2] W. T. Giele, S. A. Keller and D. A. Kosower, hep-ph/0104052.
[3] The D0 collaboration, hep-ex/9901040 (Phys. Rev. D60 (1999) 052003).
[4] The D0 collaboration, see the D0 collaboration web page.
[5] W. T. Giele, E. W. N. Glover and D. A. Kosower, hep-ph/9302225 (Nucl. Phys. B403 (1993) 633).
[6] R. Hamberg, W. L. van Neerven and T. Matsuura, Nucl. Phys. B249 (1991) 343.
[7] M. L. Mangano, P. Nason and G. Ridolfi, Nucl. Phys. B373 (1992) 295.
[8] P. Nason, S. Dawson and R. K. Ellis, Nucl. Phys. B303 (1988) 607; W. Beenakker, H. Kuijf, W. L. van Neerven and J. Smith, Phys. Rev. D40 (1989) 54.
[9] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, hep-ph/9906231 (Nucl. Phys. Proc. Suppl. 79 (1999) 105).
[10] H. L. Lai, J. Huston, S. Kuhlmann, J. Morfin, F. Olness, J. F. Owens, J. Pumplin and W. K. Tung, hep-ph/9903282 (Eur. Phys. J. C12 (2000) 375).
[11] W. L. van Neerven and A. Vogt, hep-ph/0006154 (Nucl. Phys. B588 (2000) 345); hep-ph/0007362 (Phys. Lett. B490 (2000) 111); hep-ph/0103123; A. Retey and J. A. M. Vermaseren, hep-ph/0007294.
[ "Formation of Embryos of the Earth and the Moon from a Common Rarefied Condensation and Their Subsequent Growth 1" ]
[ "S I Ipatov (e-mail: [email protected])" ]
[ "Vernadsky Institute of Geochemistry and Analytical Chemistry, Russian Academy of Sciences, 119991 Moscow, Russia" ]
Embryos of the Moon and the Earth may have formed as a result of contraction of a common parental rarefied condensation. The required angular momentum of this condensation could largely be acquired in a collision of two rarefied condensations producing the parental condensation. With the subsequent growth of embryos of the Moon and the Earth taken into account, the total mass of as-formed embryos needed to reach the current angular momentum of the Earth-Moon system could be below 0.01 of the Earth mass. For the low lunar iron abundance to be reproduced with the growth of originally iron-depleted embryos of the Moon and the Earth just by the accretion of planetesimals, the mass of the lunar embryo should have increased by a factor of 1.3 at the most. The maximum increase in the mass of the Earth embryo due to the accumulation of planetesimals in a gas-free medium is then threefold, and the current terrestrial iron abundance is not attained. If the embryos are assumed to have grown just by accumulating solid planetesimals (without the ejection of matter from the embryos), it is hard to reproduce the current lunar and terrestrial iron abundances at any initial abundance in the embryos. For the current lunar iron abundance to be reproduced, the amount of matter ejected from the Earth embryo and infalling onto the Moon embryo should have been an order of magnitude larger than the sum of the overall mass of planetesimals infalling directly on the Moon embryo and the initial mass of the Moon embryo, which had formed from the parental condensation, if the original embryo had the same iron abundance as the planetesimals. The greater part of matter incorporated into the Moon embryo could be ejected from the Earth in its multiple collisions with planetesimals (and smaller bodies).

Keywords: embryos of the Moon and the Earth, formation of the Moon, collision of rarefied condensations, angular momentum, planetesimals, lunar iron abundance, multi-impact model
10.1134/s0038094618050040
[ "https://arxiv.org/pdf/2003.09925v1.pdf" ]
125,923,323
2003.09925
610271cae80d705814b0d09790d3c87f75c59095
Formation of Embryos of the Earth and the Moon from a Common Rarefied Condensation and Their Subsequent Growth 1

S. I. Ipatov (e-mail: [email protected])
Vernadsky Institute of Geochemistry and Analytical Chemistry, Russian Academy of Sciences, 119991 Moscow, Russia

DOI: 10.1134/S0038094618050040
Received October 25, 2017; in final form, March 20, 2018
INTRODUCTION

Several models of formation of the Moon have been proposed. Its formation from a swarm of small bodies is considered in the coaccretion theory (see, e.g., Ruskol, 1960, 1963, 1971, 1975). The primary source of the near-Earth swarm of bodies in the Schmidt-Ruskol-Safronov model is the capture of particles of the preplanetary disk during their collisions ("free-free" and "free-bound"). Svetsov et al. (2012) have noted that this approach predicts the formation of satellite systems with a total mass of just ~10^-5-10^-4 of planetary mass m_p. In order to model the formation of massive (0.01m_p-0.1m_p) planetary satellites, the authors have examined the role of the material ejected in collisions between planetesimals and the Earth in the replenishment of the protolunar swarm. Svetsov et al. (2012) have concluded that the total mass of bodies ejected from the Earth in collisions between planetesimals and the Earth with a velocity of 12-20 km/s is sufficient to form a Moon-sized satellite.

1 Reported at the Sixth International Bredikhin Conference (September 4-8, 2017, Zavolzhsk, Russia).
The hypothesis of multiple collisions (macroimpacts) between planetesimals and the Earth embryo (multi-impact model) has also been considered by Ringwood (1989), Vityazev and Pechernikova (1996), Gorkavyi (2004, 2007), Citron et al. (2014), and Rufu and Aharonson (2015, 2017). In the calculations of Citron et al. (2014), the collision velocities varied from v_par to 1.4v_par, where v_par is the parabolic velocity on the surface of the Earth embryo. It was demonstrated that the ratio of the mass of ejected matter to the mass of matter incorporated into the disk near the proto-Earth and the concentration of iron in the matter incorporated into the disk increase with the collision velocity. Rufu and Aharonson (2017) have demonstrated that near-vertical collisions result in lower fractions of the impactor material in the ejected matter. It was assumed in numerous studies (e.g., Hartmann and Davis, 1975; Cameron and Ward, 1976; Canup and Asphaug, 2001; Canup, 2004, 2012; Canup et al., 2013; Cuk and Stewart, 2012; Cuk et al., 2016; Barr, 2016) that the Moon has formed as a result of ejection of the silicate mantle of the Earth in its collision with a Mars-sized body. Several modifications of the massive impact (megaimpact) model have been proposed in order to reproduce the current composition of the Earth and the Moon. Cuk and Stewart (2012) have demonstrated that a body with a mass of (0.026-0.1)M_E, where M_E is the mass of the Earth, infalling onto the rapidly rotating (with a period of ~2.5 h) proto-Earth may produce a lunar-forming disk consisting primarily of the terrestrial mantle material. Canup (2012) has demonstrated that the Earth and the Moon with similar compositions could be produced in a head-on collision between two bodies of similar masses (with a mass ratio no larger than 1.5).
The models of Cuk and Stewart (2012) and Canup (2012) require the subsequent removal of a fraction of the angular momentum of the Earth-Moon system through the orbital resonance between the Sun and the Moon. The semimajor axis of orbit of the formed Moon embryo in (Salmon and Canup, 2012; Cuk and Stewart, 2012) was 6r_E-7r_E, where r_E is the radius of the Earth. Owing to tidal interactions, the Earth-Moon distance may increase relatively rapidly to 30r_E (Touma and Wisdom, 1994; Pahlevan and Morbidelli, 2015). Pahlevan and Morbidelli (2015) have found that the Earth-Moon distance had increased to 20r_E-40r_E in 10^6-10^7 years. The current Earth-Moon distance is 60.4r_E. It was noted in (Rufu and Aharonson, 2017) that tidal interactions make the formed smaller satellite of the Earth move away from the Earth and eventually approach the more massive satellite that has been formed earlier and has originally been the more remote one. Stewart et al. (2013) have noted that the K/Th ratios for Mercury, Venus, Earth, and Mars are similar, but this ratio for the Moon is roughly ten times lower. The low lunar K/Th ratio is attributed to the high-temperature formation in a massive impact. Norris and Wood (2017) attribute the deficit of volatiles on the Earth to the evaporation in a megaimpact and the subsequent recondensation of matter in the absence of nebular gas. Several other studies focused on the megaimpact model have been discussed in (Barr, 2016). According to (Kaib and Cowan, 2015), the probability for the proto-Earth and the impactor to have the same oxygen isotope ratios as the current Earth and Moon is no higher than 5%. Ipatov (1993, 2000) has modeled numerically the evolution of disks of gravitating bodies merging in collisions. In the examination of the feeding zone of terrestrial planets, the initial bodies were classified into four groups according to the distance from the Sun.
The simulated evolution of this disk revealed strong mixing of planetesimal bodies, and the compositions of formed planets with m_pl > 0.5M_E were almost the same. Therefore, a considerable number of celestial bodies with similar compositions could be present in the feeding zone of the Earth and Venus (if each of these bodies formed as a result of a sufficiently large number of planetesimal collisions). The O isotope composition on the Earth varies from that on Mars, Vesta, and the majority of meteorites (Elkins-Tanton, 2013). This may be attributed to the influence of bodies beyond the Jovian orbit on the formation of Mars and asteroids. The composition of celestial bodies formed in the terrestrial region was probably more uniform and differed from the composition of Mars and asteroids. In our view, a model of the formation of the Earth and the Moon with the accumulation of a large number of planetesimals has a fair chance to reproduce the similar isotopic compositions of the Earth and the Moon. The canonic model of a massive impact (megaimpact) has certain drawbacks of a primarily geochemical nature. It does not provide a satisfactory explanation for the compositional similarity (e.g., the closeness of concentrations of isotopes of oxygen, iron, hydrogen, silicon, magnesium, titanium, potassium, tungsten, and chromium) of the Earth and the Moon, since the greater part of lunar matter in this model originates from the impactor instead of from the proto-Earth (Galimov et al., 2005; Galimov, 2011; Galimov and Krivtsov, 2012; Elkins-Tanton, 2013; Clery, 2013; Barr, 2016). It is assumed in the megaimpact hypothesis that a magma ocean forms on the planetary surface after the collision. Jones (1998) has noted the lack of evidence in favor of the existence of such an ocean on the Earth at any point in history.
According to Galimov (2011), the megaimpact theory fails to account for the lack of an isotope shift between lunar and terrestrial matter because the ejected material should be 80-90% vapor, and the K, Mg, and Si isotope compositions may change considerably during evaporation. A model of formation of embryos of the Moon and the Earth as a result of contraction of a rarefied dust condensation in a protoplanetary gas-dust cloud has been proposed in (Galimov et al., 2005; Galimov, 1995, 2011, 2013; Galimov and Krivtsov, 2012; Vasil'ev et al., 2011). The evaporation of FeO from dust particles is taken into account in this model, which agrees better with the geochemical data on the composition of lunar matter. The authors of the above studies have noted that the hypothesis of parallel formation of the Moon and the Earth in the collapse and fragmentation of a large dust condensation agrees with geochemical evidence. It follows from the analysis of the 182Hf-184W system performed by Galimov (2013) that the Moon could not have formed earlier than 50 million years after the origin of the Solar System. Having studied the Rb-Sr system, Galimov concluded that the Moon should have evolved in a medium with a higher Rb/Sr ratio prior to its emergence as a condensed body. The large atomic weight of rubidium makes its escape from the lunar surface impossible; this may occur only on the heated surface of small bodies or particles. Therefore, in Galimov's view, the initial lunar matter remained in a dispersed state (e.g., in the form of a gas-dust condensation) for the first 50 million years. It was assumed in the above-mentioned papers by Galimov and his coauthors that the stability of a gas-dust condensation could be maintained for a considerably long time by intense gas emission from the surface of particles and, possibly, by ionization and radiation repulsion due to the decay of short-lived isotopes.
Note that the protosolar gas-dust cloud had existed before its contraction and the formation of the Sun and the protoplanetary disk, and rubidium could also escape from particles, e.g., during the formation of the Sun. Galimov et al. (2005), Galimov (2011), and Galimov and Krivtsov (2012) have modeled numerically the formation of embryos of the Earth-Moon system from a rarefied condensation and have studied the growth of solid embryos of the Moon and the Earth by particle accumulation. Galimov and Krivtsov (2012) have reported the results of calculations of contraction of a condensation with a mass equal to the combined mass of the Moon and the Earth and a radius of 5.5r_E. Galimov et al. assumed that ~40% of volatile matter (FeO included) of dust particles, which formed the embryos, evaporated, and the initially high-temperature embryos of the Moon and the Earth were similarly depleted in iron. This 40% reduction in the particle mass transpired within (3-7)×10^4 years in their model. According to the estimates of Galimov, the evaporation of 40% of the mass of matter of the initial chondritic composition results in a reduction in the concentration of iron to lunar levels. The evaporation of material from the surface of particles made the interval of condensation contraction longer. Other particles in the condensation, which were located at a greater distance from its center and were not incorporated into the original embryos, were cooler and retained iron. The vapor flow from the surface of particles produced a repulsive force that, acting together with gas, prevented the contraction of the condensation. The embryos grew and accumulated iron-enriched particles that were located in the outer part of the condensation at the moment of embryo formation. The embryo of the Earth grew faster than the Moon embryo.
This is the reason why the Moon has retained a relatively low iron abundance, while the Earth has accumulated the greater part of the remaining dust condensation and acquired a considerable amount of iron. Vasil'ev et al. (2011) and Galimov and Krivtsov (2012) have modeled the collisions between particles and embryos. The initial positions of dust particles were distributed uniformly over a cylindrical surface, and the initial velocities were zero. The authors have found that a 26.2-fold increase in the mass of the Earth embryo corresponds to a 1.31-fold increase in the mass of the Moon embryo. According to Galimov and Krivtsov (2012), the similarity of isotopic characteristics of the Earth and the Moon presents unsurmountable problems for the megaimpact hypothesis. These authors have also demonstrated that their model provides a much better (compared to the megaimpact hypothesis) explanation for the following data: (1) the abundance of siderophile elements (W, P, Co, Ni, Re, Os, Ir, Pt, etc.) on the Earth and in the lunar mantle is lower than the expected values based on the known distribution coefficients; (2) Hf/W isotope data for the modern Earth and Moon; (3) isotopic geochemistry of Xe, Pb, and Sr. Galimov et al. (2005) have noted that the observed distribution of siderophile elements on the Moon could also be obtained from the initial material, and its core has formed in the conditions of partial melting. According to the model proposed in (Galimov and Krivtsov, 2012), ~50-70 million years after the beginning of formation of the Solar System, a rarefied condensation with a mass equal to that of the Earth-Moon system contracted in 10^4-10^5 years and thus formed embryos of the Moon and the Earth. Such long existence times of condensations in the early Solar System have not been obtained by scientists specializing in the formation and evolution of condensations. Marov et al.
(2008) believe that the evolution of the circumsolar protoplanetary disk to the point of formation of a dust-enriched subdisk took 1-2 million years, and the subdisk then contracted and formed dust condensations within ~0.1 million years. In the model of Makalkin and Ziglina (2004), trans-Neptunian objects with diameters up to 1000 km form within an interval on the order of a million years after the onset of formation of the Solar System. In the majority of recent studies focused on the formation of planetesimals (Cuzzi et al., 2008, 2010; Cuzzi and Hogan, 2012; Johansen et al., 2007, 2009a, 2009b, 2011, 2012; Lyra et al., 2008, 2009; Youdin, 2011; Youdin and Kenyon, 2013), the actual time of formation (after the onset of formation from a condensed gas-dust disk) and contraction of rarefied condensations does not exceed 1000 revolutions about the Sun; in certain models, it is as short as several tens of revolutions about the Sun. The contraction of condensations and the formation of satellite systems in the trans-Neptunian belt occur within ~100 years in the model of Nesvorny et al. (2010). In order to obtain longer contraction times, one needs to take the factors inhibiting the process of contraction of rarefied condensations into account. The times of contraction of condensations to the density of solid bodies in the studies of Titarenko (1989a, 1989b) are as long as several million years (depending on the optical properties of dust and gas and the type and concentration of short-lived radioactive isotopes in condensations). Beletskii and Grushevskii (1991) have found that the angular momentum of contracting rarefied protoplanets could decrease considerably due to tidal interactions with the Sun. Several authors consider the formation of condensations with masses exceeding that of Mars possible.
For example, the formation of rarefied condensations with a mass of ~0.1M_E-0.6M_E was examined in (Lyra et al., 2008). These condensations form due to the Rossby wave instability rather than by accumulating multiple smaller condensations. Ipatov (2017a) has reviewed the studies focused on the formation of rarefied condensations. In the present study, the possible scenarios for the formation of embryos of the Moon and the Earth from a rarefied condensation and the subsequent growth of these embryos are discussed. The concept of contraction of rarefied condensations, which was considered earlier by Ipatov (2010, 2014, 2017b, 2017c) and Nesvorny et al. (2010) in the context of the formation of trans-Neptunian satellite systems, is taken as a basis. Ipatov (2010) and Nesvorny et al. (2010) have assumed that trans-Neptunian satellite systems have formed as a result of contraction of rarefied condensations. Ipatov (2010) has demonstrated that the angular momenta of the observed trans-Neptunian satellite systems are equal to the angular momenta of colliding rarefied condensations of the same masses and with sizes smaller than their Hill spheres. The angular momentum of two colliding condensations may be negative, which is also true of the angular momentum of trans-Neptunian satellite systems. Nesvorny et al. (2010) have calculated the contraction of rarefied condensations in the trans-Neptunian region and found the initial conditions in which this contraction resulted in the formation of binary (or triple) systems. They have found that the gas resistance forces do not exert any significant effect on the formation of a binary system via contraction. Ipatov (2017b) has demonstrated that the angular momentum needed to form trans-Neptunian satellite systems in the process of contraction of parental condensations could be acquired in condensation collisions.
This model of formation of trans-Neptunian satellite systems may provide an explanation for the observed orbits of components of these systems (Ipatov, 2017c). Ipatov (2015) has demonstrated that the angular momentum needed to form embryos of the Moon and the Earth in the process of contraction of a parental rarefied condensation could be acquired in a collision between two condensations. Ipatov (2017a) believed that the formation of these embryos was similar to the formation of trans-Neptunian satellite systems. Ipatov (2010, 2014, 2017b, 2017c) has considered a model with satellite systems of small bodies forming as a result of collisions of condensations that produce a condensation with sufficient angular momentum. Ipatov (2010) has found that the angular momentum of two colliding condensations (with radii r_1 and r_2 and masses m_1 and m_2), which had circular heliocentric orbits with semimajor axes close to a prior to the collision, is

K_s = k_Θ (G M_S)^(1/2) (r_1 + r_2)^2 m_1 m_2 (m_1 + m_2)^(-1) a^(-3/2),   (1)

where G is the gravitational constant, M_S is the mass of the Sun, and the difference between the semimajor axes of orbits of condensations is Θ(r_1 + r_2). At (r_1 + r_2)/a << Θ, k_Θ ≈ 1 - 1.5Θ^2. The k_Θ values vary from -0.5 to 1. The average value of |k_Θ| is 0.6. The values of K_s and k_Θ are positive at 0 < Θ < 0.8165 and negative at 0.8165 < Θ < 1. If two identical condensations with their radii equal to k_H r_H, where r_H is the Hill radius of a condensation with mass m_1 = m_2, collide, it follows from (1) that

K_s ≈ 0.96 k_Θ k_H^2 a^(1/2) m_1^(5/3) G^(1/2) M_S^(-1/6).   (2)

Let us denote the angular momentum in a typical collision of two identical condensations, which are the size of their Hill spheres, in circular heliocentric orbits as K_s2.
Using formula (2), one may determine that the ratio of angular momentum K_ΣEM of the Earth-Moon system (K_ΣEM ≈ 3.45 × 10^34 kg m^2 s^(-1)) to angular momentum K_s2 is roughly equal to 0.0335 at k_Θ = 0.6 and to 0.02 at k_Θ = 1 if condensation masses m_1 = 0.5 × 1.0123M_E. Thus, the angular momentum in such a collision in the considered model may be 50 times higher than the current angular momentum of the Earth-Moon system. If the eccentricities of heliocentric orbits are nonzero, the angular momentum of colliding condensations may exceed the value for circular orbits. In the model considered below, the mass and the angular momentum of a condensation produced in a collision are the same as those of the colliding condensations. In reality, a fraction of the mass and the angular momentum gets lost in the collision (especially in grazing collisions) and in the process of condensation contraction. Therefore, the mass and the angular momentum of colliding condensations could be larger than those of the parental condensation and the satellite system formed as a result of contraction of the parental condensation. It follows from Fig. 2 in (Nesvorny et al., 2010) that the mass of the formed solid binary object was approximately 5 times lower than the mass of the parental condensation. The introduction of the effect of gas within the condensation into calculations is likely to reduce the mass and momentum loss in the formation of embryos. Since K_s2 is proportional to m_1^(5/3), K_s2 = K_ΣEM for k_Θ = 0.6 at 2m_1 ≈ 0.0335^(3/5) × 1.0123M_E ≈ 0.13M_E. In the case of circular heliocentric orbits, the maximum (at k_Θ = 1) K_s2 value of 1.7 × 10^36 kg m^2 s^(-1) is 0.6^(-1) times higher than the above typical one (at k_Θ = 0.6). Then, K_ΣEM/K_s2 ≈ 0.0335 × 0.6 ≈ 0.02 and 2m_1 ≈ 0.02^(3/5) M_E ≈ 0.096M_E.
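The ratios quoted above can be reproduced numerically. The sketch below is a check added here, not part of the original paper; it assumes standard SI values for G, M_S, M_E, and a = 1 AU, together with K_ΣEM ≈ 3.45 × 10^34 kg m^2 s^(-1) as quoted in the text:

```python
import math

# Assumed standard SI constants (not quoted in the paper itself).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_S = 1.989e30      # mass of the Sun, kg
M_E = 5.972e24      # mass of the Earth, kg
AU = 1.496e11       # astronomical unit, m
K_EM = 3.45e34      # angular momentum of the Earth-Moon system, kg m^2 s^-1

def K_s2(k_theta, m1, k_H=1.0, a=AU):
    # Formula (2): two identical condensations with radii k_H * r_H
    # colliding on circular heliocentric orbits of semimajor axis a.
    return 0.96 * k_theta * k_H**2 * math.sqrt(a) * m1**(5/3) * math.sqrt(G) * M_S**(-1/6)

m1 = 0.5 * 1.0123 * M_E
ratio_06 = K_EM / K_s2(0.6, m1)        # ≈ 0.0335 at k_theta = 0.6
ratio_10 = K_EM / K_s2(1.0, m1)        # ≈ 0.02 at k_theta = 1

# Total mass 2*m1 for which K_s2 = K_EM at k_theta = 0.6: ≈ 0.13 M_E
m_tot = ratio_06**(3/5) * 1.0123 * M_E

# Average of |k_theta| = |1 - 1.5*Theta^2| over Theta uniform in [0, 1]: ≈ 0.6
n = 100_000
mean_abs_k = sum(abs(1 - 1.5 * ((i + 0.5) / n)**2) for i in range(n)) / n
```

The checks reproduce the quoted values 0.0335, 0.02, ~0.13M_E, and the average |k_Θ| ≈ 0.6 to within rounding of the constants.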
Thus, the angular momentum of the Earth-Moon system could be acquired in a collision of two condensations in circular heliocentric orbits with their total mass being no lower than the mass of Mars. Surville et al. (2016) have concluded that large-scale dust rings, which are then subjected to streaming instability, form after vortex dissipation. The ring mass in their models could be as large as 0.6M_E, and the ring width was on the order of (2-3) × 10^(-3) a, where a is the distance between the ring and the Sun. Such a ring makes the formation and collision of two condensations in relatively close orbits possible. It is mentioned below in Subsection 2.1 that the initial mass of a rarefied condensation producing embryos of the Earth and the Moon may be relatively small (0.01M_E or even smaller) if one takes into account the increase in the angular momentum of embryos associated with the increase in their mass. Since K_s = J_s ω_c, it follows from (1) that the angular velocity of a condensation produced in a collision of two condensations is

ω_c = 2.5 k_Θ χ^(-1) (r_1 + r_2)^2 r^(-2) m_1 m_2 (m_1 + m_2)^(-2) Ω,   (3)

where Ω = (GM_S/a^3)^(1/2) is the angular velocity of motion of the condensation around the Sun. The moment of inertia of the condensation with radius r and mass m is J_s = 0.4χmr^2, where χ characterizes the matter distribution within the condensation (χ = 1 for a homogeneous spherical condensation considered by Nesvorny et al. (2010)). At r_1 = r_2, r^3 = 2r_1^3, m_1 = m_2 = m/2, and χ = 1, ω_c = 1.25 × 2^(1/3) k_Θ Ω ≈ 1.575k_Θ Ω. According to Safronov (1969), the initial angular velocity of a rarefied condensation (relative to its center of mass) is 0.2Ω for a spherical condensation and 0.25Ω for a plane circle. The initial angular velocity is always positive and may be almost an order of magnitude lower than the angular velocity acquired in a collision of condensations.
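As a quick consistency check on formula (3) (a sketch added here, not part of the paper), the coefficient 1.575 for two identical merging condensations of equal density follows directly, and the comparison with Safronov's initial spin 0.2Ω confirms the "almost an order of magnitude" remark:

```python
def omega_c_over_Omega(r1, r2, m1, m2, k_theta=1.0, chi=1.0):
    # Formula (3): spin of the merged condensation in units of the orbital
    # angular velocity Omega; the merged radius assumes equal densities,
    # so that r^3 = r1^3 + r2^3.
    r = (r1**3 + r2**3) ** (1/3)
    return 2.5 * (k_theta / chi) * (r1 + r2)**2 / r**2 * m1 * m2 / (m1 + m2)**2

coef = omega_c_over_Omega(1.0, 1.0, 0.5, 0.5)   # ≈ 1.575 = 1.25 * 2**(1/3)

# Safronov's initial spin of 0.2*Omega is almost an order of magnitude smaller:
factor = coef / 0.2                              # ≈ 7.9
```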
The initial angular velocity of a condensation is insufficient to form a satellite system (see below). The contribution of the initial rotation to angular momentum K_s of the parental condensation may be more significant if the condensation contracted prior to a collision. Let us consider a collision of two identical spherical condensations with masses m_1 and radii equal to k_col r_H (r_H is the Hill radius of the condensation). It is assumed that each of them was initially formed with a radius of k_in r_H and an angular velocity of 0.2Ω. The angular momentum of a spherical condensation produced after a collision is then

K_s ≈ (0.96 k_Θ k_col^2 + 0.077 χ k_in^2) a^(1/2) m_1^(5/3) G^(1/2) M_S^(-1/6).

The contribution of the initial rotation at χ = 1 is larger than that of the collision if k_in/k_col > 2.7 and k_Θ = 0.6 (or k_in/k_col > 3.5 and k_Θ = 1) in this formula. If we consider condensations that are denser at the center (χ < 1), the contribution of the collision to K_s may be larger than that of the initial rotation at k_in/k_col < 3χ^(-1/2). It follows from formula (3) that ω_c is proportional to 2^(2/3) k_r^3 (1 + k_r)^2 (1 + k_r^3)^(-8/3) at r_2 = k_r r_1. Specifically, the values of ω_c at k_r = 0.5 and k_r = 1/3 are lower than the value at k_r = 1 by a factor of approximately 3 and 10, respectively. Let us consider the merger of two colliding condensations of equal densities with masses k_m m and (1 - k_m)m, where 0 < k_m < 1, and an initial angular velocity of k_Ω Ω (the typical k_Ω value is 0.2). The component of the angular momentum of the formed condensation with radius r associated with the initial rotation of colliding condensations is then

K_sin = k_Ω Ω (0.4χm r_in^2) [(1 - k_m)^(5/3) + k_m^(5/3)],

where r_in is the radius of the condensation with mass m and density equal to that of the initial condensations.
The collision-induced component of the angular momentum of the condensation formed with mass m and radius r_col is

K_sc = k_Θ Ω m r_col^2 k_m (1 - k_m) [(1 - k_m)^(1/3) + k_m^(1/3)]^2.

At k_Ω = 0.2, k_Θ = χ = 1, r_col = r_in, and k_m = 1/28, K_sc ≈ 0.8K_sin; therefore, if the condensations did not contract prior to the collision and the ratio of their radii is 3 (the mass ratio is 27), the contribution of the initial rotation to the final angular momentum is slightly larger than the collisional contribution. Thus, in order for the contribution of the collision of condensations to the angular momentum of the parental condensation to be more significant than that of the initial rotation, the radii of colliding condensations should decrease by a factor of no more than three prior to the collision and should differ by a factor of no more than three.

Angular Momentum of a Condensation Formed by Accumulation of Smaller Objects

Condensations and embryos formed from a condensation may grow by accumulating smaller objects. In the models of Drazkowska et al. (2016), planetesimals formed from condensations typically incorporated ~20% of the total amount of solid matter, while the remaining matter of the protoplanetary disk went into smaller objects. A certain fraction of the mass and the angular momentum of the parental rarefied condensation, which had contracted and formed embryos of the Earth-Moon system, could be acquired in the process of accumulation of smaller objects by the parental condensation. Ipatov (1981b, 2000, 2017b) has studied the angular momentum of a condensation for several models of its growth by accumulation of smaller objects.
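The numerical claims about unequal colliding condensations made just above can be verified directly. The snippet below is a check added here (not part of the paper): it evaluates the k_r-scaling of ω_c derived from formula (3) and the K_sc/K_sin ratio for a 27:1 mass ratio:

```python
def omega_rel(k_r):
    # Relative omega_c for radius ratio r2/r1 = k_r at equal densities,
    # normalized so that omega_rel(1) = 1.
    return 2**(2/3) * k_r**3 * (1 + k_r)**2 * (1 + k_r**3)**(-8/3)

f_half = 1 / omega_rel(0.5)      # ≈ 3: omega_c is ~3x lower at k_r = 0.5
f_third = 1 / omega_rel(1/3)     # ≈ 10: omega_c is ~10x lower at k_r = 1/3

def Ksc_over_Ksin(k_m, k_Omega=0.2, k_theta=1.0, chi=1.0):
    # Ratio of the collisional to the rotational angular momentum
    # component for r_col = r_in (the two expressions quoted above).
    K_sin = k_Omega * 0.4 * chi * ((1 - k_m)**(5/3) + k_m**(5/3))
    K_sc = k_theta * k_m * (1 - k_m) * ((1 - k_m)**(1/3) + k_m**(1/3))**2
    return K_sc / K_sin

ratio = Ksc_over_Ksin(1/28)      # ≈ 0.8 for a 27:1 mass ratio (k_m = 1/28)
```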
If radius r of a growing condensation is equal to k_H r_H (k_H is a constant and r_H is the Hill radius of the growing condensation) and the modulus of its tangential velocity component is |v_τ| = 0.6v_c r a^(-1), angular momentum K_s of a condensation with mass m_f, which has grown by accumulating small objects, is written as (Ipatov, 2017b)

K_s ≈ 0.173 k_H^2 G^(1/2) a^(1/2) m_f^(5/3) M_S^(-1/6) ΔK,   (4)

where v_c is the velocity of motion of this condensation in a circular heliocentric orbit with radius a, M_S is the mass of the Sun, and ΔK = K_+ - K_- is the difference between positive K_+ and negative K_- changes in the angular momentum of the condensation after the infall of small celestial objects (K_+ + K_- = 1). The condensation contraction in the process of accumulation of smaller objects was neglected in the derivation of (4). The values of ΔK for various eccentricities and semimajor axes of heliocentric orbits and masses of objects approaching the condensation to within the radius of the considered sphere were given in (Ipatov, 1981a, 1981b) and, in brief, in (Ipatov, 2000). It was found that ΔK ≈ 0.9 at almost circular heliocentric orbits of objects and a condensation radius close to the Hill radius. If the condensation growth from m_o to m_f is considered, m_f^(5/3) in (4) should be replaced by m_f^(5/3) - m_o^(5/3). Formula (4) is valid both for condensations and for solid bodies. Infalling dust particles and bodies could be originally located at different distances from the Sun away from the condensation (the longer the condensation lifetime is, the farther away they could be positioned). Dust particles could migrate toward the condensation under the influence of gravity, radiation pressure, solar wind, and the Poynting-Robertson effect. The motion of bodies is influenced by gravity and the Yarkovsky effect.
Angular Momentum of a Condensation Needed to Form Embryos of the Earth and the Moon

The initial angular velocities of condensations were taken equal to ω_0 = k_ω Ω_0, where Ω_0 = (Gm/r^3)^(1/2) is the circular velocity on the condensation surface, in the calculations of contraction of condensations (with mass m and radius r) in the trans-Neptunian region performed by Nesvorny et al. (2010). The values of k_ω = 0.5, 0.75, 1, and 1.25 and condensation radii equal to 0.4r_H, 0.6r_H, and 0.8r_H, where r_H is the Hill radius of a condensation with mass m, were considered. Note that Ω_0/Ω = 3^(1/2) (r_H/r)^(3/2) ≈ 1.73(r_H/r)^(3/2). If r << r_H, then Ω << Ω_0. In the case of Hill spheres, assuming that angular velocity ω_c ≈ 1.575k_Θ Ω of a condensation formed in a collision of two identical condensations is equal to ω_0, we obtain k_ω ≈ 0.909k_Θ/χ. This implies that one may obtain the values of ω_c = ω_0 corresponding to k_ω up to 0.909 in condensation collisions with k_Θ = χ = 1. In the case of a collision of two condensations the size of their Hill spheres and subsequent contraction of the formed condensation to radius r_c, the angular velocity of the contracted condensation is ω_rc = ω_H (r_H/r_c)^2, where ω_H ≈ 1.575k_Θ Ω. Assuming that ω_0 = k_ω (Gm/r_c^3)^(1/2), we find that ω_rc/ω_0 for such a condensation with radius r_c is proportional to r_c^(-1/2). At r_c/r_H = 0.6, angular momentum K_s of colliding condensations the size of their Hill spheres may correspond to k_ω up to 0.909/0.6^(1/2) ≈ 1.17. In (Nesvorny et al., 2010), binary or triple systems were obtained only at k_ω = 0.5 or 0.75. Thus, it follows that the initial angular velocities of condensations corresponding to the formation of binary systems could be acquired in condensation collisions. Let us compare the angular velocity acquired by a condensation while accumulating smaller objects with the angular velocity ω_0 needed to form a satellite system during condensation contraction.
Comparing K_s = J_s ω_0 (ω_0 = k_ω Ω_0 and J_s = 0.4χmr^2) to the value of K_s calculated using formula (4), we obtain ΔK ≈ 0.8χk_ω (for any r and m). It follows that ΔK at χ = 1 is roughly equal to 0.4, 0.5, and 0.6 at k_ω = 0.5, 0.6, and 0.75, respectively. The variation of the condensation density and χ in the process of accumulation is neglected in these estimates. Since the density may be higher at smaller distances from the condensation center, the typical χ value is lower than unity. The ΔK values are normally lower for colliding objects with higher densities and higher eccentricities of heliocentric orbits (Ipatov, 1981a, 1981b, 2000). The above estimates do not contradict the notion that a condensation growing by accumulating smaller objects could acquire, in certain cases, an angular velocity needed to form a binary system. Since Ω_0/Ω ≈ 1.73(r_H/r)^(3/2), Ω ≈ 0.58Ω_0 at r = r_H, and the initial angular velocity of rotation of a rarefied spherical condensation about its center of mass (Safronov, 1969) is 0.2Ω ≈ 0.12Ω_0. If r << r_H, then Ω << Ω_0. It follows from the above estimates that the angular velocity and the angular momentum of a condensation acquired in the process of its formation from a protoplanetary disk were insufficient to form a satellite system. Galimov and Krivtsov (2012) and Le-Zakharov and Krivtsov (2013) have calculated the gravitational collapse of a condensation with a mass equal to the current mass of the Earth-Moon system (m ≈ 1.01M_E) and radius r ≈ 5.5r_E ≈ 0.023r_H, where r_E is the radius of the Earth. In their two-dimensional calculations, a satellite system formed at an initial angular velocity of condensation rotation ω_0 > 0.64Ω_0, but an average number of two formed clusters was attained at ω_0 ≈ 1.1Ω_0. Galimov and Krivtsov (2012) have considered ω_s = (3π/4)^(1/2) Ω_0 ≈ 1.535Ω_0 instead of Ω_0 used in (Nesvorny et al., 2010).
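Several coefficient identities quoted in this subsection are easy to confirm numerically; the sketch below is a check added here, not part of the paper:

```python
import math

# omega_c ≈ 1.575 k_theta * Omega equated with omega_0 = k_omega * Omega_0,
# using Omega_0/Omega = sqrt(3) at r = r_H:
k_omega = 1.575 / math.sqrt(3)            # ≈ 0.909 (for k_theta = chi = 1)
k_omega_rc = k_omega / math.sqrt(0.6)     # ≈ 1.17 after contraction to 0.6 r_H

# Omega ≈ 0.58 Omega_0 at r = r_H, so Safronov's spin 0.2*Omega ≈ 0.12*Omega_0:
omega_over_omega0 = 1 / math.sqrt(3)      # ≈ 0.58
safronov = 0.2 * omega_over_omega0        # ≈ 0.12

# Galimov and Krivtsov's reference angular velocity omega_s = (3*pi/4)^(1/2) * Omega_0:
omega_s = math.sqrt(3 * math.pi / 4)      # ≈ 1.535

# Delta_K ≈ 0.8 * chi * k_omega at chi = 1 for k_omega = 0.5, 0.6, 0.75:
dK = [0.8 * k for k in (0.5, 0.6, 0.75)]  # ≈ [0.4, 0.48, 0.6]
```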
In the three-dimensional calculations in (Galimov and Krivtsov, 2012), two embryos formed if the angular velocity of condensation rotation fell within the interval from Ω_0 to 1.46Ω_0. At ω_0 < Ω_0, only a central body without satellites formed in most cases, and a considerable fraction of the momentum could be carried away by particles leaving the contracting condensation. In (Nesvorny et al., 2010), satellites formed at lower initial angular velocities (falling within the 0.5Ω_0-0.75Ω_0 range). The discrepancies between the results of these two studies may be attributed to the differences in chaotic velocities of particles/bodies forming condensations, in modeling techniques, and in masses and sizes of the condensations considered. Several satellites could be formed at higher ω_0 values. Galimov and Krivtsov (2012) have considered the evaporation from millimeter particles, which had formed a rarefied condensation, in order to simulate the formation of the Earth-Moon system in the process of contraction of a condensation with the same (as for this system) angular momentum (K_s = K_ΣEM at r ≈ 5.5r_E and ω_0 ≈ 0.12Ω_0) and obtain embryos with low iron abundances (iron was removed partially from the particles during evaporation). In the model with evaporation, two embryos formed at ω_0 ≈ 0.12Ω_0. In the model without evaporation, the angular momentum for a condensation with m = 1.0123M_E and r = 0.023r_H at ω_0 = Ω_0 is roughly eight times higher than that at ω_0 ≈ 0.12Ω_0 (i.e., K_s ≈ 8K_ΣEM). Any angular momentum values used in (Galimov et al., 2005; Galimov and Krivtsov, 2012) could be acquired in collisions of condensations with a total mass lower than the mass of the Earth. In order to acquire the needed angular momentum, the condensation produced in a collision should have a radius larger than r = 0.023r_H (the value used in the calculations of the above authors), although it may be smaller than the Hill radius.
The parental condensation formed in a collision may contract to r = 0.023r_H. As noted in Subsection 1.1, angular momentum K_ΣEM of the current Earth-Moon system could be acquired in a collision of two rarefied condensations (with their radii equal to r_H) in circular heliocentric orbits with total mass m_tot no lower than 0.1M_E. At m_tot ≈ 1.01M_E and condensation radii equal to their Hill radii, the angular momentum may be as large as 50K_ΣEM. Therefore, even if a considerable fraction of the angular momentum is lost in the process of condensation contraction, the angular momenta considered in (Galimov et al., 2005; Galimov and Krivtsov, 2012) still remain feasible. Angular momentum K_si in a collision of two identical condensations with total mass m_f = 1.0123M_E is equal to K_ΣEM at k_H ≈ 0.17 and k_Θ = 0.6 or at k_H ≈ 0.13 and k_Θ = 1. These relations demonstrate that if the major part of the angular momentum of the parental condensation with a mass equal to that of the current Earth-Moon system was acquired in a collision of two identical condensations, their radii were larger than 0.1r_H. This value is higher than the radius of the parental condensation (0.023r_H) examined in (Galimov and Krivtsov, 2012). Therefore, in order to acquire the needed angular momentum, the condensation considered in this study should be the result of the contraction of a larger condensation. At k_H = 0.02, we have K_si = K_ΣEM only at m_f ≈ 13M_E. The above estimates suggest that any initial angular velocities and momenta considered in (Galimov and Krivtsov, 2012; Nesvorny et al., 2010) could be attained after the contraction of a condensation produced in a collision of condensations fitting within their Hill spheres. The radii of initial condensations considered in the modeling of condensation formation are usually comparable to the Hill radii.
The condensation formed after contraction of a larger condensation to a radius of 0.023r H could contain objects larger than the millimeter dust particles examined in (Galimov and Krivtsov, 2012). It was demonstrated in several studies (see, e.g., Johansen et al., 2007) that condensations in the feeding zone of terrestrial planets contained decimeter-sized objects. These objects could have a fractal structure (Kolesnichenko and Marov, 2013; Marov, 2017), and the mechanism of FeO evaporation from their surface corresponded to the one considered in (Galimov and Krivtsov, 2012). Technically, the condensation with a radius of 0.023r H was regarded in the studies of Galimov et al. as the central region of a condensation with a radius of r H . However, the question still remains how such a massive (1.01M E ) small central region of a condensation could form from millimeter particles. In the general case, the initial distance between embryos formed in the process of condensation contraction could be rather large and even close to the Hill radius (the Hill radii for 1.01M E , 0.1M E , and 0.02M E are 235r E , 109r E , and 64r E , respectively). However, the distance between the initial embryos in the megaimpact and multi-impact models and the model of Galimov et al. was small. In the model of condensation growth by accumulation of small objects, the K s value calculated using Eq. (4) at K = 0.9, m f = M E + M M (the sum of current masses of the Earth and the Moon), k H = 1, and a = 1 AU is more than 24.5 times higher than the current angular momentum K ΣEM of the Earth-Moon system (including the moment of axial rotation of the Earth). Since the K s value in Eq. (4) is proportional to m f 5/3 , K s = K ΣEM at m f ≈ 0.15(M E + M M ) and K = 0.9 or at m f ≈ 0.2(M E + M M ) and K = 0.5. The current angular momentum of the Earth-Moon system is positive.
Therefore, an angular momentum equal to K ΣEM for final condensation mass m f ≈ 0.15(M E + M M ) may be acquired at k H = 1 and K = 0.9 with any contribution of collisions of the considered parental condensation with small objects (i.e., with any contribution of the collision of two large condensations) to the angular momentum of the parental condensation that contracted and formed embryos of the Moon and the Earth. The above estimates of the condensation mass needed to form the Earth-Moon system may be reduced if one takes the increase in K s during the growth of embryos of the Moon and the Earth into account. It is theoretically possible that the angular momentum of a condensation needed to form the Earth-Moon system was acquired through the accumulation of small objects by a condensation with final mass m f > 0.15M E , but we believe that the collision of large condensations produced the dominant contribution to the angular momentum of the parental condensation. If this were not the case, the parental condensations of Venus and Mars could also acquire angular momenta sufficient to form large satellites. It is likely that the Earth differed from other terrestrial planets in that the condensations contracting to form embryos of these planets did not collide with massive condensations and thus did not acquire angular momenta required to form massive satellites. The collision of condensations producing a condensation with an angular momentum sufficient to form an embryo of a planet with a massive satellite could occur only in the evolutionary history of the Earth. The accumulation of small objects fails to account for the current tilts (~23°-25°) of the rotation axes of the Earth and Mars, since the tilt of the rotation axis of a condensation (and a solid planetary embryo) is near-zero in the case of accumulation of small objects.
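As a consistency check (ours, not the authors' code), the masses at which K s equals the current angular momentum follow from the m f 5/3 scaling of Eq. (4), the proportionality of K s to K, and the 24.5-fold excess quoted for K = 0.9 at m f = M E + M M (taken below as the mass unit):

```python
# K_s / K_EM = 24.5 * (K / 0.9) * m_f**(5/3), with m_f in units of M_E + M_M;
# setting the ratio to 1 and solving for m_f:
def m_f_for_current_momentum(K):
    return (0.9 / (24.5 * K)) ** (3.0 / 5.0)

print(round(m_f_for_current_momentum(0.9), 2))  # ~0.15
print(round(m_f_for_current_momentum(0.5), 2))  # ~0.21, close to the quoted 0.2
```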
The larger the contribution of small objects to the formation of the parental condensation for the Earth-Moon system, the lower the possible masses of condensations in the primary collision. It would be instructive to determine, using the model of condensation formation, the maximum masses of two condensations that are both located at a distance of ~1 AU from the Sun and differ in mass by no more than an order of magnitude.

GROWTH OF SOLID EMBRYOS OF THE EARTH AND THE MOON

Relative Variations of Masses of Embryos of the Earth and the Moon When Accumulating Planetesimals

In the present subsection, the relative growth of embryos of the Earth and the Moon while accumulating planetesimals (or any other objects falling within the Hill sphere of the Earth) is compared. The increase in mass of a celestial body is proportional to the square of the effective radius r eff (the area of a circle with a radius equal to effective radius r eff ). Effective radius r eff is the impact parameter at which a planet (celestial body) is reached. It is written as

r eff = r·(1 + (v p /v r ) 2 ) 1/2 , (5)

where v p is the parabolic velocity on the planetary surface and v r is the relative velocity at infinity (Okhotsimskii, 1968, pp. 36-37). If v r > v p (e.g., for comets infalling onto the Earth from highly eccentric orbits), r eff is close to r. If the relative velocities are low and (v p /v r ) 2 is much larger than 1, r eff is close to r(v p /v r ), where v p = (2Gm/r) 1/2 and m is the mass of a planet with radius r. Then, r eff is close to r(v p /v r ) = (2Gmr) 1/2 /v r . The above estimates of the relative increase in masses of embryos were obtained in the model where the distance between these embryos is larger than the Hill sphere of the Earth embryo. Owing to the gravitational influence of the Earth embryo, the probability of a collision between a planetesimal and the Moon embryo increases as the distance between the embryos gets shorter.
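Eq. (5) is easy to exercise numerically; the following sketch (our own helper, with standard Earth values assumed purely for illustration) shows the two limiting regimes discussed above:

```python
import math

# Sketch of Eq. (5) for the effective (gravitationally focused) radius; the
# function name and the Earth parameters below are ours, not from the paper.
G = 6.674e-11  # m^3 kg^-1 s^-2

def effective_radius(r, m, v_rel):
    """r_eff = r * (1 + (v_p / v_rel)**2)**0.5, with v_p = (2*G*m/r)**0.5 the
    parabolic (escape) velocity on the surface of a body of mass m, radius r."""
    v_p = math.sqrt(2.0 * G * m / r)
    return r * math.sqrt(1.0 + (v_p / v_rel) ** 2)

r_E, m_E = 6.371e6, 5.972e24  # Earth radius (m) and mass (kg)
# Fast impactors (v_rel >> v_p): r_eff is close to r.
print(effective_radius(r_E, m_E, 50000.0) / r_E)  # ~1.02
# Slow impactors: r_eff approaches r * v_p / v_rel = (2*G*m*r)**0.5 / v_rel.
print(effective_radius(r_E, m_E, 1000.0) / r_E)   # ~11.2
```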
This factor enhances the relative increase in mass of the Moon embryo in the case of a short distance between the embryos and makes the conclusions regarding the iron abundance in growing embryos (see below) even more well-founded. If r eff is close to r, dm/dt is proportional to r 2 , i.e., to ρ -2/3 m 2/3 (since r is proportional to (m/ρ) 1/3 , r 2 is proportional to (m/ρ) 2/3 ); in this case, the coefficient in the corresponding relation (6) between the embryo masses is k 1 = k d 2/3 . If r eff is proportional to r 2 , the embryo of the Earth grows faster than the Moon embryo. For example, the mass of the Moon embryo increases by a factor of 2 at k d = 1 and 2.3 at k d = 1.65, while the mass of the Earth embryo increases by a factor of 10. It is conceivable that the effective radii of the proto-Earth and the proto-Moon were proportional to r at sufficiently high eccentricities of planetesimal orbits. The growth of m M /m Mo then outpaces the m E /m Eo growth. The above models demonstrate that the relative growth of embryos of the Earth and the Moon depended to a considerable extent on the eccentricities of orbits of infalling planetesimals.

Increase in the Angular Momentum of Embryos of the Earth-Moon System

The total angular momentum K Σ of embryos of the Earth and the Moon increased as they grew. This momentum included the angular momenta of embryos relative to their centers of mass and angular momentum K ME of embryos of the Earth and the Moon relative to their common center of mass. The growth of K Σ was influenced by many factors. The current angular momentum of the Earth and the Moon relative to their centers of mass amounts to 17% of the total angular momentum of the system (Barr, 2016). According to Eq. (2), the angular momentum of a condensation with mass m c formed in a collision of two identical condensations which moved before their collision in circular heliocentric orbits is proportional to m c 5/3 . Since, according to Eq. (4), the angular momentum acquired by a planet growing under the infall of bodies is also proportional to the 5/3 power of its mass, the angular momentum of a planet grown from an embryo with mass m co should not depend on m co .
Angular momentum K ME of embryos of the Earth and the Moon relative to their common center of mass may be written as K ME = (r EM G) 1/2 m M m E (m E + m M ) -1/2 , where r EM is the distance between the embryos. At r EM = const and m E >> m M , K ME is proportional to m M ·m E 1/2 . It was obtained above at k d = 1.65 and r eff proportional to r 2 that the mass of the Moon embryo increases by a factor of 2.3, while the mass of the Earth embryo increases by a factor of 10. Then, if m E increases 10 times, m M ·m E 1/2 increases by a factor of 7.6 ≈ 10 0.88 . Let us assume that this growth of m M ·m E 1/2 by a factor of m E 0.88 also takes place at other m E values. It was already noted that K Σ may be equal to current angular momentum K ΣEM of the Earth-Moon system at m EM = m E + m M ≈ 0.1(M E + M M ). Let us consider a model with K ME being the major part of angular momentum K Σ of the system. According to Eq. (2), K Σ = (α/0.1) 5/3 K ΣEM for the initial embryos with total mass α(M E + M M ). Assuming that K Σ increased by a factor of α -0.88 and became equal to K ΣEM during planetesimal accumulation and the increase in the mass of embryos from α(M E + M M ) to M E + M M , we obtain the following: (α/0.1) 5/3 α -0.88 = 1. From this relationship we get α ≈ 0.0078. Therefore, the initial mass of a condensation producing embryos of the Earth and the Moon could theoretically be lower than 0.01(M E + M M ). This estimate was obtained while modeling the increases in the masses of embryos without regard for the sources of infalling bodies and remains valid in the case of infall of matter ejected from the Earth embryo onto the Moon embryo. Since a fraction of the matter escapes in the process of condensation contraction and the distance between the initial embryos was originally shorter than the Earth-Moon distance, the larger estimate of the mass of the initial condensation is more probable.
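The closed-form solution of the relation above can be checked directly (a verification sketch of ours, not from the paper):

```python
# (alpha/0.1)**(5/3) * alpha**(-0.88) = 1  =>  alpha**(5/3 - 0.88) = 0.1**(5/3)
# =>  alpha = 0.1**((5/3) / (5/3 - 0.88)).
alpha = 0.1 ** ((5.0 / 3.0) / (5.0 / 3.0 - 0.88))
print(round(alpha, 4))  # ~0.0076, matching the quoted ~0.0078 up to rounding
```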
The estimates of mass of the original parental condensation increase when one takes into account the fact that the infall of planetesimals onto the embryos primarily occurred at an interembryo distance shorter than the current Earth-Moon distance. On the other hand, it is often assumed that the axial rotation period of the Earth was shorter in the past; therefore, its angular momentum was larger. The consideration of this factor reduces the contribution of the K ME growth induced by the accumulation of planetesimals by the embryos.

Variation of the Iron Abundance in the Growing Embryos of the Earth and the Moon under the Infall of Planetesimals

According to (Galimov et al., 2005; Galimov, 2011; Galimov and Krivtsov, 2012), the original embryos of the Earth and the Moon produced as a result of condensation contraction contained a relatively small amount of iron, and the Earth, which grew faster due to the accumulation of dust, acquired more iron than the Moon. In the present subsection, we consider the growth of iron-depleted embryos of the Earth and the Moon induced exclusively by the infall of planetesimals. A simple supplementary model may be used to estimate the maximum increase in mass m M of the Moon embryo. According to this model, the initial embryos contained no iron, while the infalling matter contained 33% Fe. The abundance of iron on the Moon is then 0.33(1 - m r ), where m r is the ratio of the initial mass of the Moon embryo to the current mass of the Moon. Assuming that the current iron abundance on the Moon is 8% (Barr, 2016), we obtain m r = 0.76 and a 1.3-fold growth of the Moon embryo from the 0.33(1 - m r ) = 0.08 relation. This estimate agrees with those reported in (Galimov and Krivtsov, 2012), where the Moon embryo grew by a factor of 1.31 as the mass of the Earth embryo increased by a factor of k E = 26.2. With these estimates of Galimov and Krivtsov, increment dm of embryo mass m is proportional to m 2 (i.e., r eff is proportional to r 3 ).
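The arithmetic of this auxiliary iron-budget model is a one-liner; a check of the quoted numbers (ours, not the authors' code):

```python
# Initially iron-free Moon embryo accreting matter with 33% Fe up to a final
# lunar Fe abundance of 8%:  0.33 * (1 - m_r) = 0.08.
m_r = 1.0 - 0.08 / 0.33          # initial-to-final mass ratio of the embryo
growth = 1.0 / m_r               # growth factor of the Moon embryo
print(round(m_r, 2), round(growth, 2))  # ~0.76 and ~1.32 (a ~1.3-fold growth)
```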
In the considered auxiliary model, the iron abundance on the Earth estimated at k E = 26.2 is 0.33(1 - 1/26.2) = 0.317, which is close to the actual value of 32%. According to Galimov and Krivtsov (2012), the concentration of iron in dust particles after evaporation of 40% of their matter is close to the iron abundance on the Moon. If this is the case, the current iron abundance is reproduced only if the Moon embryo did not accumulate any planetesimals at all, and it follows from the 0.33(1 - m Eo /M E ) + 0.08m Eo /M E = 0.32 relation that mass m Eo of the initial Earth embryo with 8% Fe was 0.04M E . With these estimates, the mass of the Earth embryo grows by a factor of 25 while the mass of the Moon remains unchanged. In order to obtain the current iron abundance on the Earth and the Moon with a nonzero iron concentration in the initial embryos, increment dm of the embryo mass should be proportional to m p , where p > 2. The calculations of increases in the masses of embryos of the Earth and the Moon in (Galimov and Krivtsov, 2012; Vasil'ev et al., 2011) were performed in the model where the particle velocities were zero at the boundaries of a cylinder in the Hill sphere of the larger embryo. The embryos formed in the process of contraction of a condensation with a radius of 5.5 Earth radii and a mass of 1.023M E grew by accumulating iron-enriched material in the outer region of the condensation that remained within the Hill sphere after the formation of the initial embryos. Thus, the total mass of material within the Hill sphere in the model of Galimov et al. should be considerably larger than M E (with the inner and outer regions of the condensation both having a mass on the order of M E ). In this model, particles forming a condensation with a radius of 5.5 Earth radii were depleted in iron, while the other particles in the Hill sphere were iron-enriched.
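The mixing relation quoted above indeed solves to m Eo = 0.04M E; a quick check (ours):

```python
# 0.33 * (1 - x) + 0.08 * x = 0.32, with x = m_Eo / M_E the mass of an initial
# Earth embryo (8% Fe) accreting 33%-Fe planetesimals up to 32% Fe overall:
x = (0.33 - 0.32) / (0.33 - 0.08)
print(round(x, 2), round(1.0 / x))  # 0.04 and the quoted 25-fold growth
```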
According to (Galimov and Krivtsov, 2012), the total mass of initial embryos needed to reproduce the current iron abundance on the Earth and the Moon is 0.047M E = M E /26.2 + M E /(1.31 × 81.3). The calculations of contraction of the condensation with a radius of 5.5 Earth radii were performed in this study for a condensation mass of 1.023M E . Galimov et al. have not indicated where 95% of iron-depleted material from the inner part of the condensation had gone before the embryos started growing by accumulating matter from the outer part of the condensation. Let us discuss several modifications of the model of Galimov et al. that may rectify some of the above drawbacks. In order to form small embryos, one may calculate the contraction of a less massive condensation (with a mass considerably smaller than M E ) so as to obtain embryos that incorporate almost all iron-depleted material. In our view, the calculations of migration of particles toward embryos from the cylinder surface within the Hill sphere of the Earth embryo in (Galimov and Krivtsov, 2012; Vasil'ev et al., 2011) may also be interpreted as the inflow of matter from outside the Hill sphere. However, in the case of matter inflow from outside the Hill sphere, the considered model with zero relative velocities is hardly relevant. Owing to the condensation rotation, the particle velocities were nonzero even inside the condensation. If the velocities were zero, the particles would very quickly (according to (Wahlberg Jansson and Johansen, 2014), the free-fall time is 25 years) reach the center of the condensation and would not "wait" for the embryos to form in the hot inner part of the condensation, where particle evaporation alone took tens of thousands of years (Galimov and Krivtsov, 2012).
The higher the relative velocities of particles on their entry into the Hill sphere, the smaller the difference in the relative growth of mass for embryos of the Earth and the Moon (the smaller the relative growth of the Earth embryo). The relative probability of infall of particles onto the Earth embryo (compared to the Moon embryo) obtained in (Galimov and Krivtsov, 2012; Vasil'ev et al., 2011), which is higher than that for planetesimals, is a consequence of the assumed zero relative particle velocities. The relative probability should be even higher to ensure the abovementioned dm growth proportional to m p at p > 2. For this growth to be sustained, one may assume that the ejection of matter from the surface of the Moon embryo in the process of infall of planetesimals almost halted its growth. It is also not implausible that the contraction of the central part of the condensation, which resulted in the formation of embryos, was accompanied by the contraction of the entire condensation with the size of the Hill sphere. Galimov and Krivtsov (2012) have noted that the rapid contraction of the central (r < 5.5r E ) part of the condensation was hindered by the high temperature in this region. The outer (r > 5.5r E ) part was assumed to be cooler. Thus, it seems that the temperature in the outer part of the condensation should be less of an obstacle to contraction than in the central part, and a question then arises as to how the relatively cool outer part of the condensation could survive. No studies into the formation of condensations with masses no lower than the Earth mass in the region of terrestrial planets have been published yet. Therefore, the question of the possibility of the formation of massive condensations considered in (Galimov and Krivtsov, 2012; Vasil'ev et al., 2011) remains open.
A fraction of the condensation material, which was not incorporated into the embryos, could leave the Hill sphere during condensation contraction, thus increasing the mass of the parental condensation. A considerable amount of matter possibly falling onto embryos of the Earth and the Moon could be present outside the Hill sphere of the parental condensation. The objects (e.g., planetesimals) falling onto embryos of the Earth and the Moon in the model considered in Subsection 2.1 originated from outside the Hill sphere. In the calculations of Ipatov (1993, 2000), the mean eccentricities of planetesimal orbits in the feeding zone of terrestrial planets exceeded 0.2 (and then 0.3) at certain stages of evolution. At such eccentricities, parameter p was below 4/3. The dependence of r eff on r may vary from r to r 2 (depending on eccentricities) during accumulation of planetesimals entering the Hill spheres of embryos from the outside. In these extreme cases, dm is proportional to m 2/3 or m 4/3 , respectively, and the ratio of masses of embryos of the Moon and the Earth (m M /m E ) increases faster with m E than in (Galimov and Krivtsov, 2012; Vasil'ev et al., 2011). If r eff is proportional to r 2 (the case of low-eccentricity planetesimal orbits), a 1.3-fold increase in the mass of the Moon embryo (to the current mass of the Moon) corresponds, according to Eq. (6), to a 2.4-fold and 2.7-fold (to M E ) increase in the mass of the Earth embryo at k d = 1.65 and k d = 1, respectively. In the model with no iron in the initial embryos, the abundance of iron on the Earth does not exceed 0.33(1 - 1/2.7) ≈ 0.21 (i.e., is lower than the current level). With this m E growth, the concentration of iron in planetesimals should be no lower than 0.32/(1 - 1/2.7) = 0.32 × 2.7/1.7 ≈ 0.5 (which is infeasible) in order to reproduce the current iron abundance (32%) on the Earth.
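Both bounds quoted in this paragraph check out numerically; a sketch (ours):

```python
# A 2.7-fold growth of an initially iron-free Earth embryo on 33%-Fe
# planetesimals gives a final Fe abundance of 0.33 * (1 - 1/2.7); conversely,
# reaching the terrestrial 32% would require planetesimals with an Fe
# concentration of 0.32 / (1 - 1/2.7).
fe_final = 0.33 * (1.0 - 1.0 / 2.7)
fe_needed = 0.32 / (1.0 - 1.0 / 2.7)
print(round(fe_final, 2), round(fe_needed, 2))  # ~0.21 and ~0.51
```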
If the concentration of iron in the initial embryos is nonzero, the current iron abundance on the Moon is attained as the mass of the Moon embryo increases by a factor smaller than 1.3. Even at an iron concentration of 8% in the Earth embryo, the iron abundance does not exceed 0.24 after the mass of the embryo increases 2.4-2.7-fold. The above estimates suggest that it is hard to reproduce the current iron abundances on the Earth and the Moon at any initial Fe concentrations in the initial embryos growing exclusively through the accumulation of solid planetesimals (without matter ejection on impacts). In order to obtain the current iron abundances at a nonzero (although low) iron concentration in the initial embryos, the increase dm in the embryo mass should be proportional to m p , where p > 2. Is it possible (excluding the case of matter ejection from embryos)? Parameter p for the motion of solid bodies in a gas-free medium does not exceed 4/3. If the gas drag is taken into account and/or dust particles are considered, the value of p may vary. Levison et al. (2015) believed that planetesimals grew immediately after their formation by accumulating pebble-sized bodies in the presence of gas. The growth of embryos in the presence of gas may be examined using the formulas presented in (Ormel and Klahr, 2010; Levison et al., 2015; Chambers, 2017). According to Levison et al. (2015), the cross section for the capture of bodies by an embryo is given by

S = 4πG·m e t s v rel -1 exp(-ξ), (7)

where ξ = 2[t s v rel 3 /(4Gm e )] 0.65 , m e is the mass of the embryo, v rel is the velocity of the body relative to the embryo, and t s is the stopping time due to aerodynamic drag. It follows from Eq. (7) that the relative increase in the mass of embryos of the Earth and the Moon in unit time is proportional to m rEM exp(-ζ) (but depends also on the values of t s and v rel , which differ from one embryo to the other), where ζ = m rEM -0.65 , and m rEM is the ratio of masses of embryos.
At m rEM = 10 and 30, m rEM exp(-ζ) assumes a value of 8 and 27, respectively. Note that at m rEM > 5, 0.7 < exp(-ζ) < 1 and m rEM exp(-ζ) is close to m rEM . The relative increase in the mass of the Earth embryo is then no larger than in the model of infall of solid bodies in the case of low relative velocities with r eff 2 proportional to m 4/3 , which was considered above (in the beginning of Subsection 2.1). The results reported in (Hughes and Boley, 2017) demonstrate that the influence of gas on the effective cross section of an embryo depends on the size of infalling objects, the size and the density of the embryo, and on the distance from the embryo to the Sun. In the calculations performed in this study for a distance of 1 AU from the Sun, objects ~0.3 cm in size were captured most efficiently in a gas. The ratio of the effective radius to the radius of the object for smaller particles in a gas may be considerably lower than the ratio in formula (5) for a gas-free medium, since such particles in a gas flow around the embryo. Table 2 from (Hughes and Boley, 2017) shows that the increase in the embryo mass attributable to the accumulation of larger objects differs by a factor of no more than two from the mass gain at such object sizes as ensure the maximum growth. Additional studies, which would include, among other things, the mass distribution of particles and other small objects, are needed in order to draw certain conclusions regarding the relative growth of embryos of the Earth and the Moon through the accumulation of small objects moving in a gas. If we consider the infall of a large number of relatively small bodies onto these embryos, they could acquire matter with roughly the same isotopic composition.
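The quoted values of m rEM exp(-ζ) are easily verified; a sketch (ours, ignoring the differences in t s and v rel between the two embryos, as the text notes):

```python
import math

# Relative pebble-accretion rate ratio following from Eq. (7):
# m_rEM * exp(-zeta) with zeta = m_rEM**(-0.65).
def relative_rate(m_rEM):
    return m_rEM * math.exp(-(m_rEM ** -0.65))

print(round(relative_rate(10.0), 1))  # ~8.0
print(round(relative_rate(30.0), 1))  # ~26.9, i.e. about 27
```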
Growth of the Embryo of the Moon Induced by the Infall of Matter Ejected from the Embryo of the Earth

When bombarded by planetesimals, the embryo of the Moon, which was originally located closer to the embryo of the Earth, could grow primarily by accumulating iron-depleted matter of the crust and the mantle of the Earth ejected from the surface of the Earth embryo in its collisions with planetesimals and smaller bodies. This source of growth of the Moon embryo does not impose any significant constraints on the initial masses of embryos of the Moon and the Earth and their parental condensation. This model does not require the initial embryos of the Moon and the Earth to be depleted in iron. The ratio of contributions of infalling planetesimals and the matter ejected from the Earth embryo to the growth of the Moon embryo depends on the results of collisions of planetesimals with the embryos and on the distance between the embryos. Since the velocities of collisions with the Moon embryo are lower for the bodies ejected from the Earth embryo, the probability of their capture is higher than the probability of capture of directly infalling bodies. If the iron abundance in the initial Moon embryo and in planetesimals was 0.33 and the iron abundance in the crust of the Earth and on the Moon was 0.05 and 0.08, respectively, fraction k E of matter of the Earth crust in the Moon should be ~0.9 (this follows from the relation 0.05k E + 0.33(1 - k E ) = 0.08). Therefore, in order to reproduce the current iron abundance on the Moon, the amount of matter ejected from the Earth embryo and accumulated by the Moon embryo should be an order of magnitude larger than the sum of the total mass of planetesimals falling onto the Moon embryo and the initial mass of the Moon embryo formed from the parental condensation (if the iron abundance in the initial embryo was the same as in planetesimals).
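The crust fraction k E follows directly from the mixing relation above; a check (ours):

```python
# Lunar Fe abundance (8%) as a mix of Earth-crust matter (5% Fe) and
# planetesimal/initial-embryo matter (33% Fe):
#   0.05 * k_E + 0.33 * (1 - k_E) = 0.08.
k_E = (0.33 - 0.08) / (0.33 - 0.05)
print(round(k_E, 2))  # ~0.89, i.e. about 0.9
```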
The estimated fraction of matter of the Earth's crust in the Moon decreases as the mass of the Moon embryo formed in the process of condensation contraction increases and as the concentration of iron in it decreases. The considered approach with the initial embryos of the Moon and the Earth forming from a common parental condensation differs considerably from the one used in (Ringwood, 1989; Vityazev and Pechernikova, 1996; Gorkavyi, 2004, 2007; Svetsov et al., 2012; Citron et al., 2014; Rufu and Aharonson, 2015, 2017), where the formation and growth of the Moon embryo primarily through the accumulation of matter of the Earth's crust ejected from the Earth embryo in multiple collisions with bodies from the protoplanetary disk was considered. In the model used in the present study, both embryos formed from the same condensation. The subsequent growth of embryos of the Moon and the Earth formed in the process of contraction of the parental condensation was the same as in the multi-impact model. Matter incorporated into the Moon embryo could be ejected from the Earth in multiple collisions between planetesimals (and smaller bodies) and the Earth, while Rufu and Aharonson (2017) considered only ~20 massive collisions. Objects ejected from the Earth embryo in collisions with other objects were more likely to be incorporated into the large Moon embryo than to merge with similar low-mass objects. Therefore, the presence of the large Moon embryo formed during the contraction of a condensation made the formation of a larger (compared to the case of formation exclusively from matter ejected from the Earth) satellite of the Earth possible. This is the likely reason why Venus lacks a satellite. The parental condensations of embryos of Venus, Mars, and Mercury did not acquire an angular momentum sufficient to form a large satellite. Planetesimals falling onto Venus and the Earth had approximately the same mass and velocity distributions.
Matter was also ejected from the surface of Venus after its collisions with these planetesimals, but no satellite was formed from this matter. The masses of impactors in the above-cited studies focused on the multi-impact model did not exceed 0.1M E . Collisions of the proto-Earth with impactors with masses below 0.1M E were considered in the megaimpact model even before the calculations within the multi-impact model. The needed angular momentum at such masses may be acquired in the megaimpact model only in a grazing collision with the protolunar disk formed primarily from the impactor material and containing a considerable amount of iron (Canup, 2012). Trying to reproduce the composition of the Moon within the megaimpact model, Cuk and Stewart (2012) have considered an impactor with a mass below 0.1M E , an almost head-on collision, and a pre-collision axial rotation period of the proto-Earth of 2.3 h, while Canup (2012) has modeled an impactor with a mass of 0.4M E -0.5M E . Since the formation of the Moon in a single collision is not required in the multi-impact model, a rapidly rotating proto-Earth or a very massive impactor are also not needed to reproduce the composition of the Moon. Two condensations, which had collided and formed the condensation that contracted to produce embryos of the Moon and the Earth, could move in different planes around the Sun prior to the collision. Therefore, the orbital plane of the Moon embryo was not necessarily aligned with the ecliptic plane. Note that the angle between the Moon's orbit and the ecliptic plane is 5.1°. A single collision (or a series of collisions) between the solid Earth embryo and a massive object or an additional collision of condensations are needed for the rotation axis of the Earth to acquire the current tilt.
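The impactor mass needed for the current tilt, estimated with the relation of Ipatov (1981b) quoted in the text, can be reproduced numerically; a sketch (ours), with standard values assumed for the Earth's radius, mass, and G, since they are not given in this excerpt:

```python
import math

# Check of the tilt relation m_I/m_pl = 2.5*r_pl*chi*tan(I) /
# (alpha*v_par*T_pl*sqrt(1 + tan(I)**2)) at chi = alpha = 1, T_pl = 24 h,
# I = 23.44 deg.
G = 6.674e-11          # m^3 kg^-1 s^-2
r_pl = 6.371e6         # m, Earth radius (assumed standard value)
m_pl = 5.972e24        # kg, Earth mass (assumed standard value)
T_pl = 24.0 * 3600.0   # s
v_par = math.sqrt(2.0 * G * m_pl / r_pl)  # parabolic (escape) velocity, ~11.2 km/s
tan_I = math.tan(math.radians(23.44))

m_ratio = 2.5 * r_pl * tan_I / (v_par * T_pl * math.sqrt(1.0 + tan_I ** 2))
print(round(m_ratio, 4))  # ~0.0066, close to the quoted ~0.0065
```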
A collision of condensations could contribute to this tilt if, at the moment of this collision (after the parental condensation had split into two components), the radius of the condensation producing the Earth embryo was shorter than the semi-major axis of the orbit of the condensation that produced the Moon embryo and moved around the Earth embryo. Ipatov (1981b) has demonstrated that if the vector of the angular momentum of a planet with respect to its center of mass is perpendicular to its orbital plane prior to the collision with an impactor and the vector of the gain in the angular momentum in the collision is perpendicular to the vector of this momentum, the ratio of masses of the impactor and the planet is

m I /m pl ≈ 2.5r pl χ tan I / (αv par T pl (1 + tan 2 I) 1/2 ),

where r pl is the radius of the planet, T pl is the period of its axial rotation, αv par is the tangential component of the collision velocity, v par is the parabolic velocity on the planetary surface, and tan I is the tangent of angle I between the axis of rotation of the planet after the collision and the normal to the orbital plane of the planet. At χ = α = 1, T pl = 24 h, and I = 23.44°, we obtain m I /m pl ≈ 0.0065. Thus, the current tilt of the rotation axis of the Earth could be acquired in a collision with an impactor with a mass of ~0.01M E . The considered model may also be applicable to the formation of an exoplanet with a large satellite.

CONCLUSIONS

The angular momentum of a parental condensation needed to form embryos of the Earth and the Moon could mostly be acquired in a collision of two rarefied condensations producing the parental condensation. The angular momentum of the Earth-Moon system could be acquired in a collision of two condensations in circular heliocentric orbits with their total mass being no lower than the mass of Mars.
With the subsequent growth of embryos of the Moon and the Earth taken into account, the total mass of embryos, which had formed in the process of contraction of the parental condensation, needed to reach the current angular momentum of the Earth-Moon system could be below 0.01 of the Earth's mass. For the low lunar iron abundance to be reproduced with the growth of originally iron-depleted embryos of the Moon and the Earth just by the accretion of planetesimals, the mass of the lunar embryo should have increased by a factor of 1.3 at the most. The maximum increase in the mass of the Earth embryo due to the accumulation of planetesimals in a gas-free medium is then threefold, and the current terrestrial iron abundance is not attained. If the embryos are assumed to have grown just by accumulating solid planetesimals (without the ejection of matter from the embryos), it is hard to reproduce the current lunar and terrestrial iron abundances at any initial abundance in the embryos. In order to obtain the current iron abundance on the Earth and the Moon with a certain concentration of iron in the initial embryos, increment dm of the embryo mass should be proportional to m p , where p ≥ 2. Parameter p for the motion of solid bodies in a gas-free medium does not exceed 4/3. In order to reproduce the current iron abundance on the Moon, the amount of matter ejected from the Earth embryo and accumulated by the Moon embryo should be an order of magnitude larger than the sum of the total mass of planetesimals falling onto the Moon embryo and the initial mass of the Moon embryo formed from the parental condensation (if the iron abundance in the initial embryo was the same as in planetesimals). The greater part of matter incorporated into the Moon embryo could be ejected from the Earth in its multiple collisions with planetesimals (and smaller bodies). 
Here k_d = ρ_E/ρ_M is the ratio of the density of the growing Earth with mass m_E to the density of the growing Moon with mass m_M (k_d ≈ 1.65 for the current densities of the Earth and the Moon), and m_Eo and m_Mo are the initial masses of embryos of the Earth and the Moon, respectively. If m_M = 0.0123 m_E, m_Eo = 0.1 m_E, and m_E = M_E, relation (6) holds true at k_d = 1 and m_Mo = 0.00605 M_E, and at k_d = 1.65 and m_Mo = 0.0054 M_E.

It follows from Eq. (4) that the growth of the angular momentum of a planet with mass m_pl under the infall of bodies with velocities proportional to the parabolic velocity on the planetary surface is proportional to m…

… have demonstrated that their model of formation of the Earth and the Moon agrees with geochemical data. For example, the formation of the Earth and the Moon from a common condensation explains why the O isotope composition (16O/17O/18O) and the 53Cr/52Cr, 46Ti/47Ti, and 182W/184W ratios are the same on the Moon and the Earth. In view of …

ACKNOWLEDGMENTS

This study was supported financially by the Russian Science Foundation, project no. 17-17-01279 (formation and growth of embryos of the Earth and the Moon), and program no. 28 of the Presidium of the Russian Academy of Sciences under state assignment no. 00137-2018-0033 to the Vernadsky Institute of Geochemistry and Analytical Chemistry (research into the angular momentum of colliding celestial bodies). The author wishes to thank Academician M.Ya. Marov for his interest in the research and helpful suggestions, which have improved the manuscript, and Academician E.M. Galimov for a detailed account of his model.

REFERENCES

Barr, A.C., On the origin of Earth's Moon, J. Geophys. Res. Planets, 2016, vol. 121, pp. 1573-1601. https://arxiv.org/pdf/1608.08959.pdf
Beletskii, V.V. and Grushevskii, A.V., Structure of celestial bodies' rotary motions and Solar system formation, Preprint of Keldysh Institute of Applied Mathematics RAS, 1991, no. 103, 32 p., in Russian.
Cameron, A.G.W. and Ward, W.R., The origin of the Moon, Lunar Planet. Sci. Conf., 1976, vol. 7, pp. 120-122.
Canup, R.M., Simulations of a late lunar-forming impact, Icarus, 2004, vol. 168, no. 2, pp. 433-456.
Canup, R.M., Forming a Moon with an Earth-like composition via a giant impact, Science, 2012, vol. 338, pp. 1052-1055.
Canup, R.M. and Asphaug, E., Origin of the Moon in a giant impact near the end of the Earth's formation, Nature, 2001, vol. 412, no. 6848, pp. 708-712.
Canup, R.M., Barr, A.C., and Crawford, D.A., Lunar-forming impacts: High-resolution SPH and AMR-CTH simulations, Icarus, 2013, vol. 222, pp. 200-219.
Chambers, J., Steamworlds: atmospheric structure and critical mass of planets accreting icy pebbles, Astrophys. J., 2017, vol. 849, art. ID A30, 12 p.
Citron, R.I., Aharonson, O., Perets, H., and Genda, H., Moon formation from multiple large impact, Proc. 45th Lunar and Planet Sci. Conf., The Woodlands, TX, 2014, abs. #2085.
Clery, D., Impact theory gets whacked, Science, 2013, vol. 342, no. 6155, pp. 183-185.
Cuk, M. and Stewart, S.T., Making the Moon from a fast-spinning Earth: A giant impact followed by resonant despinning, Science, 2012, vol. 338, pp. 1047-1052.
Cuk, M., Hamilton, D.P., Lock, S.J., and Stewart, S.T., Tidal evolution of the Moon from a high-obliquity, high-angular-momentum Earth, Nature, 2016, vol. 539, pp. 402-406.
Cuzzi, J.N., Hogan, R.C., and Sharif, K., Toward planetesimals: dense chondrule clumps in the protoplanetary nebula, Astrophys. J., 2008, vol. 687, pp. 1432-1447.
Cuzzi, J.N., Hogan, R.C., and Bottke, W.F., Towards initial mass functions for asteroids and Kuiper belt objects, Icarus, 2010, vol. 208, pp. 518-538.
Cuzzi, J.N. and Hogan, R.C., Primary accretion by turbulent concentration: The rate of planetesimal formation and the role of vortex tubes, Proc. 43rd Lunar and Planetary Sci. Conf., The Woodlands, TX, 2012, abs. #2536.
Drazkowska, J., Alibert, Y., and Moore, B., Close-in planetesimal formation by pile-up of drifting pebbles, Astron. Astrophys., 2016, vol. 594, art. ID 105, 12 p.
Elkins-Tanton, L.T., Planetary science: Occam's origin of the Moon, Nature Geosci., 2013, vol. 6, no. 12, pp. 996-998.
Galimov, E.M., Earth-Moon system: origin problem, in Problemy zarozhdeniya i evolyutsii biosfery (Problems on Biosphere Origin and Evolution), Galimov, E.M., Ed., Moscow: Nauka, 1995, pp. 8-45, in Russian.
Galimov, E.M., Krivtsov, A.M., Zabrodin, A.V., Legkostupov, M.S., Eneev, T.M., and Sidorov, Yu.I., Dynamic model for the formation of the Earth-Moon system, Geochem. Int., 2005, vol. 43, no. 11, pp. 1045-1055.
Galimov, E.M., Earth-Moon system formation: state-of-the-art, in Problemy zarozhdeniya i evolyutsii biosfery (Problems on Biosphere Origin and Evolution), Galimov, E.M., Ed., Moscow: LIBROKOM, 2008, pp. 213-222, in Russian.
Galimov, E.M., Formation of the Moon and the Earth from a common supraplanetary gas-dust cloud (lecture presented at the XIX all-Russia symposium on isotope geochemistry on November 16, 2010), Geochem. Int., 2011, vol. 49, no. 6, pp. 537-554.
Galimov, E.M. and Krivtsov, A.M., Origin of the Moon. New Concept, Berlin: De Gruyter, 2012, 168 p.
Galimov, E.M., Analysis of isotopic systems (Hf-W, Rb-Sr, J-Pu-Xe, U-Pb) with respect to the problems on planet formation by the example of Moon-Earth system, in Problemy zarozhdeniya i evolyutsii biosfery (Problems of Biosphere Origin and Evolution), Galimov, E.M., Ed., Moscow: KRASAND, 2013, pp. 47-59, in Russian.
Gorkavyi, N.N., The new model of the origin of the Moon, Bull. Am. Astron. Soc., 2004, vol. 36, p. 861.
Gor'kavyi, N.N., Formation of the Moon and double asteroids, Izv. Krymskoi Astrofiz. Observ., 2007, vol. 103, no. 2, pp. 143-155, in Russian.
Hartmann, W.K. and Davis, D.R., Satellite-sized planetesimals and lunar origin, Icarus, 1975, vol. 24, pp. 504-515.
Hughes, A.G. and Boley, A.C., Simulations of small solid accretion on to planetesimals in the presence of gas, Mon. Notic. Roy. Astron. Soc., 2017, vol. 472, pp. 3543-3553.
Ipatov, S.I., Numerical research of accumulating bodies angular momentums, Preprint of Institute of Applied Mathematics of the USSR Acad. Sci., 1981a, no. 101, 28 p., in Russian.
Ipatov, S.I., Some problems on planets axial rotation formation, Preprint of Institute of Applied Mathematics of the USSR Acad. Sci., 1981b, no. 102, 28 p., in Russian.
Ipatov, S.I., Migration of bodies in the accretion of planets, Sol. Syst. Res., 1993, vol. 27, no. 1, pp. 65-79.
Ipatov, S.I., Migratsiya nebesnykh tel v Solnechnoi sisteme (Celestial Bodies' Migration in the Solar System), Moscow: URSS, 2000, 320 p., in Russian. http://www.rfbr.ru/rffi/ru/books/o_29239
Ipatov, S.I., The angular momentum of two collided rarefied preplanetesimals and the formation of binaries, Mon. Notic. Roy. Astron. Soc., 2010, vol. 403, pp. 405-414. http://arxiv.org/abs/0904.3529
Ipatov, S.I., Angular momenta of collided rarefied preplanetesimals, in Proc. IAU Symp. no. 293 "Formation, Detection, and Characterization of Extrasolar Habitable Planets", Haghighipour, N., Ed., Cambridge Univ. Press, 2014, vol. 8, pp. 285-288. http://arxiv.org/abs/1412.8445
Ipatov, S.I., The Earth-Moon system as a typical binary in the Solar System, in "SPACEKAZAN-IAPS-2015", Marov, M.Ya., Ed., Kazan: Kazan Univ., 2015, pp. 97-105.
Ipatov, S.I., Formation of embryos of the Earth and the Moon from the common condensation and their subsequent growth, in Materialy vosemnadtsatoi mezhdunarodnoi konferentsii "Fiziko-khimicheskie i petrofizicheskie issledovaniya v naukakh o Zemle" (Moskva, 2-4 oktyabrya, Borok, 6 oktyabrya 2017 g.) (Proc. 18th Int. Conf. "Physical-Chemical and Petrophysical Researches in Earth Sciences" (Moscow, Oct. 2-4, Borok, Oct. 6, 2017)), Moscow: Institute of Geology of Ore Deposits, Petrography, Mineralogy and Geochemistry RAS, 2017a, pp. 122-125, in Russian. http://www.igem.ru/petromeeting_XVIII/tbgdocs/sbornik_2017.pdf
Ipatov, S.I., Formation of trans-Neptunian satellite systems at the stage of condensations, Sol. Syst. Res., 2017b, vol. 51, no. 4, pp. 294-314. https://arxiv.org/abs/1801.05217
Ipatov, S.I., Origin of orbits of secondaries in the discovered trans-Neptunian binaries, Sol. Syst. Res., 2017c, vol. 51, no. 5, pp. 409-416. https://arxiv.org/abs/1801.05254
Johansen, A., Oishi, J.S., Mac Low, M.-M., Klahr, H., Henning, T., and Youdin, A., Rapid planetesimal formation in turbulent circumstellar disks, Nature, 2007, vol. 448, pp. 1022-1025.
Johansen, A., Youdin, A., and Klahr, H., Zonal flows and long-lived axisymmetric pressure bumps in magnetorotational turbulence, Astrophys. J., 2009a, vol. 697, pp. 1269-1289.
Johansen, A., Youdin, A., and Mac Low, M.-M., Particle clumping and planetesimal formation depend strongly on metallicity, Astrophys. J., 2009b, vol. 704, pp. L75-L79.
Johansen, A., Klahr, H., and Henning, T., High-resolution simulations of planetary formation in turbulent proto-planetary discs, Astron. Astrophys., 2011, vol. 529, art. ID A62, 16 p.
Johansen, A., Youdin, A.N., and Lithwick, Y., Adding particle collisions to the formation of asteroids and Kuiper belt objects via streaming instabilities, Astron. Astrophys., 2012, vol. 537, art. ID A125, 17 p.
Jones, J.H., Tests of the giant impact hypothesis, Proc. Lunar and Planet. Sci. Origin of the Earth and Moon Conf., Monterey, CA, 1998, abs. #4045. http://www.lpi.usra.edu/meetings/origin98/pdf/4045.pdf
Kaib, N.A. and Cowan, N.B., The feeding zones of terrestrial planets and insights into Moon formation, Icarus, 2015, vol. 252, pp. 161-174.
Kolesnichenko, A.V. and Marov, M.Ya., Modeling of aggregation of fractal dust clusters in a laminar protoplanetary disk, Sol. Syst. Res., 2013, vol. 47, no. 2, pp. 80-98.
Le-Zakharov, A.A. and Krivtsov, A.M., Calculation of collision dynamics of gravitating particles for simulating "Earth-Moon" system caused by gravitational collapse of dust cloud, in Problemy zarozhdeniya i evolyutsii biosfery (Problems of Biosphere Origin and Evolution), Galimov, E.M., Ed., Moscow: KRASAND, 2013, pp. 61-81, in Russian.
Levison, H.F., Kretke, K.A., and Duncan, M.J., Growing the gas-giant planets by the gradual accumulation of pebbles, Nature, 2015, vol. 524, no. 7565, pp. 322-324.
Lyra, W., Johansen, A., Klahr, H., and Piskunov, N., Embryos grown in the dead zone. Assembling the first protoplanetary cores in low mass self-gravitating circumstellar disks of gas and solids, Astron. Astrophys., 2008, vol. 491, pp. L41-L44.
Lyra, W., Johansen, A., Zsom, A., Klahr, H., and Piskunov, N., Planet formation bursts at the borders of the dead zone in 2D numerical simulations of circumstellar disks, Astron. Astrophys., 2009, vol. 497, pp. 869-888.
Makalkin, A.B. and Ziglina, I.N., Formation of planetesimals in the trans-Neptunian region of the protoplanetary disk, Sol. Syst. Res., 2004, vol. 38, pp. 288-300.
Marov, M.Ya., Kolesnichenko, A.V., Makalkin, A.B., Dorofeeva, V.A., Ziglina, I.N., Sirotkin, F.V., and Chernov, A.A., From proto-Sun cloud to the planetary system: model of evolution for gas-dust disk, in Problemy zarozhdeniya i evolyutsii biosfery (Problems of Biosphere Origin and Evolution), Galimov, E.M., Ed., Moscow: URSS, 2008, pp. 233-273, in Russian.
Marov, M.Ya., Kosmos. Ot Solnechnoi sistemy vglub' Vselennoi (Space. From the Solar System Deep into the Universe), Moscow: Fizmatlit, 2017, 536 p., in Russian.
Myasnikov, V.P. and Titarenko, V.I., Evolution of self-gravitating clumps of a gas/dust nebula participating in the accumulation of planetary bodies, Sol. Syst. Res., 1989a, vol. 23, pp. 7-13.
Myasnikov, V.P. and Titarenko, V.I., Evolution of a self-gravitating gas/dust clump with allowance for radiative transfer in a diffusional approximation, Sol. Syst. Res., 1990, vol. 23, pp. 126-133.
Nesvorny, D., Youdin, A.N., and Richardson, D.C., Formation of Kuiper belt binaries by gravitational collapse, Astron. J., 2010, vol. 140, pp. 785-793.
Norris, C.A. and Wood, B.J., Earth's volatile contents established by melting and vaporization, Nature, 2017, vol. 549, pp. 507-510.
Okhotsimskii, D.E., Dinamika kosmicheskikh poletov (Space Flights Dynamics), Moscow: MSU, 1968, 158 p., in Russian.
Ormel, C.W. and Klahr, H.H., The effect of gas drag on the growth of protoplanets. Analytical expressions for the accretion of small bodies in laminar disks, Astron. Astrophys., 2010, vol. 520, art. ID A43, 15 p.
Pahlevan, K. and Morbidelli, A., Collisionless encounters and the origin of the lunar inclination, Nature, 2015, vol. 527, no. 7579, pp. 492-494.
Ringwood, A.E., Flaws in the giant impact hypothesis of lunar origin, Earth Planet. Sci. Lett., 1989, vol. 95, nos. 3-4, pp. 208-214.
Rufu, R. and Aharonson, O., A multiple impact hypothesis for Moon formation, Proc. 46th Lunar and Planetary Sci. Conf., The Woodlands, TX, 2015, abs. #1151.
Rufu, R. and Aharonson, O., A multiple-impact origin for the Moon, Nature Geosci., 2017, vol. 10, pp. 89-94.
Ruskol, E.L., The origin of the Moon. I. Formation of a swarm of bodies around the Earth, Astron. Zh., 1960, vol. 37, no. 4, pp. 690-702, in Russian. English translation: Soviet Astronomy, 1960, vol. 4.
Ruskol, E.L., On the origin of the Moon. II. The growth of the Moon in the circumterrestrial swarm of satellites, Astron. Zh., 1963, vol. 40, no. 2, pp. 288-296, in Russian. English translation: Soviet Astronomy, 1963, vol. 7, p. 221.
Ruskol, E.L., Origin of the Moon. III. Some aspects of the dynamics of the circumterrestrial swarm, Astron. Zh., 1971, vol. 48, no. 4, pp. 819-829, in Russian. English translation: Soviet Astronomy, 1971, vol. 15.
Ruskol, E.L., Proiskhozhdenie Luny (Origin of the Moon), Moscow: Nauka, 1975, 188 p., in Russian.
Safronov, V.S., Evolyutsiya doplanetnogo oblaka i obrazovanie Zemli i planet (Evolution of Protoplanetary Cloud and Formation of the Earth and Planets), Moscow: Nauka, 1969, in Russian. English translation: Safronov, V.S., Evolution of the Protoplanetary Cloud and Formation of the Earth and Planets, Jerusalem (Israel): Israel Program for Scientific Translations, Keter Publishing House, 1972, 212 p.
Salmon, J. and Canup, R.M., Lunar accretion from a Roche-interior fluid disk, Astrophys. J., 2012, vol. 760, art. ID A83, 18 p.
Stewart, S.T., Leinhardt, Z.M., and Humayun, M., Giant impacts, volatile loss, and the K/Th ratios on the Moon, Earth, and Mercury, Proc. 44th Lunar and Planetary Sci. Conf., The Woodlands, TX, 2013, abs. #2306.
Surville, C., Mayer, L., and Lin, D.N., Dust capture and long-lived density enhancements triggered by vortices in 2D protoplanetary disks, Astrophys. J., 2016, vol. 831, no. 1, art. ID A82, 27 p.
Svetsov, V.V., Pechernikova, G.V., and Vityazev, A.V., A model of Moon formation from ejecta of macroimpacts on the Earth, Proc. 43rd Lunar and Planetary Sci. Conf., The Woodlands, TX, 2012, abs. #1808.
Touma, J. and Wisdom, J., Evolution of the Earth-Moon system, Astron. J., 1994, vol. 108, no. 5, pp. 1943-1961.
Vasil'ev, S.V., Krivtsov, A.M., and Galimov, E.M., Study of the planet-satellite system growth process as a result of the accumulation of dust cloud material, Sol. Syst. Res., 2011, vol. 45, no. 5, pp. 410-419.
Vityazev, A.V. and Pechernikova, G.V., Early differentiation of the Earth and lunar composition problem, Fiz. Zemli, 1996, no. 6, pp. 3-16, in Russian.
Wahlberg Jansson, K. and Johansen, A., Formation of pebble-pile planetesimals, Astron. Astrophys., 2014, vol. 570, art. ID A47, 10 p. https://arxiv.org/abs/1408.2535
Youdin, A.N., On the formation of planetesimals via secular gravitational instabilities with turbulent stirring, Astrophys. J., 2011, vol. 731, art. ID A99, 18 p.
Youdin, A.N. and Kenyon, S.J., From disks to planets, in Planets, Stars and Stellar Systems, Oswalt, T.D., French, L.M., and Kalas, P., Eds., Dordrecht: Springer Sci.+Business Media, 2013, vol. 3, pp. 1-62. http://dx.doi.org/10.1007/978-94-007-5606-9_1

Translated by D. Safin
Replica analysis of Bayesian data clustering

Alexander Mozeika ([email protected])
Department of Mathematics, King's College London, The Strand, London WC2R 2LS, UK
London Institute for Mathematical Sciences, 35a South St, Mayfair, London W1K 2XF, UK

Anthony C.C. Coolen
Institute for Mathematical and Molecular Biomedicine, King's College London, Hodgkin Building, London SE1 1UL, UK

Abstract. We use statistical mechanics to study model-based Bayesian data clustering. In this approach, each partition of the data into clusters is regarded as a microscopic system state, the negative data log-likelihood gives the energy of each state, and the data set realisation acts as disorder. Optimal clustering corresponds to the ground state of the system, and is hence obtained from the free energy via a low 'temperature' limit. We assume that for large sample sizes the free energy density is self-averaging, and we use the replica method to compute the asymptotic free energy density. The main order parameter in the resulting (replica symmetric) theory, the distribution of the data over the clusters, satisfies a self-consistent equation which can be solved by a population dynamics algorithm. From this order parameter one computes the average free energy, and all relevant macroscopic characteristics of the problem. The theory describes numerical experiments perfectly, and gives a significant improvement over the mean-field theory that was used to study this model in the past.

DOI: 10.1088/1751-8121/ab59af
PDF: https://arxiv.org/pdf/1810.02627v3.pdf
arXiv:1810.02627v3 [cond-mat.dis-nn], 18 Nov 2019

PACS numbers: 75.10.Nr, 02.50.Tt, 05.70.Fh, 05.50.+q

Introduction

Analytical tools of statistical mechanics are nowadays applied widely to statistical inference problems (see e.g. [1] and references therein). The central object of study in parameter inference is an expression for the likelihood of the data, which encodes information about the model that generated the data and the sampling process.
The traditional maximum likelihood (ML) method infers model parameters from the data, but is often intractable (see e.g. [2]) or can lead to overfitting [3]. The Bayesian framework represents a more rigorous approach to parameter inference. It requires assumptions about the 'prior probability' of model parameters, and expresses the 'posterior probability' of the parameters, given the data, in terms of the data likelihood. In the so-called maximum a posteriori probability (MAP) method, one computes the most probable parameters, according to the posterior probability. MAP cures overfitting in ML partially by providing a 'regulariser' [1]. Both ML and MAP methods can be seen as optimisation problems, in which the data likelihood and posterior parameter probability, respectively, play the role of the objective function. With a trivial sign change this objective function can be mapped into an 'energy' function to be minimised, so that ML and MAP parameter inference can both equivalently be seen as computing a ground state in statistical mechanics [4,5].

Clustering is a popular type of inference where one seeks to allocate statistically similar data points to the same category (or cluster), in an unsupervised way. It is used in astrophysics [6], biology [7], and many other areas. The assumed data likelihood in ML and Bayesian model-based clustering methods is usually a Gaussian Mixture Model (GMM) [6,8]. The GMM likelihood, however, is analytically intractable, and one hence tends to resort to variational approximations [8] or computationally intensive Monte Carlo methods [9]. Furthermore, the number of model parameters, in particular the number of partitions of the data, is extensive, even if we fix the dimension of the data to be finite, which leads to additional difficulties [10]. For this reason, not many analytical results are available for model-based clustering (MBC), leaving mostly (many) numerical studies.
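For concreteness, the GMM likelihood mentioned above has the form ln p(X) = Σ_i ln Σ_k w_k N(x_i | μ_k, σ_k²). A minimal one-dimensional sketch, with illustrative (not fitted) parameter values, is:

```python
import math

# Log-likelihood of a 1-d Gaussian Mixture Model:
#   ln p(X) = sum_i ln sum_k w_k N(x_i | mu_k, var_k)
# Data and parameter values below are illustrative only.
def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_log_likelihood(data, weights, means, variances):
    total = 0.0
    for x in data:
        total += math.log(sum(w * gauss_pdf(x, m, v)
                              for w, m, v in zip(weights, means, variances)))
    return total

X = [-3.1, -2.7, -3.4, 2.9, 3.2, 3.0]
ll = gmm_log_likelihood(X, weights=[0.5, 0.5], means=[-3.0, 3.0], variances=[1.0, 1.0])
ll_one = gmm_log_likelihood(X, [1.0], [0.0], [9.0])
print(ll, ll_one)  # the two-component model fits this bimodal sample better
```

The sum over components inside the logarithm is what makes the likelihood analytically intractable once cluster assignments are unknown.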
Here, even when the number of parameters d is kept finite, the matrix of 'allocation' variables C [8] which we ultimately want to infer is growing with the sample size N . The situation is complicated further if, in addition to C, we are also inferring the number of true clusters K. In the GMM approach, the number of clusters is usually found by adding a 'penalty' term to the log-likelihood function, such as for the Bayesian information criterion (BIC) or the integrated complete-data likelihood (ICL) [11]. These penalty based approaches sometimes lead to conflicting results [6]. A direct solution to the above problems is to follow the approach of statistical mechanics and compute the partition function [4,5]. This approach is usually not pursued by statisticians, and in this case has not yet been pursued fully by physicists either (in spite of their familiarity with such calculations). Popular Machine Learning textbooks written by physicists, such as [8] or the more recent [12], cover only the (algorithmic) variational mean-field approach for the case when K is unknown, and the (non-Bayesian) expectation-maximisation algorithm for the case when K is known. Most statistical mechanics approaches to data clustering [13,14,15] use some heuristic measure of data dissimilarity as an energy function, rather than an actual statistical model of the data, or limit themselves to the simple case of assuming only two clusters [16,17,18] in the high dimensional regime where d → ∞ and N → ∞, with d/N finite. The work of [17] and [18] is mainly concerned with the inference of parameters of two isotropic Gaussians from a balanced sample, i.e. a very restricted model of the data which does not take into account correlations, different cluster sizes, data with more than two clusters, etc. The former is concerned with the inference of the centres of the assumed Gaussians, and the latter with finding a single 'direction' in the data. 
Hence both studies do not formally address the MBC problem. Furthermore, in [16] the Bayesian approach is used to infer 'prototype vectors', such as centres of Gaussians, etc., of the same dimension as the data, so also this work is not addressing the MBC problem systematically either. Finally, we note that none of the above papers refer to previous work on MBC in the low-dimensional regime of finite d and N → ∞. To our knowledge, only one study considers the high-dimensional regime of a specific Bayesian GMM clustering problem, namely [19]. A systematic statistical mechanical treatment of the Bayesian clustering problem is still lacking. In this paper we consider a more general model-based Bayesian clustering protocol, which allows for simultaneous inference of the number of clusters in the data and their components, based on stochastic partitions of the data (SPD) [20]. SPD assumes priors on the partitions to compute the MAP estimate of data partitions. The mean-field (MF) theory of Bayesian SPD inference was developed recently in [21]. That study used the negative log-likelihood as the energy function, and computed its average over the data and the partitions. It led to a simple and intuitive analytical framework, which makes non-trivial predictions about low energy states and the corresponding (MAP) data partitions. However, these predictions are only correct in the regime of 'weak' correlations [21]. In this paper we pursue a full statistical mechanical treatment of the Bayesian clustering problem covering all correlation regimes. To this end we analyse the free energy, and we use the replica method [22] to compute its average over the data. This, unlike MF, allows us to compute the average energy of the optimal partitions. Furthermore, the present analysis produces a simple algorithmic framework, with the population dynamics [22] clustering algorithm at its heart, for the simultaneous inference of the number of clusters in the data and their components. 
This can be seen as a first non-variational result for this type of problem [8].

Model of the data and Bayesian cluster inference

Let us assume that we observe a data sample X = {x_1, ..., x_N}, where x_i ∈ R^d for all i. Each vector x_i is assumed to have been generated independently from one of K distributions, which are members of a parametrized family P(x|θ). M_1 data-points are sampled from P(x|θ_1), with parameter θ_1, M_2 data-points are sampled from P(x|θ_2), etc. We clearly have the constraint Σ_{µ=1}^K M_µ = N, and we assume that M_µ ≥ 1 for all µ. We will say that x_i (or its index i) belongs to 'cluster' µ if x_i was sampled from P(x|θ_µ). The above sampling scenario can be described by the following distribution:

P(X|C, K, θ_1, ..., θ_K) = ∏_{µ=1}^K ∏_{i=1}^N P^{c_iµ}(x_i|θ_µ)   (1)

which is parametrised by the partition matrix, or 'allocation' matrix [8], C. Each element [C]_iµ = c_iµ of this matrix is the indicator function 1[x_i ∼ P(x|θ_µ)], i.e. it is nonzero if and only if x_i is sampled from P(x|θ_µ). Furthermore, we have Σ_{µ≤K} c_iµ = 1 for all i ∈ [N]‡, i.e. x_i belongs to only one cluster, and M_µ(C) = Σ_{i≤N} c_iµ ≥ 1 for all µ ∈ [K], i.e. empty clusters are not allowed§. Suppose we now want to infer the partition matrix C and the number of clusters K. The Bayesian approach to this problem (see e.g. [8]) would be to assume prior distributions for parameters and partitions, P(θ_µ) and P(C, K) = P(C|K)P(K), and to consider subsequently the posterior distribution

P(C, K|X) = P(X|C, K)P(C|K)P(K) / [Σ_{K̃=1}^N P(K̃) Σ_C P(X|C, K̃)P(C|K̃)]
          = e^{−N F̂_N(C,X)} P(C|K)P(K) / [Σ_{K̃=1}^N P(K̃) Σ_C e^{−N F̂_N(C,X)} P(C|K̃)]   (2)

where we have defined the log-likelihood density

F̂_N(C, X) = −(1/N) Σ_{µ=1}^K log ⟨e^{Σ_{i=1}^N c_iµ log P(x_i|θ_µ)}⟩_{θ_µ}   (3)

‡ Throughout this paper the notation [N] will be used to represent the set {1, ..., N}.
§ We note that the distribution (1) could also be defined using set notation, see e.g. [21].
Here ⟨f(θ_µ)⟩_{θ_µ} = ∫ dθ_µ P(θ_µ) f(θ_µ) is the short-hand for averages over the prior P(θ_µ). Expression (2) can be used to infer the most probable partition C [21]. For each K ≤ N we can compute

Ĉ|K = argmax_C P(C|X, K) = argmax_C e^{−N F̂_N(C,X)} P(C|K)   (4)

and the MAP estimator

(Ĉ, K̂) = argmax_{C,K} P(C, K|X) = argmax_{C,K} e^{−N F̂_N(C,X)} P(C|K)P(K)   (5)

Furthermore, we can use (2) to compute the distribution of cluster sizes

P(K|X) = e^{−N f_N(K,X)} P(K) / Σ_{K̃=1}^N P(K̃) e^{−N f_N(K̃,X)}   (6)

where

f_N(K, X) = −(1/N) log Σ_C e^{−N F̂_N(C,X)} P(C|K)   (7)

3. Statistical mechanics and replica approach

Size independent identities

When the prior P(C, K) = P(C|K)P(K) is chosen to be uniform¶, MAP inference of clusters and cluster numbers according to (4,5) requires finding the minimum min_C F̂_N(C,X) of the negative log-likelihood (3), which is a function of the data X = (x_1, ..., x_N).

¶ The simplest route, following the 'Principle of Insufficient Reason', is to choose uniform P(C|K) and P(K). The former is then given by P(C|K) = 1/K! S(N,K), where S(N,K) is the Stirling number of the second kind (S(N,K) ≃ K^N/K! as N → ∞ [23]), and the latter by P(K) = 1/N.

Here we assume that X is sampled from the distribution

q(X|L) = Σ_C q(C|L) ∏_{ν=1}^L ∏_{i=1}^N q_ν^{c_iν}(x_i)   (8)

where q(C|L) and q_ν(x) are, respectively, the 'true' distribution of partitions, of size L, and the true distribution of data in these partitions. We note that the above expression will generally differ from the form (2), which allows us to study various scenarios describing 'mismatch' between the assumed model and the actual data. The minimum of F̂_N(C,X) can be computed within the statistical mechanics framework (see e.g. [5]), via the zero 'temperature' limit of the 'free energy' (density), using min_C F̂_N(C,X) = lim_{β→∞} f_N(β,X), with

f_N(β, X) = −(1/βN) log Σ_C e^{−βN F̂_N(C,X)}   (9)

Although the free energy f_N(β,X) is a function of the randomly generated data X, we expect that in the thermodynamic limit N → ∞, i.e.
for inference with an infinite amount of data, it will be self-averaging, i.e. lim_{N→∞} [⟨f_N^2(β,X)⟩_X − ⟨f_N(β,X)⟩_X^2] = 0. This implies that instead of (9) we can work with the average free energy density

f_N(β) = −(1/βN) ⟨log Σ_C e^{−βN F̂_N(C,X)}⟩_X   (10)

where the average ⟨···⟩_X is generated by the distribution q(X|L). We note that if the prior P(C|K) is uniform, i.e. P(C|K) = 1/K! S(N,K), then f_N(β) is equivalent to

f_N(β) = −(1/βN) ⟨log Σ_C P(C|K) e^{−βN F̂_N(C,X)}⟩_X + φ_N(β)   (11)

with φ_N(β) = −(1/βN) log[K! S(N,K)]. The replica identity log z = lim_{n→0} n^{−1} log z^n allows us to write the relevant part of the average free energy density as

f_N(β) − φ_N(β) = −lim_{n→0} (1/βNn) log ⟨[Σ_C P(C|K) e^{−βN F̂_N(C,X)}]^n⟩_X   (12)

The standard route for computing averages via the replica method [22] is to evaluate the above for integer n, followed by taking n → 0 via analytical continuation. So

⟨[Σ_C P(C|K) e^{−βN F̂_N(C,X)}]^n⟩_X = Σ_{C^1} ··· Σ_{C^n} [∏_{α=1}^n P(C^α|K)] ⟨e^{−βN Σ_{α=1}^n F̂_N(C^α,X)}⟩_X = ⟨⟨e^{−βN Σ_{α=1}^n F̂_N(C^α,X)}⟩_{{C^α}}⟩_X   (13)

where the average ⟨···⟩_{{C^α}} refers to the replicated distribution ∏_{α=1}^n P(C^α|K). We next compute the average over X (see Appendix A for details), which leads us to the following integral

⟨e^{−βN Σ_{α=1}^n F̂_N(C^α,X)}⟩_{{C^α},X} = ∫{dQ dQ̂ dA dÂ} e^{N Ψ[{Q,Q̂};{A,Â}]}   (14)

with

Ψ[{Q,Q̂};{A,Â}] = i Σ_{α=1}^n Σ_{µ=1}^K ∫dx Q̂^α_µ(x) Q^α_µ(x) + i Σ_{ν,µ} Â(ν,µ) A(ν,µ)
  + β Σ_{α=1}^n Σ_{µ=1}^K (1/N) log ⟨e^{N ∫dx Q^α_µ(x) log P(x|θ_µ)}⟩_{θ_µ}
  + Σ_{ν,µ} A(ν,µ) log ∫dx q_ν(x) e^{−i Σ_{α=1}^n Q̂^α_{µ_α}(x)}
  + (1/N) log ⟨e^{−iN Σ_{ν,µ} Â(ν,µ) A(ν,µ|C,{C^α})}⟩_{{C^α};C}   (15)

where the average ⟨···⟩_{{C^α};C} refers to the distribution q(C|L) ∏_{α=1}^n P(C^α|K). Finally, using the above result in our formula for the average free energy (12) gives us

f_N(β) = −lim_{n→0} (1/βNn) log ∫{dQ dQ̂ dA dÂ} e^{N Ψ[{Q,Q̂};{A,Â}]} + φ_N(β)   (16)

Inference for large N

For finite N, equation (16) is as complicated as its predecessor (11). The former can, however, be computed via saddle-point integration when N → ∞, provided we are allowed to take this limit first and the replica limit n → 0 later.
Now we obtain f (β) = − 1 β lim n→0 1 n extr {Q,Q,A,Â} Ψ[{Q,Q}; {A,Â}] + φ(β),(17) where φ(β) = lim N →∞ φ N (β). The further calculation requires knowledge of the average in the last term of the functional (15), which can be written in the form e −iN ν,µÂ (ν,µ)A(ν,µ|C,{C α }) {C α };C = {N (ν,µ)} P N [{N (ν, µ)}] e −i ν,µÂ (ν,µ)N (ν,µ) ,(18) where the set of variables {N (ν, µ)}, which are governed by the distribution P N [{N (ν, µ)}] = C {C α } q(C|L) n α=1 p(C α |K) ν,µ δ N (ν,µ);N A(ν,µ|C,{C α }) ,(19) are subject to the hard constraints ν,µ N (ν, µ) = N (the sample size), µ N (ν, µ) = N (ν) (the sample size of a data generated from q ν (x)), and ν,µ\µα N (ν, µ) = N (µ α ) > 0 (the size of the cluster µ α in replica α). To compute the average (18) we will assume that for N → ∞ the distribution P N [{N (ν, µ)}] approaches the associated (soft constrained) multinomial distributioñ P N [{N (ν, µ)}] = N ! ν,µ N (ν, µ)! ν,µÃ (ν, µ) N (ν,µ) ,(20) where ν,µÃ (ν, µ) = 1 andÃ(ν, µ) > 0. In this case we would find simply e −iN ν,µÂ (ν,µ)A(ν,µ|C,{C α }) {C α };C = ν,µÃ (ν, µ) e −iÂ(ν,µ) N .(21) The above assumption can by justified by the following large deviations argument. Particle gas representation of replicated partitions The multinomial distribution (20) describes n copies, i.e. replicas, of N 'particles' distributed over K reservoirs. For A = (a 1 , . . . , a N ) this distribution is given by P (A) = N i=1 P (a i ),(22) where P (a i ) =Ã(ν, µ) = Prob(a i (1) = ν, a i (2) = µ 1 , . . . , a i (n + 1) = µ n ) denotes the probability that a particle i has 'colour' ν ∈ [L] and is in 'reservoir' µ 1 ∈ [K] of replica n = 1, reservoir µ 2 ∈ [K] of replica n = 2, etc. The state A of this 'gas' of particles is a 'partition' if the reservoirs are not empty, i.e. if N α µα (A) = i≤N δ µα; ai(α+1) > 0 for all α and µ α . If A is sampled from the distribution P (A), this will happen with high probability as N → ∞ if the marginalÃ(µ α ) = ν,µ\µαà (ν, µ) > 0. 
To show this we first compute the average N α µα (A) A = A P (A) N α µα (A): N α µα (A) A = N i=1 ai P (a i ) δ µα; ai(α+1) = N ai P (a i )δ µα; ai(α+1) = N ν,µ\µαà (ν, µ) = NÃ(µ α ). (23) Thus the average N α µα (A) A > 0. Secondly, for ǫ > 0 we consider the probability of observing the event N α µα (A) / ∈ (N (Ã(µ α ) − ǫ), N (Ã(µ α ) + ǫ)). Clearly, Prob N α µα (A) / ∈ (N (Ã(µ α )−ǫ), N (Ã(µ α )+ǫ))(24) = Prob N α µα (A)/N ≤Ã(µ α )−ǫ + Prob N α µα (A)/N ≥Ã(µ α )+ǫ . For any λ > 0, the second term can be bounded using Markov's inequality, as follows Prob N α µα (A)/N ≥Ã(µ α )+ǫ = Prob e λN α µα (A) ≥ e λN (Ã(µα)+ǫ) ≤ e λN α µα (A) A e −λN (Ã(µα)+ǫ) ,(25) with the average e λN α µα (A) A = A P (A) e λN α µα (A) = N i=1 ai P (a i ) e λδ µα ; a i (α+1) = 1 +Ã(µ α )(e λ − 1) N .(26) Hence Prob N α µα (A) ≥ N (Ã(µ α )+ǫ) ≤ e −N I(λ,ǫ) ,(27) where [24] of binary distributions with probabilities p, q ∈ [0, 1]. We may now write I(λ, ǫ) = − log(1 +Ã(µ α )(e λ − 1)) + λ(Ã(µ α ) + ǫ) is a rate function. 
The latter has its maximum at λ * = log[(Ã(µ α ) 2 +Ã(µ α )ǫ−Ã(µ α )−ǫ)/(Ã(µ α )(Ã(µ α )−1+ǫ)], and I(λ * , ǫ) = D(Ã(µ α )+ǫ ||Ã(µ α )), where D(p ||q) = p log( p q ) + (1 − p) log( 1−p 1−q ) ≥ 0 is the Kullback-Leibler divergenceProb N α µα (A) ≥ N (Ã(µ α )+ǫ) ≤ e −N D(Ã(µα)+ǫ ||Ã(µα)) .(28) Following similar steps to bound the first term of (24) gives us also the inequality Prob N α µα (A) ≤ N (Ã(µ α )−ǫ) ≤ e −N D(Ã(µα)−ǫ ||Ã(µα)) .(29) In combination, our two bounds directly lead to Prob N α µα (A) / ∈ (N (Ã(µ α )−ǫ), N (Ã(µ α )+ǫ)) ≤ 2 e −N min σ∈{−1,1} D(Ã(µα)+σǫ ||Ã(µα))(30) The probability for one or more of the events N α µα (A) / ∈ (N (Ã(µ α )−ǫ), N (Ã(µ α )+ǫ)) to occur (of which there are nK ) can be bounded using Boole's inequality in combination with (30), as follows Prob ∪ α,µα N α µα (A) / ∈ (N (Ã(µ α )−ǫ), N (Ã(µ α )+ǫ)) ≤ n α=1 K µα=1 Prob N α µα (A) / ∈ (N (Ã(µ α )−ǫ)), N (Ã(µ α )+ǫ)) ≤ 2nK e −N minα,µ α min σ∈{−1,1} D(Ã(µα)+σǫ ||Ã(µα)) .(31) We conclude that for N → ∞ the deviations of the random variables N α µα (A) from their averages NÃ(µ α ) decay exponentially with N . Let us next consider the entropy density H(A)/N = − ai P (a i ) log P (a i ) = − ν,µÃ (ν, µ) logÃ(ν, µ) = − νà (ν) logÃ(ν) − ν,µÃ (ν)Ã(µ|ν) logÃ(µ|ν). (32) If we assume thatà (µ|ν) = n α=1à (µ α |ν),(33) then H(A)/N = − νà (ν) logÃ(ν) − n ν,µÃ (ν)Ã(µ|ν) logÃ(µ|ν). (34) The entropy of the distribution q(C|L) { n α=1 p(C α |K)}, used in (19), is given by H(p, q)/N = − 1 N C {C α } q(C|L) n α=1 p(C α |K) log q(C|L) n α=1 p(C α |K) = H(q)/N + nH(p)/N ,(35) with (34), we see that the two expressions are equal for large N whenÃ(ν) = 1/L andÃ(µ|ν) = 1/K. In this case, the distribution (19) apparently approaches the multinomial distribution (20). We expect this also to be true when the distribution q(C|L) is uniform, but subject to the constraints H(q) = − C q(C|L) log q(C|L) and H(p) = − C p(C|K) log p(C|K).N i=1 c iν = NÃ(ν). 
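The large deviations bound derived above is easy to check numerically: under the multinomial gas distribution the occupation number N^α_{µα}(A) is binomial with success probability Ã(µα), so its empirical upper tail should lie below the Chernoff bound e^{−N D(Ã+ε||Ã)}. A minimal sketch, with hypothetical values for N, Ã(µα) and ε:

```python
import numpy as np

def kl_bernoulli(p, q):
    # Kullback-Leibler divergence D(p||q) between Bernoulli distributions
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

rng = np.random.default_rng(0)
N, A_mu, eps = 200, 0.3, 0.1                 # hypothetical N, A~(mu_alpha), epsilon
trials = 200000
# occupation numbers N^alpha_{mu_alpha}(A) for many sampled gas states
counts = rng.binomial(N, A_mu, size=trials)
# empirical Prob( N_mu/N >= A~ + eps )
tail = np.mean(counts >= N * (A_mu + eps))
# Chernoff bound e^{-N D(A~+eps || A~)} from the rate-function argument
bound = np.exp(-N * kl_bernoulli(A_mu + eps, A_mu))
assert tail <= bound   # the bound holds; it is not tight
```

The empirical tail is roughly an order of magnitude below the bound, consistent with the bound being exponentially correct but not sharp in its prefactor.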
Replica Symmetric theory Simplification of the saddle-point problem Using the assumptions (21) and (33), we obtain a simplified expression for (15): Ψ[{Q,Q}; {A,Â}] = i n α=1 K µ=1 dxQ α µ (x)Q α µ (x) + ν,µ A(ν, µ) iÂ(ν, µ) + log dx q ν (x) e −i n α=1Q α µα (x) + β n α=1 K µ=1 1 N log e N dx Q α µ (x) log P (x|θµ) θµ + log ν,µÃ (ν)e −iÂ(ν,µ) n α=1à (µ α |ν)(36) The extrema of this functional are seen to be the solutions of the following equations: A(ν, µ) = i log dx q ν (x) e −i n α=1Q α µα (x) (37) A(ν, µ) =à (ν)e −iÂ(ν,µ) n α=1à (µ α |ν) ν,μà (ν)e −iÂ(ν,μ) n α=1à (μ α |ν) (38) Q α µ (x) = ν,µ δ µ;µα A(ν, µ) q ν (x) e −i n γ=1Q γ µγ (x) dx q ν (x) e −i n γ=1Q γ µγ (x) (39) Q α µ (x) = iβ e N dx Q α µ (x) log P (x|θ) log P (x|θ) θ e N dx Q α µ (x) log P (x|θ) θ .(40) For N → ∞ we can evaluate the integrals in the last equation with the Laplace method [25], givinĝ Q α µ (x) = iβ log P (x|θ α µ ) θ α µ = argmax θ dx Q α µ (x) log P (x|θ).(41) Upon eliminating the conjugate order parameters {Q,Â} from our coupled equations and considering large N , we obtain after some straightforward manipulations the following expression for the nontrivial part of the average free energy (17), f (β) − φ(β) = − lim n→0 1 βn log νà (ν) dx q ν (x) n α=1 K µ=1à (µ|ν)e β log P (x|θ α µ )(42) and the following closed equations for the remaining order parameters {Q, A}: Q α µ (x) = ν,µ δ µ;µα A(ν, µ) q ν (x) e n γ=1 β log P (x|θ γ µγ ) dx q ν (x) e n γ=1 β log P (x|θ γ µγ ) ,(43)A(ν, µ) =à (ν) dx q ν (x) n α=1à (µ α |ν) e β log P (x|θ α µα ) νà (ν) dx qν(x) n α=1 μαà (μ α |ν) e β log P (x|θ α µα )(44) In order to take the replica limit n → 0 in (42,43,44) we will make the the 'replica symmetry' (RS) assumption [22], which here translates into Q α µα (x) = Q µα (x). It then follows from (41), in turn, that θ α µ = θ µα . 
The RS structure allows us to take the replica limit (see Appendix B for details) and find the following equations: Q µ (x) = νà (ν) q ν (x)à (µ|ν) e β log P (x|θµ) μà (μ|ν) e β log P (x|θμ) θ µ = argmax θ dx Q µ (x) log P (x|θ) (45) A(µ|ν) = dx q ν (x)à (µ|ν) e β log P (x|θµ) μà (μ|ν) e β log P (x|θμ) A(ν) =Ã(ν)(46) and the asymptotic form of the average free energy f (β) = − 1 β dx L ν=1à (ν)q ν (x) log K µ=1à (µ|ν) e β log P (x|θµ) + φ(β). (47) The physical meaning of the order parameters Q µ (x) and A(µ|ν) becomes clear if we define the following two densities Q µ (x|C, X) = 1 N N i=1 c iµ δ(x − x i ) (48) A(ν, µ|C, X) = 1 N N i=1 c iµ 1 [x i ∼ q ν (x)] .(49) If we sample C from the Gibbs-Boltzmann distribution P β (C|X) = 1 Z β (X) P (C|K)e −βNFN (C,X) ,(50) where Z β (X) = C P (C|K)e −βNFN (C,X) is the associated partition function, and with the conditional averages G(C) C|X = C P (C|K)G(C), then one finds that Q µ (x) = lim N →∞ Q µ (x|C, X) C|X X ,(51)A(ν, µ) = lim N →∞ A(ν, µ|C, X) C|X X ,(52) (see Appendix C for details). So, asymptotically, Q µ (x) is the average distribution of data in cluster µ, and A(ν, µ) is the average fraction of data originating from the distribution q ν (x) that are allocated by the clustering process to cluster µ. RS theory for β → ∞ Let us study the behaviour of the RS order parameter equations (45), (46) and (47) in the zero temperature limit β → ∞. First, for the order parameter Q µ (x), governed by the equation (45), and any test function a µ we consider the sum µ Q µ (x)a µ = νà (ν) q ν (x) µÃ (µ|ν) e β log P (x|θµ) a µ µ ′′à (µ ′′ |ν) e β log P (x|θ µ ′′ ) = νà (ν) q ν (x) µ ′Ã(µ ′ |ν) e −β(maxμ log P (x|θμ)−log P (x|θ µ ′ )) a µ ′ µ ′′Ã(µ ′′ |ν) e −β(maxμ log P (x|θμ)−log P (x|θ µ ′′ )) = νà (ν) q ν (x) µ ′Ã(µ ′ |ν) e −β∆ µ ′ (x) a µ ′ µ ′′Ã(µ ′′ |ν) e −β∆ µ ′′ (x) ,(53) where ∆ µ (x) = maxμ log P (x|θμ) − log P (x|θ µ ). 
For β → ∞ the average will tend to lim β→∞ µ ′Ã(µ ′ |ν) e −β∆ µ ′ (x) a µ ′ µ ′′Ã(µ ′′ |ν) e −β∆ µ ′′ (x) = µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) a µ ′ µ ′′ 1 [∆ µ ′′ (x) = 0]Ã(µ ′′ |ν) .(54) Hence for β → ∞ we may write Q µ (x) = νà (ν) q ν (x) 1 [∆ µ (x) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) .(55) Similarly, equation (46) for the order parameter A(µ|ν) gives us µ A(µ|ν)a µ = dx q ν (x) µÃ (µ|ν) e −β∆µ(x) a µ μà (μ|ν) e −β∆μ(x) ,(56) so for β → ∞ we may write, assuming the expectation and limit operators commute, µ A(µ|ν)a µ = dx q ν (x) µ 1 [∆ µ (x) = 0]Ã(µ|ν) a µ μ 1 [∆μ(x) = 0]Ã(μ|ν) .(57) We note that A(µ) = dx Q µ (x), as a consequence of the (48) and (49). Finally, taking β → ∞ in the average free energy density (47) gives us lim β→∞ f (β) − φ(β) = − lim β→∞ 1 β νà (ν) dx q ν (x) log e β maxμ log P (x|θμ) K µ=1à (µ|ν)e −β∆µ(x) = − νà (ν) dx q ν (x) max µ log P (x|θ µ ) − lim β→∞ 1 β νà (ν) dx q ν (x) × log K µ=1à (µ|ν) 1 [∆ µ (x) > 0] e −β∆µ(x) + 1 [∆ µ (x) = 0] = − dx νà (ν)q ν (x) max µ log P (x|θ µ ) − lim β→∞ 1 β νà (ν) dx q ν (x) log K µ=1à (µ|ν)1 [∆ µ (x) = 0] − lim β→∞ 1 β νà (ν) dx q ν (x) log 1+ K µ=1 1 [∆ µ (x) > 0]Ã(µ|ν) e −β∆µ(x) K µ=1 1 [∆ µ (x) = 0]Ã(µ|ν) = − νà (ν) dx q ν (x) max µ log P (x|θ µ ),(58) The average energy e(β) = lim N →∞ F N (C, X) C|X X is given by (see Appendix D) e(β) = − K µ=1 dx Q µ (x) log P (x|θ µ ),(59) where Q µ (x) is a solution of the equation (45). The latter reduces to (55) when β → ∞, and hence in this limit we find e(∞) = − K µ=1 νà (ν) dx q ν (x) log P (x|θ µ ) 1 [∆ µ (x) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) . (60) It is trivial to show (and intuitive) that e(∞) = f (∞). 
For finite β, the average free energy f (β) − φ N and the energy e(β), given by equations (47,59), can be used to compute the average entropy density of the Gibbs-Boltzmann distribution (50) via the Helmholtz free energy f (β) = e(β) − 1 β s(β), s(β) = − lim N →∞ 1 N C P β (C|X) log P β (C|X) X(61) From the Helmholtz free energy we immediately infer that lim β→∞ s(β)/β = 0. RS theory for β → 0 The RS theory simplifies considerably in the high temperature limit β → 0. Here the order parameter Q µ (x), which is governed by the equation (45), is given by Q µ (x) = νà (ν)Ã(µ|ν) q ν (x).(62) The fraction of data points originating from the distribution q ν (x) assigned to cluster µ, A(µ, ν), isÃ(ν)Ã(µ|ν) due to (46). Using this in (59) gives the average energy e(0) = − L ν=1à (ν) K µ=1à (µ|ν) dx q ν (x) log P (x|θ µ )(63) where θ µ = argmax θ dx Q µ (x) log P (x|θ). We note that (63) is equal to F (Ã) = L ν=1à (ν) K µ=1à (µ|ν)D(q ν ||P µ ) + L ν=1à (ν)H(q ν ),(64) where H(q ν ) is the differential entropy of q ν (x), which is also the entropy function of the mean-field theory [21]. For finite N , the average energy e N (β) = F N (C, X) C X is a monotonic non-increasing function of β. Also the limits lim β→∞ e N (β) and lim β→0 e N (β) exist. Thus e N (∞) ≤ e N (0) for N finite and hence the average energy e(∞) is bounded from above by the mean-field entropy F (Ã), i. e. e(∞) ≤ F (Ã). For model distributions P (x|θ µ ) with non-overlapping supports for different θ µ , this upper bound can be optimised by replacing F (Ã) with minà F (Ã) and hence in this case e(∞) ≤ miñ A F (Ã).(65) The minimum is computed over all prior parametersÃ(µ|ν) satisfying the constraints A(µ|ν) > 0 and µ≤Kà (µ|ν) = 1. Finally, we note that for K = 1, as a consequence of Q µ (x) = ν≤Là (ν) q ν (x), we will have e(∞) = F (Ã). 
Recovery of true partitions Equation (55) for Q µ (x) can be used to derive the following expression for the distributionQ µ (x) = Q µ (x)/ dx Q µ (x) of data that are assigned to cluster µ: Q µ (x) = νà (ν) q ν (x) 1[∆µ(x)=0]Ã(µ|ν) Zν (x) dx νà (ν) q ν (x) 1[∆µ(x)=0]Ã(µ|ν) Zν (x)(66)∆ µ (x) = max µ log P (x|θμ) − log P (x|θ µ ) θ µ = argmax θ dxQ µ (x) log P (x|θ), where Z ν (x) = µ 1 [∆ µ (x) = 0]Ã(µ|ν) . Suppose we knew the number of true clusters, i.e. K = L. If our clustering procedure was perfect we would then expect that each cluster holds data from at most one distribution, i.e. we expectQ µ (x) = q µ (x) to be a solution of the following equation q µ (x) = νà (ν) q ν (x) 1[∆µ(x)=0]Ã(µ|ν) Zν (x) dx νà (ν) q ν (x) 1[∆µ(x)=0]Ã(µ|ν) Zν (x) .(67) This is certainly true if 1 [∆ µ (x) = 0]Ã(µ|ν) = δ ν;µ Z ν (x) for all x in the domain of q µ (x). The latter condition implies that dx q ν (x) 1 [∆ µ (x) = 0]Ã(µ|ν)Z −1 ν (x) = δ ν;µ which, by the definition of order parameter A(µ|ν), is equivalent to A(µ|ν) = δ ν;µ , i.e. all data from the distribution q ν (x) are in cluster µ. Thus if dx q ν (x) 1 [∆ µ (x) = 0]Ã(µ|ν) Z ν (x) = δ ν;µ(68) holds for all pairs (ν, µ) in a bijective mapping of the set [K] to itself, thenQ µ (x) = q µ (x) is a solution of equation (67). Let us define the set S P (x) = {µ | ∆ µ (x) = 0} and consider the average µ A(µ|ν)µ: dx q ν (x) µ 1 [∆ µ (x) = 0]Ã(µ|ν)µ Z ν (x) = dx 1 [|S P (x)| > 1]+1 [|S P (x)| = 1] q ν (x) µ 1 [∆ µ (x) = 0]Ã(µ|ν)µ Z ν (x) = dx q ν (x) argmax µ log P (x|θ µ ) + dx 1 [|S P (x)| > 1] q ν (x) µ 1 [∆ µ (x) = 0]Ã(µ|ν)µ Z ν (x)(69) We note that the second term is a contribution of sets that can be characterized as {x | P (x|θ µ1 ) = P (x|θ µ2 ), µ 1 < µ 2 }, for some (µ 1 , µ 2 ). 
If we assume that this term is zero + , then one of the consequences of (68) is equivalence of the two averages µ A(µ|ν)µ = ν = dx q ν (x)argmax µ log P (x|θ µ ),(70) and dx q ν (x)argmax µ log P (x|θ µ ) = dx q ν (x)argmin µ log P −1 (x|θ µ ) = argmin µ dx q ν (x) log P −1 (x|θ µ ) = argmin µ D(q ν ||P µ ) = ν,(71) where D(q ν ||P µ ) is the Kullback-Leibler distance between the distributions q ν (x) and P (x|θ µ ). Thus if (68) holds, then the results (70,71) show that the max and expectation operators commute. Using this property in the average energy (60) gives e(∞) = − νà (ν) q ν (x) max µ log P (x|θ µ )dx = νà (ν) min µ q ν (x) log P −1 (x|θ µ ))dx = νà (ν) min µ D(q ν ||P µ ) + νà (ν), H(q ν )(72) and in the distribution (55) it leads to the equation Q µ (x) = νà (ν) q ν (x) δ µ;argmaxμ log P (x|θμ) θ µ = argmax θ Q µ (x) log P (x|θ)dx.(73) We note that the above average energy and the MF (64) average energy are both bounded from below by the average entropy νà (ν)H(q ν ). This bound is saturated when all D(q ν ||P µ ) terms vanish, i.e. when the model matches the data exactly. Implementation and application of the RS theory Population dynamics algorithm Equation (55) for the order parameter Q µ (x) can be solved numerically by a population dynamics algorithm [5] which can be derived as follows. Firstly, we re-arrange the equation for Q µ (x): Q µ (x) = νà (ν) q ν (x) 1 [∆ µ (x) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) = νà (ν) q ν (x) 1 [|S P (x)| > 1]+1 [|S P (x)| = 1] 1 [∆ µ (x) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) = νà (ν) q ν (x) 1 [|S P (x)| = 1] 1 [∆ µ (x) = 0] + · · · · · · + νà (ν) q ν (x) 1 [|S P (x)| > 1] 1 [∆ µ (x) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x) = 0]Ã(µ ′ |ν) .(74) Secondly, we note that the data distribution νà (ν) q ν (x) can be replaced by a large sample X, i.e. by the data itself, via the empirical distribution N −1 i≤N δ(x−x i ), which can be also written as N −1 ν≤L iv ≤Nν δ(x−x iν ). 
Here N ν , which satisfies lim N →∞ N (ν)/N =Ã(ν), is the number of data-points sampled from q ν (x). Upon using both of these representations of νà (ν) q ν (x) in equation (74) we obtain Q µ (x) = 1 N N i=1 δ(x−x i )1 [|S P (x i )| = 1] 1 [∆ µ (x i ) = 0] + · · · (75) · · · + 1 N L ν=1 Nν iv =1 δ(x−x iν )1[|S P (x iν )| > 1] 1 [∆ µ (x iν ) = 0]Ã(µ|ν) µ ′ 1 [∆ µ ′ (x iν ) = 0]Ã(µ ′ |ν) . Finally, it is very unlikely to find in X, sampled from a distribution of continuous random variables νà (ν) q ν (x), data points which satisfy |S P (x)| > 1, so the second term in ( 75) is almost surely zero for any sample X of finite size. Thus Q µ (x) = 1 N N i=1 δ(x−x i )1 [|S P (x i )| = 1] 1 [∆ µ (x i ) = 0] = 1 N N i=1 δ(x−x i )δ µ;argmaxμ log P (xi|θμ) = 1 N N i=1 δ µ;µi δ(x − x i )(76) where µ i = argmaxμ log P (x i |θμ). Using the above in equation (55), we obtain for µ ∈ [K] the following system of equations Q µ (x) = 1 N N i=1 δ µ,µi δ(x − x i ) θ µ = argmax θ dx Q µ (x) log P (x|θ) µ i = argmaxμ log P (x i |θμ)(77) This set can be solved numerically as follows. We create a 'population' of random variables {µ i : i ∈ [N ]} where µ i ∈ [K] are at first sampled uniformly. We use this population to compute the parameters θ µ ; The latter are then used to compute a new population {µ i }. The last two steps are repeated until one observes convergence of the energy e(∞) = − K µ=1 dx Q µ (x) log P (x|θ µ ). Finally, we note that using instead equation (73) as our starting point would lead us to the same population dynamics equations. Thus, for continuous data distributions νà (ν) q ν (x) represented by a large finite sample, the equations (55) and (73) are equal. The population dynamics simplifies significantly if we assume that the distribution p(x|θ) is the multivariate Gaussian N (x|m, Λ −1 ) = |2πΛ −1 | − 1 2 e − 1 2 (x−m) T Λ(x−m)(78) with mean m and precision matrix (inverse covariance matrix) Λ. 
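The iteration (77) can be sketched in a few lines of NumPy. For brevity the sketch fixes the covariance to the identity, i.e. it uses the isotropic model P(x|θ) = N(x|m, I) so that θ_µ is just the cluster mean and the reassignment step reduces to a nearest-mean rule (the full Gaussian version also refits Λ_µ); the data below are hypothetical, and a few random restarts are used to select the lowest-energy state:

```python
import numpy as np

def population_dynamics(X, K, n_iter=100, seed=0):
    """Zero-temperature population dynamics, eq. (77), for the isotropic
    model P(x|theta) = N(x|m, I): iterate theta_mu (cluster means) and the
    population {mu_i} of hard assignments until the energy settles."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = rng.integers(K, size=N)           # uniformly sampled initial population {mu_i}
    for _ in range(n_iter):
        # theta_mu = argmax_theta sum_{i in cluster mu} log P(x_i|theta) -> cluster mean
        means = np.stack([X[mu == k].mean(axis=0) if np.any(mu == k)
                          else X[rng.integers(N)] for k in range(K)])
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        mu = d2.argmin(axis=1)             # mu_i = argmax_mu log N(x_i|m_mu, I)
    # average energy e(inf) = -(1/N) sum_i log N(x_i | m_{mu_i}, I)
    energy = 0.5 * d2[np.arange(N), mu].mean() + 0.5 * d * np.log(2 * np.pi)
    return mu, energy

# hypothetical data: two well-separated isotropic clusters N(m_1, I), N(m_2, I)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal((-4, 0), 1, (500, 2)), rng.normal((4, 0), 1, (500, 2))])
# several random initial populations; keep the lowest-energy (ground) state
labels, e = min((population_dynamics(X, 2, seed=s) for s in range(5)), key=lambda t: t[1])
```

Because the iteration is greedy it can get trapped in suboptimal fixed points, which is why multiple random initial populations are used and the lowest-energy outcome kept.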
The parameters θ_µ = (m_µ, Λ^{−1}_µ) can be estimated directly from the population via the equations

m_µ = [Σ_{j=1}^N δ_{µ;µ_j}]^{−1} Σ_{i=1}^N δ_{µ;µ_i} x_i
Λ^{−1}_µ = [Σ_{j=1}^N δ_{µ;µ_j}]^{−1} Σ_{i=1}^N δ_{µ;µ_i} (x_i − m_µ)(x_i − m_µ)^T   (79)

where µ_i is given by

µ_i = argmax_µ log N(x_i|m_µ, Λ^{−1}_µ) = argmax_µ [−½ Tr(Λ_µ (x_i − m_µ)(x_i − m_µ)^T) + ½ log|Λ_µ| − (d/2) log 2π]   (80)

Population dynamics algorithm for finite β

Also equation (45) can be solved via population dynamics. However, to replace the distribution of data Σ_ν Ã(ν) q_ν(x) with its empirical version N^{−1} Σ_{i=1}^N δ(x−x_i) we must assume that Ã(μ̃|ν) = Ã(μ̃). For µ ∈ [K], this gives us the following equations:

Q_µ(x) = (1/N) Σ_{i=1}^N δ(x − x_i) w_i(µ)
w_i(µ) = Ã(µ) e^{β log P(x_i|θ_µ)} / Σ_{μ̃} Ã(μ̃) e^{β log P(x_i|θ_{μ̃})}
θ_µ = argmax_θ ∫dx Q_µ(x) log P(x|θ)   (81)

They can be solved by creating a population {(w_i(1), ..., w_i(K)) : i ∈ [N]} and using the above equations to update this population until convergence of the free energy

f(β) = −(1/βN) Σ_{i=1}^N log Σ_{µ=1}^K Ã(µ) e^{β log P(x_i|θ_µ)}   (82)

Finally, we note that both population dynamics algorithms derived in this subsection look somewhat similar to the Expectation-Maximisation (EM) algorithm, see e.g. [8]. Comparing the Gaussian EM, used for maximum likelihood inference of Gaussian mixtures, with (79) shows that the main difference is that EM uses the average ⟨δ_{µ;µ_i}⟩_EM, over some 'EM-measure', instead of the delta function δ_{µ;µ_i}. Gaussian EM is hence an 'annealed' version of the population dynamics (79), but exactly how to relate the two algorithms in a more formal manner is not yet clear.

Numerical experiments

In the mean-field (MF) theory of Bayesian clustering in [21], the average entropy (64) (derived via a different route) was the central object.
It was mainly used for the Gaussian data model P(x|θ_µ) ≡ N(x|m_µ, Λ^{−1}_µ), where it becomes the MF entropy

F(Ã) = ½ Σ_{µ=1}^K Ã(µ) log[(2πe)^d |Λ^{−1}_µ(Ã)|]   (83)

where Λ^{−1}_µ(Ã) is the covariance matrix

Λ^{−1}_µ(Ã) = Σ_{ν=1}^L Ã(ν|µ) ⟨(x − m_µ(Ã))(x − m_µ(Ã))^T⟩_ν   (84)

and m_µ(Ã) = Σ_{ν=1}^L Ã(ν|µ) ⟨x⟩_ν is the mean. Here we use ⟨···⟩_ν for the averages generated by q_ν(x). We note that (83) is also equal to

F(Ã) = Σ_{µ,ν} Ã(ν,µ) D(q_ν||N_µ(Ã)) + Σ_{ν=1}^L Ã(ν) H(q_ν)   (85)

where N_µ(Ã) ≡ N(x|m_µ(Ã), Λ^{−1}_µ(Ã)). In addition, for the Gaussian model, the Laplace method, quite often used in statistics to approximate likelihoods [10], applied to the log-likelihood (3) for N → ∞ gives the entropy

F̂_N(C, X) = ½ Σ_{µ=1}^K (M_µ(C)/N) log[(2πe)^d |Λ^{−1}_µ(C,X)|]   (86)

where Λ^{−1}_µ(C,X) is the empirical covariance of the data in cluster µ and M_µ(C) = Σ_{i≤N} c_iµ is its size. This expression can be minimised for clustering, either by gradient descent [21] or by any other algorithm. The MF entropy (83) makes non-trivial predictions about F̂_N(C,X), such as on the structure of its local minima, and correctly estimated F̂_N ≡ min_C F̂_N(C,X) for Gaussian data. However, it systematically overestimates F̂_N when K > L and when the separations between clusters are small [21]. We expect the present replica theory, related to the MF theory via the inequality e(∞) ≤ F(Ã), to be more accurate. To test this expectation, we generated samples from two isotropic Gaussian distributions N(m_1, I) and N(m_2, I). Each sample X, split equally between the distributions, is of size N = 2000 and dimension d = 10. We note that for any given N and d, there exists an ε > 0 such that most of the x_i in sample X lie inside the two spheres centred at m_1 and m_2, both of radius √(d(1+ε))*. The latter suggests that the Euclidean distance ∆ = ||m_1 − m_2||, measured relative to the natural scale √d, can be used as a measure of the degree of separation [26] between the 'clusters' centred at m_1 and m_2 (see Figure 1).
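For this two-cluster Gaussian set-up the MF entropy (83), (84) can be checked against its closed form directly. With equal priors Ã(ν|µ=1) = 1/2 the single K = 1 component has mean (m_1+m_2)/2 and covariance I + (∆/2)² along the axis joining the two means, giving F_1 = (d/2) log(2πe) + ½ log[1+(∆/2)²]. A small numerical check, with hypothetical d and ∆:

```python
import numpy as np

d, delta = 10, 3.0                       # hypothetical dimension and separation
m1, m2 = np.zeros(d), np.zeros(d)
m1[0], m2[0] = -delta / 2, delta / 2     # two isotropic Gaussians N(m_nu, I)

# K = 1: mixture mean m(A~) and covariance (84) with A~(nu|1) = 1/2
m = (m1 + m2) / 2
cov = np.eye(d) + 0.5 * (np.outer(m1 - m, m1 - m) + np.outer(m2 - m, m2 - m))
F1 = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * cov)[1]   # eq. (83) for K = 1

# closed form: F1 = (d/2) log(2 pi e) + (1/2) log(1 + (delta/2)^2)
F1_closed = 0.5 * d * np.log(2 * np.pi * np.e) + 0.5 * np.log(1 + (delta / 2) ** 2)
assert np.isclose(F1, F1_closed)

# K = 2 with a perfect split of the two components: F2 = (d/2) log(2 pi e).
# With the log(K) penalty, K = 1 is preferred whenever F1 < F2 + log 2,
# i.e. whenever delta <= 2*sqrt(3); here delta = 3 falls in that regime.
F2 = 0.5 * d * np.log(2 * np.pi * np.e)
mf_prefers_K1 = F1 < F2 + np.log(2)
```

With ∆ = 3 < 2√3 the penalised MF criterion indeed prefers K = 1, illustrating why the MF theory cannot recover the true number of clusters at small separations.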
* The probability of being outside a sphere is bounded from above by N e^{−d I(ε)}, where I(ε) = [ε − log(1+ε)]/2 (see Appendix E). A much tighter bound, given by N Γ(d/2, d(1+ε)/2)/Γ(d/2), uses the fact that for x sampled from N(m, I) the squared Euclidean distance ||x−m||² follows the χ² distribution.

Figure 1 (caption). The data was generated for ∆/√d ∈ {1/2, 1, 3/2, 2, 5/2}, from left to right. Middle: data projected into two dimensions, for ∆ increasing from left to right. The quality of the clustering, measured by the 'purity' ρ_N obtained by the population dynamics clustering algorithm, increases with ∆. ρ_N was measured for 10 random samples of data, but only the minimum, median and maximum values of ρ_N (numbers connected by lines) are shown. The size of each sample, split equally between the distributions, was N = 20000, and the clustering algorithm assumed the number of clusters to be K = 2. Top: F̂_N + log(K) (red crosses connected by lines), with the log-likelihood F̂_N ≡ min_C F̂_N(C,X) computed by a gradient descent algorithm, shown as a function of the assumed number of clusters K. Symbols, connected by lines and with error bars, denote the average and ± one standard deviation, measured over 10 random samples of data. Bottom: the log-likelihood F̂_N (red crosses connected by lines) is compared with the results of the mean-field theory (blue line) and population dynamics (connected black squares). For K ≥ 2 only the mean-field lower bound (d/2) log(2πe) is plotted.

We used gradient descent to find the low entropy states of (86) for our data. For each sample X we ran the algorithm from 10 different random initial states C(0), and computed F̂_N(C(∞), X). The latter was used to estimate F̂_N ≡ min_C F̂_N(C,X). For this data, the log-likelihood function F̂_N + log(K) has a minimum at K = 2, i.e. when the number of assumed clusters K equals the number of true clusters L, so it can be used reliably to infer the true number of clusters.
However, this inference method no longer works when the separation $\Delta$ is too small (see Figure 1). Nevertheless, the 'quality' of the clustering, as measured by the purity
$$\rho_N(C,\tilde C)=\frac1N\sum_{\mu=1}^{K}\max_{\nu}\sum_{i=1}^{N}c_{i\mu}\tilde c_{i\nu}\quad[27],$$
which compares the clustering $C$ obtained by the algorithm with the true clustering $\tilde C$, is still reasonable♯ for $K=2$, i.e. for the true number of clusters, as can be seen in Figure 1.

♯ We note that $0<\rho_N\le1$, with $\rho_N=1$ corresponding to a perfect recovery of the true clusters and $\rho_N\approx1/L$ corresponding to a random (unbiased) assignment into clusters.

The predictions of the MF theory for $\hat F_N$, i.e. $\min_{\tilde A}F(\tilde A)$, are $F_1=\frac12 d\log(2\pi e)+\frac12\log[1+(\Delta/2)^2]$ for $K=1$, and $F_2=\frac12 d\log(2\pi e)$ for $K=2$. Thus $F_1\ge F_2$, as required. Furthermore, if $\log(2)\ge\frac12\log[1+(\Delta/2)^2]$, which happens when $\Delta\le2\sqrt3$, then $F_2+\log(K)\ge F_1$, so the MF theory is unable to recover the true number of clusters when the separation $\Delta$ is small. The numerical results for $\hat F_N+\log(K)$ are in qualitative agreement with the predicted values, but the MF predictions for $\hat F_N$ are indeed found to be inaccurate when the separation $\Delta$ is small, and wrong, $F_K\ge F_2$ by equation (85), when $K>2$. See Figure 1. To test the predictions of our replica theory we solve the Gaussian population dynamics equations (79) and (80) for data with the same statistical properties as in the above gradient descent experiments, but with a population size $N=20000$. We find that the average energy
$$e(\infty)=-\sum_{\mu\le K}\int{\rm d}x\,Q_\mu(x)\log\mathcal{N}(x|m_\mu,\Lambda^{-1}_\mu),\qquad(87)$$
as computed by the population dynamics algorithm, is in good agreement with the value of $\hat F_N$ obtained by gradient descent minimisation (see Figure 1). The residual differences observed between $e(\infty)$ and $\hat F_N$ are finite-size effects. Furthermore, we note that the numerical complexity of the population dynamics algorithm is consistent with the lower bound that is linear in $N$ (on average), as follows from the complexity analysis in [21].
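The purity $\rho_N$ can be computed directly from the two label vectors. A minimal sketch (the function name is our own):

```python
import numpy as np

def purity(inferred, true):
    """rho_N = (1/N) sum_mu max_nu |{i : inferred_i = mu, true_i = nu}|:
    for every inferred cluster, count its overlap with the best-matching
    true cluster, then divide by the total number of points."""
    inferred = np.asarray(inferred)
    true = np.asarray(true)
    total = 0
    for mu in np.unique(inferred):
        total += np.bincount(true[inferred == mu]).max()
    return total / len(true)
```

As stated in the footnote, purity is 1 for a perfect recovery (up to relabelling) and about $1/L$ for random assignments.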
Finally, we compare the Gaussian variant of the population dynamics clustering algorithm with a popular software package [11] which uses the EM algorithm to estimate the maximum $L_N(X)$ of the log-likelihood
$$\ell_N(X)=\sum_{i=1}^{N}\log\sum_{\mu=1}^{K}w(\mu)\,\mathcal{N}(x_i|m_\mu,\Sigma_\mu)\qquad(88)$$
with respect to the parameters of the Gaussian mixture model (GMM) $\sum_{\mu\le K}w(\mu)\mathcal{N}(x_i|m_\mu,\Sigma_\mu)$, which are the means $m_\mu$, the covariances $\Sigma_\mu$ and the weights $w(\mu)\ge0$, where $\sum_{\mu\le K}w(\mu)=1$. To this end we consider inferring the number of clusters in samples of Gaussian data with more than $L=2$ clusters, non-identity covariance matrices and a relatively large number of dimensions (see Figures 2, 3 and 5). The software package uses the Bayesian Information Criterion (BIC), $2L_N-n_N\log(N)$, where $n_N$ is the number of parameters used in the GMM, and the population dynamics algorithm uses $\hat F_N+\log(K)$, with the log-likelihood $\hat F_N\equiv\min_C\hat F_N(C,X)$ estimated by the average energy $e(\infty)$, to infer the number of clusters in the data. For uncorrelated data we observe in Figure 2 that inference success in both methods is strongly affected by the degree of separation $\Delta$ of the clusters in the data, as measured by the Euclidean distance between the means of the Gaussians. For small $\Delta$ the recovery of the true number $L=3$ of clusters is not possible. A simple MF argument, similar to the one used for $L=2$, predicts that this inference failure will happen when $\Delta\le2\sqrt3$, i.e. exactly as for $L=2$. However, both algorithms are found to 'work' below this MF threshold (see Figure 2), suggesting that the MF argument gives an upper bound. For correlated data, even when the separation parameter $\Delta$ is zero, the true number of clusters can still be recovered correctly by both algorithms (see Figure 3). In all numerical experiments described in Figures 2 and 3 the log-likelihood density $-L_N/N$, estimated by the EM algorithm, is an upper bound for the log-likelihood density $\hat F_N$ computed by Gaussian population dynamics. This points, at least in the regime of finite dimension $d$ and sample size $N\to\infty$, to a possible relation between these likelihood functions.

[Figure 2 caption: Inferring the number of clusters in data generated from Gaussian distributions $\mathcal{N}(m_\mu,I)$ with separation $\Delta=\|m_\mu-m_\nu\|$, where $(\mu,\nu)\in[3]$. The sample, split equally between the distributions, is of size $N=3\times10^4$, and the data dimension is $d=10$. The data was generated for $\Delta/\sqrt d\in\{\frac12,1,\frac32,2,\frac52\}$, but results shown here (from left to right) are only for $\Delta/\sqrt d\in\{\frac12,1,\frac32\}$. Top: ${\rm BIC}\equiv2L_N-n_N\log(N)$, where $L_N$ is the log-likelihood of the GMM estimated by the EM algorithm and $n_N$ is the number of parameters, as a function of $K$. Bottom: $\hat F_N+\log(K)$, where $\hat F_N\equiv\min_C\hat F_N(C,X)$ is the log-likelihood function computed by the population dynamics algorithm, as a function of $K$.]

In the high-dimensional regime $d\to\infty$ and $N\to\infty$, with $d/N$ finite, both algorithms fail to find the correct number of clusters (see Figure 5), but they fail differently. The algorithm which uses Gaussian population dynamics, which was derived assuming finite $d$ and $N\to\infty$, predicts more than $L=3$ clusters in the data, and the algorithm which uses EM predicts only one cluster. However, the population dynamics 'almost' predicts the correct number $L=3$ of clusters: the changes in the log-likelihood function $\hat F_N+\log(K)$ in the $K>3$ regime are much smaller than in the $K\le3$ regime, as can be seen in Figure 5. This behaviour is also observed for similarly generated data with the same sample size but with higher dimensions (not shown here), suggesting that taking the effect of the dimension $d$ into account properly in the present theoretical framework could lead to improvements in inference.

Discussion

In this paper we use statistical mechanics to study model-based Bayesian clustering. The partitions of data are microscopic states, the negative log-likelihood of the data is the energy of these states, and the data act as disorder in the model.
The optimal (MAP) partition corresponds to the minimal-energy state, i.e. the ground state of this system. The latter can be obtained from the free energy via a low-'temperature' limit, so to investigate MAP inference we evaluate the free energy. We assume that in a very large system, i.e. for a large sample size, the free energy (density) is self-averaging. This allows us to focus on the disorder-averaged free energy, using the replica method. Following the prescription of the replica method we first compute the average for an integer number $n$ of replicas, then we take the large-system limit followed by the limit $n\to0$. The latter is facilitated by assuming replica symmetry (RS) in the order parameter equation. The main order parameter in the theory is the (average) distribution of data in each cluster $\mu\in[K]$. In the low-temperature limit, the equations of the RS theory allow us to study the low-energy states of the system. In this limit the average free energy and average energy are identical. We show that the true partitions of the data are recovered exactly when the assumed number of clusters $K$ and the true number of clusters $L$ are equal, and the model distributions $P(x|\theta_\mu)$ have non-overlapping supports for different $\theta_\mu$. The high-temperature limit of the RS theory recovers the mean-field theory of [21]. In this latter limit, the average energy, which equals the MF entropy [21], is dominated by the prior. The MF entropy is an upper bound for the low-temperature average energy, and can be optimised by selecting the prior. Our order parameter equation can be solved numerically using a population dynamics algorithm. Using this algorithm for Gaussian data very accurately reproduces the results obtained by the gradient-descent algorithm that minimises the negative log-likelihood of the data, even in the regime of small separations between clusters and when $K>L$, where the MF theory gives incorrect predictions [21].
The zero temperature population dynamics algorithm can be used for MAP inference. There are several interesting directions into which to extend the present work. Many current studies use the so-called Rand index [28], or the 'purity' [27], for measuring the dissimilarity between the true and inferred clusterings of data, but it would be also interesting to estimate the probability that the inferred clustering is 'wrong'. Another direction is to consider the high dimensional regime where N → ∞ and d → ∞, with d/N finite. We envisage that here the task of separating clusters may be 'easier' than in the lower dimensional d/N → 0 regime, due to the 'blessing of dimensionality' phenomenon [29], according to which most data sampled from high-dimensional Gaussian distributions reside in the 'thin' shell of a sphere (see Appendix E). Both the early study [30] and the more recent study [31] on Bayesian discriminant analysis indicate that the classification of data, a supervised inference problem closely related to clustering, becomes significantly easier in the high-dimensional regime. Alternatively, the high dimensional regime could also cause overfitting, and one may want to quantify this phenomena by using a more general information-theoretic measure of overfitting [3]. Acknowledgements This work was supported by the Medical Research Council of the United Kingdom (grant MR/L01257X/1). Appendix A. Disorder average In this Appendix we study the average e −βN n α=1F N (C α ,X) {C α } X = dx 1 · · · dx N C q(C|L) L ν=1 N i=1 q ciν ν (x i ) e −βN n α=1F N (C α ,X) {C α } = dx 1 · · · dx N L ν=1 N i=1 q ciν ν (x i ) e −βN n α=1F N (C α , X) {C α };C , (A.1) where the average · · · {C α };C now refers to the distribution { n α=1 P (C α |K)} q(C|L). 
If we define the density Q µ (x|C α , X) = 1 N N i=1 c α iµ δ(x − x i ), (A.2) then we may write −N n α=1F N (C α , X) = n α=1 K µ=1 log e N i=1 c α iµ log P (xi|θµ) θµ = n α=1 K µ=1 log e N dx Qµ(x|C α ,X) log P (x|θµ) θµ (A.3) and for (A.1) we obtain dx 1 · · · dx N L ν=1 N i=1 q ciν ν (x i ) e β n α=1 K µ=1 log e N Qµ(x|C α ,X) log P (x|θµ)dx θµ {C α };C = dx 1 · · · dx N L ν=1 N i=1 q ciν ν (x i ) × n α=1 K µ=1 x dQ α µ (x) δ Q α µ (x) − Q µ (x|C α , X) × e β n α=1 K µ=1 log e N Q α µ (x) log P (x|θµ )dx θµ {C α };C = dQ dQ e iN n α=1 K µ=1 Q α µ (x)Q α µ (x)dx × e β n α=1 K µ=1 log e N Q α µ (x) log P (x|θµ )dx θµ × N i=1 dx i L ν=1 q ciν ν (x i ) e −i n α=1 K µ=1 c α iµQ α µ (xi) {C α };C . (A.4) Using the properties of {c iν }, the last line in the above expression can be rewritten as N i=1 dx i L ν=1 q ciν ν (x i ) e −i n α=1 K µ=1 c α iµQ α µ (xi) {C α };C = N i=1 L ν=1 c iν dx q ν (x)e −i n α=1 K µ=1 c α iµQ α µ (x) {C α };C = e N i=1 log L ν=1 ciν dx qν (x) exp −i n α=1 K µ=1 c α iµQ α µ (x) {C α };C . (A.5) Since c iν , c α iν ∈ {0, 1}, subject to L ν=1 c iν = K µ=1 c α iµ = 1, it follows that the vectors c = (c 1 , . . . , c L ), c i = (c i1 , . . . , c iL ), c α = (c α 1 , . . . , c α K ) and c α i = (c α i1 , . . . , c α iK ), will satisfy the identities c · c i = δ c,ci and c α · c α i = δ c α ,c α i . Inserting c c i · c = 1 and c α c α i · c α = 1 into the exponential function in the average (A.5) now gives, with µ = (µ 1 , . . . , µ n ) ∈ {1, . . . , K} n : N i=1 log L ν=1 c iν dx q ν (x)e −i n α=1 K µ=1 c α iµQ α µ (x) = c {c α } N i=1 c · c i n α=1 c α · c α i log L ν=1 c ν dx q ν (x)e −i n α=1 K µ=1 c α µQ α µ (x) = ν,µ N i=1 c iν n α=1 c α iµα c {c α } c ν n α=1 c α µα × log L ν ′ =1 c ν ′ dx q ν ′ (x)e −i n α=1 K µ ′ α =1 c α µ ′ αQ α µ ′ α (x) = ν,µ N i=1 c iν n α=1 c α iµα log dx q ν (x) e −i n α=1Q α µα (x) , (A.6) where we used the identities c α c α µ = 1 for all (α, µ), and c c ν log[ ν ′ c ν ′ φ ν ′ ] = log φ ν for all ν. 
Let us now define the density A(ν, µ|C, {C α }) = 1 N N i=1 c iν n α=1 c α iµα , (A.7) where N A(ν, µ|C, {C α }) is the number of data-points that are sampled from the distribution q ν (x) and assigned to clusters µ 1 , . . . , µ n for the n replicas, respectively. Using this definition and (A.6) in equation (A.5) converts the latter expression into (14), as claimed. e N ν,µ A(ν,µ|C,{C α })log dx qν (x) exp −i n α=1Q α µα (x) {C α };C = ν,µ dA(ν, µ) δ [A(ν, µ) − A(ν, µ|C, {C α })] {C α };C × e N ν, Appendix B. Derivation of RS equations The RS assumption implies that Q α µα (x) = Q µα (x), from which one deduces θ α µ = θ µα via (41). Insertion of these forms into the right-hand side of (43), using (44), leads to We can now take the replica limit n → 0, and obtain (45). Using the RS assumption in (44) gives us the following expression for the marginal A(ν) = µ A(ν, µ): ν,µ δ µ;µα A(ν, µ) q ν (x) e n γ=1 β log P (x|θµ γ ) q ν (x) e n γ=1 β log P (x|θµ γ ) dx = ν,µ δ µ;µαà (ν) dx q ν (x)A(ν) =à (ν) dx q ν (x) µÃ (µ|ν) e β log P (x|θµ) n νà (ν) dx qν(x) μà (μ|ν) e β log P (x|θμ) n (B.2) Hence lim n→0 A(ν) =Ã(ν). The RS equation for the conditional A(µ|ν) becomes A(µ|ν) = dx q ν (x) n α=1à (µ α |ν) e β log P (x|θµ α ) νà (ν) dx qν(x) μà (μ|ν) e β log P (x|θμ) n (B.3) Its conditional marginal is A(µ|ν) = dx q ν (x)Ã(µ|ν) e β log P (x|θµ) μà (μ|ν) e β log P (x|θμ) n−1 νà (ν) dx qν(x) μà (μ|ν) e β log P (x|θμ) n , (B.4) which for n → 0 becomes (46): A(µ|ν) = dx q ν (x)à (µ|ν) e β log P (x|θµ) μà (μ|ν) e β log P (x|θμ) (B.5) Finally, inserting Q α µα (x) = Q µα (x) and θ α µ = θ µα into the nontrivial part of the average free energy (42) and taking the limit n → 0 gives equation (47): f (β) − φ(β) = − lim n→0 1 βn log νà (ν) dx q ν (x) K µ=1à (µ|ν) e β log P (x|θµ) n = − 1 β νà (ν) dx q ν (x) log K µ=1à (µ|ν) e β log P (x|θµ) (B.6) Appendix C. 
Physical meaning of observables Let us consider the following two averages: Q µ (x) = Q µ (x|C, X) C|X X , (C.1) A(ν, µ) = A(ν, µ|C, X) C|X X , (C.2) in which · · · C|X is generated by the Gibbs-Boltzmann distribution (50) and the disorder average · · · X by the distribution (8). Using the replica identity C W (C)F (C) C W (C) = lim n→0 C W (C)F (C) C W (C) n−1 = lim n→0 C 1 . . . C n F (C 1 ) n α=1 W (C α ) (C.3) we may write for any test function g(x) dx Q µ (x) = C 1 · · · C n dx Q µ (x|C 1 , X)g(x) n α=1 P (C α |K)e −βNFN (C α ,X) X = e −βN n α=1F N (C α ,X) dx Q µ (x|C 1 , X)g(x) X {C α } . (C.4) Following the same steps we used in computing the disorder average in (13) where the distribution Q 1 µ (x) is the solution of equation (43). Thus, assuming that the replica symmetry assumption is correct, the physical meaning of the distribution in the our RS equation (45) . The above expression can now be used, following the same steps as for the Q µ (x) order parameter, to show that for N → ∞ and n → 0 the following will hold: Appendix E. 'Sphericity' of Normally distributed samples Here we show that almost all points of any random sample from the d-dimensional Normal distribution N (x|m, Σ), with mean m and covariance Σ, lie in the annulus d(λ max −ǫ) < ||x−m|| 2 < d(λ max +ǫ), where || · · · || is the Euclidean norm and λ max is the maximum eigenvalue of Σ, for sufficiently large d and 0 < ǫ ≪ 1. If x is sampled from N (x|m, Σ), then ||x − m|| 2 = Tr(Σ). We want to bound the following probability: Prob(||x−m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ)) = Prob(||x−m|| 2 ≤ Tr(Σ)−dǫ) + Prob(||x−m|| 2 ≥ Tr(Σ)+dǫ). If, in contrast, we observe a sample x 1 , . . . 
, x N from N (x|m, Σ), instead of a single vector x, then the probability Prob(∪ N i=1 ||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ) ) that at least one of the events ||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ) occurs, can be bounded by combining Boole's inequality with inequalities (E.4) and (E.8): Prob(∪ N i=1 ||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ) ) ≤ N i=1 Prob(||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ)) ≤ 2N exp − d 2 min Φ dλ min Tr(Σ)−dǫ , Φ dλ max Tr(Σ)+dǫ (E.14) Repeating similar steps to those followed earlier then gives for λ max > λ min : Prob(∪ N i=1 ||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ) ) ≤ 2N exp − d 2 Φ λ max λ max +ǫ (E.15) provided ǫ ∈ (0, λ max (λ max −λ min )/(λ max +λ min )), whereas for Σ = λI we have Prob(∪ N i=1 ||x i −m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ) ) ≤ 2N exp − d 2 Φ λ λ+ǫ , (E.16) provided ǫ ∈ (0, λ). It is now clear that there is a function d(ǫ, λ max , N ) > 0 such that for d > d(ǫ, λ max , N ) almost all points of a sample from N (x|m, Σ) lie in the annulus † † d(λ max − ǫ) < ||x − m|| < d(λ max + ǫ). ¶ For non-uniform P (C|K) we have to minimiseF N (C, X) − N −1 log P (C|K) instead ofF N (C, X). log {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] + φ N (β). (16) For the case of uniform distributions q(C|L) = 1/L!S(N, L) and p(C|K) = 1/K!S(N, K) the latter entropies are, respectively, log(L!S(N, L)) and log(K!S(N, K)). This gives us H(p, q)/N = log(L)+n log(K) in the limit N → ∞. Comparing this asymptotic result for H(p, q)/N with H(A)/N in Figure 1 . 1Bayesian clustering of data generated from Gaussian distributions N (m 1 , I) and N (m 2 , I), with separation ∆ = ||m 1 − m 2 ||. The sample, split equally between the distributions, is of size N = 2000, and the data dimension is d = 10. Figure 3 .Figure 4 . 34Inferring the number of clusters in data generated from Gaussian distributions N (0, Σµ) with (from left to right) µ ∈[3], µ ∈ [4] and µ ∈[5]. 
The samples of dimension d = 10, split equally between the distributions, were, respectively, of the size N = 3×10 4 , N = 4×10 4 and N = 5×10 4 . The covariance matrices Σµ were sampled from the Wishart distribution with d + 1 degrees of freedom and precision matrix I. Top: BIC ≡ 2L N − n N log(N ), where L N is the log-likelihood of GMM estimated by EM algorithm and n N is the number of parameters, as a function of K. Bottom:F N + log(K), withF N computed by the population dynamics algorithm, as a function of K. The log-likelihood densities −L N /N (top dotted line) andF N (bottom solid line) plotted as functions of, respectively, the cluster separation ∆/ √ d (computed at the inferred number of clusters) and inferred number of clusters K for the data described inFigures 2 and 3. Figure 5 . 5Inferring number of clusters in the data generated from Gaussian distributions N (mµ,Σµ) with separation ∆/ √ d = ||mµ − mν || = 5.2, where (µ, ν) ∈ [3]. The sample, split equally between the distributions, is of size N = 3 × 10 3 , and of dimension d = 500. The (diagonal) covariance matrices Σµ were sampled from χ 2 distribution with 3 degrees of freedom. The maximum and minimum diagonal entries in these matrices is, respectively, 17.064 and 0.017, so ∆/ √ d = 5.2 ensures that clusters in the sample are well separated (see Appendix E). Left:F N + log(K), withF N ≡ min CFN (C, X) computed by the population dynamics algorithm, as a function of K. Right: BIC ≡ 2L N − n N log(N ), where L N is the log-likelihood of GMM estimated by EM algorithm and n N is the number of parameters, as a function of K. = µ A(ν,µ)log dx qν (x) exp[{dA dÂ} e NΨ[{Q};{A,Â}] , ν, µ) iÂ(ν, µ) + log dx q ν (x) e e −iN ν,µÂ (ν,µ)A(ν,µ|C,{C α }) {C α };C . 
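The $n_N$ entering the BIC in these captions is the parameter count of a $K$-component full-covariance GMM in $d$ dimensions. A sketch of the bookkeeping (sign convention as in the text, so larger BIC is better; function names are our own):

```python
import math

def gmm_param_count(K, d):
    """(K-1) independent weights + K*d means + K*d*(d+1)/2 covariance entries."""
    return (K - 1) + K * d + K * d * (d + 1) // 2

def bic(loglik_LN, K, d, N):
    """BIC = 2*L_N - n_N*log(N), as used in the figure captions."""
    return 2.0 * loglik_LN - gmm_param_count(K, d) * math.log(N)
```

At fixed fit quality, every extra component lowers the BIC by its added parameter count times $\log N$.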
(A.9) Finally, using (A.8) in the average (A.4) gives us the integral γ=1à (µ γ |ν) e β log P (x|θµ γ ) νà (ν) dx qν (x) n γ=1 μγà (μ γ |ν) e β log P (x|θμ γ ) × q ν (x) e n γ=1 β log P (x|θµ γ ) q ν (x) e n γ=1 β log P (x|θµ γ ) dx = νà (ν)q ν (x)Ã(µ|ν) e β log P (x|θµ) μà (μ|ν) e β log P (x|θμ) n−1 νà (ν) dx qν(x) μà (μ|ν) e β log P (x|θμ) X {C α } = {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] dx Q 1 µ (x)g(x), (C.5) and for n → 0, using {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] Q 1 µ (x) dx = 1, this leads us for N → ∞ to the desired asymptotic result lim N →∞ dx Q µ (x)g(x) = lim N →∞ {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] dx Q 1 µ (x)g(x) {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] is given by (51). Similarly we can work outA(ν, µ) = A(ν, µ|C, X) used the definitionsc iν = 1 [x i ∼ q ν (x)] and P (X|C) ciν ν (x i ). Substitution of the definition of P β (C|X) allows us to work out the average further: ;µ1 A(ν, µ|C,{C α })X|C C {C α } , (C.8) in which A(ν, µ|C,{C α }) is defined in equation (A.7) {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}] µ δ µ;µ1 A(ν, µ) {dQ dQ dA dÂ} e N Ψ[{Q,Q};{A,Â}]= µ δ µ;µ1 A(ν, µ), (C.9)where A(ν, µ) is the solution of equation(43). From this we deduce that (52) indeed gives the physical meaning of the RS expression (46). used the replica identity (C.3). Assuming initially that n ∈ N allows us to compute the average over X in the above expression log P (xi|θ)θ X {C α } = − e −βN n α=1F N (C α ,X) 1 N K µ=1 log e N dxQµ(x|C 1 ,X)log P (x|θ) θ X {C α } (D.2) and, with the short-hand Ψ[. . .] = Ψ[{Q,Q}; {A,Â}] and after taking the replica limit n → 0 within the RS ansatz, we then arrive at equation (e N dx Q 1 µ (x)log P (x|θ) θ {dQ dQ dA dÂ} e N Ψ[...] 
for sufficienty small positive α we can use the Markov inequality to obtain Prob(||x−m|| 2 ≥ Tr(log |I−αΣ|+α(Tr(Σ)+dǫ)) .(E.2) the log-likelihood−875000 −874500 −874000 −873500 −873000 1 2 3 4 5 −910000 −908000 −906000 −904000 1 2 3 4 5 −945000 −940000 −935000 −930000 −925000 −920000 1 2 3 4 5 BIC 14.55 14.6 14.65 14.7 1 2 3 4 5 15.1 15.15 15.2 1 2 3 4 5 15.3 15.5 15.7 1 2 3 4 5 + This is certainly true for model distributions P (x|θµ) with non-overlapping supports. The last line, which assumes that Σ −1 − αI is positive definite, follows from (78). Denoting the eigenvalues of the covariance matrix Σ by λ 1 , . . . , λ d , we can bound log |I − αΣ| = d ℓ=1 log(1 − αλ(ℓ)) from below by d log(1 − αλ max ), where λ max = max ℓ λ(ℓ). Using this in (E.2) gives us the simpler inequalityThe function d log(1 − αλ max ) + α(Tr(Σ) + dǫ) is found to have its maximum at α = (Tr(Σ) + dǫ − dλ max )/(λ max (Tr(Σ) + dǫ)), which allows us to optimise the upper bound in (E.3) and produce the inequality, and is exactly zero when x = 1. Secondly, we derive a similar bound for the second probability in (E.1):Moreover, since Tr(Σ) ≤ dλ max , we may also write Prob(||x−m|| 2 / ∈ (Tr(Σ)−dǫ, Tr(Σ)+dǫ))The remaining extrema are given bywith ǫ 1 = λ max (λ max −λ min ) λ max + λ min , ǫ 2 = λ max −λ min (E.12)Furthermore, when λ max = λ min = λ, i.e. Σ = λI, one obtains ǫ ∈ (0, λ) : min Φ λ λ−ǫ , Φ λ λ+ǫ = Φ λ λ + ǫ (E.13) . M Advani, S Ganguli, Phys. Rev. X. 631034Advani M and Ganguli S 2016 Phys. Rev. X 6 031034 . A Mozeika, O Dikmen, J Piili, Phys. Rev. E. 9010101Mozeika A, Dikmen O, and Piili J 2014 Phys. Rev. E 90 010101 . A C C Coolen, J E Barrett, P Paga, Perez-Vicente Cj, J. Phys. A: Math. Theor. 50375001Coolen A C C, Barrett J E, Paga P, and Perez-Vicente CJ 2017 J. Phys. A: Math. Theor. 50 375001 H Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction. 
[4] Nishimori H 2001 Statistical Physics of Spin Glasses and Information Processing: An Introduction (Oxford: Oxford University Press)
[5] Mézard M and Montanari A 2009 Information, Physics, and Computation (Oxford: Oxford University Press)
[6] de Souza R S, Dantas M L L, Costa-Duarte M V, Feigelson E D, Killedar M, Lablanche P Y, Vilalta R, Krone-Martins A, Beck R, and Gieseke F 2017 Mon. Not. R. Astron. Soc. 472 2808
[7] Hanage W P, Fraser C, Tang J, Connor T R, and Corander J 2009 Science 324 1454
[8] Bishop C M 2006 Pattern Recognition and Machine Learning (Berlin: Springer)
[9] Nobile A and Fearnside A T 2007 Stat. Comput. 17 147
[10] Guihenneuc-Jouyaux C and Rousseau J 2005 J. Comput. Graph. Stat. 14 75
[11] Scrucca L, Fop M, Murphy T B, and Raftery A E 2016 R J. 8 205
[12] Barber D 2012 Bayesian Reasoning and Machine Learning (Cambridge: Cambridge University Press)
[13] Rose K, Gurewitz E, and Fox G C 1990 Phys. Rev. Lett. 65 945
[14] Blatt M, Wiseman S, and Domany E 1996 Phys. Rev. Lett. 76 3251
[15] Luksza M, Lässig M, and Berg J 2010 Phys. Rev. Lett. 105 220601
[16] Watkin T L H and Nadal J-P 1994 J. Phys. A: Math. Gen. 27 1899
[17] Barkai N and Sompolinsky H 1994 Phys. Rev. E 50 1766
[18] Biehl M and Mietzner A 1994 J. Phys. A: Math. Gen. 27 1885

†† For small d, the bound in (E.15) is very loose, so it makes more sense to consider the probability that at least one $x_i$ in the sample $X$ lies outside the ball $B_{\sqrt{d(\lambda_{\max}+\epsilon)}}(m)$, given by ${\rm Prob}(\cup_{i\le N}\{x_i\notin B_{\sqrt{d(\lambda_{\max}+\epsilon)}}(m)\})\le N\exp[-\frac d2\Phi(\frac{\lambda_{\max}}{\lambda_{\max}+\epsilon})]$.

[19] Lesieur T, De Bacco C, Banks J, Krzakala F, Moore C, and Zdeborová L 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton) p 601
[20] Corander J, Gyllenberg M, and Koski T 2009 Adv. Data Anal. Class. 3 3
[21] Mozeika A and Coolen A C C 2018 Phys. Rev. E 98 042133
[22] Mézard M, Parisi G, and Virasoro M 1987 Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications (Singapore: World Scientific)
[23] Rennie B C and Dobson A J 1969 J. Comb. Theory 7 116
[24] Cover T M and Thomas J A 2012 Elements of Information Theory (New York: Wiley)
[25] De Bruijn N G 1981 Asymptotic Methods in Analysis (New York: Dover)
[26] Dasgupta S 1999 40th Annual Symposium on Foundations of Computer Science p 634
[27] Manning C D, Raghavan P, and Schütze H 2008 Introduction to Information Retrieval (Cambridge: Cambridge University Press)
[28] Rand W M 1971 J. Am. Stat. Assoc. 66 846
[29] Gorban A N and Tyukin I Y 2018 Phil. Trans. R. Soc. A 376 20170237
[30] Barkai N, Seung H S, and Sompolinsky H 1993 Phys. Rev. Lett. 70 3167
[31] Shalabi A, Inoue M, Watkins J, De Rinaldis E, and Coolen A C C 2018 Stat. Methods Med. Res. 27 336
[ "Boost-invariant Leptonic Observables and Reconstruction of Parent Particle Mass" ]
[ "Sayaka Kawabata \nDepartment of Physics\nTohoku University\n980-8578SendaiJapan\n", "Yasuhiro Shimizu \nDepartment of Physics\nTohoku University\n980-8578SendaiJapan\n\nIIAIR\nTohoku University\n980-8578SendaiJapan\n", "Yukinari Sumino \nDepartment of Physics\nTohoku University\n980-8578SendaiJapan\n", "Hiroshi Yokoya \nNCTS\nNational Taiwan University\n10617TaipeiTaiwan\n" ]
[ "Department of Physics\nTohoku University\n980-8578SendaiJapan", "Department of Physics\nTohoku University\n980-8578SendaiJapan", "IIAIR\nTohoku University\n980-8578SendaiJapan", "Department of Physics\nTohoku University\n980-8578SendaiJapan", "NCTS\nNational Taiwan University\n10617TaipeiTaiwan" ]
We propose a class of observables constructed from the lepton energy distribution, which are independent of the velocity of the parent particle if it is scalar or unpolarized. These observables may be used to measure properties of various particles in the LHC experiments. We demonstrate their usage in a determination of the Higgs boson mass.

The data and analyses from the experiments at the CERN Large Hadron Collider (LHC) are attracting increasing attention. The main goals of these experiments are discoveries of the Higgs boson and of signals of physics beyond the Standard Model. Once the Higgs boson or other new particles are found, the next step is to uncover properties of these particles. There are well-known challenges in performing accurate measurements for such purposes: (i) In reconstructing the kinematics of events, it is difficult to reconstruct jet energy scales accurately. This is in contrast to electron and muon energy-momenta, which can be measured fairly accurately. (ii) Often interesting events include undetected particles which carry off missing momenta. In this case, reconstruction of missing momenta is non-trivial. (iii) The parton distribution function (PDF) of the proton is needed in predicting cross sections and kinematical distributions. Our current knowledge of the PDF is limited, which induces relatively large uncertainties in these predictions. As a consequence of these difficulties, for instance, it is difficult to measure the energy-momentum of a produced new particle or to predict accurately their statistical distributions.

To circumvent these difficulties, many sophisticated methods for kinematical reconstruction of events have been devised. See, for instance, [1, 2] and references therein. In these methods, one takes advantage of various kinematical constraints, which follow from specific event topologies, to complement uncertainties induced by the above difficulties.
Still, in most cases challenges remain to reduce systematic uncertainties originating from ambiguities in jet energy scales, PDF and other nonperturbative or higher-order QCD effects. Generally it is quite non-trivial to keep these uncertainties under control, both theoretically and experimentally.

In this paper we propose a class of observables which can be used to measure properties of particles produced in the LHC experiments, by largely avoiding the above uncertainties. We consider a particle X which is scalar or unpolarized and whose decay daughters include one or more charged leptons ℓ = e± or µ±. The observables are constructed from the energy distribution of ℓ and are independent of the velocity of X. Although experimental cuts and backgrounds induce corrections to this property, we will show that systematic uncertainties are suppressed and can be kept under control. Furthermore, we can utilize the degree of freedom of the observables to reduce or control effects induced by cuts and backgrounds.

In the first part of the paper, we explain the construction of these observables, in the two-body decay and multi-body decay cases separately. In the latter part, we demonstrate their usefulness in a determination of the Higgs boson mass using the vector-boson fusion process.

2-body decay: X → ℓ + Y

Suppose a parent particle X decays into two particles, one of which is a lepton ℓ whose energy is $E_0$ (monochromatic) in the rest frame of X. In the case that X is scalar or unpolarized, the normalized lepton energy distribution in a boosted frame, in which X has a velocity β, is given by
$$\frac{1}{\Gamma}\frac{d\Gamma}{dE_\ell}=\frac{\theta\!\left(E_0\,e^{-y}<E_\ell<E_0\,e^{y}\right)}{E_0\left(e^{y}-e^{-y}\right)}$$
in the limit where the mass of the lepton is neglected. Here, $y$ is the rapidity of X in the boost direction [13], related to β as $e^{2y}=(1+\beta)/(1-\beta)$. The step function is defined such that θ(cond.) = 1 if cond. is satisfied, and θ(cond.) = 0 otherwise.
We construct an observable $g(E_\ell/E_0)$ from the lepton energy $E_\ell$ in the boosted frame such that the expectation value $\langle g\rangle$ is independent of β. For later convenience, we write $g(E_\ell/E_0)=\left[dG(x)/dx\right]_{x=E_\ell/E_0}$. Hence,
$$\langle g\rangle=\int dE_\ell\,\frac{1}{\Gamma}\frac{d\Gamma}{dE_\ell}\,g(E_\ell/E_0)=\frac{G(e^{y})-G(e^{-y})}{e^{y}-e^{-y}}.$$
This shows that $\langle g\rangle$ is the even part $F(y)+F(-y)$ of $F(y)\equiv G(e^{y})/(e^{y}-e^{-y})$. It is β(y)-independent when $F$ is an odd function of $y$ plus a constant independent of $y$. Namely, $G(e^{y})=(\text{even fn. of }y)+\text{const.}\times(e^{y}-e^{-y})$.

Let us demonstrate the usage of $\langle g\rangle$. In the case $G=e^{y}-e^{-y}$, we obtain $\langle 1/E_\ell^2\rangle=1/E_0^2$. Thus, (mathematically) we can reconstruct $E_0$ from the lepton energy distribution irrespective of β of the parent particle. In the case $G(e^{y})=(\text{even fn. of }y)$, $\langle g(E_\ell/E_0)\rangle=0$. Conversely, we can adjust $E_0$, which enters $g(E_\ell/E_0)$ as an external parameter, such that $\langle g\rangle$ vanishes, and thereby determine the true value of $E_0$. We give two examples of $G$ for the latter case: (a) $G^{(a)}_n=1/[2n\cosh(ny)]$, corresponding to $g=E_0^{n+1}E_\ell^{n-1}(E_0^{2n}-E_\ell^{2n})/(E_0^{2n}+E_\ell^{2n})^2$. As $n$ increases, contributions of the $E_\ell$ distribution from the large-$|y|$ region ($E_\ell\ll E_0$ or $E_\ell\gg E_0$) become more suppressed. (b) $G^{(b)}_r=\theta(r<e^{-|y|}<1)(e^{-|y|}-r)$, corresponding to $g=\theta(r<E_\ell/E_0<1)-(E_0/E_\ell)^2\,\theta(1<E_\ell/E_0<r^{-1})$.
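The claim that $\langle 1/E_\ell^2\rangle = 1/E_0^2$ for any boost can be checked with a toy Monte Carlo: for a scalar parent, $\cos\theta^*$ of the lepton in the rest frame is uniform, and the boosted energy is $E_\ell = E_0\gamma(1+\beta\cos\theta^*)$. A sketch, where $E_0$, β and the sample size are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
E0, beta = 40.0, 0.8                        # arbitrary rest-frame energy and boost
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
cos_t = rng.uniform(-1.0, 1.0, 200_000)     # isotropic two-body decay of a scalar
E_lep = E0 * gamma * (1.0 + beta * cos_t)   # lepton energy in the boosted frame
estimate = np.mean(1.0 / E_lep ** 2)        # should equal 1/E0^2, for any beta
```

Repeating this for different β leaves the estimate unchanged up to statistical noise, which is the boost invariance exploited in the text.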
DOI: 10.1016/j.physletb.2012.03.050
Boost-invariant Leptonic Observables and Reconstruction of Parent Particle Mass

Sayaka Kawabata (Department of Physics, Tohoku University, Sendai 980-8578, Japan),
Yasuhiro Shimizu (Department of Physics and IIAIR, Tohoku University, Sendai 980-8578, Japan),
Yukinari Sumino (Department of Physics, Tohoku University, Sendai 980-8578, Japan),
Hiroshi Yokoya (NCTS, National Taiwan University, Taipei 10617, Taiwan)

arXiv:1107.4460v2 [hep-ph], 13 Apr 2012 (Dated: April 16, 2012)
PACS numbers: 11.80.Cr, 13.85.Hd, 14.80.Bn

Abstract: We propose a class of observables constructed from lepton energy distribution, which are independent of the velocity of the parent particle if it is scalar or unpolarized. These observables may be used to measure properties of various particles in the LHC experiments. We demonstrate their usage in a determination of the Higgs boson mass.

The data and analyses from the experiments at the CERN Large Hadron Collider (LHC) are attracting increasing attention. The main goals of these experiments are discoveries of the Higgs boson and of signals of physics beyond the Standard Model. Once the Higgs boson or other new particles are found, the next step is to uncover properties of these particles. There are well-known challenges in performing accurate measurements for such purposes: (i) In reconstructing the kinematics of events, it is difficult to reconstruct jet energy scales accurately. This is in contrast to electron and muon energy-momenta, which can be measured fairly accurately. (ii) Often interesting events include undetected particles which carry off missing momenta. In this case, reconstruction of missing momenta is non-trivial. (iii) The parton distribution function (PDF) of the proton is needed in predicting cross sections and kinematical distributions. Our current knowledge of PDF is limited, which induces relatively large uncertainties in these predictions. As a consequence of these difficulties, for instance, it is difficult to measure the energy-momentum of a produced new particle or to predict accurately their statistical distributions.

To circumvent these difficulties, many sophisticated methods for kinematical reconstruction of events have been devised. See, for instance, [1, 2] and references therein. In these methods, one takes advantage of various kinematical constraints, which follow from specific event topologies, to complement uncertainties induced by the above difficulties. Still, in most cases challenges remain to reduce systematic uncertainties originating from ambiguities in jet energy scales, PDF and other nonperturbative or higher-order QCD effects. Generally it is quite non-trivial to keep these uncertainties under control, both theoretically and experimentally.
In this paper we propose a class of observables which can be used to measure properties of particles produced in the LHC experiments, by largely avoiding the above uncertainties. We consider a particle X which is scalar or unpolarized and whose decay daughters include one or more charged leptons ℓ = e± or µ±. The observables are constructed from the energy distribution of ℓ and are independent of the velocity of X. Although experimental cuts and backgrounds induce corrections to this property, we will show that systematic uncertainties are suppressed and can be kept under control. Furthermore, we can utilize the degree of freedom of the observables to reduce or control effects induced by cuts and backgrounds.

In the first part of the paper, we explain the construction of these observables, in the two-body decay and multi-body decay cases separately. In the latter part, we demonstrate their usefulness in a determination of the Higgs boson mass using the vector-boson fusion process.

2-body decay: X → ℓ + Y

Suppose a parent particle X decays into two particles, one of which is a lepton ℓ whose energy is E_0 (monochromatic) in the rest frame of X. In the case that X is scalar or unpolarized, the normalized lepton energy distribution in a boosted frame, in which X has a velocity β, is given by

    D_β(E_ℓ; E_0) = θ(e^{-y} E_0 < E_ℓ < e^y E_0) / [(e^y - e^{-y}) E_0],   (1)

in the limit where the mass of the lepton is neglected. Here, y is the rapidity of X in the boost direction [13], related to β as e^{2y} = (1 + β)/(1 - β). The step function is defined such that θ(cond.) = 1 if cond. is satisfied, and θ(cond.) = 0 otherwise.

We construct an observable g(E_ℓ/E_0) from the lepton energy E_ℓ in the boosted frame such that the expectation value ⟨g⟩ is independent of β. For later convenience, we write g(E_ℓ/E_0) = [dG(x)/dx]_{x = E_ℓ/E_0}.
Hence,

    ⟨g⟩ = ∫ dE_ℓ D_β(E_ℓ; E_0) g(E_ℓ/E_0) = [G(e^y) − G(e^{-y})] / (e^y − e^{-y}).   (2)

This shows that ⟨g⟩ is the even part F(y) + F(−y) of F(y) ≡ G(e^y)/(e^y − e^{-y}). It is β(y)-independent when F is an odd function of y plus a constant independent of y. Namely, G(e^y) = (even fn. of y) + const. × (e^y − e^{-y}).

Let us demonstrate usage of ⟨g⟩. In the case G = e^y − e^{-y}, we obtain ⟨1/E_ℓ²⟩ = 1/E_0². Thus, (mathematically) we can reconstruct E_0 from the lepton energy distribution irrespective of β of the parent particle. In the case G(e^y) = (even fn. of y), ⟨g(E_ℓ/E_0)⟩ = 0. Conversely, we can adjust E_0, which enters g(E_ℓ/E_0) as an external parameter, such that ⟨g⟩ vanishes, and determine the true value of E_0. We give two examples of G for the latter case: (a) G^(a)_n = 1/[2n cosh(ny)], corresponding to g = E_0^{n+1} E_ℓ^{n-1} (E_0^{2n} − E_ℓ^{2n})/(E_0^{2n} + E_ℓ^{2n})². As n increases, contributions of the E_ℓ distribution from the large-|y| region (E_ℓ ≪ E_0 or E_ℓ ≫ E_0) become more suppressed. (b) G^(b)_r = θ(r < e^{-|y|} < 1)(e^{-|y|} − r), corresponding to g = θ(r < E_ℓ/E_0 < 1) − (E_0/E_ℓ)² θ(1 < E_ℓ/E_0 < r^{-1}). This observable is independent of the E_ℓ distribution in the regions E_ℓ < rE_0 and E_ℓ > r^{-1}E_0.

Many-body decay: X → ℓ + anything

We assume that we know the theoretical prediction for the lepton energy distribution in the rest frame of the parent particle X, dΓ_{X→ℓ+anything}/dE_0, where E_0 is the lepton energy in this frame. We define an observable in the boosted frame as

    O_G ≡ N ∫ (dE_0 / E_0²) (dΓ_{X→ℓ+anything}/dE_0) [dG(x)/dx]_{x = E_ℓ/E_0}.   (3)

It depends on the lepton energy E_ℓ in the boosted frame and on the parameters of dΓ/dE_0 such as the parent particle mass m_X. N is an arbitrary normalization constant independent of E_ℓ. We can prove (see below) that, if X is scalar or unpolarized, and if we take the same G as in the 2-body decay case, ⟨O_G⟩ is independent of β of X. In particular, in the case that G(e^y) is an even function of y, ⟨O_G⟩ = 0. The parent particle mass enters as an external parameter in the definition of O_G. Hence, we can use this property to determine m_X, provided that other parameters are known. ⟨g⟩ in the 2-body decay case can be regarded as a special case of ⟨O_G⟩.

Proof: The lepton energy distribution in the boosted frame is given by

    f_β(E_ℓ) = ∫ dE_0′ (dΓ_{X→ℓ+anything}/dE_0′) D_β(E_ℓ; E_0′).   (4)

Hence,

    ⟨O_G⟩ = ∫ dE_ℓ f_β(E_ℓ) O_G
          = N ∫ dE_0 dE_0′ (dΓ/dE_0)(dΓ/dE_0′) [G(e^y E_0′/E_0) − G(e^{-y} E_0′/E_0)] / [E_0 E_0′ (e^y − e^{-y})],   (5)

where we integrated over E_ℓ. In the case G(x^{-1}) = G(x), G(e^y E_0′/E_0) − G(e^{-y} E_0′/E_0) = G(e^y E_0′/E_0) − G(e^y E_0/E_0′) is anti-symmetric under the exchange of E_0 and E_0′, while all other parts are symmetric. It follows that ⟨O_G⟩ = 0. In the case G(x) = x − x^{-1}, G(e^y E_0′/E_0) − G(e^{-y} E_0′/E_0) = (E_0′/E_0 + E_0/E_0′)(e^y − e^{-y}), so that the y-dependence cancels out. (Q.E.D.)

Since we must know the theoretical prediction for dΓ/dE_0 to construct O_G, we consider use of O_G mainly for the purpose of precision measurements, after the nature of the parent particle (such as its decay modes) is roughly determined by other methods. In principle, different functions G in constructing O_G may be used to determine simultaneously more than one parameter in a decay process. In this first study, however, we consider the case where only one parameter is unknown.

Under a realistic experimental condition, in which β of X has a distribution D_X(β), the lepton energy distribution in the laboratory frame is given by D(E_ℓ) = ∫ dβ D_X(β) f_β(E_ℓ). In this case, the expectation value ⟨O_G⟩_D ≡ ∫ dE_ℓ D(E_ℓ) O_G is independent of D_X. Hereafter, we choose G to be an even function of y and use ⟨O_G⟩_D = 0 to determine m_X.
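The two key statements of the two-body construction, that ⟨1/E_ℓ²⟩ = 1/E_0² for any boost and that ⟨g⟩ vanishes exactly at the true E_0 when G(e^y) is even in y, can be checked with a short Monte Carlo sketch. All numbers below are hypothetical toy values, not from the paper; for G^(a)_n the weight reduces, up to normalization, to g = x^{n-1}(1 − x^{2n})/(1 + x^{2n})² with x = E_ℓ/E_0:

```python
import numpy as np

rng = np.random.default_rng(1)
E0_TRUE = 60.0  # lepton energy in the X rest frame (hypothetical toy value, GeV)

def sample_El(y, n):
    # Eq. (1): for fixed rapidity y, the lepton energy in the boosted frame
    # is uniform on (e^-y E0, e^y E0).
    return rng.uniform(np.exp(-y) * E0_TRUE, np.exp(y) * E0_TRUE, n)

# Events with several different (unknown) boosts of the parent particle.
El = np.concatenate([sample_El(y, 50_000) for y in (0.2, 0.8, 1.5)])

# G = e^y - e^-y gives the boost-invariant combination <1/El^2> = 1/E0^2.
print(np.mean(1.0 / El**2) * E0_TRUE**2)  # close to 1 up to statistics

def mean_g(E0, n=4):
    # Weight from G^(a)_n in the scaled variable x = El/E0 (normalization dropped).
    x = El / E0
    return np.mean(x**(n - 1) * (1 - x**(2 * n)) / (1 + x**(2 * n))**2)

# <g> vanishes at the true E0 for any boost mixture; locate the zero by bisection.
lo, hi = 30.0, 120.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    # a trial E0 that is too small makes x = El/E0 large and <g> negative
    lo, hi = (mid, hi) if mean_g(mid) < 0.0 else (lo, mid)
E0_rec = 0.5 * (lo + hi)
print(E0_rec)  # close to 60 up to statistical fluctuations
```

The recovered E0_rec is independent of the (here deliberately mixed) boost distribution, which is the content of eq. (2).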
Generally, various experimental cuts, which affect D(E_ℓ), are imposed due to detector acceptance effects and for event selection purposes. Furthermore, there are contributions from background processes which also modify D(E_ℓ). After incorporating these effects, there is no guarantee for ⟨O_G⟩ to vanish. Suppose that D(E_ℓ) is modified to D(E_ℓ) + δD(E_ℓ) by these effects, where the distributions are normalized as ∫ dE_ℓ D = ∫ dE_ℓ (D + δD) = 1. In the case |δD/D| ≪ 1, if ⟨O_G⟩_{D+δD} = 0 is used to extract m_X, the obtained value is systematically shifted from the true value by an amount

    δm_X = − ⟨O_G⟩_δD / (∂⟨O_G⟩_D / ∂m_X),   (6)

where we neglected O(δD²) corrections. We can use this formula to study systematic corrections in the determination of m_X. δD should be estimated by Monte Carlo (MC) simulations which take into account realistic experimental conditions. Errors in the estimate of δD contribute as systematic uncertainties in the determination of m_X. [14]

The production of Higgs bosons via vector-boson fusion is expected to be observed with a good signal-to-noise ratio if the Higgs boson mass m_H is within the range 135 GeV ≲ m_H ≲ 190 GeV; hence it is considered as a promising channel for the Higgs boson discovery. We perform a MC simulation to study the feasibility of the m_H determination using the observable O_G and the decay modes H → WW^(*) → ℓℓνν (ℓℓ = eµ, µµ, ee). We generate the events for the signal and background processes using MadEvent [3], which are passed to PYTHIA [4] and then to the fast detector simulator PGS [5]. We set √s = 14 TeV. The strategy of our analysis follows, to a large extent, that of [6], which studied the prospect of the Higgs boson search using the vector-boson fusion process in the ATLAS experiment.

We first repeated the analysis of [6] in the case of the H → eµνν mode using our analysis tools and imposing the same cuts. We reproduced the numbers in Tab. 4 of [6] reasonably well, considering differences in both analyses, such as different detector simulators, different PDF's, and different jet clustering algorithms (we use the cone algorithm with R = 0.5; we do not correct the jet energy scales given by the output of PGS): regarding the signal events, we reproduced the efficiencies of the cuts involving only leptons within a few % accuracy and those involving jets within a few tens % accuracy; in total, the efficiency of all the cuts was reproduced with 6% accuracy. On the other hand, the cross section of the signal is smaller by 30% in our analysis as compared to that of [6]. The difference originates from the different scales and different PDF's used in the event generators. We do not correct the difference in normalization of the cross sections; it may result in overestimates of the statistical errors given below. For simplicity, in our analysis we omit the Higgs boson production via gg fusion and the Higgs decay modes including τ's. In principle, part of these modes including e or µ in the final states can be used as signal events, since dΓ_{H→ℓ+anything}/dE_0 is calculable.

In our analysis of the m_H reconstruction, ideally two criteria need to be satisfied to ensure use of the observable O_G: (1) the lepton energy distribution in the Higgs rest frame agrees with the theoretical prediction dΓ_{H→ℓℓνν}/dE_0; (2) the lepton angular distribution in the Higgs rest frame is isotropic. The effects of cuts should not violate these criteria significantly. Jets in the signal process are associated with the Higgs production process and are independent of the Higgs decay process. Therefore, cuts involving only jets would not affect the above criteria but only affect the β distribution of the Higgs boson. By contrast, cuts involving leptons can affect the above criteria significantly.
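The first-order formula (6) can be illustrated in the same two-body toy setting used above: distort the lepton spectrum with an acceptance cut, find the exact shift of the zero of the observable, and compare with the prediction of eq. (6). All numbers are hypothetical, and the "mass" parameter here is the rest-frame energy E_0 of a two-body decay:

```python
import numpy as np

# Toy check of eq. (6): a small distortion δD of the lepton spectrum shifts the
# zero crossing of the observable by  δm ≈ -<O>_δD / (∂<O>_D/∂m).
E0_TRUE, Y, CUT = 60.0, 0.8, 28.0   # hypothetical values (GeV); CUT mimics a PT cut
a, b = np.exp(-Y) * E0_TRUE, np.exp(Y) * E0_TRUE
El = np.linspace(a, b, 200_001)

w_D = np.ones_like(El) / El.size            # flat spectrum of eq. (1), as weights
w_cut = np.where(El > CUT, 1.0, 0.0)
w_cut /= w_cut.sum()                        # renormalized distorted spectrum D + δD
dw = w_cut - w_D                            # δD, sums to zero

def g(El, E0, n=4):
    # weight from G^(a)_n in x = El/E0 (normalization dropped)
    x = El / E0
    return x**(n - 1) * (1 - x**(2 * n)) / (1 + x**(2 * n))**2

def avg(w, E0):
    return np.sum(w * g(El, E0))

lo, hi = 40.0, 100.0                        # exact zero of <g>_{D+δD} by bisection
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if avg(w_cut, mid) < 0.0 else (lo, mid)
shift_exact = 0.5 * (lo + hi) - E0_TRUE

slope = (avg(w_D, E0_TRUE + 0.05) - avg(w_D, E0_TRUE - 0.05)) / 0.1
shift_eq6 = -avg(dw, E0_TRUE) / slope       # first-order prediction of eq. (6)
print(shift_exact, shift_eq6)
```

For this mild cut (about 1% of the events removed) the exact shift and the first-order estimate agree closely; tightening the cut makes the neglected O(δD²) terms visible.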
Taking this into account, we use the same cuts as in [6] except the following two:

(A) Lepton acceptance: no muon isolation requirement; P_T(e) > 15 GeV, P_T(µ) > 10 GeV, |η_ℓ| < 2.5; only one pair of identified leptons in each event.

(B) Lepton cuts: M_ℓℓ < 45 GeV; P_T(e), P_T(µ) < 120 GeV.

Concerning (A), since the lepton isolation requirement as well as cuts on the lepton P_T and η bias the lepton angular distribution in the Higgs rest frame, we loosen the cuts and requirement as much as possible. Concerning (B), it is important to select events with leptons in the same directions, in order to reduce background events. We note that, if we impose the same cut on M_ℓℓ in the theoretical prediction for dΓ_{H→ℓℓνν}/dE_0, this cut does not contribute to systematic errors. Hence, we tighten the Lorentz-invariant cut and omit other frame-dependent angular cuts. By the modification of the cuts (A) and (B), the efficiency of the signal events is reduced by a few tens %, while the variations of the efficiencies of the background events are small. [15]

Tab. I lists estimates of (cross section) × (efficiency) of the signal and background events, after all the cuts are imposed. [16] For the backgrounds, we simulate only tt̄ + Wt and WW+jet (electroweak) events, which are shown to be the major backgrounds in [6]. We estimate contributions of other backgrounds to be negligible, compared to the uncertainties discussed below. Consistently with the signal, we omit background events with ℓ = e, µ from τ.

We test the two criteria with the signal events which passed all the cuts. The histogram in Fig. 1 shows the lepton (ℓ = µ) energy distribution in the Higgs rest frame for the H → µµνν mode and MC input m_H^MC = 150 GeV, where we looked up the parton-level neutrino momenta in each event to reconstruct the Higgs momentum. We generated a large-statistics event sample (about 12,000 events after applying all the cuts) in order to focus on the systematic effects caused by the cuts.
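As a minimal sketch, the lepton requirements of (A) and (B) can be written as an event filter. The thresholds are those quoted in the text; the event-record layout (a list of dictionaries) is a hypothetical illustration, not the analysis code of the paper:

```python
def passes_lepton_cuts(leptons, m_ll):
    """leptons: list of dicts with keys 'flavor' ('e' or 'mu'), 'pt' [GeV], 'eta';
    m_ll: dilepton invariant mass [GeV]."""
    if len(leptons) != 2:                     # only one identified lepton pair
        return False
    for lep in leptons:
        pt_min = 15.0 if lep["flavor"] == "e" else 10.0  # PT(e) > 15, PT(mu) > 10
        if not (pt_min < lep["pt"] < 120.0):  # upper PT cut from (B)
            return False
        if abs(lep["eta"]) >= 2.5:            # pseudorapidity acceptance
            return False
    return m_ll < 45.0                        # Lorentz-invariant cut of (B)

# Example: an e-mu pair well inside the acceptance passes the selection.
print(passes_lepton_cuts(
    [{"flavor": "e", "pt": 30.0, "eta": 1.2},
     {"flavor": "mu", "pt": 20.0, "eta": -0.4}], m_ll=40.0))  # True
```

Only the M_ℓℓ cut is frame-independent; as noted above, it is the one that can be imposed identically on the theoretical prediction and therefore does not contribute to systematic errors.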
The theoretical prediction for dΓ_{H→ℓℓνν}/dE_0 with the cut M_ℓℓ < 45 GeV (no other cuts are applied) is also plotted with a red line. A good agreement is observed.

Figs. 2 show the lepton cos θ distributions in the Higgs rest frame of these events, where θ is the angle measured from the Higgs boost direction. The total event sample is divided into four groups of equal size, in the increasing order of β (or y) of the Higgs boson. The average y value for each group is displayed. The events with low y have a distribution closer to the isotropic one, whereas the events with large boost factors have strong distortion of the cos θ distribution. Since Higgs bosons with large y are boosted mostly in the beam directions, the lepton P_T and η cuts and the electron isolation requirement bias the cos θ distribution. In particular, depletion of events in the cos θ ≃ 1 region of the large-y samples is caused by the lepton η cuts. On the other hand, Higgs bosons with small y are boosted in random directions, so that the lepton acceptance effects do not bias the lepton cos θ distribution strongly. Thus, the second criterion is satisfied only by events with small boost factors. We may choose G appropriately to suppress contributions of events with large y. By examining various G, we find that G = G^(a)_n for n ≈ 4 is an optimal choice. As n increases, ⟨O_G⟩ becomes less sensitive to events with large y but more sensitive to statistical fluctuations. [17]

In the m_H determination, typically there are two solutions which satisfy ⟨O_G⟩ = 0, one above and one below the WW threshold 2M_W. We believe that, unless m_H is very close to 2M_W [18], the correct solution can be identified relatively easily, using profiles of the lepton energy distribution, dilepton invariant mass distribution, etc. Hence, we consider deviations only around the correct solution.

Fig. 3 shows the muon energy distributions separately for the signal events and the tt̄ + Wt and WW+jet background events, which passed through all the cuts for the H → µµνν mode. [20] As seen in Fig. 4, the effects of the cuts shift the value of m_H at which ⟨O_G⟩ crosses zero away from the MC input value. In a similar manner, we can estimate the systematic shift δm_H, as defined in eq. (6), in the m_H reconstruction by varying the input Higgs mass value in MC simulations and using different modes. The values of δm_H for some sample points are listed in Tab. II. The magnitude of δm_H due to all the cuts on the signal events is smaller for ℓℓ = µµ than for eµ and is the largest for ee. This is because the acceptance corrections are smaller for µ than for e. We confirm |δm_H/m_H| ≪ 1, which shows that deviations from the ideal limit are suppressed and kept under control.

Tab. III shows estimates of the statistical uncertainties (standard deviations) in the m_H determination, Δm_H^stat, in the case that N_ℓ leptons are used to take the expectation value ⟨O_G⟩. The factors in the square-root correspond to unity for an integrated luminosity of 100 fb⁻¹ and using leptons of the signal events in the respective modes. The bottom row lists the upper bounds of the combined statistical errors of the three modes; these are the largest among the three modes, when the number of leptons is set as the sum of all three modes, N_ℓ^tot = N_ℓ^(µµ) + N_ℓ^(eµ) + N_ℓ^(ee). We use O_G with G = G^(a)_{n=4} in these estimates. The estimates are derived in the following way. We compute δO_G² ≡ ⟨O_G²⟩ − ⟨O_G⟩² and ∂⟨O_G⟩/∂m_H using the MC signal events which passed all the cuts. Since the statistical error of ⟨O_G⟩ is given by ΔO_G^stat = δO_G/√N_ℓ, we convert it to Δm_H^stat using the tangent ∂⟨O_G⟩/∂m_H evaluated at each input value m_H = m_H^MC; c.f. Fig. 4.

Now we examine stability and reliability of our prediction, using the µµ mode and MC input m_H^MC = 150 GeV. In Fig. 5(a) we vary the factorization scale µ_fac in the PDF and PYTHIA and plot ⟨O_G⟩ as a function of m_H: µ_fac is taken as 1/2, 1 and 2 times its default value of MadEvent (the P_T of each scattered parton from each proton in the vector-boson fusion case). [22] δm_H (and hence the value of m_H where ⟨O_G⟩(m_H) = 0) changes by about 0.8 GeV as µ_fac is varied from 1/2 to 2. In Fig. 5(b) we vary the cut value of the invariant mass of the two tagged jets from 550 GeV to 1 TeV. The corresponding variation of δm_H is about 0.8 GeV. In Fig. 5(c) we vary the values of the P_T cuts of the tagged jets, P_T,j1 > 40 GeV + δP_T,j and P_T,j2 > 20 GeV + δP_T,j, between δP_T,j = 0 and 20 GeV. (δP_T,j = 0 is the default value.) The variation of δm_H is about 1.0 GeV. These features agree with our expectation that the m_H determination is insensitive to PDF and cuts involving only jets, since only the β distribution of the Higgs boson would be affected. Moreover, we find that ⟨O_G⟩ as a function of m_H is also fairly stable. On the other hand, δm_H and ⟨O_G⟩(m_H) depend strongly on the lepton cuts. In Fig. 5(d) we vary the value of the P_T cut of the leading muon between 10 GeV and 30 GeV (while keeping the P_T cut value of the subleading muon at 10 GeV). The line ⟨O_G⟩(m_H) moves downwards and changes shape as the cut becomes tighter. This stems from the fact that the P_T cut suppresses the lower part of the lepton energy distribution. Cuts involving the missing transverse momentum P_T^miss (= −P_T^had − P_T^ℓℓ) may also affect the m_H determination, since the cuts affect the lepton energy distribution through indirect restrictions on P_T^ℓℓ. Here, P_T^ℓℓ denotes the transverse momentum of the tagged dilepton system, while P_T^had denotes the sum of the transverse momenta of all other visible particles. Since the magnitudes of systematic uncertainties in the P_T^had and P_T^ℓℓ measurements are rather different, instead of varying cuts involving P_T^miss, we multiply |P_T^had| by a scale factor of 0.9, 1.0 and 1.1; we keep all the cut values unchanged. See Fig. 5(e).
This variation of the energy scale affects restrictions on P_T^ℓℓ indirectly, since the bounds on P_T^miss = −P_T^had − P_T^ℓℓ are kept fixed. We find that the variation of δm_H is about 1.6 GeV.

In Fig. 5(f) we include the background contributions and vary the leading lepton P_T cut. (Through Figs. 5(a)-(e) only leptons from the signal events are used.) Compared to Fig. 5(d), each line moves upwards and the line shape is modified slightly by the inclusion of the backgrounds. We have already seen this effect in the shift of the reconstructed Higgs mass in Fig. 4 and Tab. II.

From these examinations and similar examinations for other input Higgs mass values and decay modes, we estimate that our prediction is fairly insensitive to uncertainties in PDF and jet variables. [23] On the other hand, the prediction is strongly dependent on the lepton P_T cuts. It is dependent also on other lepton acceptance corrections. These effects with respect to only leptons can in principle be estimated accurately, by understanding the detector coverage and detector performance well. We note that all the lines in Fig. 5(f) can be plotted using the real experimental data and can be compared with the prediction. Since these lines can be predicted accurately and are dependent on the MC input Higgs mass value m_H^MC, we can determine the Higgs mass by a fit of ⟨O_G(m_H)⟩, provided the background contributions can be estimated accurately. Similarly, all the lines in Figs. 5(b) and (c) can be compared, after inclusion of background contributions, with the corresponding ones plotted using the real experimental data. Alternatively, we may make similar comparisons by loosening various cuts. This procedure raises the relative weight of background contributions, so that we can test the prediction for the background contributions. We may also make use of the large degree of freedom of the observable O_G for further tests of the prediction.
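The conversion used for Tab. III, Δm^stat = δO_G / (√N_ℓ |∂⟨O_G⟩/∂m|), can be checked against pseudo-experiments in the same two-body toy model used earlier (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
E0_TRUE, Y, N_LEP = 60.0, 0.8, 2000   # hypothetical toy values

def g(El, E0, n=4):
    # weight from G^(a)_n in x = El/E0 (normalization dropped)
    x = El / E0
    return x**(n - 1) * (1 - x**(2 * n)) / (1 + x**(2 * n))**2

def sample(n):
    # eq. (1): flat lepton spectrum for a parent with rapidity Y
    return rng.uniform(np.exp(-Y) * E0_TRUE, np.exp(Y) * E0_TRUE, n)

def reconstruct(El):
    lo, hi = 40.0, 90.0               # zero of <g>(E0) by bisection
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.mean(g(El, mid)) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Error propagation from one large reference sample:
#   Δm_stat = δO / (sqrt(N) |d<O>/dm|).
ref = sample(400_000)
delta_O = np.std(g(ref, E0_TRUE))
slope = abs(np.mean(g(ref, E0_TRUE + 0.5)) - np.mean(g(ref, E0_TRUE - 0.5)))  # per 1 GeV
predicted = delta_O / (np.sqrt(N_LEP) * slope)

# Direct check: spread of the reconstructed value over many pseudo-experiments.
toys = np.array([reconstruct(sample(N_LEP)) for _ in range(300)])
print(predicted, toys.std())          # comparable within the toy accuracy
```

The 1/√N_ℓ scaling is the origin of the square-root factors quoted in Tab. III.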
In the case that higher lepton P_T cuts are unavoidable, determination of the Higgs mass by a fit of ⟨O_G(m_H)⟩ would be more realistic, as mentioned above, since ⟨O_G(m_H)⟩ is shifted considerably. [24] Explicitly, a fit can be carried out in the following way. We can compute ⟨O_G(m_H)⟩ using the lepton energy distributions measured in a real experiment and in MC simulation, respectively, as

    ⟨O_G(m_H)⟩_exp ≡ ∫ dE_ℓ D_exp(E_ℓ; m_H^true) O_G(E_ℓ; m_H),   (7)

    ⟨O_G(m_H)⟩_MC ≡ ∫ dE_ℓ D_MC(E_ℓ; m_H^MC) O_G(E_ℓ; m_H).   (8)

Defining a distance between ⟨O_G(m_H)⟩_exp and ⟨O_G(m_H)⟩_MC as

    d²(m_H^MC) ≡ ∫_{m_1}^{m_2} dm_H w(m_H) [⟨O_G(m_H)⟩_exp − ⟨O_G(m_H)⟩_MC]²,   (9)

and minimizing d(m_H^MC), we may reconstruct the Higgs mass. Here, w(m_H) ≥ 0 denotes an appropriate weight. As the lepton P_T cuts become tighter, the sensitivity to the Higgs mass decreases. For instance, in the case m_H^true = 150 GeV, if we impose P_T,lep1 > 25 GeV and P_T,lep2 > 10 GeV on the leading and subleading leptons, respectively, we estimate the statistical error of the Higgs mass reconstructed by the above fit to be about 5.0 GeV for an integrated luminosity of 100 fb⁻¹. The size of the error is not very dependent on the choices of m_1 and m_2 or the weight w. (We took m_1 = 120 GeV, m_2 = 240 GeV and w = 1.) Comparing Tab. III and the examination in Figs. 5, we anticipate that the statistical error will dominate over systematic errors. The accuracy of the m_H determination is better for a lighter Higgs mass, within the range 150 GeV ≲ m_H ≲ 200 GeV. (E.g., if m_H = 150 GeV, it would be reconstructed with 2-3% accuracy with 100 fb⁻¹ integrated luminosity.)

Let us comment on the effects of leptons from τ's, which we have neglected. Since these leptons would have a lower energy spectrum, the effects of the lepton P_T cuts are likely to be enhanced. We anticipate from Fig. 5(d) that the effect is to move the line ⟨O_G(m_H)⟩ downwards.
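The fit of eqs. (7)-(9) can be sketched in the two-body toy model used above (hypothetical numbers): the curve ⟨O_G(m)⟩ is scanned over the external parameter m, and the template at each trial mass is generated with the same acceptance cut as the "data", so the cut bias cancels in the comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
E0_TRUE, YS, CUT = 60.0, (0.3, 0.8, 1.4), 28.0   # hypothetical toy values (GeV)

def g(El, E0, n=4):
    # weight from G^(a)_n in x = El/E0 (normalization dropped)
    x = El / E0
    return x**(n - 1) * (1 - x**(2 * n)) / (1 + x**(2 * n))**2

def events(E0, n_per_y=100_000):
    # Mixture of boosts, then an acceptance cut modeled in data AND templates.
    El = np.concatenate([rng.uniform(np.exp(-y) * E0, np.exp(y) * E0, n_per_y)
                         for y in YS])
    return El[El > CUT]

m_grid = np.arange(50.0, 70.01, 1.0)   # scan of the parameter m in O_G, cf. eq. (9)
def curve(El):
    return np.array([np.mean(g(El, m)) for m in m_grid])

data = curve(events(E0_TRUE))          # plays the role of <O_G(m)>_exp
trials = np.arange(55.0, 65.01, 0.5)   # candidate masses for the MC template
d2 = [np.sum((data - curve(events(t))) ** 2) for t in trials]   # w(m) = 1
E0_fit = trials[int(np.argmin(d2))]
print(E0_fit)                          # close to 60 even though the cut biases
                                       # the naive zero crossing of <g>
```

Because the distortion is modeled identically in the template, the minimum of d² sits at the true mass even when the plain zero-crossing condition ⟨O_G⟩ = 0 is shifted by the cut.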
Nevertheless, we expect that the additional contribution from τ's and the cut effect will be accurately predictable and should not deteriorate the m_H determination.

The main purpose of the present study on the m_H reconstruction is to demonstrate that the mass reconstruction using O_G is applicable to such a complicated process. We chose the WW-fusion mode (rather than the gg-fusion mode) since a good signal-to-noise ratio is beneficial in demonstrating clearly the characteristics of the present method. We confirmed stability of our prediction except against purely leptonic cuts. Good understanding of background contributions is also important. Experimentally m_H would be reconstructed very accurately using the decay modes H → ZZ^(*) → ℓℓℓℓ in the relevant mass range [10]. The WW^(*) decay modes can be used to test consistency of the Higgs mass values, using our method or those of [11, 12]. Ref. [11] uses leptonic variables but depends severely on PDF. We reemphasize that our method is independent of PDF at leading order. In a future work, we may examine the gg-fusion mode, in which the statistical error would be smaller while the systematic errors would be larger than in the WW-fusion mode.

The observables proposed in this paper would have wide applications. For example, in supersymmetric models, O_G can be applied to a slepton decay into a lepton plus an undetected particle; since it is a two-body decay, its analysis would be straightforward. Another possible application is a measurement of the top quark mass. A preliminary analysis indicates that the top quark polarization effects are small.

The authors express their condolences for the victims of the disasters in Japan. The disasters also struck the current project heavily. The authors hope the completion of this work to be a step forward in revival from the tragedy. The work of Sumino is supported in part by Grant-in-Aid for scientific research No. 23540281 from MEXT, Japan.
PACS numbers: 11.80.Cr,13.85.Hd,14.80.Bn FIG. 1 :FIG. 2 : 12Lepton energy distribution in the Higgs rest frame. Lepton cos θ distribution in the Higgs rest frame. FIG. 3 : 3Muon energy distribution after applying all the cuts for H → µµνν mode. Histograms represent muons from the signal (corresponding to m MC H = 150 GeV), tt + W t and W W +jet events (overlayed, from back to front). 5 × OG as a function of mH in the theoretical prediction for dΓ H→ℓℓνν /dE0|M ℓℓ <45GeV . Muon energy distribution given inFig. 3is used: solid (dashed) line represents the expectation value taken with respect to the signal-plusbackground (signal) MC events. H → µµνν mode. The signal events are generated with m MC H = 150 GeV and the number of events after the cuts is about 12,000. The normalizations of the background events have been rescaled according to the respective cross sections to match the number of the signal events. Using this distribution the expectation value O G is computed; see Fig. 4.[19] The observable O G in eq. (3) is defined with G = G (a) n=4 and the theoretical prediction for dΓ H→ℓℓνν /dE 0 is computed imposing a cut M ℓℓ < 45 GeV on the dilepton invariant mass. The value of O G changes as a function of m H , which enters as an external parameter in the theoretical prediction for dΓ H→ℓℓνν /dE 0 | M ℓℓ <45GeV . Without any cuts and without background contributions, O G would cross zero at m H = 150 GeV in the large statistics limit. reconstruction using N ℓ leptons. (e, µ from τ are not included.) The factors in the square-root correspond to unity for an integrated luminosity of 100 fb −1 and using leptons of the signal events. at δm H = +0.8 GeV above the MC input value.[21] FIG. 5 : 510 5 × OG vs. mH (parameter in dΓ H→ℓℓνν /dE0|M ℓℓ <45GeV ). MC events with m MC H = 150 GeV and for µµ mode are used. Through (a)-(e) only the signal events are used, while in (f) both signal and background events are used. 
All the cut values, except the ones shown explicitly in panels (b), (c), (d) and (f), are kept fixed to their default values.

TABLE I: Cross section × efficiency, after all the cuts. e, µ from τ are not included.

                 Signal (m_H)                    Background
                 150 GeV   180 GeV   200 GeV    tt + Wt   WW+jets
  eµ mode [fb]   2.00      2.21      1.07       0.53      0.13
  µµ mode [fb]   1.33      1.47      0.68       0.31      0.05
  ee mode [fb]   0.76      0.89      0.42       0.24      0.04

Due to the effects of the cuts, the expectation value O_G taken with respect to the signal events crosses zero at about +2.6 GeV above the MC input value. This value is consistent with δm_H determined from eq. (6).

TABLE II: Estimate of the systematic shift δm_H [GeV], as defined in eq. (6), for O_G taken with respect to the signal-plus-background events. (The corresponding systematic error is different; see text.)

  ℓℓ                    m_H = 150 GeV   m_H = 180 GeV   m_H = 200 GeV
  eµ  Signal            +5.7            −0.9            −5.5
      Bkg: tt + Wt      −4              +14             +16
      Bkg: WW+jets      −1              +3              +5
  µµ  Signal            +2.6            +0.5            −3.4
      Bkg: tt + Wt      −2              +11             +16
      Bkg: WW+jets      0               0               −1
  ee  Signal            +7.6            −1.8            −13.2
      Bkg: tt + Wt      −1              +5              +5
      Bkg: WW+jets      0               +2              +2

TABLE III: Estimates of the statistical error ∆m_stat of the m_H reconstruction using N_ℓ leptons. (e, µ from τ are not included.) The factors in the square root correspond to unity for an integrated luminosity of 100 fb^−1 and using leptons of the signal events.

  m_H              150 GeV                180 GeV                200 GeV
  eµ mode [GeV]    4.1 √(400/N_ℓ)         11 √(442/N_ℓ)          14 √(214/N_ℓ)
  µµ mode [GeV]    5.5 √(266/N_ℓ)         14 √(294/N_ℓ)          20 √(136/N_ℓ)
  ee mode [GeV]    5.7 √(152/N_ℓ)         14 √(178/N_ℓ)          18 √(84/N_ℓ)
  Combined [GeV]   < 3.1 √(818/N_ℓ^tot)   < 8.2 √(914/N_ℓ^tot)   < 11 √(434/N_ℓ^tot)

The m_H-dependent cut on the transverse mass M_T of the ℓℓ-P_T^miss system (used in Tab. 7 of [6]) is omitted. We expect many additional ways for optimization in the m_H determination, such as inclusion of this cut.

[16] After this paper was submitted to arXiv, the ATLAS [9]. Hence, predictions with higher lepton P_T cut values may be more realistic. In principle, we can also use the dilepton trigger, in which case lower P_T cut values may be used.

A. Barr, C. Lester and P. Stephens, J. Phys. G 29, 2343 (2003); W. S. Cho, K. Choi, Y. G. Kim and C. B. Park, Phys. Rev. Lett. 100, 171801 (2008); W. S. Cho, K. Choi, Y. G. Kim and C. B. Park, Phys. Rev. D 79, 031701 (2009); M. Burns, K. Kong, K. T. Matchev and M. Park, JHEP 0810, 081 (2008).
A. J. Barr and C. G. Lester, J. Phys. G 37, 123001 (2010).
F. Maltoni and T. Stelzer, JHEP 0302, 027 (2003); J. Alwall et al., JHEP 0709, 028 (2007); J. Alwall et al., JHEP 1106, 128 (2011).
T. Sjostrand, S. Mrenna and P. Skands, JHEP 0605, 026 (2006).
S. Asai et al., Eur. Phys. J. C 32S2, 19-54 (2004).
T. Aaltonen et al., arXiv:1103.3233 [hep-ex].
ATLAS Collaboration, ATLAS-CONF-2011-134, http://cdsweb.cern.ch/record/1383837/files/ATLAS-CONF-2011-134.pdf.
G. L. Bayatian et al., J. Phys. G 34, 995 (2007).
G. Davatz, M. Dittmar and F. Pauss, Phys. Rev. D 76, 032001 (2007).
A. J. Barr, B. Gripaios and C. G. Lester, JHEP 0907, 072 (2009); K. Choi, S. Choi, J. S. Lee and C. B. Park, Phys. Rev. D 80, 073010 (2009).

y is defined with respect to the boost direction of X. This should be distinguished from the (pseudo-)rapidity η, defined with respect to the beam direction of experiments, used in the simulation study of m_H reconstruction.
[]
[ "arXiv:astro-ph/0006063v1 5 Jun 2000 Quasar environments at 0.5 ≤ z ≤ 0.8", "arXiv:astro-ph/0006063v1 5 Jun 2000 Quasar environments at 0.5 ≤ z ≤ 0.8" ]
[ "M Wold \nStockholm Observatory\n\n", "M Lacy \nIGPP\nLawrence Livermore National Labs\nUniversity of California\nDavis\n", "P B Lilje \nInstitute of Theoretical Astrophysics\nUniversity of Oslo\n\n", "S Serjeant \nAstrophysics Group\nImperial College London\n\n" ]
[ "Stockholm Observatory\n", "IGPP\nLawrence Livermore National Labs\nUniversity of California\nDavis", "Institute of Theoretical Astrophysics\nUniversity of Oslo\n", "Astrophysics Group\nImperial College London\n" ]
[]
Over the past few years, we have been collecting data with the Nordic Optical Telescope (NOT) on the galaxy environments around active galactic nuclei (AGN). Here we present some results from a sample of 21 radio-loud and 20 radio-quiet quasars in the redshift range 0.5 ≤ z ≤ 0.8. We find a few quasars in very rich environments, perhaps as rich as Abell class 1-2 clusters, but more often the quasars seem to prefer groups and poorer clusters. We also find that on average the galaxy environments around radio-loud and radio-quiet quasars are indistinguishable, consistent with the findings that both powerful radio-loud and radio-quiet quasars appear to be hosted by luminous galaxies with luminosities higher than the break in the luminosity function (Dunlop et al. 1993; Taylor et al. 1996). Comparing the galaxy richnesses in the radio-loud quasar fields with quasar fields in the literature, we find a weak, but significant, correlation between quasar radio luminosity and environmental richness.
null
[ "https://arxiv.org/pdf/astro-ph/0006063v1.pdf" ]
204,932,107
astro-ph/0006063
3d17fc432b0117b73388572350dca42567a63d6c
arXiv:astro-ph/0006063v1 5 Jun 2000

Quasar environments at 0.5 ≤ z ≤ 0.8

M. Wold (Stockholm Observatory), M. Lacy (IGPP, Lawrence Livermore National Labs, University of California, Davis), P. B. Lilje (Institute of Theoretical Astrophysics, University of Oslo), S. Serjeant (Astrophysics Group, Imperial College London)

Introduction

The differences and similarities between radio-loud and radio-quiet quasars (hereafter RLQs and RQQs, respectively) have kept astronomers busy for a long time. At essentially all wavelength ranges, except at radio wavelengths, the appearance of radio-loud and radio-quiet quasars is similar. RQQs appear compact, with a weak radio component coinciding with the optical quasar nucleus, whereas RLQs have extended lobes of radio emission with hotspots at the outer edges of the radio structure. The radio-emitting lobes are being fed by powerful jets emerging from a bright, central core.
RQQs can also have jet-like structures (Blundell & Beasley 1998), although with bulk kinetic powers ∼10^3 times lower than for RLQs (Miller, Rawlings & Saunders 1993). This suggests that both quasar types have jet-producing central engines, but that the efficiency of the jet production mechanism is very different in the two cases. One way to learn more about how the two quasar types are related is to study their host galaxies and their galactic environment. A longstanding belief that RQQs are hosted by spiral galaxies and RLQs by ellipticals is now being questioned. Recent studies (Dunlop et al. 1993; Taylor et al. 1996; McLure et al. 1999; Hughes et al. 2000) have found that powerful quasars at z ≳ 0.5, both RLQs and RQQs, seem to exist in galaxies above the break in the luminosity function at L_*, but a clear picture has still not emerged. Some studies claim a high fraction of disk morphologies amongst the radio-quiets (e.g. Percival et al. 2000), whilst others suggest that nearly all quasars are in giant ellipticals (e.g. McLure et al. 1999). The galaxy environment on scales larger than the host galaxy is also interesting. It may provide clues about quasar formation and evolution, since a period of quasar activity may be triggered by interactions and mergers (e.g. Stockton & MacKenty 1983; Ellingson, Green & Yee 1991). Also, a comparison of the galaxy environments around different types of AGN may help constrain the so-called 'Unified Models', e.g. the orientation-dependent unified scheme where a RLQ and a radio galaxy are believed to be intrinsically the same type of object, but viewed at different orientations to the line of sight (Barthel 1989). In the orientation-dependent unified scheme for RLQs and radio galaxies, one expects to find that the galaxy environments around the two are the same.
Radio galaxies and RLQs at 0.5 ≤ z ≤ 0.8 are often found in environments with richer than average galaxy density (Yee & Green 1987; Hill & Lilly 1991; Ellingson, Yee & Green 1991; Wold et al. 2000), from groups of galaxies and poorer clusters to Abell class 1 clusters or richer. This is perhaps not surprising since giant elliptical galaxies frequently reside in the centres of galaxy clusters. But what is the galaxy environment like for RQQs in this redshift range, and how does it compare to the environment around RLQs with comparable AGN luminosities? In order to make a meaningful comparison one needs a method to quantify the galaxy environment that takes into account the depth of the survey, the angular coverage and also corrects for foreground and background galaxies. One such parameter that has been much used when quantifying galaxy environments around AGN is the amplitude of the spatial cross-correlation function, the 'clustering amplitude'. Yee & López-Cruz (1999) find that the clustering amplitude is a robust estimator of galaxy richness in clusters. The first attempt at comparing the environments around radio-loud and radio-quiet quasars by using the clustering amplitude was made by Yee & Green (1984). They found only a marginal difference between RLQ and RQQ fields, but later Yee & Green (1987) did deeper imaging of the same RQQ fields, and added more RLQ fields to the study, this time finding an even smaller difference, but they were unable to draw any firm conclusions due to the small number (seven) of RQQ fields. The problem was later addressed by Ellingson et al. (1991b), who also used the clustering amplitude to quantify the environments around a sample of 32 RLQs and 33 RQQs at 0.3 < z < 0.6. They found the RLQs to exist in richer than average galaxy environments, and frequently also in clusters as rich as Abell class 1.
The RQQs, on the other hand, were found to be much less frequently situated in rich galaxy environments, suggesting that the two quasar types may be physically different objects. Since this study, there has not been much work aimed at comparing the Mpc-scale environments of the two quasar populations at moderate redshifts using the clustering amplitude. We have therefore undertaken a study using the NOT to collect data on the fields around two samples of 0.5 ≤ z ≤ 0.8 RLQs and RQQs that are matched in redshift and AGN luminosity. The redshift range was chosen to extend previous work, which went up to z ∼ 0.6, to as high a redshift as possible consistent with keeping the redshifted 4000Å break shortward of the I-band. We selected quasars randomly from complete flux-limited samples spanning a wide range in both optical and radio luminosity (for the RLQs). By extending the redshift range of previous studies, whilst maintaining the luminosity range, we are able to better disentangle trends in the environmental richness due to cosmic evolution from those due to radio and/or optical luminosity. Our assumed cosmology has H_0 = 50 km s^−1 Mpc^−1, Ω_0 = 1 and Λ = 0.

The quasar samples

The RLQ sample was selected from two different radio-optical flux-limited surveys, the Molonglo/APM Quasar Survey (Serjeant 1996; Maddox et al., in prep.; Serjeant et al., in prep.) and the 7C quasar survey (Riley et al. 1999), and consists of 21 radio-loud steep-spectrum quasars with redshifts 0.5 ≤ z ≤ 0.82 covering a wide radio luminosity range of 23.8 ≤ log(L_408MHz / W Hz^−1 sr^−1) ≤ 26.7. The 20 RQQs were selected from three optical surveys with different flux limits in order to cover a wide range in quasar B-band luminosity within the given redshift range. Eight of the quasars are from the faint Durham/AAT UVX survey of Boyle et al. (1990) and ten of the quasars are from the intermediate-luminosity Large Bright Quasar Survey by Hewett, Foltz & Chaffee (1995).
There are also two high-luminosity quasars in the sample, selected from the Bright Quasar Survey (BQS) by Schmidt & Green (1983). In Fig. 1 we show the distribution of the RLQs and the RQQs in the redshift-luminosity plane. There is a slight tendency for the quasars at bright M_B to appear at the highest redshifts. The correlation between redshift and luminosity is a well-known feature in flux-limited samples, but here it may instead be an artifact of the narrow redshift range of the sample. In our analysis we investigate if there are any correlations between the environmental richness and the quasar redshift and luminosity. It is therefore important that there is no underlying correlation between redshift and luminosity in the samples. We use Spearman's partial rank correlation coefficients for this analysis, allowing the correlation coefficient between two variables (e.g. environmental richness and radio luminosity) to be determined when holding the third variable constant (redshift). For more details about the RLQ sample, see Wold et al. (2000). The RQQ sample will be presented by Wold et al. (in prep.).

Control fields

Since we are concerned with investigating if there is an excess of galaxies in the quasar fields, we aim to have a good determination of the foreground and background counts. For this purpose, we obtained several images of random fields in the sky at approximately the same galactic latitudes as the quasar fields. There are twelve different control fields, of which five were imaged in two filters. In total, they cover 18 arcmin^2 in V, 58.7 arcmin^2 in R and 73.7 arcmin^2 in I. They were imaged along with sources in the quasar sample, so they have the same depth and were obtained in exactly the same manner as the quasar fields. This is important in order to obtain a robust estimate of the galaxy clustering in the quasar fields (Yee & López-Cruz 1999).
Observations

Most of the data were obtained using the High Resolution Adaptive Camera (HiRAC) at the NOT, equipped with either the 1k SiTe or the 2k Loral CCD, giving a field of view of 3×3 and 3.7×3.7 arcmin, respectively. Some images were also obtained using the ALFOSC in imaging mode, with the 2k Loral CCD with pixel scale 0.189 arcsec. Typically, the integrations were divided into four exposures of 600 s each. Two RLQ fields were imaged with the 107-in telescope at the McDonald Observatory, and five RLQ fields with the HST (Serjeant, Rawlings & Lacy 1997). The bulk of the data were obtained under photometric conditions, and the seeing FWHM is less than one arcsec in 11 out of the 14 RLQ fields imaged with the NOT, and in 15 out of the 20 RQQ fields. In several images 'fuzz' from the host galaxy is clearly visible around the stellar image of the quasar. The filters were chosen so as to give preference to early-type galaxies with strong 4000Å breaks at the quasar redshifts. For an early-type galaxy at z ≥ 0.67 the 4000Å break moves from R-band into I-band, so for the z ≥ 0.67 quasars we used I-band imaging and for the z < 0.67 quasars we used R-band. We also imaged 20 of the quasar fields in two filters, either V and R, or R and I, depending on the redshift of the quasar so as to straddle the rest-frame 4000Å break. For photometry and object detection we processed the images in FOCAS (Faint Object Classification and Analysis System). By performing completeness simulations in the images we find that the data are complete down to 24.0 in V, 23.5 in R and 23.0 in I, with errors of ±0.3 mag at the limits. All quasar fields, except two, lie at galactic latitudes |b| > 42° and have galactic reddening E(B−V) < 0.063. We corrected for galactic extinction using an electronic version of the maps by Burstein & Heiles (1982) and the Galactic extinction law by Cardelli, Clayton & Mathis (1989).
Results

To quantify the amount of excess galaxies in the quasar fields we use the amplitude, B_gq, of the spatial galaxy-quasar cross-correlation function, ξ(r) = B_gq r^−γ, where γ = 1.77. The amplitude is evaluated at a fixed radius of 0.5 Mpc at the quasar redshift, corresponding to ≈ 1 arcmin at z = 0.7, and has units of Mpc^1.77. Longair & Seldner (1979) showed that B_gq can be found by first obtaining the amplitude of the angular cross-correlation function, which is directly proportional to the relative excess of galaxies. See Wold et al. (2000) for details of the analysis. We counted the number of galaxies within the 0.5 Mpc radius in the quasar fields and averaged the counts for the R and I-band data, i.e. for the z < 0.67 and the z ≥ 0.67 quasar fields. The average counts are shown in Fig. 2, where we also have plotted the average galaxy counts from the control images for comparison. The two left plots in Fig. 2 show the average R-band counts for the z < 0.67 RLQ (13) and RQQ (7) fields, and the two plots to the right show the average I-band counts for the z ≥ 0.67 I-band RLQ (6) and RQQ (13) fields. In the RLQ fields there appears to be a small excess of faint galaxies at R, I ≳ 21, whereas the R-band RQQ fields show no excess. However, there is a clear excess of galaxies in the z ≥ 0.67 RQQ fields at I > 20. The errors in the background galaxy counts were calculated as 1.3√N in order to take into account the non-random fluctuations in the counts due to the clustered nature of field galaxies. We calculated the net excess of galaxies in each quasar field by subtracting the average background counts, and thereafter computed the clustering amplitude, B_gq. In Fig. 3 we show B_gq for the RLQ and the RQQ fields as a function of redshift and quasar B absolute magnitude. The dotted line across the plots shows the value obtained by Davis & Peebles (1983) for the amplitude of the local galaxy-galaxy autocorrelation function, B_gg = 60 Mpc^1.77.
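The quoted correspondence between the fixed 0.5 Mpc counting radius and ≈ 1 arcmin at z = 0.7 can be checked directly from the paper's assumed cosmology (H_0 = 50 km s^−1 Mpc^−1, Ω_0 = 1, Λ = 0), for which the angular-diameter distance has a closed form. A minimal sketch (the function names are ours, for illustration; this is not the authors' code):

```python
# Check of the quoted angular scale: in the Einstein-de Sitter cosmology assumed
# in the paper (H_0 = 50 km/s/Mpc, Omega_0 = 1, Lambda = 0) the angular-diameter
# distance has the closed form D_A(z) = (2c/H_0) * (1 - 1/sqrt(1+z)) / (1+z).
import math

C_KM_S = 299792.458  # speed of light [km/s]
H0 = 50.0            # Hubble constant assumed in the paper [km/s/Mpc]

def angular_diameter_distance(z):
    """D_A in Mpc for an Einstein-de Sitter universe."""
    return (2.0 * C_KM_S / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)

def angular_scale_arcmin(physical_mpc, z):
    """Angle subtended by a given physical length at redshift z, in arcmin."""
    theta_rad = physical_mpc / angular_diameter_distance(z)
    return math.degrees(theta_rad) * 60.0

# The fixed 0.5 Mpc counting radius subtends about 1 arcmin at z = 0.7:
print(round(angular_scale_arcmin(0.5, 0.7), 2))  # prints 1.05
```

With these numbers D_A(0.7) ≈ 1644 Mpc, so 0.5 Mpc indeed subtends just over one arcminute, as stated in the text.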
The mean clustering amplitudes for the RLQ and the RQQ samples are 213±66 and 189±83 Mpc^1.77, respectively. We thus make two observations: first, that the mean clustering amplitudes for the quasar fields are significantly larger than that of local galaxies, implying that the quasars exist in fields with richer than average galaxy density. Second, we note that the mean clustering amplitudes for the two samples are practically indistinguishable, i.e. on average, there is no difference in the galaxy environments on 0.5 Mpc scales for the RLQs and the RQQs. On average, both the RLQs and the RQQs seem to prefer environments similar to galaxy groups and poorer clusters of galaxies. The quasar environments span a wide range, however. Some individual fields show no significant excess of galaxies, and other fields appear to be very rich, e.g. the field around the RQQ BQS 1538+477 with B_gq in the range 1100-1200 Mpc^1.77 and a galaxy excess of ≈ 40-50 galaxies. Wold et al. (2000) argue that an amplitude of ≈ 740 Mpc^1.77 corresponds to Abell richness class ≳ 1, so the cluster candidate around BQS 1538+477 must qualify as a richness class 2 cluster. Two other fields with clustering amplitudes of 785±255 and 703±250 Mpc^1.77 are probable Abell class 1 clusters. Plotting the galaxies in the four richest RQQ fields in a colour-magnitude diagram reveals a hint of a red sequence at R − I ≈ 1.5-1.6, i.e. tentative evidence that these fields likely contain galaxy clusters at z ∼ 0.7-0.8; see Fig. 4.

Figure 4. Colour-magnitude diagram of the galaxies in the four richest RQQ fields (⟨z⟩ = 0.74 ± 0.02). There are 364 galaxies in this plot, and the distribution is smoothed with a Gaussian filter with smoothing lengths 0.5 mag in I magnitude and 0.125 in R−I. Few galaxy clusters are known at z ≳ 0.7, but it seems that the expected colour of the red sequence in clusters at these redshifts lies in the range 1.3 ≲ R − I ≲ 2 (Clowe et al. 1998; Luppino & Kaiser 1997).

As seen in Fig. 3, there are no obvious trends in B_gq with either redshift or quasar luminosity. There is a hint that the low-z RQQ fields have lower clustering amplitude than the higher redshift RQQ fields, but this is most likely an artifact of the narrow redshift range. The Spearman partial rank correlation coefficient giving the correlation between B_gq and redshift is 0.4 with a 1.5σ significance.

A link between radio luminosity and environmental richness?

Looking at B_gq for the RLQs as a function of radio luminosity reveals a weak, but significant, correlation between B_gq and radio luminosity with much scatter. This is shown in Fig. 5, where we also have plotted B_gq's for RLQ fields as found by Yee & Green (1987), Ellingson et al. (1991b) and Yee & Ellingson (1993). Here the correlation coefficient between B_gq and L_408MHz, holding redshift constant, is 0.4 with a 3.4σ significance.

Figure 5. Clustering amplitude for the quasars in our sample (filled circles) plotted together with RLQs in the literature as a function of radio luminosity. Open circles show data from Yee & Green (1987), stars are data from Ellingson et al. (1991b) and diamonds are data from Yee & Ellingson (1993). We find a correlation coefficient of 0.4 with a 3.4σ significance.

Does the correlation between radio luminosity and environmental richness imply that environment is the primary factor in controlling the radio luminosity of a RLQ? There are at least three ways in which such a situation could come about. In the first, the environment determines the bulk kinetic power in the radio jets. According to the relation between a galaxy's black hole mass and the mass of the spheroidal component (Kormendy & Richstone 1995; Magorrian et al. 1998), giant elliptical hosts of radio galaxies and RLQs should have high black hole masses, ∼10^8-10^9 M_⊙. Assuming that the radio jets are powered by accretion and that the accretion rate is proportional to the black hole mass, galaxies with massive black holes will power more luminous radio sources. These massive galaxies will prefer richer environments, and thus the correlation between radio luminosity and environmental richness may just reflect an increasing mass of the host. The second possibility is that more fuel for the quasar and its radio jets is available in a group or cluster environment. A group or a poor cluster environment may be ideal for the fuelling of a black hole, as encounters will be more common than in the field and will be of low enough relative velocity to disrupt the interacting galaxies and cause gas to flow into the centre (Ellingson et al. 1991a). A third possibility is that the radio luminosity is almost independent of the bulk kinetic power in the radio jets and is instead largely determined by the density of the environment into which the source expands. Wold et al. (2000) constructed a simple model with these assumptions and found that the predicted relation between B_gq and radio luminosity was much too steep to fit the data, thereby ruling out as strong a B_gq-L_408MHz dependence as we would see if all RLQs had the same jet power and environment was entirely responsible for determining the radio luminosity. Nevertheless, as the luminosity function for the radio jet power is likely to be steeply declining at high powers, it seems not unlikely that selection effects could operate to produce some correlation between L_408MHz and B_gq without it being as strong as it would be in this rather extreme model in which the jet power is the same for all sources. Given the large scatter in the correlation, it is however quite possible that both environment and radio jet power play important roles in determining the radio luminosity.
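The statistic used above, the correlation between B_gq and L_408MHz with redshift held constant, is a first-order partial Spearman coefficient, r_xy·z = (r_xy − r_xz r_yz) / √((1 − r_xz²)(1 − r_yz²)) applied to rank correlations. A self-contained illustration (not the authors' pipeline; function names are ours, and the significance computation is omitted):

```python
import math

def _ranks(x):
    """Average ranks (ties get the mean of their rank positions)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return _pearson(_ranks(x), _ranks(y))

def partial_spearman(x, y, z):
    """First-order partial Spearman correlation of x and y, holding z fixed."""
    rxy, rxz, ryz = spearman(x, y), spearman(x, z), spearman(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1.0 - rxz**2) * (1.0 - ryz**2))
```

For example, `partial_spearman(B_gq_values, L_radio_values, redshifts)` (with hypothetical input lists) would return the redshift-controlled rank correlation between richness and radio luminosity quoted in the text.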
The relationship between radio sources and their environments must be complex, and the vast majority of radio sources may lie in some sort of cluster-like environment, from groups of only a few galaxies to clusters as rich as Abell class 1 or more. Given this correlation, one might expect the RQQs, with radio luminosities typically two to three orders of magnitude lower than the RLQs, to be sited in poorer environments. Instead we find that the environments around RLQs and RQQs are indistinguishable. This is however fully consistent with the B_gq-radio luminosity correlation. That the large-scale environments around RLQs and RQQs are similar suggests that the process that decides on radio loudness in a quasar is not dependent on the environment on Mpc scales, but may instead be found in the central regions of the host galaxy. When an epoch of quasar activity is triggered in a galaxy, e.g. as a result of galaxy interactions and mergers, some central process then decides whether it becomes a RLQ or a RQQ. If a RLQ is born, the environment into which the source expands will to some extent determine its radio luminosity, and we observe the B_gq-L_408MHz correlation. If instead a RQQ is born, we do not observe any correlation between quasar luminosity (in this case optical luminosity) and environmental richness, because this quasar does not have extended radio lobes that can interact with the surrounding galaxy environment. The radio jets and the radio lobes extending beyond the host galaxy thus work like sensors from which we can read off the physical conditions in the intergalactic medium.

The evolutionary states of AGN-selected clusters

The fuelling mechanism of luminous AGN is still not understood, although it is thought that companion galaxies in groups and clusters may be able to supply fuel either through mergers or via a cluster cooling flow (e.g. Hall, Ellingson & Green 1997).
The merger and cooling flow models, however, make very different predictions for the evolutionary state of the cluster surrounding the AGN. If the AGN is fuelled by mergers and interactions, we might expect that the cluster is still forming by merging of sub-clumps, but in the cooling flow scenario the cluster may be well-established and virialized. We have therefore started to investigate the evolutionary states of AGN-selected galaxy clusters by weak lensing techniques. Using deep images obtained with the ALFOSC on the NOT in sub-arcsec seeing, we have mapped the projected mass distribution in rich, X-ray luminous AGN-selected clusters, and preliminary results show that we have comfortable detections of the weak lensing signal (> 3σ). These observations will also allow us to estimate the mass-to-light ratio for the clusters, and to investigate whether the mass-to-light ratio is different than for clusters selected solely on the basis of bright optical or X-ray emission from their baryonic matter component. The weak lensing technique is being used more and more frequently since it is a powerful method for investigating galaxy clusters (see e.g. Dahle et al., these proceedings). The combination of the good seeing conditions at the NOT and the wide-field imagers with good resolution currently available and underway at the NOT (the focal-reducer FRED) makes the NOT a powerful instrument for doing weak lensing observations.

Figure 1. The left-hand plot shows the location of the RLQs (filled circles) and the RQQs (open circles) in the redshift-luminosity plane. To the right are the radio luminosities at 408 MHz of the RLQs as a function of redshift.

Figure 2. Average galaxy number counts in the quasar (open circles) and control fields (filled circles). The dotted lines show the completeness limits of R = 23.5 and I = 23.0.

Figure 3. Clustering amplitudes for the RLQs (filled circles) and the RQQs (open circles) as a function of redshift and quasar B absolute magnitude. The dotted lines show the value of the local galaxy-galaxy clustering amplitude B_gg = 60 Mpc^1.77.

Acknowledgments. We are grateful to the staff at the NOT and the McDonald Observatory for help with the observations. We also thank the British Research Council and the Research Council of Norway for support. MW also wishes to thank the organizers of the meeting for support.

Barthel, P.D., 1989, ApJ, 336, 606
Blundell, K.M., & Beasley, A.J., 1998, MNRAS, 299, 165
Boyle, B.J., Fong, R., Shanks, T., & Peterson, B.A., 1990, MNRAS, 243, 1
Burstein, D., & Heiles, C., 1982, AJ, 87, 1165
Cardelli, J.A., Clayton, G.C., & Mathis, J.S., 1989, ApJ, 345, 245
Clowe, D., Luppino, G.A., Kaiser, N., Henry, J.P., & Gioia, I.M., 1998, ApJL, 497, 61
Davis, M., & Peebles, P.J.E., 1983, ApJ, 267, 465
Dunlop, J.S., Taylor, G.L., Hughes, D.H., & Robson, E.I., 1993, MNRAS, 264, 455
Ellingson, E., Green, R.F., & Yee, H.K.C., 1991a, ApJ, 378, 476
Ellingson, E., Yee, H.K.C., & Green, R.F., 1991b, ApJ, 371, 49
Hall, P., Ellingson, E., & Green, R.F., 1997, AJ, 113, 1179
Hewett, P.C., Foltz, C.B., & Chaffee, F.H., 1995, AJ, 109, 1498
Hill, G.J., & Lilly, S.J., 1991, ApJ, 367, 1
Hughes, D.H., Kukula, M.J., Dunlop, J.S., & Boroson, T., 2000, preprint (astro-ph/0002021)
Kormendy, J., & Richstone, D., 1995, ARA&A, 33, 581
Longair, M.S., & Seldner, M., 1979, MNRAS, 189, 433
Luppino, G.A., & Kaiser, N., 1997, ApJ, 475, 20
Magorrian, J., et al., 1998, ApJ, 115, 2285
McLure, R.J., Kukula, M.J., Dunlop, J.S., Baum, S.A., O'Dea, C.P., & Hughes, D.H., 1999, MNRAS, 308, 377
Miller, P., Rawlings, S., & Saunders, R., 1993, MNRAS, 263, 425
Percival, W.J., Miller, L., McLure, R.J., & Dunlop, J.S., 2000, preprint (astro-ph/0002199)
Riley, J.M., Rawlings, S., McMahon, R.G., Blundell, K.M., Miller, P., Lacy, M., & Waldram, E.M., 1999, MNRAS, 307, 293
Schmidt, M., & Green, R.F., 1983, ApJ, 269, 352
Serjeant, S.B.G., 1996, DPhil Thesis, Univ. Oxford
Serjeant, S., Rawlings, S., & Lacy, M., 1997, in Quasar Hosts Conference, ed. D.L. Clements & I. Perez-Fournon, Springer-Verlag, Berlin, 188
Stockton, A., & MacKenty, J.W., 1983, Nat, 305, 678
Taylor, G.L., Dunlop, J.S., Hughes, D.H., & Robson, E.I., 1996, MNRAS, 283, 930
Wold, M., Lacy, M., Lilje, P.B., & Serjeant, S., 2000, MNRAS, in press (astro-ph/9912070)
Yee, H.K.C., & Ellingson, E., 1993, ApJ, 411, 43
Yee, H.K.C., & López-Cruz, O., 1999, AJ, 117, 1985
Yee, H.K.C., & Green, R.F., 1984, ApJ, 280, 79
Yee, H.K.C., & Green, R.F., 1987, ApJ, 319, 28
Binary pulsars as probes of a Galactic dark matter disk

Andrea Caputo, Jesús Zavala, Diego Blas

Theoretical Physics Department, CERN, CH-1211 Geneva 23, Switzerland
Center for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavik, Iceland

doi: 10.1016/j.dark.2017.10.005 · arXiv: 1709.03991

Abstract. As a binary pulsar moves through a wind of dark matter particles, the resulting dynamical friction modifies the binary's orbit. We study this effect for the double disk dark matter (DDDM) scenario, where a fraction of the dark matter is dissipative and settles into a thin disk. For binaries within the dark disk, the effect is enhanced by the higher dark matter density and lower velocity dispersion of the dark disk, and by its co-rotation with the baryonic disk. We estimate the effect and compare it with observations in two limits of the Knudsen number (Kn). First, the case where DDDM is effectively collisionless within the characteristic scale of the binary (Kn ≫ 1), ignoring the possible interaction between the pair of dark matter wakes. Second, the fully collisional case (Kn ≪ 1), where a fluid description can be adopted and the interaction of the pair of wakes is taken into account. We find that the change in the orbital period is of the same order of magnitude in both limits. A comparison with observations reveals good prospects for probing currently allowed DDDM models with timing data from binary pulsars in the near future. We finally comment on the possibility of extending the analysis to the intermediate (rarefied-gas) case with Kn ∼ 1.
Keywords: Dark Disk, Binary Pulsar

Introduction

Unravelling the nature of dark matter (DM) is among the most fundamental frontiers of modern physics. Despite the growing body of gravitational evidence pointing to this new form of matter, its properties as a particle remain elusive.
Given the lack of any convincing signature of non-gravitational DM interactions, the gravitational effect of DM on ordinary matter remains the only direct source of observational information about DM properties.

(Email addresses: [email protected] (Andrea Caputo), [email protected] (Jesús Zavala), [email protected] (Diego Blas). ¹ Permanent address: Instituto de Física Corpuscular, Universidad de Valencia and CSIC, Edificio Institutos Investigación, Catedrático José Beltrán 2, 46980 Spain.)

Among the many gravitational DM probes, binary pulsars have recently been proposed as a novel way of deriving model-independent upper bounds on the dark matter density at different distances from the Galactic center [50]. The presence of DM modifies the binary's orbit through dynamical friction as the binary moves through the ambient DM medium. Given the extraordinary precision achieved in the measurement of the orbital properties of pulsar binaries, it was concluded in [50] that these objects could be used to put constraints on the central DM density of the Milky Way. Isolated and binary pulsars have also been suggested as probes of ultra-light DM candidates; see [32,52,6,19].

In this paper we argue that the potential of binary pulsars as precise probes of the DM distribution increases substantially if one assumes that a fraction of the DM in our galaxy is distributed in a thin disk coplanar and co-rotating with the luminous disk. Such a possibility is materialized in models such as the Partially Interacting DM (PIDM) scenario proposed by [22,21], where a small fraction of the DM can lose energy and collapse into a thin 'dark disk' (in an analogous way to ordinary matter assembling into disk galaxies). This so-called Double-Disk Dark Matter (DDDM) scenario has a distinct prediction for the DM phase-space distribution in the disk, characterized by a high density, co-rotation with the luminous disk, and a small velocity dispersion.
Despite its striking features, the possibility of such a dark disk remains observationally allowed [36], and different methods for its detection have already been suggested [4,39]. In the following we argue that all these features enhance the dynamical friction induced in binary systems. Remarkably, the enhancement is such that the sensitivities of current measurements of binary pulsars near the galactic plane come reasonably close to probing currently allowed DDDM scenarios. Notice that our results may be applied to any model where DM is expected to generate a disk with properties similar to those of the DDDM scenario, e.g. [17,24,2,9].

Our work is organized as follows: in Sec. 2 we summarize the features of the DDDM model relevant for our analysis. Sec. 3 describes the dynamical friction for the collisionless DM case, ignoring the interaction of the wakes. We also compare our predictions with observations in that section, under the assumption of co-rotation of the DDDM dark disk with the center of mass of the binary. In Sec. 4 we extend this analysis to the fully collisional case (small Knudsen numbers), including the wakes' interactions. We conclude and present our outlook in Sec. 5.

DDDM model

The DDDM scenario discussed in [22,21] proposes the existence of a subdominant component of the DM sector with dissipative dynamics. It consists of a massless U(1)_D gauge boson, with fine-structure constant α_D, interacting with two new fields: a heavy fermion X and a light fermion C, with opposite charges q_X = 1 = −q_C under U(1)_D. The thermal history of this scenario is described in detail in [22]. In the following, we only describe the elements of the formation of a dark disk relevant to our purposes.

The formation of a dark disk occurs in an analogous way to baryonic matter assembling into galactic disks.
The fraction of DM that is dissipative would fall into the gravitational wells of the protohalos, which are predominantly made of the non-interacting DM fraction and have acquired angular momentum through tidal torques with the surrounding environment. At the time of accretion, part of the DDDM might be in the form of atomic-like states made of heavy and light DM. These dark atoms will become fully ionized by shock heating during the virialization of the halo, leaving a population of free X and C fermions. This dark plasma cools through Compton scattering and Bremsstrahlung off background dark photons [55]. Since the plasma carries angular momentum, it will form a rotationally supported dark disk as it dissipates energy and collapses under gravity. Torques between the dark and baryonic disks will tend to align them into a steady configuration. In the following, we assume that this state has been reached, with both disks aligned and co-rotating².

The cooling process is important in our discussion because it sets the vertical velocity dispersion of the disk, which is approximately given by the final temperature of the cooled plasma: σ_z² ≈ T_cooled/m_X, where m_X is the mass of the heavy fermion X. In [22], assuming the ionization fraction to be between 1% and 10%, the authors estimate T_cooled ∼ (0.02−0.2) B_XC, where B_XC = α_D² m_C/2 is the binding energy of the ground state of the dark atom and m_C is the mass of the light fermion C. For instance, for m_X = 100 GeV, m_C = 1 MeV and α_D = 0.1, this estimate gives σ_z ∼ 9.5 km/s in the region of parameter space where cooling is efficient (see Fig. 5 and Fig. 7 of [22]). This dispersion is at least an order of magnitude smaller than the typical one in the solar neighborhood for a Milky-Way-size halo formed of collisionless DM particles (σ_1D ∼ 100−130 km/s; e.g. [38]), where the DM orbits are no longer circular.
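For concreteness, the dispersion estimate above can be reproduced with a short numerical sketch (Python; natural units with k_B = c = 1 are assumed, as in the text, and the factor 0.02 picks the efficient-cooling end of the quoted range):

```python
import math

# Benchmark DDDM parameters quoted in the text (energies in eV)
alpha_D = 0.1
m_X = 100e9            # heavy fermion mass, 100 GeV
m_C = 1e6              # light fermion mass, 1 MeV
C_KM_S = 2.998e5       # speed of light in km/s

B_XC = 0.5 * alpha_D**2 * m_C        # dark-atom binding energy: 5 keV
T_cooled = 0.02 * B_XC               # efficient-cooling end of (0.02-0.2) B_XC
sigma_z = math.sqrt(T_cooled / m_X) * C_KM_S   # from sigma_z^2 ~ T_cooled / m_X

print(f"B_XC = {B_XC:.0f} eV, sigma_z = {sigma_z:.1f} km/s")   # ~9.5 km/s
```

The result matches the σ_z ∼ 9.5 km/s quoted for this parameter point.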
As we will see below, a low velocity dispersion plays a key role in the constraining power of binary pulsars in the DDDM scenario.

Simulations of the formation and evolution of a DDDM disk have not been performed yet, so its final distribution remains an analytical approximation, modelled as an isothermal sheet [5] with density

ρ(R, z) = [ε_disk M_DM^gal / (8π R_d² z_d)] e^(−R/R_d) sech²(z/2z_d),  (1)

where R is the radial distance in the plane of the disk, z is the height above the disk midplane, and R_d and z_d are the scale lengths of the disk in the radial and vertical directions, respectively. The mass fraction of DM in the halo of the Milky Way that could be in the form of a disk is denoted by ε_disk ≡ M_DDDM^disk / M_DM^gal.

The parameters of the disk, regardless of the PIDM nature, are constrained by the kinematics of the stars in the solar neighborhood (R = R_⊙). In particular, the surface density of the disk below a certain height z_0,

Σ_disk(R_⊙, |z| < z_0) ≡ ∫_{−z_0}^{z_0} ρ(R_⊙, z) dz,  (2)

is strongly constrained by local stellar kinematics for z_0 ≲ 1 kpc. If the scale length R_d is assumed to be similar to that of the baryonic disk of the Milky Way, R_d ∼ 3 kpc, and since the DDDM disk will be thin (z_d ≪ z_0 ∼ 1 kpc), Eq. (2) reduces to

Σ_disk(R_⊙, |z| < 1 kpc) ≈ [ε_disk M_DM^gal / (2π R_d²)] e^(−R_⊙/R_d).  (3)

Given that we are assuming R_d ∼ 3 kpc and R_⊙ ∼ 8 kpc, and that the total halo mass of the Milky Way is ∼ 10^12 M_⊙, a constraint on Eq. (3) translates into a constraint on ε_disk. A compilation of current bounds on the local surface and volume densities of total and visible matter is given in [36] (see their Table 1). For instance, using the results from [7], the surface density of non-baryonic matter is constrained to be

Σ_dark(R_⊙, |z| < 1.1 kpc) = 30 ± 4 M_⊙/pc²,  (4)

which implies ε_disk < 0.025, i.e., only a few percent of the total halo mass can be in the form of DDDM.
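The thin-disk limit can be checked numerically; the following sketch (Python, using the text's benchmark numbers ε_disk = 0.025, M_DM^gal = 10^12 M_⊙, R_d = 3 kpc, z_d = 10 pc) integrates Eq. (1) over |z| < 1 kpc and compares it with the closed form of Eq. (3):

```python
import math

# DDDM isothermal-sheet profile, Eq. (1); masses in M_sun, lengths in pc
eps_disk = 0.025
M_gal    = 1.0e12          # total DM halo mass of the Milky Way
R_d, z_d = 3000.0, 10.0    # radial and vertical scale lengths
R_sun    = 8000.0

def rho(R, z):
    return (eps_disk * M_gal / (8 * math.pi * R_d**2 * z_d)
            * math.exp(-R / R_d) / math.cosh(z / (2 * z_d))**2)

# Surface density below |z| < 1 kpc, Eq. (2), by trapezoidal integration
zs = [i * 0.5 for i in range(-2000, 2001)]           # -1000..1000 pc
Sigma_num = sum(0.5 * (rho(R_sun, a) + rho(R_sun, b)) * (b - a)
                for a, b in zip(zs[:-1], zs[1:]))

# Thin-disk closed form, Eq. (3): the sech^2 integral contributes 4 z_d
Sigma_thin = eps_disk * M_gal / (2 * math.pi * R_d**2) * math.exp(-R_sun / R_d)

print(Sigma_num, Sigma_thin)   # both ~30 M_sun/pc^2, consistent with Eq. (4)
```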
An updated analysis of the constraints on a dark disk by [36] concluded that, depending on the method (standard static, or one including non-equilibrium features in the tracer stars), the upper bound (95% confidence interval) on Σ_disk lies between 3−13 M_⊙/pc² for a thin disk (z_d = 10 pc), while for a thick disk (z_d = 100 pc) it changes to 7−32 M_⊙/pc² (the bound for the non-equilibrium method was extracted from Fig. 10 of [36]). A thin dark disk with these characteristics has been invoked to potentially explain the apparent periodicity of comet impacts on Earth [54]. The corresponding midplane density is

ρ_0^DDDM ≡ ρ(R_⊙, z = 0) = Σ_disk(R_⊙) / (4 z_d).  (5)

In the equations of motion of the binary, r = r_2 − r_1 is the relative position of the two bodies and F_i^DM is the drag force (dynamical friction) due to DM acting on the i-th body. To compute this force, we first consider a body of mass M moving with velocity v through a homogeneous distribution of collisionless particles with mass m_DM ≪ M and velocity distribution f(u). The problem can be analyzed as a set of sequential encounters of the object M with particles randomly drawn from the distribution f(u), over an interval of time much shorter than the time scale for variations in the velocity v, and much longer than the interaction time. By symmetry, the variation of v perpendicular to the direction of motion (Δv_⊥) vanishes, while the change in the parallel component is given by [26]:

dv/dt = −(4πG²M m_DM / v²) { ∫_0^v f(u) [ln Λ + ln((v² − u²)/v²)] d³u + ∫_v^∞ f(u) [−2v/u + ln((u + v)/(u − v))] d³u },  (10)

where ln Λ ≡ λ = ln(b_max v²/GM) is the Coulomb logarithm. The scale length b_max is the characteristic size of the medium, while GM/v² is the typical radius of the sphere of gravitational influence of the orbiting body. The choices of these parameters are somewhat arbitrary. In the following we choose them such that Λ ≫ 1 and, in particular, λ = 20^{+10}_{−10}, to allow a direct comparison with the results of [50] in the standard halo scenario.
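A quick evaluation of Eq. (5) for the quoted upper bounds gives the midplane densities used later as benchmarks (sketch; the M_⊙/pc³ → GeV/cm³ conversion factor is our own, not stated in the text):

```python
# Midplane density rho_0 = Sigma / (4 z_d), Eq. (5), for the bounds of [36]
MSUN_PC3_TO_GEV_CM3 = 37.96   # assumed conversion: 1 M_sun/pc^3 in GeV/cm^3

def rho0_gev_cm3(sigma_surf, z_d):
    """sigma_surf in M_sun/pc^2, z_d in pc -> midplane density in GeV/cm^3."""
    return sigma_surf / (4.0 * z_d) * MSUN_PC3_TO_GEV_CM3

rho_thin  = rho0_gev_cm3(13.0, 10.0)    # thin-disk upper bound:  ~12 GeV/cm^3
rho_thick = rho0_gev_cm3(32.0, 100.0)   # thick-disk upper bound: ~3 GeV/cm^3
print(rho_thin, rho_thick)
```

Both values are far above the local smooth-halo density of 0.3 GeV/cm³, which is the origin of the enhanced signal discussed below.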
In the limit where the velocity dispersion of the medium is much larger than the velocity of the dragged object (σ ≫ v), the equation above simplifies considerably and reduces to two contributions of roughly the same order [50,26]. In this limit, for an isotropic Maxwellian distribution, Eq. (10) reduces to the well-known Chandrasekhar formula [12,13]:

dv/dt = −(4πG²M ρ_DM / v²) λ [erf(x) − (2x/√π) exp(−x²)],  (11)

where x = v/(√2 σ) and erf(x) is the Gauss error function. In [50], the constraints on the DM density from binary pulsars are computed in this limit (σ ≫ v), using the previous formula⁴. Since we are mostly interested in the regime σ ≪ v, we use the general equation (10).

⁴ The validity of Eq. (11) and its comparison to the general equation (10) are discussed in Appendix A of [50]. In the limit σ ≫ v they are indeed a very good approximation, as can be seen in their Figure 7.

Figure 1: Scheme of a generic binary system in the Galactic reference frame, in the general situation where the DM wind has a net velocity relative to the center of mass of the binary: v_w = v_w (cos α sin β, sin α sin β, cos β).

The previous effect represents a net deceleration of the object of mass M. For a binary pulsar, we are mainly interested in the change of the orbital period. Since in this section we neglect the possible interaction of one component with its companion's wake, we can simply superpose the forces F_i^DM for i = 1, 2. For collisionless DM, this approximation holds when the period of the binary P_b is larger than the typical dispersal time of the wake [3]:

P_b ≫ G m_i / σ³.  (12)

In the case of a dark disk, the low value of σ implies P_b ≪ G m_i/σ³ for typical binary periods of 10−100 days and σ ∼ 1−10 km/s. As a result, one should in principle include the interaction of the wakes in the analysis. This study is currently missing in the literature and we leave its detailed treatment for the future. We anticipate, however, that the impact of this interaction found in the case of a gaseous medium [33,34] (see also Section 4) suggests that the calculation that follows captures the correct order of magnitude of the effect.

Osculating orbits and orbital period variations caused by DDDM

We introduce the position vector of the center of mass, R = (m_1 r_1 + m_2 r_2)/M with M = m_1 + m_2, and rewrite the drag force as F_i^DM = −A b_i (m_i²/M) ṽ_i, where A = 4πρ_DM G² M and

b_i = (1/ṽ_i³) { ∫_0^{ṽ_i} f(u) [ln Λ + ln((ṽ_i² − u²)/ṽ_i²)] d³u + ∫_{ṽ_i}^∞ f(u) [−2ṽ_i/u + ln((u + ṽ_i)/(u − ṽ_i))] d³u },  (13)

with ṽ_i = ṙ_i + v_w the velocity of the i-th companion relative to the DM wind; see Fig. 1 (adapted from [50]). We assume the net wind velocity relative to the center of mass of the binary to be negligible, v_w ≈ 0, because in the simplest dark disk scenario the dark disk co-rotates with the galactic disk. It is important to notice that this assumption is not always satisfied for binary pulsars. In particular, the velocity kick following the supernova explosion is of order⁵ 50 km/s (e.g. [58]; notice that current uncertainties in kick velocities are very large). However, with a sufficient number of systems one expects that some of them will have relative velocities v_w ≪ σ, which is the relevant limit for our conclusions, cf. Fig. 3. Thus, to maximize the possible effect of dynamical friction, we assume that the DDDM particles and the center of mass of the binary move in approximately the same circular orbit, with nearly identical velocities.
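A minimal sketch of the Chandrasekhar limit, Eq. (11), and of the wake-dispersal criterion, Eq. (12), evaluated at the text's benchmark values (Python; `math.erf` is the Gauss error function):

```python
import math

G, MSUN, DAY = 6.674e-11, 1.989e30, 86400.0     # SI units
GEV_CM3 = 1.783e-21                              # 1 GeV/cm^3 in kg/m^3

def chandrasekhar_drag(v, sigma, rho, M, lam=20.0):
    """|dv/dt| from Eq. (11), with x = v / (sqrt(2) * sigma)."""
    x = v / (math.sqrt(2.0) * sigma)
    return (4 * math.pi * G**2 * M * rho / v**2
            * lam * (math.erf(x) - 2 * x / math.sqrt(math.pi) * math.exp(-x * x)))

# Standard-halo numbers: v = 220 km/s, sigma = 150 km/s, rho = 0.3 GeV/cm^3
drag = chandrasekhar_drag(220e3, 150e3, 0.3 * GEV_CM3, 1.3 * MSUN)

# Eq. (12): wake-dispersal time G m_i / sigma^3 vs. the binary period
t_wake = G * 1.3 * MSUN / (10e3) ** 3            # sigma = 10 km/s, in seconds
P_b = 100 * DAY
print(drag, t_wake / DAY)                         # t_wake ~ 2000 d >> P_b
```

For a dark disk (σ ∼ 10 km/s), the wake timescale indeed exceeds typical periods of 10−100 days, illustrating why P_b ≪ G m_i/σ³ and why wake interactions should in principle be included.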
Under this condition, the equations of motion reduce to

v̇ = −(GM/r³) r + a_1 η v + a_2 V,  (14)
V̇ = a_2 η v + a_3 V,  (15)

where v = ṙ, V = Ṙ, and

a_1 = −A (b_1 + b_2),
a_2 = (A/2) [b_1 Δ_+ + b_2 Δ_−],  (16)
a_3 = −(A/4) [b_1 Δ_+² + b_2 Δ_−²],

with Δ_± = Δ ± 1, Δ = √(1 − 4η), η = μ/M, and μ the reduced mass of the binary. We treat the drag force as a perturbation and use the formalism of osculating orbits [51] to carry out the perturbative analysis to first order. Following [50,51] and restricting to circular orbits for the binary for simplicity (a good approximation for the systems we consider, given their small eccentricities), we obtain an expression for the time derivative of the orbital period:

Ṗ_b^DM(t) = 3 P_b [a_1 η − a_2 Γ sin β sin(Ω_0 t − α)],  (17)

where P_b is the period of the orbit, Γ = v_w/v, Ω_0 is the orbital angular velocity, and α and β are the azimuthal and polar angles, respectively (see Fig. 1). If v_w = 0, then Γ = 0, and since we are assuming a circular orbit, the relation (GM Ω_0)^{1/3} = |v| ≡ v holds, where v is the relative velocity of the binary members. In this case, Eq. (17) greatly simplifies.
In this case, for a binary system with the same characteristics as above (located at the solar circle), with z = 10 pc, immersed in a thin dark disk with z d = 10 pc and thus σ = 1.8 km/s (given Eq. (5)) one obtains: |Ṗ b DDDM | = 4.2 · 10 −13(19) while considering a thicker dark disk, with z d = 100 pc (σ = 9 km/s), one would get: |Ṗ b DDDM | = 1.42 · 10 −14 .(20) These numbers may be within the reach of current or future observations (see below). The dependence Fig. 4 below for the two dark disk cases we have discussed. At small periods, velocities are large and since the dispersion is very small, the term that dominates in Eq. (13) is the first integral, i.e. DM particles with velocities of |Ṗ b DDDM | on P b is shown inEXCLUDED EXCLUDED FOR A THIN DISK z d =100 pc z d =10 pc ��� ��� ��� ��� ��� -���� -���� -���� -���� -���� -���� -���� -���� ���(σ(��/�)) ���(�� � /��) Figure 2:Ṗ DDDM b as a function of σ in the DDDM scenario, for both a thin (red-thick line) and thick (blue-dashed line) disk, for a system with P b = 100 days, m 1 = 1.3 M , m 2 = 0.3 M and vw = 0. The red shaded region (σ > 9 km/s) corresponds to the region excluded for the thick dark disk, while the grey shaded region extends the exclusion region for the thin disc, both required by constraints on the DDDM density (see Sec. 2). smaller than v i dominate the drag. This can be seen more clearly in Eqs. (A.2-A.5) of [50]. The result is that roughly b i ∝ 1/v 3 (since that integral does not have a very strong dependence on v i ). This means that Eq. (17) goes as 1/v 4 ∼ P 4 b , which is roughly the scaling we see in Fig. 4. At large periods, the circular velocities are smaller and the second integral in Eq. (13) starts to become important. The latter has a stronger dependence on v which makes the scaling of |Ṗ b DDDM | with P b shallower. Let us now discuss the dependence of the previous results on σ. This is shown in Fig. 
2 for the DDDM case with P_b = 100 days, where we see that the dependence is not as strong as in the ordinary case⁶. This is because decreasing σ also leads to a lower density (Eq. (5)), and Ṗ_b^DDDM grows linearly with the density (a_1 and a_2 are proportional to A in Eq. (16), and A ∝ ρ_DM). In fact, at the low-σ end, Ṗ_b^DDDM ∝ ρ_DM ∝ σ². In the intermediate regime v_w ≪ σ ≤ v_orb, the inversely proportional dependence of Ṗ_b^DDDM on σ (clearly manifest in [50]) starts to become important and thus Ṗ_b peaks at around σ ∼ 7−8 km/s, followed by a sharp decline at the high-σ end, as in the ordinary case. In the regime σ ≫ v_w, v_orb, we are in the limit discussed in Section C.1 of [50] and Ṗ_b^DDDM ∼ ρ_DM/σ³ ∝ 1/σ.

Another significant feature of the enhancement of the signal in the presence of the dark disk is the assumed low value of v_w. As we discussed, this approximation is relevant given the co-rotation of the dark and baryonic disks and the fact that some binary systems may satisfy v_w ≪ σ. As shown in Fig. 3, larger values of v_w lead to a smaller signal. Moreover, the dependence of Ṗ_b^DDDM on v_w becomes more relevant for v_w ≳ σ. The two curves in Fig. 3 are flat for low values of v_w, comparable to σ, but when v_w ≥ σ the signal decreases rapidly. This happens at larger values of v_w in the case z_d = 100 pc, since σ is larger. This behaviour can also be seen in Fig. 2 of [50].

Figure 3: |Ṗ_b^DDDM| as a function of v_w, for a thin (z_d = 10 pc) and a thick (z_d = 100 pc) dark disk.
Also, the dark disk in these cases has a density of ≲ 20% of the host halo density at the solar circle, which leads to an effect very similar to that of the smooth halo we estimated above, Eq. (18). Thus, the method proposed here is only efficient for constraining disks similar to those appearing in the DDDM scenario.

Figure 4: Predicted |Ṗ_b^DDDM| as a function of the orbital period P_b (in days), for a thin (z_d = 10 pc) and a thick (z_d = 100 pc) dark disk, together with the observational limits for the binary pulsars of Table 1.

Comparison with observations

To compare our previous predictions with observations, we have collected the measured values of (or upper limits on) Ṗ_b for a number of binary pulsars near the galactic plane, and thus possibly immersed in a dark disk, with periods P_b > 6 days. The theoretical prediction of Ṗ_b^DDDM for a given system is computed by taking the coordinate of the binary above the plane (z = z_sys) and assuming a dark disk with z_d = z_sys. For the density, we take the largest upper bound on ρ_0^DDDM allowed by the kinematical constraints on Σ_disk(R_⊙) (see Eqs. (7)−(8); based on [36]). We further assume v_w ≪ σ ∼ 10 km/s. Although this approximation is not satisfied by some of the systems, whose transverse velocities are large, we prefer to emphasize the case of almost co-rotation, as it maximizes the constraining power of binary pulsars for the DDDM scenario. It is to be expected that a fraction of the binary pulsars currently known, or to be detected in the future, will actually satisfy this condition. These numbers are collected in Table A.2 of Appendix A.

Table 1: A collection of promising binary pulsars to probe the DDDM scenario (Fig. 4). The columns are: binary period, expected maximal time variation of the binary period due to the presence of a dark disk, limits on the observed time variation of the binary's period (systems tagged with "ref" have a measured bound on Ṗ_b; see Table A.2), and eccentricities.
We find that, even though the signal in the DDDM scenario is much larger than in the standard one, the predicted values for most of the systems are several orders of magnitude below the experimental sensitivity. The main reason for this is that the effect is significant for relatively long periods, for which an accurate measurement of Ṗ_b is more challenging. Nevertheless, there are a few interesting cases with relevant constraining power (gathered in Table 1). We compare the constraints/measurements on Ṗ_b with the predictions for Ṗ_b^DDDM in Fig. 4. The observational bounds on Ṗ_b are either taken directly from the references (given in the last column of Table A.2, and denoted by "ref" in Table 1) or estimated from the precision of the published timing solutions.

The situation summarized in Fig. 4 can improve in the near future with the discovery of more systems combining the following characteristics: (i) nearly co-rotating with the baryonic disk; (ii) closer to the Galactic Center (since the density is enhanced substantially relative to the solar circle); (iii) with orbital periods ≳ 100 days (given the dependence of Ṗ_b^DDDM on P_b for smaller periods, see Fig. 4); and (iv) small orbital velocities, naturally connected to point (iii), given the strong inverse dependence of Ṗ_b^DDDM on velocity (see Eq. (13)). These desirable properties are not optimal for the timing of the system, but the two aspects are not necessarily exclusive. Future surveys, such as SKA, will most likely add O(10³) new systems [37], and hopefully some of them will have the right properties.

We note, however, that even a system with a relatively short period can be quite constraining if a limit on Ṗ_b can be set with high precision. For instance, the binary system J1857+0943 [20], whose period has been measured with 10 digits of precision during seven years, allows us to infer the upper bound

|Ṗ_b| < 1.2 × 10^−13,  (22)

which is shown at the bottom left of Fig. 4.
For this system, a thin dark disk in the DDDM scenario with z_d = z_sys would imply the following period variation caused by dynamical friction:

|Ṗ_b^DDDM| ∼ 8.6 × 10^−16.  (23)

J1751-2857 [20] is another promising system. Its period has been measured with 8 digits of precision over six years, which gives the upper bound

|Ṗ_b| < 1.8 × 10^−11.  (24)

Since the period of the binary is long (P_b = 110.7 days) and its coordinates are suitable for a large density of the dark disk (z_sys = 20 pc and R_sys ∼ 7.4 kpc), the predicted impact of dynamical friction in the DDDM scenario is

|Ṗ_b^DDDM| ∼ 2.1 × 10^−13,  (25)

a factor of ∼ 86 below the observational upper limit. We stress that the bounds derived for these systems are too optimistic, given that the transverse velocities of the systems are v_w ≈ v_T ∼ 34 km/s and v_w ≈ v_T ∼ 44 km/s, respectively⁷ (⁷ http://www.atnf.csiro.au/people/pulsar/psrcat/). As we have
Independently of this motivation, the properties of DM are not well constrained and it is interesting to also constrain DM models beyond the collisionless paradigm, where a fluid approximation may be relevant. Let us consider a binary system moving in an inviscid gaseous medium, which in our case is the DM plasma in the disk. For a model with an isothermal, self-gravitating Maxwellian distribution function, the speed of sound of the DM "fluid" is proportional to the 1D velocity dispersion [5]: c s = √ γσ z ,(26) where γ is the adiabatic index (γ = 5/3 for a monoatomic gas). We stress that in order to use this approach, we need to treat DM as a fluid. Dissipative selfinteracting DM (like the one considered here) is often considered as a perfect fluid [40,29] or even a plasma [14]. Therefore this approximation seems to be accurate, especially in the inner regions of the disk, where the DM density is higher. To understand its validity more quantitatively, let us compare the mean free path of the medium with the characteristic length of the binary system (e.g. semi-major axis a). Following [55] (Eq. (39), section 4.3), we estimate the mean free path for Rutherford scattering of charged DM particles as: ∼ 10 −3 pc cm −3 n C T 10 6 K 2 . 10 −2 α D 2     21 ln 1 + 3T α D ·n 1/3 C     (27) where α D is the fine-structure constant for U (1) D ; T is the temperature of the DM fluid: kT /m X σ z , and n C is the number density of light fermions, given by the ionization fraction n C /n, where n = n XC + n X with n XC the bound state number density and n X = n C is the number density of heavy fermions. The ionization fraction is typically below 10% for the interesting region of the DDDM parameter space where cooling is efficient [22], and we have ρ DDDM ∼ m X n. For instance, if σ z = 1 km/s (which gives T ∼ eV), and for n C /n = 0.1, α D = 10 −2 , m X = 100 GeV and ρ DDDM = 10 GeV/cm 3 , we have: ∼ 10 −5 pc,(28) which is approximately 2 astronomical units. 
This implies a Knudsen number Kn = /a 1, where a is the major axis of the binary system, which is not truly in the (fully collisional) fluid regime Kn 1, but rather intermediate, corresponding to the case of a rarefied gas. However, given the dependence of Eq. (27) in, for instance α D , it is evident that a part of the interesting regions of the parameter space, will be located in the fluid regime. We again consider a uniform density background and ignore the orbital motion of the DM background with respect to the center of mass of the whole system, v w 0. Under the latter condition, we can neglect the centrifugal and Coriolis forces which may affect the density wakes and the drag forces [1,49]. We now follow closely [33,34] and assume that the perturbed density field over the background α = (ρ − ρ)/ρ 0 is adiabatic and α 1. At linear order, the equations of hydrodynamics imply a three-dimensional wave equation for α: ∇ 2 α − ∂ 2 α c 2 s ∂ 2 t = − 4πG c 2 s ρ ext (x, t),(29) where ρ ext denotes the mass density of the perturbers, i.e., the binary members. Cylindrical coordinates (r, θ, z) are appropriate for our case, and for simplicity we will assume that the pulsars are point masses with equal mass M . They are located at (r p , 0, 0) and (r p , π, 0) at t = 0. Thus, the density of the perturbers is given by: ρ ext (x, t) =M H(t)δ(r − r p )δ(z)· (δ[r p (θ − Ωt)] + δ[r p (θ − π − Ωt)]),(30) where Ω is the angular speed of the perturbers and H(t) is the Heaviside step function. The drag force on each perturber i appearing in (9) is given by [33,34]: F DM i = F DM i,1 + F DM i,2(31) where the force is split in two components, one due to the perturber's own wake and the other due to the wake of the companion. 
Both of them can be decomposed into a radial and an azimuthal component in the following form (we remove the i dependence to avoid cluttered notation):

F^DM_1 = −F (I_{1,r} r̂ + I_{1,θ} θ̂),   F^DM_2 = −F (I_{2,r} r̂ + I_{2,θ} θ̂),    (32)

with F ≡ 4πρ_0(GM)²/v_p², where v_p is the orbital velocity of the perturbers, and where I_{j,r}, I_{j,θ} are the dimensionless drag forces on the perturbers (due to the perturber's own wake for j = 1 and due to its companion's wake for j = 2). For the calculation of the period's variation, only the azimuthal components will be taken into account. We use the algebraic fits derived in [33, 34], which are accurate within 6-16% for all Mach numbers M:

I_{1,θ} = 0.7706 ln[(1 + M)/(1.0004 − 0.9185 M)] − 1.4703 M   if M < 1.0,
        = ln[330 (r_p/r_min) (M − 0.71)^5.72 M^-9.58]          if 1.0 ≤ M < 4.4,
        = ln[(r_p/r_min)/(0.11 M + 1.65)]                      if M ≥ 4.4,    (33)

and

I_{2,θ} = M^2 [−0.022 (10 − M) tanh(3M/2)]       if M < 2.97,
        = M^2 [−0.13 + 0.07 tan^-1(5M − 15)]     if M ≥ 2.97,    (34)

where r_min represents the cutoff radius introduced to avoid divergences of the force integrals, assumed to be of the order of the characteristic size of the perturber. In order to make a proper comparison with the collisionless approach, we choose the cutoff radius such that ln(r_p/r_min) ≈ 15, of the same order as the Coulomb logarithm λ defined in Eq. (10). We remark that this is consistent with the logarithm of the ratio of the size of the orbit to the characteristic size of the compact objects we are dealing with, which is O(10 km). For instance, if we take a perturber with mass M = 1.3 M_⊙ orbiting with a period of 10 days (a ∼ 0.1 au), we get ln[(GM_p/v_p²)/10 km] ≈ 14. Thus, beyond M ≈ 4.4, it is reasonable to assume that I_{1,θ} and I_{2,θ} are approximately constant, given the large value of ln(r_p/r_min) (see Eq. (33)).

Numerical results: comparison with the collisionless approach

We here compare the results in the previous section to the collisionless case of Sec. 3, both for the case where the wakes' interactions are taken into account and for the case when they are neglected.
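The piecewise fits of Eqs. (33)-(34) above can be evaluated directly. The sketch below implements them with ln(r_p/r_min) = 15, the value adopted in the text; the branch of Eq. (33) below M = 1 and the intermediate branch are reconstructed here from the fits of [33] and should be checked against that reference.

```python
import math

LN_RP_RMIN = 15.0  # ln(r_p / r_min), chosen in the text to match the Coulomb logarithm

def I1_theta(mach, ln_rp_rmin=LN_RP_RMIN):
    """Azimuthal drag from the perturber's own wake (piecewise fit of Eq. (33))."""
    if mach < 1.0:
        return 0.7706 * math.log((1.0 + mach) / (1.0004 - 0.9185 * mach)) - 1.4703 * mach
    if mach < 4.4:
        return math.log(330.0) + ln_rp_rmin + 5.72 * math.log(mach - 0.71) - 9.58 * math.log(mach)
    return ln_rp_rmin - math.log(0.11 * mach + 1.65)

def I2_theta(mach):
    """Azimuthal drag from the companion's wake, Eq. (34)."""
    if mach < 2.97:
        return mach**2 * (-0.022 * (10.0 - mach) * math.tanh(1.5 * mach))
    return mach**2 * (-0.13 + 0.07 * math.atan(5.0 * mach - 15.0))

for mach in (0.5, 2.0, 5.0, 15.0):
    print(f"M = {mach:4.1f}:  I1_theta = {I1_theta(mach):7.3f}  I2_theta = {I2_theta(mach):7.3f}")
```

Note that the fits are discontinuous across the branch boundaries, which is the origin of the sharp features around the bump in Fig. 5 discussed below.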
Since the fluid approach we discuss was derived for equal-mass systems, we concentrate on the case with M_1 = M_2 = 1.3 M_⊙ and focus on the interesting range of σ for the DDDM model, 1 km/s ≤ σ ≤ 10 km/s. As before, we consider two different values for the disk scale length, namely z_d = 10 pc (thin disk) and z_d = 100 pc (thick disk). Each of these cases has observational upper limits on the velocity dispersion (see Eqs. (7)-(8)). Thus, for the thin disk the region of interest is σ ∈ (1, 2) km/s, while for the thick disk it is σ ∈ (1, 9) km/s. In Fig. 5 we plot the ratio between the fluid and collisionless approaches in the cases of a thin (top panel) and thick (bottom panel) disk, choosing σ in each case to be close to the current upper limits on a dark disk. The plot shows the dependence on the Mach number; notice that for a fixed value of σ, a high Mach number implies a smaller period. We focus on the region 10 days ≤ P_b ≤ 200 days (shaded area). We also show the results neglecting the contribution of the companion wake (I_{2,θ} = 0) in the fluid approach (dashed curves).

[Figure 5 caption: Top (bottom) panel: ratio between the value of Ṗ_b^DDDM in the fluid approach and in the collisionless approach as a function of the Mach number, for a thin (thick) dark disk with z_d = 10 (100) pc, and with a value of σ close to current upper bounds. We show the results both including and not including the contribution of the companion wake I_{2,θ} in the collisional formulation. The binary is set to an equal-mass system with M_1 = M_2 = 1.3 M_⊙, P_b = 100 days, and with coordinates in the Galaxy R = R_⊙, z = z_d. The shaded area marks the range of periods of the most promising binaries shown in Fig. 4.]

Although the relevant range of Mach numbers is different for the thick and thin disk cases (since the velocity dispersions are different), the difference between the fluid and collisionless approaches is similar in both, and amounts to less than a factor of two. Also in both cases, the influence of the companion's wake is relatively minor.
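The mapping between orbital period and Mach number underlying Fig. 5 follows from Kepler's third law together with Eq. (26). The sketch below reproduces the rough scale of the relevant Mach numbers for the two dispersions quoted in the text (σ = 1.8 and 9 km/s); the circular-orbit assumption and the specific numbers are illustrative.

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
MSUN = 1.989e30  # kg
DAY = 86400.0

def member_speed_kms(m1_msun, m2_msun, pb_days):
    """Orbital speed of member 1 of a circular binary, from Kepler's third law."""
    mtot = (m1_msun + m2_msun) * MSUN
    period = pb_days * DAY
    a = (G * mtot * period**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)  # relative separation
    v_rel = math.sqrt(G * mtot / a)
    return v_rel * m2_msun / (m1_msun + m2_msun) / 1e3

v_p = member_speed_kms(1.3, 1.3, 100.0)  # equal-mass binary, P_b = 100 days
for sigma, label in ((1.8, "thin disk"), (9.0, "thick disk")):
    c_s = math.sqrt(5.0 / 3.0) * sigma   # Eq. (26) with gamma = 5/3
    print(f"{label}: v_p = {v_p:.1f} km/s, c_s = {c_s:.1f} km/s, Mach = {v_p / c_s:.1f}")
```

For P_b = 100 days this gives Mach numbers of order ten for the thin disk and a few for the thick disk, in line with the ranges discussed around Fig. 5.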
Notice that in the case of the thick disk, for M ∼ 2 the drag force in the fluid approach is in fact larger than the one in the collisionless approach (although not shown in the plot, this situation is more pronounced for M ∼ 1). This is to be expected: for such low Mach numbers the contribution of I_{1,θ} (own wake) clearly dominates the drag force (see Fig. 2 in [34]). Near M ≈ 1 this term is much more relevant in the fluid case than in the collisionless formulation: perturbers moving at speeds near M = 1 resonantly interact with the pressure waves that they generate in the background medium [48]. In the case of the thin disk (top panel of Fig. 5), the relevant Mach numbers are quite high, M ≳ 15, which makes the influence of the companion more relevant, albeit still small. The final result is roughly a 10-15% reduction relative to the effect of the principal (own) wake. Therefore, quantitatively, the difference between the two approaches depends on the details of the specific system in consideration and on the specific features of the dark disk. However, for the range of periods 10 days ≤ P_b ≤ 200 days (marked with a shaded area in Fig. 5), where we find the most promising binary pulsars (those shown in Fig. 4), the predicted orbital decay is of the same order of magnitude for both approaches, while the interaction of the pair of wakes seems to play only a minor role. We note that the bump seen in the bottom panel of Fig. 5 in the range 2.97 < M < 4.4 is related to the overlapping of the two wake tails, which enhances the drag force on the perturber (see Fig. 2 of [34]). The sharp features around the bump are exacerbated by artificial discontinuities in the fitting functions for I_{1,θ} and I_{2,θ} in Eqs. (33)-(34).
In summary, the results of the comparison between the different approaches suggest that the main (own) wake of each member of the binary dominates the change in the orbital period, with the effect of the companion wake leading only to a 10-15% reduction in the expected signal. Although the latter is strictly valid only in the fluid case, our results bring confidence that the potential constraining power of binary pulsars suggested in Section 3 (particularly Fig. 4 for the absolute value of Ṗ_b) in the DDDM model is expected to remain valid even when the interaction between wakes is taken into account. We expect to substantiate this hope with explicit calculations in the future.

Summary and Conclusions

We study the signature of a dark matter (DM) disk in the orbital change of a binary pulsar due to the induced gravitational force generated by the DM wakes formed as the binary moves through the DM background. In particular, we concentrate on the Double-Disk Dark Matter (DDDM) scenario introduced in [22, 21], which exhibits all the properties needed to enhance this effect, originally introduced by [50] in the context of the traditional non-dissipative Cold Dark Matter (CDM) scenario. Contrary to the standard CDM halo, if a subdominant DM component is made of dissipative DM, it would form a rotationally supported flat disk with larger density and much lower velocity dispersion than the CDM triaxial halo, and which is also approximately co-rotating with the baryonic disk. We find that all these features enhance by many orders of magnitude the impact of DM on the orbital change of pulsars, relative to the standard case studied by [50]. It is important to remark that other DM models with dissipative properties may also generate disks, and hence our results might be of interest in those models as well; see e.g. [17, 24, 2, 9]. We model the dark disk as an isothermal sheet (Eq.
(1)) with a radial scale length R_d = 3 kpc, equal to that of the baryonic disk, and explore the cases where the vertical scale length is set to z_d = 10 pc (thin disk) and z_d = 100 pc (thick disk). This range covers the interesting cases discussed in the literature. To maximize the impact of these scenarios, we set the density (velocity dispersion) to the largest possible values according to updated stellar kinematics constraints [36] (see Eqs. (7)-(8)). The impact of the presence of a dark disk on the orbital period of a binary pulsar is investigated with two approaches: i) a collisionless approach, where DM particles are treated as collisionless and each pulsar is influenced only by its own induced wake (see Section 3). This approach was introduced in [50], with the important difference that we explore the regime suitable for a dark disk: v_w ≪ σ ≪ v_orb, where v_w is the relative velocity between the center of mass of the binary and the DM background (v_w ∼ 0 is expected for some systems in a co-rotating disk), σ is the velocity dispersion and v_orb is the orbital velocity of the binary. In contrast, the regime studied in [50] is appropriate for a CDM halo, where v_w ∼ σ ≫ v_orb. ii) a fluid approach, where DM particles are assumed to be in the fluid regime, for a binary of equal masses and with the interaction of the wake pair fully considered [33, 34]. This approach is expected to be valid when the Knudsen number Kn ≪ 1 (where the mean free path is given by the Rutherford scattering of the charged particles, and the characteristic size of interest is the size of the binary's orbit), which is expected to hold in some regions of the relevant parameter space of the DDDM scenario (see Section 4). The consideration of both approaches allows us to give a broad perspective of the problem of interest.
In particular, since v_orb/σ ≫ 1, the motion is supersonic, which suggests that the wake of the companion could be significant (something that is ignored in the collisionless approach so far developed). For this reason, the case with Kn ≪ 1 is important, since it allows us to explore whether the main effect is well captured by the calculations that ignore the wakes' interactions. The intermediate regime, that of Kn ∼ 1, is also of interest and remains entirely unexplored. Our intention is to develop a general approach that covers all regimes in the near future. In particular, we highlight the importance of developing, in the collisional approach, a treatment for unequal-mass perturbers, which is the case for most of the observed systems. In [34], in fact, the authors assumed identical orbits for both perturbers, and this assumption would soon fail for unequal masses. Nevertheless, in the present paper our goal is to show the promising aspects of binary pulsars as probes of the DDDM scenario, and to present general expectations based on a hybrid analysis that combines the results from the methods currently available. Using the first (collisionless) approach, we estimate the orbital period variation Ṗ_b^DDDM due to a hypothetical DDDM disk for many binary pulsars within that dark disk (see Table A.2). In a couple of cases, our theoretical predictions are relatively close (within a factor of ∼ 100) to current experimental bounds/values of Ṗ_b (see Fig. 4). These systems are very promising for future follow-ups, although we used the approximation v_w ≪ σ, which is likely not satisfied in these cases given their transverse velocities. Still, given the capacity of future surveys such as SKA [37] to detect new (up to O(10^3)) systems and obtain timing measurements with enough precision, we may soon be able to find binary pulsars that satisfy all the required properties to probe the DDDM scenario.
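The order of magnitude of the Ṗ_b^DDDM entries in Table A.2 can be checked with a back-of-the-envelope calculation. The sketch below is not the paper's exact computation (Eqs. (9)-(10) are given in an earlier section and are not reproduced here): it uses a Chandrasekhar-like drag on each member in the v_w ≪ σ ≪ v_orb limit, with a Coulomb logarithm ln Λ = 15 of the order quoted in the text, and converts the energy loss into Ṗ_b via Kepler's third law. All parameter choices are illustrative assumptions.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
DAY = 86400.0
GEV_CM3 = 1.783e-21  # kg/m^3 per GeV/cm^3

def pdot_dddm(m1_msun, m2_msun, pb_days, rho_gev_cm3, ln_lambda=15.0):
    """Order-of-magnitude Pb_dot from wake drag in the v_w << sigma << v_orb limit.

    Assumes circular orbits and a Chandrasekhar-like drag on each member,
    F_i = 4 pi rho (G m_i)^2 ln_lambda / v_i^2 (a stand-in for Eqs. (9)-(10)).
    """
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    mtot, period = m1 + m2, pb_days * DAY
    rho = rho_gev_cm3 * GEV_CM3
    a = (G * mtot * period**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    v_rel = math.sqrt(G * mtot / a)
    v1, v2 = v_rel * m2 / mtot, v_rel * m1 / mtot
    # Energy loss rate: each member loses F_i * v_i to its wake
    e_dot = -4.0 * math.pi * rho * G**2 * ln_lambda * (m1**2 / v1 + m2**2 / v2)
    e_orb = -G * m1 * m2 / (2.0 * a)
    return -1.5 * (e_dot / e_orb) * period   # dimensionless dPb/dt

# J1751-2857-like system with the thin-disk density at its upper bound
print(f"Pb_dot ~ {pdot_dddm(1.3, 0.3, 110.7, 12.0):.1e}")
```

For a J1751-2857-like system (P_b = 110.7 days, m_1 = 1.3 M_⊙, m_2 = 0.3 M_⊙) with ρ_0 = 12 GeV/cm^3, this reproduces the ∼ -10^-13 order of the corresponding entry in Table A.2.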
Using the second (fluid) approach, we have estimated the impact of the interacting pair of wakes on the orbit of the binary. We find that the companion's wake plays only a minor role (a 10-15% effect). Furthermore, we also show that, for the most interesting binary candidates, the induced Ṗ_b^DDDM is of the same order for the three cases we studied: fluid, fluid neglecting the wakes' interactions, and collisionless neglecting the wakes' interactions; see Fig. 5. In particular, when the dark disk gets thicker (σ increases), the Mach number becomes smaller and the companion's wake is less relevant. Overall, our results suggest that the expectation of a change in the period of the orbit of binary pulsars of the order of magnitude estimated using the first (collisionless) approach will hold even if the wakes' interaction were taken into account in a full collisionless approach (something that has not been developed so far). Thus, we conclude that precise observations of binary pulsars may probe scenarios of dark disks with properties close to those of allowed DDDM scenarios, if the precision of the most promising candidates is improved or if future surveys discover new systems close to the galactic plane and with long enough periods. As we discussed, we leave for the future a more detailed analysis of the case Kn ∼ 1 and a more complete comparison with data.

Acknowledgments

[Figure 3 caption: Ṗ_b^DDDM as a function of v_w in the DDDM scenario with P_b = 100 days, m_1 = 1.3 M_⊙, m_2 = 0.3 M_⊙. We show both cases z_d = 10 pc (thin disk; blue dashed line) and z_d = 100 pc (thick disk; red solid line), with corresponding dispersions σ = 1.8 km/s and σ = 9 km/s. These models are at the limit of current constraints on the DDDM scenario.]

[Figure 4 caption: Comparison between the Ṗ_b measured (or constrained) for the most promising binaries and the predictions in the DDDM scenario for a binary with m_1 = 1.3 M_⊙, m_2 = 0.3 M_⊙.]
[Figure 4 caption, continued: The predictions for Ṗ_b^DDDM for a thin (thick) disk with z_d = 10 (100) pc and ρ_0^DDDM = 12 (3) GeV cm^-3 are shown by a blue dashed (red solid) line, assuming v_w ≪ σ. The downward arrows indicate experimental upper bounds on Ṗ_b, while the circles with a central red dot refer to systems for which Ṗ_b has been directly measured, or estimated by considering the error in the determination of P_b and the duration of the observational campaign. The grey values next to each pulsar's name are the ratios of the predictions for the Ṗ_b^DDDM of the system (maximal effect) with respect to observational values/limits. The blue dashed (red solid) line shows the generic case of a binary (m_1 = 1.3 M_⊙, m_2 = 0.3 M_⊙) under the influence of the thin (thick) disk models that we have considered thus far, with parameters at the upper boundary of current constraints.]

cle. Therefore, the signal would be several orders of magnitude less relevant than in the DDDM case. For the binary with the characteristics we have been considering: Ṗ_b^ODDM ∼ −7.5 · 10^-19.

[Table A.2 caption: A selection of binary pulsars near the galactic plane that are potential targets to constrain the DDDM model. The columns are: (1) Name. (2) Binary period. (3) Distance to the galactic plane. (4) Orbital velocity of the binary. (5) Maximal time variation in the binary period due to the presence of a dark disk. (6) Observational upper bound on Ṗ_b; the few systems with the tag "ref" have a measurement of Ṗ_b. (7) Reference.]

DB is grateful to Matthew McCullough, Paolo Pani and Alberto Sesana for discussions. We are grateful to Lijing Shao for comments on the draft.
This work was originally inspired by a research visit made by JZ to CERN in the context of the CERN-CKC TH institute during the Summer of 2016, which was funded by CERN and the University of Iceland. JZ acknowledges support by a Grant of Excellence from the Icelandic Research Fund (grant number 173929-051). AC acknowledges support by the European project H2020-MSCA-ITN-2015//674896-ELUSIVES.

Appendix A. Orbital parameters and predictions for a broad sample of binary pulsars

Name         P_b (days)  z_sys (kpc)  v_orb (km/s)  Ṗ_b^DDDM       (Ṗ_b)_lim        Ref.
J1751-2857   110.7       -0.02        61.7          -2.1·10^-13    1.8·10^-11       [20]
J1857+0943   12.3        0.06         128.4         -9.76·10^-16   1.2·10^-13       [20]
J0621+1002   8.3         -0.01        146.4         -5.79·10^-16   10^-7            [20]
J1012-4235   37.97       0.07         88.2          -7.06·10^-15   10^-3            [10]
J1017-7156   6.51        -0.06        158.7         -4.71·10^-16   10^-6            [46]
J1125-5825   76.40       0.08         69.9          -2.13·10^-14   10^-8            [46]
J1125-6014   8.75        0.02         143.9         -9.88·10^-16   10^-8            [44]
B1259-63     1236.72     -0.04        27.6          -6.07·10^-12   ref 1.47·10^-8   [57]
J1420-5625   40.29       0.01         86.5          -2.18·10^-14   10^-6            [28]
J1454-5846   12.42       0.02         128.0         -6.49·10^-16   10^-8            [11]
J1543-5149   8.06        0.05         147.9         -6.47·10^-16   10^-9            [46]
J1638-4725   1940.9      -0.02        23.8          -6.01·10^-11   10^-1            [44]
J1727-2946   40.31       0.01         86.5          -6.28·10^-15   10^-8            [43]
J1740-3052   231.03      -0.01        48.3          -4.40·10^-12   ref 3·10^-9      [45]
J1750-2536   17.14       0.04         115.0         -8.21·10^-16   10^-6            [35]
J1753-2240   13.64       0.09         124.1         -2.91·10^-16   10^-7            [31]
J1755-25     9.69        -0.03        139.0         -3.41·10^-16   10^-5            [47]
J1810-2005   15.01       -0.03        120.2         -1.30·10^-15   10^-8            [30]
J1811-1736   18.78       0.03         11.5          -4.20·10^-16   10^-7            [15]
B1820-11     357.76      0.09         41.8          -4.78·10^-14   10^-5            [28]
J1840-0643   937.1       -0.05        30.3          -2.87·10^-13   10^-1            [35]
J1850+0124   84.94       0.06         67.4          -1.84·10^-13   10^-6            [16]
J1903+0327   95.17       -0.03        64.9          -9.42·10^-15   ref -33·10^-12   [25]
J1910+1256   58.47       0.05         76.4          -1.41·10^-14   10^-9            [23]
J1935+1726   90.76       -0.06        66.0          -1.46·10^-14   10^-5            [42]
B1953+29     117.35      0.05         60.6          -7.80·10^-15   10^-8            [23]
J1737-0811   79.51       0.04         68.9          -3.52·10^-14   10^-6            [8]
J1943+2210   8.31        -0.09        146.4         -5.53·10^-17   10^-8            [56]

Footnotes

Even when this condition is not fully satisfied, a systematic study of all binary systems may allow one to exclude the presence of a dark disk.

In particular, we neglect gravitational-wave (GW) emission. This assumption will always hold in the non-relativistic binaries of interest for our purposes, since large periods are typically needed to enhance the drag-force signal. For a discussion on GW emission see e.g. [27].

We are grateful to Lijing Shao for pointing this out to us.

For instance, in the ordinary case, changing σ from 200 km/s to 50 km/s for the system under study strengthens the signal by almost two orders of magnitude [50].

References

Adams, F. C., Ruden, S. P., Shu, F. H., 1989. Eccentric gravitational instabilities in nearly Keplerian disks. ApJ 347, 959-976.

Arkani-Hamed, N., Cohen, T., D'Agnolo, R. T., Hook, A., Kim, H. D., Pinner, D., 2016. Solving the Hierarchy Problem at Reheating with a Large Number of Degrees of Freedom. Phys. Rev. Lett. 117 (25), 251801.

Bekenstein, J. D., Zamir, R., 1990. Dynamical friction in binary systems. ApJ 359, 427-437.

Belotsky, K., Budaev, R., Kirillov, A., Laletin, M., 2017. Fermi-LAT kills dark matter interpretations of AMS-02 data. Or not? JCAP 1, 021.

Binney, J., Tremaine, S., 2008. Galactic Dynamics: Second Edition. Princeton University Press.
Blas, D., Nacir, D. L., Sibiryakov, S., 2017. Ultralight Dark Matter Resonates with Binary Pulsars. Phys. Rev. Lett. 118 (26), 261102.

Bovy, J., Rix, H.-W., 2013. A Direct Dynamical Measurement of the Milky Way's Disk Surface Density Profile, Disk Scale Length, and Dark Matter Profile at 4 kpc ≲ R ≲ 9 kpc. Astrophys. J. 779, 115.

Boyles, J., Lynch, R. S., Ransom, S. M., Stairs, I. H., Lorimer, D. R., McLaughlin, M. A., Hessels, J. W. T., Kaspi, V. M., Kondratiev, V. I., Archibald, A., Berndsen, A., Cardoso, R. F., Cherry, A., Epstein, C. R., Karako-Argaman, C., McPhee, C. A., Pennucci, T., Roberts, M. S. E., Stovall, K., van Leeuwen, J., 2013. The Green Bank Telescope 350 MHz Drift-scan Survey. I. Survey Observations and the Discovery of 13 Pulsars. ApJ 763, 80.

Buckley, M. R., DiFranzo, A., 2017. Collapsed Dark Matter Structures.

Camilo, F., Kerr, M., Ray, P. S., Ransom, S. M., Sarkissian, J., Cromartie, H.
T., Johnston, S., Reynolds, J. E., Wolff, M. T., Freire, P. C. C., Bhattacharyya, B., Ferrara, E. C., Keith, M., Michelson, P. F., Saz Parkinson, P. M., Wood, K. S., 2015. Parkes Radio Searches of Fermi Gamma-Ray Sources and Millisecond Pulsar Discoveries. ApJ 810, 85.

Camilo, F., Lyne, A. G., Manchester, R. N., Bell, J. F., Stairs, I. H., D'Amico, N., Kaspi, V. M., Possenti, A., Crawford, F., McKay, N. P. F., 2001. Discovery of Five Binary Radio Pulsars. ApJ 548, L187-L191.

Chandrasekhar, S., 1943. Dynamical Friction. I. General Considerations: the Coefficient of Dynamical Friction. ApJ 97, 255.

Chandrasekhar, S., 1949. Brownian Motion, Dynamical Friction, and Stellar Dynamics. Reviews of Modern Physics 21, 383-388.

Clarke, J. D., Foot, R., 2016. Plasma dark matter direct detection. JCAP 1, 029.

Corongiu, A., Kramer, M., Stappers, B. W., Lyne, A. G., Jessner, A., Possenti, A., D'Amico, N., Löhmer, O., 2007. The binary pulsar PSR J1811-1736: evidence of a low amplitude supernova kick. A&A 462, 703-709.
Crawford, F., Stovall, K., Lyne, A. G., Stappers, B. W., Nice, D. J., Stairs, I. H., Lazarus, P., Hessels, J. W. T., Freire, P. C. C., Allen, B., Bhat, N. D. R., Bogdanov, S., Brazier, A., Camilo, F., Champion, D. J., Chatterjee, S., Cognard, I., Cordes, J. M., Deneva, J. S., Desvignes, G., Jenet, F. A., Kaspi, V. M., Knispel, B., Kramer, M., van Leeuwen, J., Lorimer, D. R., Lynch, R., McLaughlin, M. A., Ransom, S. M., Scholz, P., Siemens, X., Venkataraman, A., 2012. Four Highly Dispersed Millisecond Pulsars Discovered in the Arecibo PALFA Galactic Plane Survey. ApJ 757, 90.

D'Amico, G., Panci, P., Lupi, A., Bovino, S., Silk, J., 2017. Massive Black Holes from Dissipative Dark Matter.

Damour, T., Taylor, J. H., 1991. On the orbital period change of the binary pulsar PSR 1913+16. ApJ 366, 501-511.

De Martino, I., Broadhurst, T., Tye, S. H. H., Chiueh, T., Schive, H.-Y., Lazkoz, R., 2017. Recognising Axionic Dark Matter by Compton and de-Broglie Scale Modulation of Pulsar Timing.
Desvignes, G., Caballero, R. N., Lentati, L., Verbiest, J. P. W., Champion, D. J., Stappers, B. W., Janssen, G. H., Lazarus, P., Osłowski, S., Babak, S., Bassa, C. G., Brem, P., Burgay, M., Cognard, I., Gair, J. R., Graikou, E., Guillemot, L., Hessels, J. W. T., Jessner, A., Jordan, C., Karuppusamy, R., Kramer, M., Lassus, A., Lazaridis, K., Lee, K. J., Liu, K., Lyne, A. G., McKee, J., Mingarelli, C. M. F., Perrodin, D., Petiteau, A., Possenti, A., Purver, M. B., Rosado, P. A., Sanidas, S., Sesana, A., Shaifullah, G., Smits, R., Taylor, S. R., Theureau, G., Tiburzi, C., van Haasteren, R., Vecchio, A., 2016. High-precision timing of 42 millisecond pulsars with the European Pulsar Timing Array. MNRAS 458, 3341-3380.

Fan, J., Katz, A., Randall, L., Reece, M., 2013. Dark-Disk Universe. Physical Review Letters 110 (21), 211302.

Fan, J., Katz, A., Randall, L., Reece, M., 2013. Double-Disk Dark Matter. Physics of the Dark Universe 2, 139-156.
Fonseca, E., Pennucci, T. T., Ellis, J. A., Stairs, I. H., Nice, D. J., Ransom, S. M., Demorest, P. B., Arzoumanian, Z., Crowter, K., Dolch, T., Ferdman, R. D., Gonzalez, M. E., Jones, G., Jones, M. L., Lam, M. T., Levin, L., McLaughlin, M. A., Stovall, K., Swiggum, J. K., Zhu, W., 2016. The NANOGrav Nine-year Data Set: Mass and Geometric Measurements of Binary Millisecond Pulsars. ApJ 832, 167.

Foot, R., Vagnozzi, S., 2015. Dissipative hidden sector dark matter. Phys. Rev. D91, 023512.

Freire, P. C. C., Bassa, C. G., Wex, N., Stairs, I. H., Champion, D. J., Ransom, S. M., Lazarus, P., Kaspi, V. M., Hessels, J. W. T., Kramer, M., Cordes, J. M., Verbiest, J. P. W., Podsiadlowski, P., Nice, D. J., Deneva, J. S., Lorimer, D. R., Stappers, B. W., McLaughlin, M. A., Camilo, F., 2011. On the nature and evolution of the unique binary pulsar J1903+0327. MNRAS 412, 2763-2780.

Gould, A., 1991. Binaries in a medium of fast low-mass objects. ApJ 379, 280-284.

Gómez, L. G., Rueda, J. A., 2017.
Dark-matter dynamical friction versus gravitational-wave emission in the evolution of compact-star binaries. Phys. Rev. D96 (6), 063001.

Hobbs, G., Faulkner, A., Stairs, I. H., Camilo, F., Manchester, R. N., Lyne, A. G., Kramer, M., D'Amico, N., Kaspi, V. M., Possenti, A., McLaughlin, M. A., Lorimer, D. R., Burgay, M., Joshi, B. C., Crawford, F., 2004. The Parkes multibeam pulsar survey - IV. Discovery of 180 pulsars and parameters for 281 previously known pulsars. MNRAS 352, 1439-1472.

Hu, J., Shen, Y., Lou, Y.-Q., Zhang, S., 2006. Forming supermassive black holes by accreting dark and baryon matter. MNRAS 365, 345-351.

Janssen, G. H., Stappers, B. W., Bassa, C. G., Cognard, I., Kramer, M., Theureau, G., 2010. Long-term timing of four millisecond pulsars. A&A 514, A74.

Keith, M. J., Kramer, M., Lyne, A. G., Eatough, R. P., Stairs, I. H., Possenti, A., Camilo, F., Manchester, R. N., 2009. PSR J1753-2240: a mildly recycled pulsar in an eccentric binary system. MNRAS 393, 623-627.

Khmelnitsky, A., Rubakov, V., 2014. Pulsar timing signal from ultralight scalar dark matter. JCAP 1402, 019.
Kim, H., Kim, W.-T., 2007. Dynamical Friction of a Circular-Orbit Perturber in a Gaseous Medium. ApJ 665, 432-444.

Kim, H., Kim, W.-T., Sánchez-Salcedo, F. J., 2008. Dynamical Friction of Double Perturbers in a Gaseous Medium. Astrophys. J. Lett. 679, L33.

Knispel, B., Eatough, R. P., Kim, H., Keane, E. F., Allen, B., Anderson, D., Aulbert, C., Bock, O., Crawford, F., Eggenstein, H.-B., Fehrmann, H., Hammer, D., Kramer, M., Lyne, A. G., Machenschalk, B., Miller, R. B., Papa, M. A., Rastawicki, D., Sarkissian, J., Siemens, X., Stappers, B. W., 2013. Einstein@Home Discovery of 24 Pulsars in the Parkes Multi-beam Pulsar Survey. ApJ 774, 93.

Kramer, E. D., Randall, L., 2016. Updated Kinematic Constraints on a Dark Disk. ApJ 824, 116.

Kramer, M., Stappers, B., 2015. Pulsar Science with the SKA. URL http://inspirehep.net/record/1383201/files/arXiv:1507.04423.pdf

Kuhlen, M., Weiner, N., Diemand, J., Madau, P., Moore, B., Potter, D., Stadel, J., Zemp, M., 2010. Dark matter direct detection with non-Maxwellian velocity structure. JCAP 2, 030.
F Lelli, P.-A Duc, E Brinks, F Bournaud, S S Mc-Gaugh, U Lisenfeld, P M Weilbacher, M Boquien, Y Revaz, J Braine, B S Koribalski, P.-E Belles, Gas dynamics in tidal dwarf galaxies: Disc formation at z = 0. A&A584, A113. Lelli, F., Duc, P.-A., Brinks, E., Bournaud, F., Mc- Gaugh, S. S., Lisenfeld, U., Weilbacher, P. M., Boquien, M., Revaz, Y., Braine, J., Koribalski, B. S., Belles, P.- E., Dec. 2015. Gas dynamics in tidal dwarf galaxies: Disc formation at z = 0. A&A584, A113. Horizon growth of supermassive black hole seeds fed with collisional dark matter. F D Lora-Clavijo, M Gracia-Linares, F S Guzmán, 443Lora-Clavijo, F. D., Gracia-Linares, M., Guzmán, F. S., Sep. 2014. Horizon growth of supermassive black hole seeds fed with collisional dark matter. MNRAS443, 2242-2251. Handbook of Pulsar Astronomy. Cambridge Observing Handbooks for Research Astronomers. D Lorimer, M Kramer, Cambridge University PressLorimer, D., Kramer, M., 2005. Handbook of Pulsar Astronomy. Cambridge Observing Handbooks for Research Astronomers. Cambridge University Press. URL https://books.google.es/books?id= OZ8tdN6qJcsC Timing of pulsars found in a deep Parkes multibeam survey. D R Lorimer, F Camilo, M A Mclaughlin, 434Lorimer, D. R., Camilo, F., McLaughlin, M. A., Sep. 2013. Timing of pulsars found in a deep Parkes multi- beam survey. MNRAS434, 347-351. D R Lorimer, P Esposito, R N Manchester, A Possenti, A G Lyne, M A Mclaughlin, M Kramer, G Hobbs, I H Stairs, M Burgay, R P Eatough, M J Keith, A J Faulkner, N D&apos;amico, F Camilo, A Corongiu, F Crawford, The Parkes multibeam pulsar survey -VII. Timing of four millisecond pulsars and the underlying spin-period distribution of the Galactic millisecond pulsar population. MN-RAS450. Lorimer, D. R., Esposito, P., Manchester, R. N., Pos- senti, A., Lyne, A. G., McLaughlin, M. A., Kramer, M., Hobbs, G., Stairs, I. H., Burgay, M., Eatough, R. P., Keith, M. J., Faulkner, A. J., D'Amico, N., Camilo, F., Corongiu, A., Crawford, F., Jun. 2015. 
The Parkes multibeam pulsar survey -VII. Timing of four millisec- ond pulsars and the underlying spin-period distribu- tion of the Galactic millisecond pulsar population. MN- RAS450, 2185-2194. D R Lorimer, A J Faulkner, A G Lyne, R N Manchester, M Kramer, M A Mclaughlin, G Hobbs, A Possenti, I H Stairs, F Camilo, M Burgay, N D&apos;amico, A Corongiu, F Crawford, The Parkes Multibeam Pulsar Survey -VI. Discovery and timing of 142 pulsars and a Galactic population analysis. 372Lorimer, D. R., Faulkner, A. J., Lyne, A. G., Manch- ester, R. N., Kramer, M., McLaughlin, M. A., Hobbs, G., Possenti, A., Stairs, I. H., Camilo, F., Burgay, M., D'Amico, N., Corongiu, A., Crawford, F., Oct. 2006. The Parkes Multibeam Pulsar Survey -VI. Discovery and timing of 142 pulsars and a Galactic population analysis. MNRAS372, 777-800. Timing the main-sequence-star binary pulsar J1740-3052. E C Madsen, I H Stairs, M Kramer, F Camilo, G B Hobbs, G H Janssen, A G Lyne, R N Manchester, A Possenti, B W Stappers, 425Madsen, E. C., Stairs, I. H., Kramer, M., Camilo, F., Hobbs, G. B., Janssen, G. H., Lyne, A. G., Manchester, R. N., Possenti, A., Stappers, B. W., Sep. 2012. Tim- ing the main-sequence-star binary pulsar J1740-3052. MNRAS425, 2378-2385. C Ng, M Bailes, S D Bates, N D R Bhat, M Burgay, S Burke-Spolaor, D J Champion, P Coster, S Johnston, M J Keith, M Kramer, L Levin, E Petroff, A Possenti, B W Stappers, W Van Straten, D Thornton, C Tiburzi, C G Bassa, P C C Freire, L Guillemot, A G Lyne, T M Tauris, R M Shannon, N Wex, The High Time Resolution Universe pulsar survey -X. Discovery of four millisecond pulsars and updated timing solutions of a further 12. 439Ng, C., Bailes, M., Bates, S. D., Bhat, N. D. R., Bur- gay, M., Burke-Spolaor, S., Champion, D. J., Coster, P., Johnston, S., Keith, M. J., Kramer, M., Levin, L., Petroff, E., Possenti, A., Stappers, B. W., van Straten, W., Thornton, D., Tiburzi, C., Bassa, C. G., Freire, P. C. C., Guillemot, L., Lyne, A. G., Tauris, T. 
M., Shannon, R. M., Wex, N., Apr. 2014. The High Time Resolution Universe pulsar survey -X. Discovery of four millisecond pulsars and updated timing solutions of a further 12. MNRAS439, 1865-1883. C Ng, D J Champion, M Bailes, E D Barr, S D Bates, N D R Bhat, M Burgay, S Burke-Spolaor, C M L Flynn, A Jameson, S Johnston, M J Keith, M Kramer, L Levin, E Petroff, A Possenti, B W Stappers, W Van Straten, C Tiburzi, R P Eatough, A G Lyne, The High Time Resolution Universe Pulsar Survey -XII. Galactic plane acceleration search and the discovery of 60 pulsars. 450Ng, C., Champion, D. J., Bailes, M., Barr, E. D., Bates, S. D., Bhat, N. D. R., Burgay, M., Burke-Spolaor, S., Flynn, C. M. L., Jameson, A., Johnston, S., Keith, M. J., Kramer, M., Levin, L., Petroff, E., Possenti, A., Stappers, B. W., van Straten, W., Tiburzi, C., Eatough, R. P., Lyne, A. G., Jul. 2015. The High Time Resolution Universe Pulsar Survey -XII. Galactic plane accelera- tion search and the discovery of 60 pulsars. MNRAS450, 2922-2947. Dynamical Friction in a Gaseous Medium. E C Ostriker, 513Ostriker, E. C., Mar. 1999. Dynamical Friction in a Gaseous Medium. ApJ513, 252-258. Near-resonant excitation and propagation of eccentric density waves by external forcing. E C Ostriker, F H Shu, F C Adams, 399Ostriker, E. C., Shu, F. H., Adams, F. C., Nov. 1992. Near-resonant excitation and propagation of eccentric density waves by external forcing. ApJ399, 192-212. Binary pulsars as dark-matter probes. P Pani, Phys. Rev. 9212123530Pani, P., Dec. 2015. Binary pulsars as dark-matter probes. Phys. Rev. D92 (12), 123530. Gravity. E Poisson, C M Will, Poisson, E., Will, C. M., May 2014. Gravity. Constraints on ultralight scalar dark matter from pulsar timing. N K Porayko, K A Postnov, Phys. Rev. 90662008Porayko, N. K., Postnov, K. A., 2014. Constraints on ultralight scalar dark matter from pulsar timing. Phys. Rev. D90 (6), 062008. The Dark Disk of the Milky Way. C W Purcell, J S Bullock, M Kaplinghat, ApJ703. 
Purcell, C. W., Bullock, J. S., Kaplinghat, M., Oct. 2009. The Dark Disk of the Milky Way. ApJ703, 2275- 2284. Dark Matter as a Trigger for Periodic Comet Impacts. L Randall, M Reece, Physical Review Letters. 11216161301Randall, L., Reece, M., Apr. 2014. Dark Matter as a Trigger for Periodic Comet Impacts. Physical Review Letters 112 (16), 161301. Cooling in a Dissipative Dark Sector. E Rosenberg, J Fan, ArXiv e-printsRosenberg, E., Fan, J., May 2017. Cooling in a Dissi- pative Dark Sector. ArXiv e-prints. . P Scholz, V M Kaspi, A G Lyne, B W Stappers, S Bogdanov, J M Cordes, F Crawford, R D Ferdman, P C C Freire, J W T Hessels, D R Lorimer, I H Stairs, B Allen, A Brazier, F Camilo, R F Cardoso, S Chatterjee, J S Deneva, F A Jenet, C Karako-Argaman, B Knispel, P Lazarus, K J Lee, J Van Leeuwen, R Lynch, E C Madsen, M A Mclaughlin, S M Ransom, X Siemens, L G Spitler, K Stovall, J K Swiggum, A Venkataraman, W W Zhu, Timing of Five Millisecond Pulsars Discovered in the PALFA Survey. ApJ800, 123Scholz, P., Kaspi, V. M., Lyne, A. G., Stappers, B. W., Bogdanov, S., Cordes, J. M., Crawford, F., Ferdman, R. D., Freire, P. C. C., Hessels, J. W. T., Lorimer, D. R., Stairs, I. H., Allen, B., Brazier, A., Camilo, F., Cardoso, R. F., Chatterjee, S., Deneva, J. S., Jenet, F. A., Karako-Argaman, C., Knispel, B., Lazarus, P., Lee, K. J., van Leeuwen, J., Lynch, R., Madsen, E. C., McLaughlin, M. A., Ransom, S. M., Siemens, X., Spitler, L. G., Stovall, K., Swiggum, J. K., Venkatara- man, A., Zhu, W. W., Feb. 2015. Timing of Five Millisecond Pulsars Discovered in the PALFA Survey. ApJ800, 123. The kinematics and orbital dynamics of the PSR B1259-63/LS 2883 system from 23 yr of pulsar timing. R M Shannon, S Johnston, R N Manchester, 437Shannon, R. M., Johnston, S., Manchester, R. N., Feb. 2014. The kinematics and orbital dynamics of the PSR B1259-63/LS 2883 system from 23 yr of pulsar timing. MNRAS437, 3255-3264. 
Neutron Star Kicks in Isolated and Binary Pulsars: Observational Constraints and Implications for Kick Mechanisms. C Wang, D Lai, J L Han, 639Wang, C., Lai, D., Han, J. L., Mar. 2006. Neutron Star Kicks in Isolated and Binary Pulsars: Observa- tional Constraints and Implications for Kick Mecha- nisms. ApJ639, 1007-1017.
Master Thesis

Secure Identification in the Isolated Qubits Model

Filippos-Arthouros Vogiatzian-Ternaxizian (10661565)
MSc Computational Science, Informatics Institute, University of Amsterdam
Supervisor: Dr. Christian Schaffner
Examiners: Dr. Inge Bethke, Dr. Serge Fehr (Informatics Institute/UvA, Centrum Wiskunde & Informatica)
October 2015 (arXiv:1510.07118)

Acknowledgements

First of all, I would like to thank my supervisor, Christian Schaffner, for introducing me to the world of quantum cryptography and for giving me the opportunity to work with him, for his valuable contribution throughout the project and the long hours he spent on trying to solve the riddles of isolated qubits. Furthermore, I want to thank Yi-Kai Liu for helpful discussions and suggestions, as well as for reading through our first try to tackle his model. I would also like to thank the examination committee for taking the time and effort of reading this thesis. Last but not least, I want to thank my family and friends for their motivation and support during the last year.

Abstract

Oblivious transfer is a powerful cryptographic primitive that is complete for secure multi-party computation. In oblivious transfer protocols a user sends one or more messages to a receiver, while the sender remains oblivious as to which messages have been received. Protocols for oblivious transfer cannot exist in a classical or fully quantum world, but can be implemented by restricting the users' power. The isolated qubits model is a cryptographic model in which users are restricted to single-qubit operations and are not allowed to use entangling operations. Furthermore, all parties are allowed to store qubits for a long time before measuring them. In this model, a secure single-bit one-out-of-two randomised oblivious transfer protocol was recently presented by Liu.
Motivated by this result, we construct a protocol for secure string one-out-of-two randomised oblivious transfer by simplifying and generalising the existing proof. We then study for the first time interactive protocols for more complex two-party functionalities in this model, based on the security of our construction. In order to guarantee the composability of our construction, users are restricted to measurement at the end of each sub-protocol. It is then possible to construct secure one-out-of-two and one-out-of-k oblivious transfer protocols in the isolated qubits model. Moreover, we study secure password-based identification, where a user identifies himself to another user by evaluating the equality function on their inputs, or passwords. We use the oblivious transfer constructions mentioned above as sub-protocols to construct a secure identification protocol. Finally, we prove that constructing a secure identification protocol non-interactively is impossible, even using oblivious transfer.

Introduction

The word cryptography comes from the Greek words κρυπτό ("secret") and γράφω ("write"). In other words, it denotes the art of secret message transmission between two parties in a way that the message remains unreadable to any third party (adversary). This definition is accurate for the historical uses of cryptography but not for its modern form. In the last century, cryptography has evolved from an art to a science that does not rely on the obscurity of the encryption method but on formal mathematical definitions and rigorous security proofs. Furthermore, modern cryptography deals not only with the problem of message encryption but also with problems such as authentication, digital signatures and multi-party computation. In this section we give a brief overview of the history of cryptography and its evolution from the art of message encryption to its modern forms.

First Steps: The Art Of Encrypting Messages

The practice of cryptography is as old as the transmission of messages.
Closely linked to the history of mankind, forms of encryption were developed independently in a number of places and soon again forgotten, as were the civilisations that used them. According to Kahn [Kah96], cryptography has its roots in 1900 BC ancient Egypt, in the use of unusual hieroglyphs, instead of the ordinary ones, in the tomb of a nobleman, Khnumhotep II. Together with the construction of impressive burial monuments, the need to impress the living took the form of decorating tombs with obscure encryptions. These cryptic puzzles did for the first time intend to preserve the secrecy of the original text, at least enough to attract the curiosity of passersby for the short time it would take to decrypt and read. Although there are probably innumerable examples of these first forms of cryptography, we note its first known military use for transmitting secret messages: the scytale. First mentioned around the 7th century BC by Apollonius of Rhodes and used by the Spartans, the scytale was a method to transmit a message secretly. Plutarch gives a more detailed account of its use in Lives (Lysander, 19): two identical wooden rods, the scytalae, are used in the following way. A leather strap is wound around the scytale and then the message is written on it along the length of the rod (see Figure 1.1a). The leather strap is then sent to the receiver of the message, who has to wind it around his scytale in order to read the message. If the message is intercepted, it cannot be read unless a rod of the same diameter is used. It is furthermore hypothesized that this could be a method for message authentication instead of encryption; that is, only if the sender used the correct scytale is the message readable by the receiver, thus making it more difficult for a third party to inject false messages. Through the next centuries, the most common use of cryptography was the encryption of text through ciphers that substitute letters in a fixed way, such as Caesar's cipher.
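As a toy illustration of such a fixed substitution, Caesar's three-letter shift can be sketched in a few lines of Python; a minimal sketch, with the shift amount kept as a parameter:

```python
# Minimal sketch of Caesar's substitution cipher: each letter is shifted a
# fixed number of places along the alphabet (three in Caesar's case), and
# decryption simply shifts back.  Fixed substitution ciphers like this carry
# no real security: they fall to exhaustive search and frequency analysis.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)   # A becomes D, T becomes W, ...
plaintext = caesar(ciphertext, -3)         # shifting back recovers the message
```

Breaking such a cipher is equally simple, as trying all 26 shifts suffices, which is precisely why security cannot rest on the secrecy of the method alone.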
Caesar's cipher uses a fixed left shift of the alphabet by three letters, i.e. A would be transcribed as D, B as E, and so on. More complicated ciphers were developed following the same principle, using a, possibly different, shift of the alphabet for every letter of the message, often defined by a secret key. The most prominent example of complex substitution ciphers is the use of rotor machines, for example the Enigma and Lorenz cipher machines used in World War II (see Figure 1.1b). These machines used a number of rotating disks (rotors) that implemented a complex, but fixed, substitution of letters. For every keypress the position of the rotors would change, thus using a different substitution for every letter.

Modern Cryptography

For more than twenty centuries cryptography focused mostly on the art of encrypting and conveying secret messages, mainly for military purposes. A large number of very different and sometimes very complex protocols were implemented, but they all relied on the secrecy of the encryption method. Thus, once the protocol was known by an adversary it was no longer secure. The beginning of the end of this era of cryptography was foreseen by A. Kerckhoffs in the following statement:

"A cryptosystem should be secure even if everything about the system, except the key, is public knowledge." (A. Kerckhoffs, "La Cryptographie militaire", 1883)

This was later reformulated by C. Shannon as "the enemy knows the system being used" [Sha49], starting the modern era of cryptography, in which the security of cryptographic schemes or protocols no longer relies on the obscurity of the encryption methods. For cryptography this was the paradigm shift from art to science. Modern cryptography relies on the formulation of exact definitions for protocols and rigorous proofs of security. Most notably, the security of most cryptographic protocols depends on the unproven assumption that some mathematical problems, such as the factorisation of integers, are hard to solve.
A problem is computationally hard to solve if there exist no algorithms that can do so in polynomial time. This of course means that these protocols are not indefinitely secure, since an adversary would be able to violate their security given enough time or an efficient algorithm for the problem on which the protocol's security relies. Assumptions about the computational restriction of adversaries have so far proved to be sufficiently strong for modern cryptography, but recent developments in quantum computing showed the existence of an algorithm that can factorise integers in polynomial time if run on a quantum computer: Shor's algorithm [Sho94]. This means that once sufficiently large quantum computers are in use, the implemented cryptographic protocols will become vulnerable. Faced with this increasingly real danger, cryptographers are trying to develop new approaches to achieve security.

Quantum Cryptography

In the early 1970's Wiesner proposed the idea of using two-state quantum-mechanical systems, such as polarised photons, to encode and transmit messages [Wie83]. Motivated by Heisenberg's uncertainty principle, he showed that it is possible to use two "conjugate observables", linear and circular polarisation of photons, to "transmit two messages either but not both of which may be received". This important result remained unpublished for a decade, but set the basis of a new form of cryptography that no longer relies on the computational limitation of an adversary to achieve security. Quantum cryptography is based solely on the assumption that the laws of quantum mechanics model nature accurately. Although the first steps of quantum cryptography passed almost unnoticed, Brassard and Bennett used Wiesner's idea of "conjugate coding" to achieve something previously thought impossible.
The quantum key distribution protocol, first developed by Bennett and Brassard and later Ekert [BB84, Eke91, BBE92], allows two users to exchange a secret key over a public quantum communication channel that is eavesdropped on. The strength of this quantum protocol lies in the fact that the users are able to detect an eavesdropper who is trying to obtain their key, since measuring a quantum state disturbs it. Following this important success in quantum cryptography, the horizons of cryptography broadened and the quest began to implement more cryptographic tasks, such as secure multi-party quantum computation, relying on quantum phenomena to achieve security. Finally, it is important to mention post-quantum cryptography as another approach to facing the potential threat that quantum computers pose to currently implemented cryptographic protocols. It is the search for classical cryptographic assumptions that cannot be broken efficiently by quantum or classical computers [BBD09].

Secure Two-Party Computation

We have seen that for a long time cryptographers focused on the problem of transmitting secret messages. A further problem of cryptography, introduced by Yao in [Yao82], is that of secure multi-party computation: the problem where N players, each of whom holds an input x_1, ..., x_N, want to evaluate a function of all their inputs, f(x_1, ..., x_N), correctly without disclosing information about their respective inputs. This is not only an interesting cryptographic problem, but one that leads to a number of useful applications such as secret voting, oblivious negotiation and private querying of databases. While Yao introduced the problem of secure multi-party computation, in [Yao82] he mainly focused on the two-party case: the problem of two mutually distrustful parties correctly computing a function without revealing their inputs to each other.
In this thesis we will focus on one problem of two-party computation, namely secure password-based identification: a user Alice identifies herself to a server Bob by securely evaluating the equality function on their inputs (or passwords). In the literature this is often referred to as the "socialist millionaire problem", a variant of the "millionaire problem", in which the two millionaires want to determine if they are equally rich, without revealing any information about their actual wealth to each other [Yao82].

Bit Commitment & Oblivious Transfer

In this section we focus on two similar but fundamental two-party computation problems, bit commitment and oblivious transfer, their history and their importance. Bit commitment schemes consist of two phases: a commit phase, where the sender Alice chooses the value of a bit and commits to it in the sense that it cannot be changed later, and a reveal phase, during which the hidden value of the bit is revealed and before which the receiver Bob has no information about the value of the bit. Oblivious transfer is the transfer of information in such a way that the sender does not know what information the receiver obtains. We will give a brief overview of its origin and its importance in secure two-party computation. The term was coined by Rabin in [Rab81], where he introduced what is now known as Rabin OT: a protocol where one user Alice sends a message and another user Bob does or does not receive it with equal probability, while Alice remains oblivious of the reception of the message; this is often referred to as a secure erasure channel. A similar notion was introduced in the first paper on quantum cryptography, "Conjugate Coding", where Wiesner describes "a means for transmitting two messages, either but not both of which may be received" [Wie83]. This was later rediscovered by Even, Goldreich and Lempel [EGL85] and named one-out-of-two oblivious transfer, denoted as 2 1 −OT.
Intuitively it can be thought of as a black box in which a user Alice can store two messages and another user Bob can choose to receive the first or the second message, but learns no further information about the message he does not receive. Furthermore it fulfills the condition for oblivious transfer, namely that Alice does not know which message Bob received. A few years later, Crépeau [Cré88] proved that these two flavours of oblivious transfer are equivalent. In the same year Kilian [Kil88] proved that the 2 1 −OT primitive is complete for two-party computation. This surprising result meant that a secure 2 1 −OT construction is sufficient to implement any two-party computation, making it a fundamental cryptographic problem. Moreover, from the results of [Kil88, Cré88], a 2 1 −OT protocol can be used to implement bit commitment. Although a classical protocol was already introduced by Even, Goldreich and Lempel [EGL85], it relies on computational assumptions that are insecure against a quantum adversary. After the early success of quantum cryptography, research focused on the problem of constructing unconditionally secure bit commitment schemes [BC91, BCJL93] and oblivious transfer or 2 1 −OT primitives [BBCS92, Cré94]. Despite these first results, hope to achieve unconditionally secure quantum bit commitment vanished, as doing so was proved to be impossible in a quantum setting in [May96, LC97]. As discussed above, since a 2 1 −OT primitive can be used to implement bit commitment, the impossibility result for bit commitment implies that 2 1 −OT is also impossible. In [Lo97], Lo proved that all quantum one-sided two-party computations, including 2 1 −OT, are insecure. Furthermore, Colbeck in [Col07] and Buhrman et al. in [BCS12] showed that secure two-party computation is impossible to achieve in a fully quantum setting. One way to circumvent these impossibility results is to impose realistic restrictions on the users.
In the literature there are two successful models that do so: the bounded-quantum-storage model [DFSS07, DFSS08], which upper bounds the size of the users' quantum memory, and the noisy-storage model [WST08, KWW12, Sch10], which assumes that the quantum memory used is imperfect. Under the assumption of bounding the quantum storage of a user, unconditionally secure oblivious transfer, 2 1 −OT, and thus two-party computation can be achieved [DFSS07, DFSS08].

One-Time Memories In The Isolated Qubits Model

In 2013 Liu [Liu14a] suggested a further alternative to the memory-restricting models discussed in the previous section, the isolated qubits model, where all parties are restricted to local operations on each qubit and classical communication (LOCC). The restriction to local quantum operations on each qubit means that the users are not allowed to perform entangling operations on the isolated qubits. The model is motivated by experimental work on nitrogen vacancy centers in diamond, which can be read out and manipulated optically, while at the same time it is difficult to perform entangling operations on pairs of such centers. We discuss the isolated qubits model in more detail in Chapter 2. A one-time memory (OTM) is a protocol or cryptographic device in which Alice stores two messages and sends it to Bob, who is then able to retrieve only one of the two messages. In essence it is a non-interactive or one-way 2 1 −OT, but we will discuss their difference in more detail in Section 2.4.2. Liu showed that it is possible to build an imperfect OTM in the IQM that leaks a fraction of information about the unreceived message [Liu14a, Liu14b]. Furthermore, Liu recently showed that it is possible to use privacy amplification in order to achieve a secure OTM for a single bit [Liu15].
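Privacy amplification, mentioned above, compresses a string about which an adversary has partial information into a much shorter key that looks nearly uniform, by applying a randomly chosen hash function. The following generic sketch is not Liu's construction from [Liu15]; the hash family and the sizes are invented purely for illustration:

```python
import secrets

# Toy sketch of privacy amplification by random linear hashing over GF(2):
# each output bit is the parity (XOR) of a random subset of the input bits.
# This is a generic illustration, not the construction used in [Liu15].
def random_linear_hash(n_in, n_out):
    # One random n_in-bit row mask per output bit.
    return [secrets.randbits(n_in) for _ in range(n_out)]

def apply_hash(rows, x):
    # Output bit i = parity of (row_i AND x), i.e. <row_i, x> over GF(2).
    return [bin(r & x).count('1') % 2 for r in rows]

rows = random_linear_hash(n_in=16, n_out=4)   # compress 16 bits to 4
key = apply_hash(rows, 0b1011001110001101)
```

Intuitively, since every output bit mixes many input bits, an adversary who is missing part of the input is, with high probability over the choice of rows, uncertain about each output bit.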
A significant difference between the isolated qubits model and the noisy- and bounded-quantum-storage models is that the parties are not forced to measure the qubits soon after reception; rather, they are allowed to store the qubits for an indefinite amount of time. This means that the users are allowed to take advantage of any further information shared between them at a later point to decide on their measurement strategy. On the other hand, the noisy- and bounded-quantum-storage models allow entangling operations between the users, which is not allowed in the isolated qubits model. In this sense the memory-restricting and isolated qubits models are complementary, which is reflected in the fact that protocols that are secure in one model are not secure in the other. Protocols in the noisy- or bounded-quantum-storage model are insecure in the isolated qubits model, in which the adversary has access to unlimited and perfect storage of isolated qubits. The opposite is also true, since the protocol presented in [Liu14b] is not secure against an adversary that can perform entangling operations. We will discuss this in more detail in Section 2.4.4.

Our Contributions

In this thesis, we study the constructions of "leaky" string and secure single-bit one-time memories in the isolated qubits model (IQM) introduced in [Liu14a, Liu14b, Liu15]. Using non-degenerate linear functions [DFSS06] we simplify the proof presented in [Liu15]. We then construct and prove the security of a string one-out-of-two sender-randomised oblivious transfer, 2 1 −ROT, protocol in this model. Relying on the construction of a secure string 2 1 −ROT protocol, we study for the first time interactive protocols for more complex two-party functionalities in the IQM. In order to do so, we assume that all parties measure the qubits they receive at the end of each sub-protocol, which allows us to construct composed protocols.
First, we construct a 2 1 −OT protocol that makes use of one instance of a 2 1 −ROT functionality and prove its security. We then construct a weak but efficient sender-randomised one-out-of-k oblivious transfer, k 1 − ROT, protocol. Finally, we construct a protocol that implements the password-based identification functionality securely, relying on a secure k 1 − ROT. Moreover, we study the possibility of constructing protocols that implement the password-based identification functionality securely and non-interactively. We prove that such an implementation is impossible relying only on one-way transmission, or even oblivious transfer, of messages and qubits from Alice to Bob.

Outline Of The Thesis

In Chapter 2, we introduce notation, the basic concepts from cryptography, as well as the model we use in this thesis. In Chapter 3, we extend the bit one-time memories introduced in [Liu14a, Liu14b, Liu15] to string 2 1 −ROTs using results from [DFSS06]. In Chapter 4, we study more complex two-party functionalities that make use of multiple instances of the 2 1 −ROTs constructed in Chapter 3. Firstly, we construct a 2 1 −OT protocol that makes use of one 2 1 −ROT functionality. Secondly, we study a k 1 −OT protocol presented in [BCR86] that makes use of k 2 1 −OTs. Finally, we present a construction for a weaker but more efficient k 1 − ROT protocol that uses only log k 2 1 −ROTs. In Chapter 5, we prove that constructing a non-interactive identification protocol is impossible even using secure k 1 −OT functionalities. We then propose a protocol to achieve secure password-based identification and prove its security using the secure k 1 − ROT constructed in Chapter 4. In the final chapter, Chapter 6, we summarise our results and discuss their significance.

In this chapter, we introduce notation and the basic tools that we will use in this thesis. We assume some familiarity with basic probability theory and quantum information theory.
A brief overview of the probability theory notions used in this thesis can be found in Appendix A, and for an in-depth introduction to quantum information theory we refer the reader to [NC00].

Basic Notation

We use uppercase letters such as X, Y, Z to denote random variables, calligraphic letters X, Y, Z to denote sets, and lowercase letters x, y, z to denote a specific value of a random variable. Furthermore, for a sequence of random variables X_1, ..., X_k we write X_i, with i ∈ {1, ..., k}, to denote the sequence X_1, ..., X_k excluding X_i. Moreover, we introduce the symbol P_{X↔Y↔Z}, as used in [DFSS07] and [FS09], to denote that the distribution of a random variable X is independent of a random variable Z given a random variable Y:

P_{X|YZ} = P_{X|Y};  (2.1)

we then write

P_{XYZ} = P_{X↔Y↔Z}.  (2.2)

This notation is extended to P_{XYZ|E} = P_{X↔Y↔Z|E} to denote that the distribution of a random variable X is independent of a random variable Z given a random variable Y, conditioned on an event E:

P_{X|YZE} = P_{X|YE}.  (2.3)

Finally, the smoothed min-entropy of a random variable X conditioned on a random variable Y is denoted by H^ε_∞(X|Y). For more information we refer the reader to Appendix B.

For any matrix A ∈ C^{m×n} and vector x ∈ C^n we use ‖A‖, ‖A‖_F and ‖A‖_tr to denote the operator, the Frobenius and the trace norm, respectively. Further information on these norms is included in Appendix C.

A brief overview of the Bachmann-Landau symbols:
We write f(k) = O(g(k)) if ∃c > 0, ∃k_0, ∀k > k_0: |f(k)| ≤ c|g(k)|.
We write f(k) = o(g(k)) if ∀c > 0, ∃k_0, ∀k > k_0: |f(k)| ≤ c|g(k)|.
We write f(k) = Ω(g(k)) if ∃c > 0, ∃k_0, ∀k > k_0: |f(k)| ≥ c|g(k)|.
We write f(k) = Θ(g(k)) if ∃c_1 > 0, ∃c_2 > 0, ∃k_0, ∀k > k_0: c_1|g(k)| ≤ |f(k)| ≤ c_2|g(k)|.
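The Markov-chain notation P_{XYZ} = P_{X↔Y↔Z} can be checked numerically on a toy joint distribution; a minimal sketch, with all probability values invented for illustration:

```python
from itertools import product

# Toy numerical check of P_{XYZ} = P_{X<->Y<->Z}, i.e. P_{X|YZ} = P_{X|Y}.
# We build a joint distribution in which Z is generated from Y alone, so
# conditioning on Z gives no extra information about X once Y is known.
P_X = {0: 0.3, 1: 0.7}
P_Y_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # channel X -> Y
P_Z_given_Y = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}  # channel Y -> Z

P_XYZ = {(x, y, z): P_X[x] * P_Y_given_X[x][y] * P_Z_given_Y[y][z]
         for x, y, z in product([0, 1], repeat=3)}

def p_x_given(y, z=None):
    """P(X = 1 | Y = y) if z is None, else P(X = 1 | Y = y, Z = z)."""
    num = sum(p for (x, yy, zz), p in P_XYZ.items()
              if x == 1 and yy == y and (z is None or zz == z))
    den = sum(p for (x, yy, zz), p in P_XYZ.items()
              if yy == y and (z is None or zz == z))
    return num / den

# P_{X|YZ} = P_{X|Y} holds for every choice of y and z:
for y, z in product([0, 1], repeat=2):
    assert abs(p_x_given(y) - p_x_given(y, z)) < 1e-12
```

The assertion would fail for a joint distribution in which Z depends on X directly, which is exactly what the notation P_{X↔Y↔Z} rules out.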
Functions In this section we give a brief overview of special families of functions that we use in this thesis. Non-Degenerate Linear Functions A binary function β : {0, 1} × {0, 1} → {0, 1} is called non-degenerate linear if it can be written as β(s 0 , s 1 ) = < u 0 , s 0 > + < u 1 , s 1 > for two non-zero strings u 0 , u 1 ∈ {0, 1} , where < ·, · > is the bit-wise inner product defined as: < a, b >= i=1 a i · b i (2.5) We further mention the definition of a more relaxed notion. Definition 2.2. [DFSS06, Definition 4.3] A binary function β : {0, 1} × {0, 1} → {0, 1} is called 2-balanced if for any s 0 , s 1 ∈ {0, 1} the functions β(s 0 , ·) and β(·, s 1 ) are balanced, meaning that |{σ 1 ∈ {0, 1} : β(s 0 , σ 1 ) = 0}| = 2 /2 and |{σ 0 ∈ {0, 1} : β(σ 0 , s 1 ) = 0}| = 2 /2. Finally we note the following result that will allow us to use the fact that for any string s i the functions β(s i , ·) and β(·, s i ) are balanced in the proof of Lemma 3.6 in Section 3. t-wise Independent Hash Functions We introduce the definition of t-wise independent hash functions as defined in [Liu15]. Note that sampling and applying a random function from a family of t-wise independent hash functions can be done efficiently ([Liu15, Proposition 2.5]). We present a large-deviation bound for quadratic functions of 2t−wise independent random variables [Liu15, Proposition 2.7]: Proposition 2.5. Let t ≥ 2 be an even integer, and let H be a family of 2t-wise independent functions {1, . . . , N } → {0, 1}. Let A ∈ R N ×N be a symmetric matrix, A T = A. Let H be a function chosen uniformly at random from H, and define the random variable S = N x,y=1 A xy (−1) H(x) (−1) H(y) − δ xy , (2.6) where δ xy is the Kronecker δ that equals 1 if x = y and 0 otherwise. Then the expected value of S is E[S] = 0 and we have the following large-deviation bound: for any λ > 0, P (|S| ≥ λ) ≤ 4e 1 6t √ πt 4 A 2 F t eλ 2 t 2 + 4e 1 12t √ 2πt 8 A 2 t eλ t , (2.7) where A ∈ R N ×N is the entry-wise absolute value of A, that is, A xy = |A xy |.
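The bit-wise inner product and the 2-balancedness of Definition 2.2 can be checked exhaustively for small ℓ. In this illustrative sketch (function names are mine), β is built from two strings u 0 , u 1 as above; the check confirms that such a β is 2-balanced exactly when both strings are non-zero.

```python
from itertools import product

def inner(a, b):
    """Bit-wise inner product <a, b> over GF(2); strings given as bit tuples."""
    return sum(x * y for x, y in zip(a, b)) % 2

def beta(u0, u1, s0, s1):
    """Linear function beta(s0, s1) = <u0, s0> + <u1, s1> mod 2; it is
    non-degenerate precisely when u0 and u1 are both non-zero."""
    return (inner(u0, s0) + inner(u1, s1)) % 2

def is_2balanced(f, ell):
    """Check Definition 2.2 exhaustively: fixing either argument of f must
    leave a balanced function (exactly 2^ell / 2 ones)."""
    strings = list(product((0, 1), repeat=ell))
    for s in strings:
        if sum(f(s, t) for t in strings) != 2 ** ell // 2:
            return False
        if sum(f(t, s) for t in strings) != 2 ** ell // 2:
            return False
    return True

u0, u1 = (1, 0, 1), (0, 1, 1)  # both non-zero, ell = 3
print(is_2balanced(lambda s0, s1: beta(u0, u1, s0, s1), 3))  # True
```

With u 1 the all-zero string the same check fails, since β(s 0 , ·) is then constant.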
Functionalities & Protocols An ideal functionality formally describes a cryptographic task, detailing the behaviours of honest and dishonest parties. A protocol is a series of clearly defined instructions that the (honest) parties follow. Finally, we define the security for a protocol, describing the conditions that need to be fulfilled in order for a protocol to implement a functionality securely. In this section, we introduce the ideal functionalities of 2 1 −OT, 2 1 −ROT, k 1 −OT, k 1 −ROT, k 1 − ROT and password-based identification as well as equivalent security definitions that we will use in the following chapters. 2.3.1 2 1 −OT First, we formally define the 2 1 −OT functionality, discussed in Chapter 1, that allows two parties to share one out of two messages such that the sender is oblivious as to which message has been received, while the receiver has no knowledge of the second message. Functionality 2.6. Upon receiving input messages A 0 , A 1 ∈ A from Alice, where A = {0, 1} l , and the choice bit D ∈ {0, 1} from Bob, F 2 1 −OT outputs A D to Bob and outputs nothing to Alice. Commonly, security of a protocol is proven by showing that a real protocol is indistinguishable from the ideal functionality. However, there exists an alternative approach: [FS09, Proposition 4.3] allows us to use an equivalent security definition. If a protocol fulfills this definition, then it securely implements the ideal functionality. Definition 2.7. An ε-secure 2 1 −OT protocol is a protocol between Alice with inputs A 0 , A 1 ∈ A and Bob with input D ∈ {0, 1} such that the following holds: Correctness: For honest user Alice and honest server Bob, for any distribution of Alice's inputs A 0 , A 1 ∈ A and Bob's input D ∈ {0, 1}, Alice gets no output and Bob receives output G = A D , except with probability ε.
Security for Alice: For any dishonest server Bob with output G , there exists a random variable D ∈ {0, 1} such that: P D A 0 A 1 ≈ ε P D · P A 0 A 1 (2.8) and P G A D D A 1−D ≈ ε P G |A D D · P A D D A 1−D (2.9) Security for Bob: For any dishonest user Alice with output V , there exist random variables A 0 , A 1 such that: P [G = A D ] ≥ 1 − ε, (2.10) and P DV A 0 A 1 ≈ ε P D · P V A 0 A 1 (2.11) 2.3.2 2 1 −ROT While 2 1 −OT is a powerful tool, we present a different oblivious transfer functionality, the randomised one-out-of-two oblivious transfer 2 1 −ROT. Contrary to the 2 1 −OT, Alice does not input two messages but receives two random messages from the functionality, while Bob receives one of the two messages depending on his input choice. We present the formal definition of the 2 1 −ROT functionality. Functionality 2.8. Upon receiving no input from Alice and the choice bit D ∈ {0, 1} from Bob, F 2 1 −ROT samples two random independent strings A 0 , A 1 ∈ A, sends A 0 and A 1 to Alice and sends A D to Bob. Furthermore, we introduce an equivalent security definition that protocols that securely implement the 2 1 −ROT functionality should fulfill. Definition 2.9. An ε-secure 2 1 −ROT protocol is a protocol between Alice with no input and Bob with input D ∈ {0, 1} such that the following holds: Correctness: For honest user Alice and honest server Bob, for any distribution of Bob's input D ∈ {0, 1}, Alice receives output A 0 , A 1 ∈ A and Bob receives output G = A D , except with probability ε. Security for Alice: For any dishonest server Bob with output G, there exists a random variable D ∈ {0, 1} such that: P A 1−D GA D D ≈ ε P U · P GA D D (2.12) Security for Bob: For any dishonest user Alice with output V , there exist random variables A 0 , A 1 such that: P [G = A D ] ≥ 1 − ε, (2.13) and P DV A 0 A 1 ≈ ε P D · P V A 0 A 1 (2.14) 2.3.3 k 1 −OT In this section, we focus on a generalised oblivious transfer functionality that takes k inputs instead of two, the 1-out-of-k Oblivious Transfer, denoted as k 1 −OT. It is a two-party functionality between a user Alice that inputs k messages X 1 , X 2 , . . .
, X k and a user Bob who is allowed to retrieve only one of these messages, X D , according to his choice D. When the above functionality is implemented securely, Bob should not be able to learn additional information on any of the other messages. At the same time, the obliviousness of the protocol must still hold: Alice should not have any knowledge about the choice of Bob. The formal definition of the k 1 −OT functionality is the following: Functionality 2.10. Upon receiving input messages X 1 , . . . , X k ∈ X from Alice, where X = {0, 1} l , and the choice D ∈ {1, . . . , k} of Bob, F k 1 −OT outputs X D to Bob and outputs nothing to Alice. We now introduce an equivalent security definition for the k 1 −OT functionality. Definition 2.11. An ε-secure k 1 −OT protocol is a protocol between Alice with inputs X 1 , . . . , X k ∈ X and Bob with input D ∈ {1, . . . , k} such that the following holds: Correctness: For honest user Alice and honest server Bob, for any distribution of Alice's inputs X 1 , . . . , X k ∈ X and Bob's input D ∈ {1, . . . , k}, Alice gets no output and Bob receives output G = X D , except with probability ε. Security for Alice: For any dishonest server Bob with output G , there exists a random variable D ∈ {1, . . . , k} such that: P D X 1 ...X k ≈ ε P D · P X 1 ...X k (2.15) and P G X D D X D ≈ ε P G |X D D · P X D D X D (2.16) Security for Bob: For any dishonest user Alice with output V , there exist random variables X 1 , . . . X k such that: P [G = X D ] ≥ 1 − ε, (2.17) and P DV X 1 ...X k ≈ ε P D · P V X 1 ...X k (2.18) 2.3.4 k 1 −ROT In this section we introduce a slightly different flavour of the k 1 −OT, where the user Alice does not input messages X 1 , . . . , X k but instead has no inputs and receives as output k random messages S 1 , . . . , S k . This functionality is defined formally below: Functionality 2.12. Honestly behaving Alice and Bob: Upon receiving no input from Alice and a choice D ∈ {1, . . .
, k} from Bob, F k 1 −ROT samples random independent strings S 1 , . . . , S k ∈ S = {0, 1} and sends S 1 , . . . , S k to Alice and S D to Bob. Honest Alice and dishonest Bob: Upon receiving no input from Alice, a choice D ∈ {1, . . . , k} and a string S D ∈ S from Bob, F k 1 −ROT samples the remaining strings S i ∈ S, i ≠ D, independently and uniformly at random, and sends S 1 , . . . , S k to Alice. Dishonest Alice and honest Bob: Upon receiving input messages S 1 , . . . , S k ∈ S from Alice, where S = {0, 1} , and the choice D ∈ {1, . . . , k} of Bob, F k 1 −ROT outputs S D to Bob and outputs nothing to Alice. We introduce the security definition for the k 1 −ROT functionality. Definition 2.13. The sender-randomised k 1 −ROT is ε-secure if the following conditions are fulfilled: Correctness: For honest user Alice and honest server Bob, for any distribution of Bob's input D, Alice gets outputs S 1 , . . . , S k ∈ S uniform and independent of D and Bob receives output S D , except with probability ε. Security for Alice: For any dishonest server Bob with output G , there exists a random variable D ∈ {1, . . . , k} such that: P S D S D D G ≈ ε P U k−1 · P S D D G (2.19) Security for Bob: For any dishonest user Alice with output V , there exist random variables S 1 , . . . , S k such that: P [G = S D ] ≥ 1 − ε, (2.20) and P DV S 1 ,...,S k ≈ ε P D · P V S 1 ,...,S k (2.21) Finally we introduce the security definition for a slightly weaker k 1 −ROT functionality that we call k 1 − ROT. Definition 2.14. The sender-randomised k 1 − ROT is ε-secure if the following conditions are fulfilled: Correctness: For honest user Alice and honest server Bob, for any distribution of Bob's input D, Alice gets outputs S 1 , . . . , S k ∈ S uniform and independent of D and Bob receives output S D , except with probability ε. Security for Alice: For any dishonest server Bob with output G , there exists a random variable D ∈ {1, . . .
, k} such that for all I ≠ D: P S I S D D G ≈ ε P U · P S D D G (2.22) Security for Bob: For any dishonest user Alice with output V , there exist random variables S 1 , . . . , S k such that: P [G = S D ] ≥ 1 − ε, (2.23) and P DV S 1 ,...,S k ≈ ε P D · P V S 1 ,...,S k (2.24) The k 1 − ROT is weaker since, although every message that does not correspond to Bob's input remains hidden, this is not true for all messages simultaneously. While weaker, the k 1 − ROT functionality is strong enough to construct a secure password-based identification protocol, as we will show in Chapter 5. Furthermore, the k 1 − ROT protocol we present in Chapter 4 is more efficient than the k 1 −ROT and k 1 −OT protocols, as it makes use of log k instead of k 2 1 −OTs. Password-Based Identification We define the functionality of identification, where a user Alice identifies herself to a server Bob by securely evaluating the equality function on their inputs, called passwords. Our definition is motivated by [FS09]. Functionality 2.15. Upon receiving a string W A from Alice and a string W B from Bob, F ID outputs the bit G := (W A = W B ) to Bob. In case Alice is dishonest she may choose W A =⊥ (which never agrees with honest Bob's input) and (for any choice of W A ) the bit G is also output to Alice. The idea behind the F ID functionality is that Alice and Bob both have an input string, W A and W B respectively, to act as a password; Bob receives and outputs a bit corresponding to the acceptance of Alice's password if their chosen inputs are the same or the rejection if their inputs are not equal. In order for a protocol that fulfills the F ID functionality to be secure, a dishonest server should not be able to learn Alice's password, except with the probability that he guesses the password correctly. At the same time it has to be secure against a dishonest user Alice, so that Bob will not accept her password if it does not correspond to his choice W B . We introduce the definition that should be fulfilled by secure password-based identification protocols. Definition 2.16.
A password-based identification protocol is ε−secure if the following conditions are fulfilled: Correctness: For honest user Alice and honest server Bob with inputs W A = W B , Bob outputs G = 1 except with probability ε. Security for Alice: For any dishonest server Bob with output G , for any distribution of W A , there exists a random variable W that is independent of W A such that: P W A W G |W =W A ≈ ε P W A ↔W ↔G |W =W A . (2.25) Security for Bob: For any dishonest user Alice with output V , for any distribution of W B , there exists a random variable W independent of W B such that if W = W B then P [G = 1] ≤ ε and: P W B W V |W =W B ≈ ε P W B ↔W ↔V |W =W B . (2.26) One-Time Memories In The Isolated Qubits Model The Isolated Qubits Model In Chapter 1, we gave a brief introduction to the isolated qubits model that was first presented by Liu in [Liu14a]. In more detail, parties in this model are restricted to local quantum operations on each qubit and classical communication between the qubits. As detailed in [Liu14a], any local operation and classical communication (LOCC) strategy, in the sense described above, can be described by a series of adaptive single-qubit measurements. In subsequent work, Liu describes how to model any LOCC adversary by a separable positive-operator-valued measure (POVM) [Liu14b]. Furthermore, in contrast with the memory-restricting models described in Chapter 1, in the isolated qubits model, all parties are allowed to store qubits for a long time and are not allowed to perform entangling operations between qubits. While the restriction to non-entangling operations reduces the power of an adversary, the possibility to store qubits for a long time has some important implications. An adversary is thus allowed to store qubits and measure them at the end of a protocol, making use of any information he receives to decide on his measurement strategy.
Thus usual privacy amplification techniques using hash functions are not effective, which necessitates the use of stronger families of hash functions and a different approach to using them, as described in [Liu15]. We will describe this in more detail in Section 2.4.3. Moreover, the ability of storing qubits for a long time allows an adversary to measure the qubits received at the end of the composed protocol. It is then not clear if the sub-protocols remain secure. Composability in the isolated qubits model has not been studied and it seems to be a non-trivial problem. In this thesis we assume that all parties have to measure all qubits used in a sub-protocol at the latest at the end of this sub-protocol. This rather strong assumption allows us to construct composed protocols that make calls to functionalities as sub-routines. Leaky String 2 1 −ROT In this section, we introduce a protocol for imperfect 2 1 −ROT motivated by the "leaky" one-time memory (OTM) construction presented in [Liu14b]. The security definitions for the "leaky" and perfect OTMs presented in [Liu14b, Liu15] are similar to the 2 1 −ROT security definition introduced earlier in this chapter. In Chapter 3 we use the "leaky" 2 1 −ROT presented here to construct a secure string 2 1 −ROT protocol. We then use the latter in Chapter 4 to construct a secure 2 1 −OT protocol. For consistency with the view of cryptographic tasks as functionalities that are implemented by protocols, we do not use the notion of one-time memories as devices that store two messages out of which only one can be read. We instead construct protocols that implement the 2 1 −ROT functionality (Functionality 2.8) between two users, Alice and Bob. The main difference between an OTM and an oblivious transfer protocol is the fact that the first is non-interactive in the sense that only Alice sends information to Bob, while an oblivious transfer protocol is not necessarily non-interactive.
In that sense, the latter is weaker since an OTM implements the oblivious transfer functionality, but an interactive oblivious transfer protocol does not implement the OTM functionality. We first rewrite the "leaky" OTM as introduced in [Liu14b] as a non-interactive "leaky" 2 1 −ROT protocol that takes no input from Alice and input D from Bob, and outputs s and t to Alice and one of the two messages to Bob depending on his input choice. This protocol leaks some information about both messages to Bob and is thus not secure. Protocol 2.17. A protocol for "leaky" string 2 1 −ROT between users Alice, with no input, and Bob, with input D ∈ {0, 1}. Let C : {0, 1} → {0, 1} n log q be an error correcting code that is linear in GF (2) and approaches the capacity of a q-ary symmetric channel E q with error probability p e = 1 2 − 1 2q . 1. Alice samples and receives as output two strings s, t ∈ {0, 1} uniformly at random. 2. Alice computes C (s) and C (t) and views them as n blocks of log q qubits. 3. Alice prepares the qubits in the following way and sends them to Bob: for i = 1, . . . , n, she flips a fair coin and accordingly prepares the i-th block either in the computational basis, encoding the i-th block of C (s), or in the Hadamard basis, encoding the i-th block of C (t). 4. Bob measures the qubits he receives: • If D = 0, he measures all the qubits he receives in the computational basis. • If D = 1, he measures all the qubits he receives in the Hadamard basis. 5. Bob runs the decoding algorithm for C on the string of measurement outcomes z ∈ {0, 1} n log q and receives s or t depending on his choice D. We present the definitions for separable measurements and δ-non-negligible measurement outcomes, as presented in [Liu14b], that are used in Theorem 2.19 and later in Chapter 3. Separable Measurement A measurement on m qubits is called separable if it can be written in the form E : ρ → i K † i ρK i , where each operator K i is a tensor product of m single-qubit operators K i = K i,1 ⊗ · · · ⊗ K i,m . δ-non-negligible Measurement Outcome Definition 2.18.
For any quantum state ρ ∈ C d×d , and any δ > 0, we say that a measurement outcome (POVM element) M ∈ C d×d is δ-non-negligible if tr(M ρ) ≥ δ · tr(M )/d. We rephrase the main result of the original paper, [Liu14b, Theorem 2.3], that defines the security of the protocol: Theorem 2.19 ("Leaky" String 2 1 −ROT). For any k ≥ 2, and for any small constant 0 < µ ≪ 1, Protocol 2.17 between Alice with no input and Bob with input D ∈ {0, 1} has the following properties: 1. Correctness: For honest users Alice and Bob, Alice receives two messages s, t ∈ {0, 1} , where = Θ(k 2 ), and Bob receives either s or t depending on his choice D, using only LOCC operations. 2. "Leaky" security: Let δ 0 > 0 be any constant, and set δ = 2 −δ 0 k . Honest user Alice receives outputs s, t ∈ {0, 1} . For any dishonest LOCC Bob, and any separable measurement outcome M that is δ-non-negligible, we have the following security bound: H ε ∞ (S, T |Z = M ) ≥ 1 2 − µ − δ 0 k. (2.27) Here S and T are the random variables describing the two messages, Z is the random variable representing Bob's measurement outcome, and we have ε ≤ e −Ω(k) . The proof of this theorem can be found in [Liu14b]. This 2 1 −ROT protocol leaks a constant fraction of information to Bob and is thus not secure for cryptographic tasks. Privacy Amplification Common privacy amplification techniques rely on applying a function with a random seed to the string the user holds and require the users to share their seed at a later point. These techniques cannot be used in the isolated qubits model, as a dishonest user can postpone his measurement until he has knowledge of the seed and use that information to adapt his measurement. Liu introduces a privacy amplification technique that can be used in the isolated qubits model in [Liu15]. The technique relies on the use of a fixed hash function from a family of r-wise independent hash functions, that is, a family of stronger hash functions than the ones described above.
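For intuition, an r-wise independent family can be instantiated by random polynomials of degree < r over a finite field: evaluations at up to r distinct points are uniform and independent. This standard construction is only a stand-in sketch (the field choice and names are mine), not the specific family used in [Liu15].

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; arithmetic mod P gives a finite field

def sample_hash(r, rng=random):
    """Sample from an r-wise independent family: a uniformly random
    polynomial of degree < r over GF(P), evaluated by Horner's rule."""
    coeffs = [rng.randrange(P) for _ in range(r)]
    def h(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % P  # Horner evaluation at the point x
        return acc
    return h

h = sample_hash(r=8)
print(h(42) == h(42))  # True: once sampled, the function is fixed
```

Evaluations of such a polynomial at r distinct points are uniform and r-wise independent over GF(P); mapping field elements down to bit strings (e.g. by truncation) is only approximately uniform and is omitted here.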
This method allows privacy amplification on the output of a leaky string OTM, such as the ones presented in [Liu14a, Liu14b], and leads to the construction of a secure single-bit OTM [Liu15]. In Chapter 3 we follow a similar approach to achieve a secure string 2 1 −ROT, instead of the single-bit OTM presented in [Liu15]. Comparing The Isolated Qubits And Bounded Quantum Storage Models In Section 1.3 we mentioned briefly that the OTM protocols studied in [Liu14a, Liu14b, Liu15] are not necessarily secure in the noisy- and bounded-quantum-storage models and that at the same time protocols that rely on a quantum memory bound to achieve security are not guaranteed to be secure in the isolated qubits model. In more detail, the OTM protocols constructed in the isolated qubits model are insecure in a model where entangling operations are allowed. An attack against the OTM protocols by an adversary who is allowed to perform entangling operations has been sketched in [Liu14b], relying on the gentle measurement lemma [Win99] and running the decoding algorithm for the error-correcting code on a superposition of many different inputs. This implies that the OTM and 2 1 −ROT protocols described in [Liu14a, Liu14b, Liu15] and this thesis are not secure in the noisy- and bounded-quantum-storage models. On the other hand, protocols in the noisy- and bounded-quantum-storage models [WST08, KWW12, Sch10] rely on the memory bound or imperfect storage in order to achieve security. In protocols such as Protocol 5.1, one user encodes qubits in the computational or Hadamard basis while the receiver measures the qubits either in a random basis or in a sequence of bases depending on his input. Since these measurements are destructive, the users commit to a particular choice of measurement bases. The correct sequence of bases is announced between the users at a later point, after the memory-bound has been applied.
This step allows the users to know which qubits they have measured in the same basis and thus have obtained the same result, unless the quantum communication channel is being eavesdropped on. At the same time the step of announcing the bases used to encode the sent qubits can be exploited by a malicious user in the isolated qubits model. Since the users are allowed to store the received qubits for an indefinite amount of time after receiving the qubits, an adversary is allowed to wait until he has received the sequence of bases and thus measure all qubits correctly, which violates the security of these protocols. Thus we argue that protocols that rely on the restriction of a user to perform non-entangling operations cannot be secure in the memory restricting models. On the other hand protocols that rely on the inability of an adversary to store qubits noiselessly or in large numbers cannot be secure in the isolated qubits model. In this chapter, we introduce a 2 1 −ROT protocol in the isolated qubits model (IQM), motivated by the "ideal" OTM presented in [Liu15]. Our protocol takes no input from Alice and one bit D as Bob's input, and outputs two strings A 0 and A 1 to Alice and one string A D to Bob. This protocol first uses the "leaky" 2 1 −ROT protocol presented in Chapter 2 and makes use of the privacy amplification technique introduced in [Liu15] to achieve security. The 2 1 −ROT protocol differs from the "ideal" OTM of [Liu15] in the fact that the messages are strings instead of single bits as in the original. To prove the security of the 2 1 −ROT protocol we use some results presented in [DFSS06] that allow us to simplify and extend the proof to longer messages, a technique that was not used in the original. 3.1 Secure String 2 1 −ROT As discussed in the previous chapter, the "leaky" 2 1 −ROT, Protocol 2.17, is not secure because it leaks some information. 
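The source of both the correctness and the leak is the basis mismatch: a block measured in the matching basis decodes exactly, while one measured in the other basis yields a uniform symbol, which is what produces the q-ary symmetric channel with p e = 1/2 − 1/(2q) from Protocol 2.17. The toy classical simulation below (no qubits, no decoding; the per-block fair coin reflects my reading of the construction in [Liu14b]) reproduces that error rate.

```python
import random

def leaky_rot_channel(cs, ct, d, q, rng=random):
    """Toy model of the measurement step: each block i carries either C(s)_i
    (computational basis) or C(t)_i (Hadamard basis), chosen by a fair coin.
    Bob measures every block in the one basis fixed by d: matching blocks
    decode exactly, mismatched blocks yield a uniform symbol over q values."""
    out = []
    for i in range(len(cs)):
        basis = rng.randrange(2)           # 0: computational (s), 1: Hadamard (t)
        want = cs[i] if d == 0 else ct[i]
        if basis == d:
            out.append(want)               # matching basis: exact symbol
        else:
            out.append(rng.randrange(q))   # wrong basis: uniform outcome
    return out

q, n = 4, 2000
cs = [random.randrange(q) for _ in range(n)]
ct = [random.randrange(q) for _ in range(n)]
z = leaky_rot_channel(cs, ct, d=0, q=q)
err = sum(a != b for a, b in zip(z, cs)) / n
print(err)  # close to 1/2 - 1/(2q) = 0.375 for q = 4
```

A mismatched block is still correct with probability 1/q, giving the error rate 1/2 · (1 − 1/q) = 1/2 − 1/(2q) per block, which the error-correcting code C is chosen to handle.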
Commonly in such a case one would use privacy amplification techniques to achieve security from this less secure protocol. Typically this involves applying a hash function with a seed that is picked by Alice and later announced to Bob, after he has measured the received qubits or messages. In the isolated qubits model, however, the use of such techniques is not possible, since Bob is allowed to wait and measure the qubits at a later point, in this case after learning the seed of the hash function used for privacy amplification. A privacy amplification technique such as this would at best have no effect or even allow a dishonest user Bob to use that information to attack the protocol. In [Liu15], Liu presented a technique for privacy amplification in the isolated qubits model by fixing two r-wise independent hash functions at the beginning of the protocol, and applying them on the outputs of the "leaky" 2 1 −ROT protocol. Protocol String 2 1 −ROT We introduce a protocol for string 2 1 −ROT based on the protocol proposed by Liu [Liu14b] and the privacy amplification technique that uses two fixed r-wise independent hash functions. Protocol 3.1. A protocol for string 2 1 −ROT between user Alice with no input and Bob with input D ∈ {0, 1}. 1. Alice chooses two r-wise independent hash functions F and G uniformly at random. 2. Alice with no input and Bob with input D use a "leaky" string 2 1 −ROT (such as Protocol 2.17). Alice receives as output two messages s, t ∈ {0, 1} and Bob, depending on his choice, receives s if D = 0 or t if D = 1. 3. Alice receives output A 0 , A 1 ∈ {0, 1} such that: A 0 = F (s) (3.1) A 1 = G(t) (3.2) 4. Bob computes F (s) or G(t), depending on his input D, and obtains A D . Security Of The Protocol It is not difficult to see that if the "leaky" string 2 1 −ROT is correct then Protocol 3.1 is correct.
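Steps 3-4 of Protocol 3.1 are purely classical post-processing, which the following sketch makes explicit. The leaky-ROT stage is stubbed with fresh random strings, and the fixed functions F, G are stood in for by a salted SHA-256 (illustrative only: a real instantiation must use r-wise independent hash functions, and all names here are mine).

```python
import secrets
import hashlib

def fixed_hash(tag, s, out_bytes=16):
    """Toy stand-in for a fixed hash function; the real protocol fixes F and G
    from an r-wise independent family before the protocol starts."""
    return hashlib.sha256(tag + s).digest()[:out_bytes]

def string_rot(d):
    """Protocol 3.1 sketch: the leaky-ROT outputs (s, t) are post-processed by
    the two fixed hashes; Alice keeps (A0, A1), Bob can compute only A_d."""
    s, t = secrets.token_bytes(32), secrets.token_bytes(32)   # leaky-ROT stub
    a0, a1 = fixed_hash(b"F", s), fixed_hash(b"G", t)         # Alice's outputs
    bob = fixed_hash(b"F", s) if d == 0 else fixed_hash(b"G", t)
    return (a0, a1), bob

(a0, a1), g = string_rot(d=1)
assert g == a1  # correctness follows directly from the leaky ROT's correctness
```

Because no seed is ever announced, a dishonest Bob gains nothing by postponing his measurement, which is the whole point of fixing F and G up front.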
Furthermore, since the protocol is non-interactive, Alice learns nothing about Bob's actions, as is reasoned in [Liu15]. The security for Alice of a 2 1 −ROT, Definition 2.9, is equivalent to the following definition, which was used in [Liu15]: Definition 3.2. We say that Protocol 3.1 is secure if the following holds: Let k ≥ 1 be a security parameter. Suppose Alice receives as output two messages A 0 , A 1 ∈ {0, 1}. Consider any dishonest LOCC user Bob, and let Z be the random variable representing the results of Bob's measurements. Then there exists a random variable D ∈ {0, 1} such that: P A 1−D A D DZ − P U × P A D DZ 1 ≤ 2 −Ω(k) , (3.3) where U denotes the uniform distribution on {0, 1} . Theorem 3.3 then states that we can reduce a secure string 2 1 −ROT protocol (Protocol 3.1) to a "leaky" string 2 1 −ROT protocol (Protocol 2.17). That is, if there exists a protocol with output two strings s, t ∈ {0, 1} and leaking any constant fraction of information of s and t, then we can construct a 2 1 −ROT where Alice receives two strings A 0 , A 1 ∈ {0, 1} and only allows an exponentially small amount of information about either A 0 or A 1 to leak, and is thus secure. Theorem 3.3. For any constants θ ≥ 1, δ 0 > 0, α > 0, ε 0 > 0 and 0 < κ < min{δ 0 /2, ε 0 /2, α/2}: 2. Correctness: For honest users, Alice receives s and t and Bob receives s if D = 0 or t if D = 1, using only LOCC operations. 3. "Leaky" security: Let δ 0 > 0 be any constant, and set δ = 2 −δ 0 k . Honest user Alice receives outputs s, t ∈ {0, 1} . For any dishonest LOCC Bob, let Z be the random variable representing the result of his measurement. Let M be any separable measurement outcome that is δ-non-negligible. Then: 3. "Ideal" security: For honest Alice with outputs (A 0 , A 1 ) from Protocol 3.1, for any dishonest LOCC user Bob, let Z be the random variable representing the results of his measurements.
Then there exists a random variable D ∈ {0, 1}, such that: H ε ∞ (S, T |Z = M ) ≥ αk, (3.4) P A 1−D A D DZ − P U × P A D DZ 1 ≤ 2 −(δ 0 k−2( +1)) + 2 −(ε 0 k−2 +3) + 2 −( α 2 k−2( +1)) + 2 −( α 2 k−2( +2+θ ln k)−ln (γ+1)) ≤ 2 −Ω(k) , (3.7) Before proving this theorem we present the definition of the ε −obliviousness condition in order to introduce Theorem 3.5, which we use later to prove the security of Protocol 3.1. Note that the ε −obliviousness condition (for Random 1-2 OT ) extended for strings [DFSS06, Definition 3.2] describes the security condition of Definition 3.2. Definition 3.4. ε -Obliviousness condition: For any LOCC adversary who observes the measurement outcome Z, there exists a binary random variable D such that P A 1−D A D D Z − P U × P A D D Z ≤ ε (3.8) Moreover, we introduce [DFSS06, Theorem 4.5], that we will use to prove the security of Protocol 3.1. Theorem 3.5. [DFSS06, Theorem 4.5] The ε -obliviousness condition is satisfied for any LOCC adversary who observes the measurement outcome Z if and only if: ∀ non-degenerate linear function β: P β(A 0 ,A 1 ) Z − P U × P Z ≤ ε 2 2 +1 (3.9) Theorem 3.5 states that it is enough to show that P β(A 0 ,A 1 ) Z − P U × P Z ≤ ε 2 2 +1 for all non-degenerate linear functions β, in order to prove the security of the protocol. Proof Of Theorem 3.3 In this section we prove Theorem 3.3 following the reasoning used in [Liu15]. We first show that with high probability over F and G the scheme is secure for any fixed separable measurement outcome M . Then we use the µ−net W for the set of all separable measurement outcomes and show that Protocol 3.1 is secure at all points M ∈ W with high probability. We then show that any separable measurement M can be approximated by a measurement outcome in the µ−net, M ∈ W . Then security at M implies security at M for any separable measurement. Thus Protocol 3.1 is secure.
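A recurring step in this proof converts a bias bound |R β (M )| ≤ ξ into a distance-from-uniform bound (Lemma 3.7 below). The inequality behind that step can be sanity-checked numerically over arbitrary small distributions; this is an illustrative check only, with names of my choosing.

```python
import random

def check_bias_to_distance(p0, p1):
    """Given p0 = P(beta=0, E | Z=M) and p1 = P(beta=1, E | Z=M), verify that
    the l1 distance of (p0, p1) from the uniform bit is at most xi + eps,
    where xi = |p0 - p1| (the bias R_beta(M)) and eps = 1 - (p0 + p1)."""
    xi = abs(p0 - p1)
    eps = 1.0 - (p0 + p1)
    dist = abs(p0 - 0.5) + abs(p1 - 0.5)
    return dist <= xi + eps + 1e-12  # tolerance for float round-off

rng = random.Random(0)
for _ in range(1000):
    p0 = rng.uniform(0, 1)
    p1 = rng.uniform(0, 1 - p0)  # keep p0 + p1 <= 1 so eps >= 0
    assert check_bias_to_distance(p0, p1)
print("bias-to-distance bound holds on all sampled points")
```

The algebra mirrors the proof of Lemma 3.7: writing p 0 + p 1 = 1 − ε gives p 0 − 1/2 = (p 0 − p 1 − ε)/2, so each term is at most (ξ + ε)/2.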
Security For Fixed Measurement M First, we show that in the case when the adversary observes a fixed measurement outcome Z = M the protocol is secure. Assuming that M is separable and δ−non-negligible, the "leaky" security guarantee implies H ε ∞ (S, T |Z = M ) ≥ αk (equation (3.4)). The following lemma defines a smoothing event E and the quantity R β (M ) and states that R β (M ) is small, with high probability over the choice of F and G. We define: R β (M ) = E(1 E · (−1) β(A 0 ,A 1 ) | Z = M ), (3.10) which is a random variable depending on F , G, S and T , since (3.12) Then the following holds: s,t∈{0,1} P (E, S = s, T = t | Z = M ) 2 = s,t∈{0,1} P (E, S = s, T = t | Z = M ) · P (E, S = s, T = t | Z = M ) ≤ s,t∈{0,1} 2 −αk · P (E, S = s, T = t | Z = M ) = 2 −αk · s,t∈{0,1} P (E, S = s, T = t | Z = M ) ≤ 2 −αk (3.13) We now bound the quantity R β (M ) in a similar way as in [Liu15]. For a non-degenerate linear function β defined by non-zero strings u 0 , u 1 , β(A 0 , A 1 ) =< u 0 , F (s) > + < u 1 , G(t) >, where by definition A 0 = F (s) and A 1 = G(t). We then rewrite R β (M ) as H(i, s) = < u 0 , F (s) >, if i = 0 < u 1 , G(s) >, if i = 1,(3.15) for two non-zero u 0 , u 1 ∈ {0, 1} . Note that since F, G are r−wise independent hash functions and u 0 , u 1 are non-zero strings then < u 0 , F (s) > and < u 1 , G(s) > are also r−wise independent hash functions. Secondly, we define a matrix A ∈ R (2·2 )×(2·2 ) with entries A (i,s)(j,t) , for i, j ∈ {0, 1} and s, t ∈ {0, 1} , that take the values: A (i,s),(j,t) =      1 2 P (E, S = s, T = t | Z = M ) if (i, j) = (0, 1) 1 2 P (E, S = t, T = s | Z = M ) if (i, j) = (1, 0) 0 otherwise. 
(3.16) Finally, using equation (3.15) and equation (3.16), R β (M ) can be written in the following way, R β (M ) = E(1 E · (−1) β(A 0 ,A 1 ) | Z = M ) (3.17) = s,t∈{0,1} P (E, S = s, T = t | Z = M )(−1) <u 0 ,F (s)>+<u 1 ,G(t)> (3.18) = s,t∈{0,1} 1 2 P (E, S = s, T = t | Z = M )(−1) <u 0 ,F (s)>+<u 1 ,G(t)> (3.19) + 1 2 P (E, S = t, T = s | Z = M )(−1) <u 1 ,G(t)>+<u 0 ,F (s)> (3.20) = (i,s),(j,t) A (i,s),(j,t) (−1) H(i,s) (−1) H(j,t) − δ (i,s),(j,t) (3.21) Since < u 0 , F > and < u 1 , G > are r-wise independent random functions, we can set t = r/2 and use Proposition 2.5, using the following bounds on A: A 2 ≤ A 2 F = (i,s),(j,t) A 2 (i,s),(j,t) = 1 2 s,t P (E, S = s, T = t | Z = M ) 2 ≤ 1 2 · 2 −αk , (3.22) where in the last line we used equation (3.13). Then by substituting into Proposition 2.5 we prove equation (3.11). We thus prove Lemma 3.6. Next, we introduce Lemma 3.7 that implies that if R β (M ) is small, we can use Theorem 3.5 to prove the security of the protocol when the adversary observes the measurement outcome M . Lemma 3.7. Fix any measurement outcome M . Suppose |R β (M )| ≤ ξ. Then: P β(A 0 ,A 1 ),E|Z=M − P U ≤ ξ + ε (3.23) Proof. Fix a measurement outcome M and suppose |R β (M )| ≤ ξ. 
From the definition of R β (M ) we have that: R β (M ) = E(1 E · (−1) β(A 0 ,A 1 ) | Z = M ) (3.24) = P (β(A 0 , A 1 ) = 0, E|Z = M ) − P (β(A 0 , A 1 ) = 1, E|Z = M ) (3.25) From R β (M ) ≤ ξ : −ξ ≤ P (β(A 0 , A 1 ) = 0, E|Z = M ) − P (β(A 0 , A 1 ) = 1, E|Z = M ) ≤ ξ (3.26) From P (E|Z = M ) ≥ 1 − ε and basic probability theory: 1 − ε ≤ P (E|Z = M ) =P (β(A 0 , A 1 ) = 0, E|Z = M ) + P (β(A 0 , A 1 ) = 1, E|Z = M ) ≤ 1 (3.27) Combining equation (3.26) with equation (3.27) we get: P (β(A 0 , A 1 ) = 0, E|Z = M ) − 1 2 ≤ ξ + ε 2 (3.28) and P (β(A 0 , A 1 ) = 1, E|Z = M ) − 1 2 ≤ ξ + ε 2 (3.29) Then the 1 distance between P β(A 0 ,A 1 ),E|Z=M and P U is: P β(A 0 ,A 1 ),E|Z=M − P U = P (β(A 0 , A 1 ) = 0, E|Z = M ) − 1 2 + P (β(A 0 , A 1 ) = 1, E|Z = M ) − 1 2 ≤ ξ + ε (3.30) Thus we have proven that if R β (M ) is small, Lemma 3.7 together with Theorem 3.5 imply that Protocol 3.1 is secure againsta dishonest user Bob that observes the measurement outcome M . Security For µ−net In [Liu15], it is shown that there exists an µ−net W for the set of all possible separable measurement outcomes W with respect to the operator norm · . In this section, we show that the protocol is secure for all the measurement outcomes in the µ−net. First, we introduce the following lemma as presented and proved in [Liu15]. Lemma 3.8. [Liu15, Lemma 3.5] For any 0 < µ ≤ 1, there exists a set W ⊂ W , of cardinality | W | ≤ 9m µ 4m , which is a µ−net for W with respect to the operator norm · . We then use Lemma 3.8, and set µ = 2 −(α/2)k · δ 2 4 m , (3.31) The value of µ is chosen so that it is small enough to approximate any measurement outcome, see equation (3.88) in the next section. 
Together with the fact that k ≤ m ≤ k θ and δ = 2 −δ 0 k the cardinality of W is bounded by: | W | ≤ 9m · 2 α 2 k 4 m δ 2 4m = 2 log(9m)+ α 2 k+2δ 0 k+2m 4m (3.32) = 2 4m log(9m)+4(α/2+2δ 0 )km+8m 2 ≤ 2 4m log(9m)+(2α+8δ 0 +8)m 2 ≤ 2 4k θ log(9k θ )+(2α+8δ 0 +8)k 2θ (3.33) = 2 4k θ log 9+4k θ θ log k+(2α+8δ 0 +8)k 2θ . (3.34) For sufficiently large k it holds that log k ≤ k ≤ k θ ≤ k 2θ . This also implies that k θ log k ≤ k 2θ . Then for all sufficiently large k, | W | ≤ 2 4k θ log 9+4k θ θ log k+(2α+8δ 0 +8)k 2θ ≤ 2 (4 log 9+4θ+2α+8δ 0 +8)k 2θ (3.35) ≤ 2 γk 2θ , (3.36) where γ is a constant. Next we use Lemma 3.6 and we set λ = 2 −(α/2)k · 2r. ≤ | W | · β P F, G; S, T (|R β ( M )| ≥ λ) (3.40) ≤ | W | · 2 2 · P F, G; S, T (|R β ( M )| ≥ λ) (3.41) ≤ 2 γk 2θ · 2 2 · 8e 1/(3r) √ πr(e 2 /2) −r/4 (3.42) r=4(γ+1)k 2θ = 2 γk 2θ +3+(γ+1)k 2θ +2 · e 1 12(γ+1)k 2θ + 1 2 ln 4π(γ+1)k 2θ +2(γ+1)k 2θ (3.43) =κk = exp (3 + (2γ + 1)k 2θ + 2κk) ln 2 (3.44) + 1 12(γ + 1)k 2θ + 1 2 ln 4π(γ + 1) + θ ln k − 2(γ + 1)k 2θ (3.45) Since k 2θ ln 2 > 0 and e k 2θ ln 2 ≥ 1 we multiply equation (3.45) with e k 2θ ln 2 . Furthermore, using the fact that f (k) = 2κk ln 2 + θ ln k + 1 12(γ + 1)k 2θ + 3 ln 2 + exp (2γ + 1)k 2θ ln 2 − 2(γ + 1)k 2θ + 2κk ln 2 + θ ln k (3.48) + 1 12(γ + 1)k 2θ + 3 ln 2 + 1 2 ln 4π(γ + 1) (3.49) ≤ exp 2(γ + 1)(ln 2 − 1)k 2θ + o(k 2 ) (3.50) = exp − 2(γ + 1)(1 − ln 2)k 2θ − o(k 2 ) . Approximating Measurement Outcomes We now show that any measurement outcome M can be approximated by another measurement outcome M , just as in [Liu15]. First, we introduce a lemma proved and used in [Liu15] that shows that if M is 2δ−non-negligible then M is δ−non-negligible. where we define the matrix ξ ∈ (C 2×2 ) ⊗m as follows: ξ = 4 − s,t∈{0,1} ρ st . (3.67) Also, note that ξ tr ≤ 1. where ν, ξ ∈ (C 2×2 ) ⊗m satisfy ν tr ≤ 1 and ξ tr ≤ 1. We now consider the measurement outcome M . We construct an event E in order to define the quantity R β (M ). 
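Before moving on, the exponent bookkeeping in equations (3.31)–(3.36) can be checked numerically. The snippet below picks illustrative values for α, δ0, θ (free sample parameters, not values fixed by the thesis) and confirms that log2 |W̃| is dominated by γk^{2θ} with γ = 4 log 9 + 4θ + 2α + 8δ0 + 8:

```python
import math

# Check the mu-net size bound of equations (3.31)-(3.36) for sample
# parameters (illustrative values; the proof only needs "k large enough").
alpha, delta0, theta = 1.0, 0.5, 1.5
k = 100
m = round(k ** theta)            # the setting assumes k <= m <= k^theta

# log2 |W~| <= 4m * log2(9m / mu), with mu = 2^{-(alpha/2)k} * delta^2 / 4^m
# and delta = 2^{-delta0 * k}; expanding the logarithm gives the line below.
log2_net = 4 * m * (math.log2(9 * m) + (alpha / 2) * k + 2 * delta0 * k + 2 * m)

gamma = 4 * math.log2(9) + 4 * theta + 2 * alpha + 8 * delta0 + 8
assert log2_net <= gamma * k ** (2 * theta)   # i.e. |W~| <= 2^{gamma k^{2 theta}}
```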
The event E conditioned on Z = M behaves similarly to E conditioned on Z = M . We now construct the event E. where ν and ξ are the same matrices used to express R β ( M ) in equation (3.68). In addition, we can lower-bound tr(M ξ) and tr( M ξ) as follows: tr(M ξ) ≥ 2δ · 2 −m tr(M ) ≥ 2δ · 2 −m M (3.71) ≥ 2δ · 2 −m , (3.72) tr( M ξ) ≥ δ · 2 −m tr( M ) (3.73) ≥ δ · 2 −m M (3.74) ≥ δ · 2 −m (1 − µ) ≥ δ · 2 −m · 1 2 . (3.75) Where we used that M is 2δ-non-negligible, M is δ−non-negligible and the inequalities (3.85) and (3.78). Note that we use equation (3.86) and M = 1 to get: M = M − M + M ≤ M − M + M (3.76) =⇒ 1 ≤ µ + M (3.77) =⇒ M ≥ 1 − µ (3.78) We also used the fact that µ ≤ 2 3 · δ · 2 −m , δ ≤ 1 2 and m ≥ k ≥ k 0 ≥ 1 (as defined in [Liu15]) to lower bound 1 − µ as follows: . 1 − µ ≥ 1 − 2 3 · δ · 2 −m ≥ 1 − 1 3 · 2 −m ≥ 1 − 1 6 ≥ 1 2 (3. (3.83) We can then upper-bound this quantity: |R β (M ) − R β ( M )| ≤ µ 2δ · 2 −m + (1 + µ) µ 2δ · 2 −m · δ · 2 −m · 1 2 = µ 2δ · 2 −m 1 + (1 + µ) δ · 2 −m · 1 2 ≤ 2µ 2 m δ 2 . (3.84) This completes the proof of Lemma 3.10. From Lemma 3.10 we can show that Protocol 3.1 is secure, when the adversary observes any separable measurement outcome M ∈ W that is 2δ-non-negligible. Note that M = 1 (that is assumed without loss of generality [Liu15]) implies tr(M ) ≥ 1: tr(M ) ≥ 1 (3.85) Let M ∈ W be the nearest point in the µ-net W . Then we have: M − M ≤ µ, (3.86) where µ = 2 −(α/2)k δ 2 4 m . Then from Lemma 3.9, M is δ-non-neglibible. Then from equation (3.53) we get that |R β ( M )| ≤ λ, where λ = 2 −(α/2)k · 2r: −2 − α 2 k · 2r ≤ R β ( M ) ≤ 2 − α 2 k · 2r. (3.87) Using Lemma 3.10 and substituting µ we get that: |R β (M ) − R β ( M )| ≤ 2µ 2 m δ 2 = 2 · 2 −(α/2)k (3.88) =⇒ −2 · 2 −(α/2)k ≤ R β (M ) − R β ( M ) ≤ 2 · 2 −(α/2)k . 
(3.89) By adding equations (3.87) and (3.89) we get: −2 −(α/2)k · 2(r + 1) ≤ R β (M ) ≤ 2 −(α/2)k · 2(r + 1) (3.90) =⇒ |R β (M )| ≤ 2 − α 2 k · 2(r + 1) (3.91) Then from Lemma 3.7 we see that Protocol 3.1 is secure for all 2δ-non-negligible measurement outcomes M ∈ W that a dishonest user Bob may observe: P β(A 0 ,A 1 )E|Z=M − P U ≤ 2 − α 2 k · 2(r + 1) + ε = 2 − α 2 k · 2(r + 1) + 2 −ε 0 k ≤ 2 −Ω(k) . (3.92) Consider any LOCC adversary, and let Z be the random variable representing the measurement outcome. We can then write: P β(A 0 ,A 1 )Z − P U × P Z ≤ M P (Z = M ) P β(A 0 ,A 1 )|Z=M − P U ≤ 2δ + M :M is 2δ-non-negligible P (Z = M ) P β(A 0 ,A 1 )|Z=M − P U ,P β(A 0 ,A 1 )Z − P U × P Z ≤ 2δ + 2ε + M :M is 2δ-non-negligible P (Z = M ) P β(A 0 ,A 1 ), E|Z=M − P U ≤ 2δ + 2ε + 2 − α 2 k · 2(r + 1) + ε ≤ 2 · 2 −δ 0 k + 3 · 2 −ε 0 k + 2 − α 2 k · 2(r + 1). (3.94) Note that in the last step we use the definitions of δ = 2 −δ 0 k and ε = 2 −ε 0 k . Then Theorem 3.5 with = κk, where 0 < κ < min δ 0 2 , ε 0 2 , α 4 and P β(A 0 ,A 1 )Z − P U × P Z ≤ 2 · 2 −δ 0 k + 3 · 2 −ε 0 k + 2 − α 2 k · 2(r + 1) = ε 2 2 +1 , (3.95) implies that P A 1−D A D D Z − P U × P A D D Z ≤ ε ≤ 2 2 +1 · 2 · 2 −δ 0 k + 3 · 2 −ε 0 k + 2 − α 2 k · 2(r + 1) ≤ 2 2 +1 · 2 · 2 −δ 0 k + 4 · 2 −ε 0 k + 2 − α 2 k · 2(r + 1) ≤ 2 −δ 0 k+2 +2 + 2 −ε 0 k+2 +3 + 2 − α 2 k+2 +2 + 2 − α 2 k+2 +2+ln r ≤ 2 −(δ 0 k−2( +1)) + 2 −(ε 0 k−2 +3) + 2 −( α 2 k−2( +1)) + 2 −( α 2 k−2( +2+θ ln k)−ln (γ+1)) (3.96) Next we examine the term 2 −(δ 0 k−2( +1)) , since < δ 0 2 k then: ∃c > 0 ∃k 0 : ∀k > k 0 the following holds: δ 0 k − 2( + 1) ≥ ck (3.97) =⇒ δ 0 k − 2( + 1) ∈ Ω(k) (3.98) Chapter 3. 
2 1 −ROT In The Isolated Qubits Model 35 then we have that for sufficiently large k: 2 −(δ 0 k−2( +1)) ≤ 2 −Ω(k) (3.99) In a similar way we get that since < ε 0 2 k then for sufficiently large k: 2 −(ε 0 k−2 +3) ≤ 2 −Ω(k) (3.100) Finally since < α 4 k then for sufficiently large k 2 −( α 2 k−2( +1)) ≤ 2 −Ω(k) (3.101) and 2 −( α 2 k−2( +2+θ ln k)−ln (γ+1)) = 2 −( α 2 k−2 −o(k)) ≤ 2 −Ω(k) , (3.102) which holds since f (k) = 2θ ln k + 4 + ln (γ + 1) ∈ o(k). Chapter 4 Flavours Of Oblivious Transfer In the previous chapter, we showed that a secure string 2 1 −ROT protocol can be constructed in the isolated qubits model. In this chapter we use the 2 1 −ROT functionality to construct protocols that implement more complex oblivious transfer functionalities. First we present a protocol that implements the 2 1 −OT functionality using an instance of the 2 1 −ROT functionality. As we have already discussed in Chapter 1, an 2 1 −OT protocol is sufficient to implement any two-party computation securely, which makes it a fundamental problem in cryptography. Secondly, we present a reduction from k 1 −OT and k 1 −ROT to a series of k 2 1 −OTs, that was first introduced in [BCR86]. Finally, we construct a protocol that implements the weaker k 1 − ROT functionality using only log k 2 1 −ROT functionalities. These results are more general as these protocols are not restricted to the isolated qubits model as they rely on the existence and composability of a secure 2 1 −ROT protocol in a cryptographic model. 3. Alice then sends two messages Y 0 , Y 1 such that: Y 0 = S 0 ⊕ A 0 (4.1) Y 1 = S 1 ⊕ A 1 (4.2) 4. Bob then receives output: We introduce the theorem that states that if the 2 1 −ROT functionality is implemented securely, then so is the 2 1 −OT functionality. X D = Y D ⊕ S D (4.3) Theorem 4.2. If the 2 1 −ROT functionality used in Protocol 4.1 fulfills the ε−security Definition 2.9, then the 2 1 −OT Protocol 4.1 is ε−secure according to Definition 2.7. 
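The masking step of Protocol 4.1 is just a one-time pad: Bob's ROT output S_D lets him unmask exactly one of Alice's two messages. A minimal simulation (assuming an ideal, trusted 2-1 ROT; function names are illustrative, not from the thesis):

```python
import secrets

def ideal_rot2():
    """Ideal 2-1 ROT: Alice's two uniformly random strings (S0, S1)."""
    return secrets.randbits(32), secrets.randbits(32)

def ot_from_rot(A0, A1, D):
    """2-1 OT built from one 2-1 ROT, following the steps of Protocol 4.1."""
    S0, S1 = ideal_rot2()          # Alice's ROT outputs
    SD = (S0, S1)[D]               # Bob's ROT output for choice bit D
    Y0, Y1 = S0 ^ A0, S1 ^ A1      # Alice sends masked messages (4.1)-(4.2)
    return (Y0, Y1)[D] ^ SD        # Bob unmasks his chosen message (4.3)

assert ot_from_rot(0xAAAA, 0x5555, 0) == 0xAAAA
assert ot_from_rot(0xAAAA, 0x5555, 1) == 0x5555
```

Since Y_{1−D} = S_{1−D} ⊕ A_{1−D} with S_{1−D} uniform and unknown to Bob, the unchosen message stays perfectly hidden, which is the intuition behind equation (4.11) below.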
Proof of Theorem 4.2 In order to prove Theorem 4.2 we need to show that all the conditions of Definition 2.7 are fulfilled. Correctness Proof. If both Alice and Bob are honest, they follow Protocol 4.1. Then if the 2 1 −ROT functionality is implemented correctly Alice will receive outputs S 0 , S 1 and Bob will receive S D except with probability ε. After Alice sends Y 0 , Y 1 , where Y i = A i ⊕ S i , Bob outputs Y D ⊕ S D = A D , implying correctness except with probability ε. Security for Alice Proof. For an honest user Alice, security of the 2 1 −ROT functionality implies that there exists a random variable D such that P S 1−D G S D D ≈ ε P U · P G S D D , (4.4) where S 0 , S 1 are the outputs of honest Alice and G is the output of a dishonest user Bob. Define random variable D = D , then since an honest user Alice does not use her inputs A 0 , A 1 in the 2 1 −ROT it is clear that P G D S 0 S 1 A 0 A 1 = P G D S 0 S 1 · P A 0 A 1 , (4.5) which implies that P D A 0 A 1 = P D · P A 0 A 1 . (4.6) Furthermore equation (4.5) implies that P G D S D A 0 A 1 = P G S D |D · P D · P A 0 A 1 , (4.7) and from equation (4.6) we have that P G D S D A 0 A 1 = P G S D |D · P A 0 A 1 D , (4.8) and P G D S D A 0 A 1 = P G S D |D A D · P A D A 1−D D . (4.9) Bob's input G depends on G , Y 0 , Y 1 , where Y 0 = S 0 ⊕ A 0 and Y 1 = S 1 ⊕ A 1 . Then from equation (4.4) and (4.5) we get P S 1−D G D S D A D ≈ ε P U · P G S D D · P A D , (4.10) which implies that P S 1−D |G D S D A D ≈ ε P U . (4.11) Then Y 1−D is independent of A 1−D given G , D , S D , A D and Y D = A D ⊕ S D . Therefore P Y 0 Y 1 G D S D A D A 1−D ≈ ε P Y 0 Y 1 |G D S D A D · P G D S D A D · P A 1−D (4.12) and taking into account equation (4.9) P Y 0 Y 1 G D S D A D A 1−D ≈ ε P Y 0 Y 1 |G D S D A D · P G S D |D A D · P D A D A 1−D (4.13) ⇒ P G D A D A 1−D ≈ ε P G |D A D · P D A D A 1−D (4.14) Thus, equations (4.6) and (4.14) imply that Protocol 4.1 is secure for Alice. Security for Bob Proof. 
If the 2 1 −ROT functionality is secure for an honest user Bob with input D, there exist random variables S 0 ′ , S 1 ′ such that

P [G ′ = S D ′ ] ≥ 1 − ε (4.15)

and

P S 0 ′ S 1 ′ D ≈ ε P S 0 ′ S 1 ′ P D , (4.16)

where D and G ′ are Bob's input and output used in F 2 1 −ROT . We can then define random variables A 0 ′ = Y 0 ⊕ S 0 ′ and A 1 ′ = Y 1 ⊕ S 1 ′ , where Y 0 and Y 1 are the messages sent by Alice to Bob after the 2 1 −ROT. Then from equation (4.16), and since Alice receives no further information from Bob after the 2 1 −ROT has been used, it is clear that

P A 0 ′ A 1 ′ D ≈ ε P A 0 ′ A 1 ′ P D . (4.17)

Finally, since Bob is honest, his output G will be G = G ′ ⊕ Y D , which implies that

P [G = A D ′ ] ≥ 1 − ε. (4.18)

Thus Protocol 4.1 is secure for Bob.

The following k 1 −OT protocol makes use of k 2 1 −OT functionalities and was first presented in [BCR86]. In it, Bob with choice D retrieves A j,0 from the j th 2 1 −OT for every j < D and A D,1 from the D th , so that he can compute

X D = A D,1 ⊕ A 1,0 ⊕ · · · ⊕ A D−1,0 (4.19)

It is easy to see that a k 1 −ROT protocol can be constructed if Alice chooses her input messages X 1 , . . . , X k uniformly at random. A sketch of the security proof for this protocol can be found in [BCR86]. We present this protocol and its possible extension to a k 1 −ROT protocol to argue that such a protocol can indeed be constructed from a secure 2 1 −OT. However, it requires k (or k − 1) 2 1 −OT functionalities. In the next section we present a protocol that fulfills a weaker security definition but requires only log k 2 1 −ROTs. We will later use that protocol in Chapter 5 to achieve secure password-based identification.

4.3 k 1 − ROT from 2 1 −ROT

Protocol And Security Definition

In this section, we introduce a protocol that implements the k 1 − ROT functionality. While the following k 1 − ROT protocol fulfills a weaker security definition, it is more efficient than Protocol 4.3 or its extension to a k 1 −ROT, as it makes use of only log k 2 1 −ROTs instead of k 2 1 −OTs. Alice, with no input, receives log k pairs of strings (A i,0 , A i,1 ) from the i th 2 1 −ROT, for i ∈ {1, . . . , log k}. Her output messages S 1 , . . .
, S k will later be composed of the possible additions of these strings, for example S 1 = log k i=1 A i,0 . Bob with input D ∈ {1, . . . , k}, that can be seen as a string {D 1 |D 2 | . . . |D log k }, in turn inputs the i th bit of his choice D i to the i th 2 1 −ROT functionality and obtains output A i,D i . He finally adds the outputs he received to obtain his output of the k 1 − ROT funcitonality S D = log k i=1 A i,D i . A Alice receives outputs: S 1 = A 1,0 ⊕ A 2,0 ⊕ · · · ⊕ A log k,0 S 2 = A 1,1 ⊕ A 2,0 ⊕ · · · ⊕ A log k,0 . . . . . . . . . S k = A 1,1 ⊕ A 2,1 ⊕ · · · ⊕ A log k,1 (4.20) and Bob receives output: We now introduce the theorem that states that if the 2 1 −ROT functionalities used in the above protocol are secure, in the sense of Definition 2.7, then the protocol implements the k 1 − ROT functionality securely, according to the Definition 2.13. S D = log k i=1 A i,D i (4.21) Theorem 4.5. If the 2 1 −ROT functionalities used in Protocol 4.4 are ε−secure, according to Definition 2.9, then Protocol 4.4 is ε −secure according to Definition 2.14, where ε ≤ ε log k. Proof Of Theorem 4.4 The following proof consists of three parts as we have to show that all three requirements of Definition 2.13 hold. Correctness First we show that the correctness requirement of Definition 2.14 holds if the correctness requirement of Definition 2.9 holds. Proof. We show that for two honest users who follow Protocol 4.4, the protocol is correct if the 2 1 −ROT functionality is implemented correctly. An honest user Alice receives outputs (A i,0 , A i,1 ) from the i th 2 1 −ROT used in the protocol, for i = 1, . . . , log k. She receives outputs S 1 . . . S k by modulo 2 addition of these inputs, for example S 1 = log k i=1 A i,0 . Then Bob with input D = {D 1 | . . . |D log k }, uses D i as input to the i th 2 1 −ROT. If the 2 1 −ROTs are correct he receives A i,D i except with probability ε for all i = 1, . . . , log k. 
He then correctly computes X D = log k i=1 A i,D i except with probability ε ≤ ε · log k. Thus Protocol 4.4 is correct. Security For Alice Secondly we show that if the 2 1 −ROT are secure for Alice in the sense of Definition 2.9 then the security for Alice condition of Definition 2.14 holds. Proof. For an honest user Alice with no input and any dishonest user Bob with output G we can define a random variable D such that D = {D 1 | . . . |D log k }, where D i is Bob's input to the i th 2 1 −ROT functionality used in the protocol. Then for all I = {I 1 | . . . |I log k } such that I = D , there exists at least one j ∈ {1, . . . , log k} such that I j = D j . Since the 2 1 −ROTs are secure for Alice for any dishonest user Bob with output G j there exists a random variable D i such that P A j,I j G j A j,D j D j ≈ ε P U · P G j A j,D j D j . (4.22) Then since S D = log k i=1 A i,D i and S I = log k j=1 A j,I j for any I = D , P S I G S D D ≈ ε P U · P G S D D ,(4.23) where ε ≤ ε · log k. Equation (4.23) proves that Protocol 4.4 is secure for Alice according to Definition 2.14. Security For Bob Finally we show that if the 2 1 −ROT functionalities are secure for Bob according to Definition 2.9 then Protocol 4.4 is secure for Bob, in the sense of Definition 2.14. Proof. For an honest user Bob with input D = {d 1 | . . . |d log k } ∈ {1, . . . , k} and any dishonest user Alice we define random variables S 1 , . . . S k in the following way: S I = log k j=1 A j,I j , for I ∈ {1, . . . , k}, (4.24) where A i,0 and A i,1 are the inputs in the i th 2 1 −ROT used in Protocol 4.4. Then since the 2 1 −ROTs are secure for Bob there exist random variables A i,0 , A i,1 such that for the output of the i th 2 1 −ROT P [G i 2 1 −OT = A i,D i ] ≥ 1 − ε,(4.25) and P D i A i,0 A i,1 ≈ ε P D i · P A i,0 A i,1 ,(4.26) for all i = 1, . . . , log k. Since S D = log k j=1 A j,D j , (4.25) implies that P [G k 1 − ROT = S D ] ≥ 1 − ε , (4.27) with ε ≤ ε · log k. 
Furthermore (4.26) implies that the distribution of D i is independent of the inputs of that 2 1 −OT. Since a dishonest user Alice receives no information the choices of Bob are independent. Then since D = {D 1 | . . . |D log k } and S I = log k j=1 A j,I j P DS 1 ...S k ≈ ε P D · P S 1 ...S k (4.28) Thus if the 2 1 −OT functionalities used are correct, then Protocol 4.4 is secure, this concludes the proof of Theorem 4.5. Secure Identification In this chapter, we aim to construct a protocol that achieves secure password-based identification. In order to do so, we first study existing protocols, namely the protocol proposed in [DFSS07], that achieves secure identification in the bounded quantum storage model. First, we adapt this protocol to the isolated qubits model by using a k 1 −OT functionality. As we have seen in Chapter 4, it is possible to construct a k 1 −OT protocol in this model. However, we notice that this identification protocol requires interaction from Bob to Alice. Secondly, we study if it is possible to construct a non-interactive secure identification protocol. We show that such a protocol is impossible to construct, even based on oblivious transfer. Finally, we prove the security of an interactive password-based identification protocol that makes use of a k 1 − ROT functionality. The latter can be implemented efficiently, as we showed in Chapter 4. 3. Alice picks f ∈ F uniformly at random and sends θ and f to Bob. Both compute I W := {i : θ i = c(W ) i }. Secure Identification From 4. Bob picks h ∈ H uniformly at random and sends h to Alice. Alice sends z := f (X W A | I W A ) ⊕ h(W A ) to Bob, where X W A | I W A is the restriction of X W A to the coordinates X i with i ∈ I W A . Bob accepts if and only if z = f (X D | I W B ) ⊕ h(W B ) While this protocol is secure in the bounded-quantum storage model, it is not secure in the isolated qubits model as we have discussed in Chapter 2. 
Note however that the first part of the protocol (steps 1-4) can be seen as a protocol that implements the k 1 −OT functionality. As we have shown in Chapter 4, there exists a protocol that achieves that in the isolated qubits model. Taking this fact into account, we construct a password-based identification protocol that relies on the security of a k 1 −OT functionality. A sketch of the protocol is presented in Figure 5.1. Protocol 5.2. Password-based identification protocol with inputs W A and W B , the passwords of user Alice and user Bob respectively. Let H be a family of strong 2-universal hash functions such that h ∈ H and h : {1, . . . , k} → {0, 1} . Then the protocol between Alice and Bob is the following: 1. The user Alice uses a k 1 −OT functionality, F k 1 −OT , with inputs X 1 , X 2 , . . . , X k ∈ X . 2. The user Bob inputs his choice D = W B to the k 1 −OT and receives the message X D 3. Bob sends a function h ∈ H to Alice. Alice sends z := X W A ⊕ h(W A ) to Bob. 5. The user Bob outputs 1 if z = X D ⊕ h(W B ) and 0 otherwise. In Section 5.3, we will prove the security of a similar protocol that relies on the weaker k 1 − ROT functionality, that can be implemented efficiently using log k 2 1 −ROT functionalities. Note however, that both Protocol 5.1 and 5.2 use interaction from Bob to Alice. But if we assume that we can implement a k 1 −OT functionality securely and non-interactively, can we use it to construct a non-interactive secure identification protocol? Impossibility Proof In this section we study if it is possible to construct a non-interactive password-based identification protocol using a k 1 −OT. We introduce a general protocol that uses one instance of the F k 1 −OT functionality to implement the identification functionality F ID and we then prove that such a protocol cannot be secure. 
We aim to emphasize the importance of the interaction from Bob to Alice (step 4 of Protocol 5.2) in order to implement the identification functionality securely. Non-Interactive Password-Based Identification We formally introduce the protocol later (Protocol 5.3), but we first describe it to give some intuition and argue why the protocol is a general form for all such possible protocols. The user Alice has as input a password W A , can choose inputs X 1 , . . . , X k to the k 1 −OT and sends some extra information Y to the user Bob depending on the specific protocol. In Protocol 5.2 for example, Y is the function f and the message z = f (X W A ⊕ h(W A ). The user Bob has as input his password W B and makes a choice regarding the message he will retrieve from the k 1 −OT. His choice D may depend on his password so that if his password choice is equal to the choice of Alice he will be able to correctly identify her while he will not be able to do so in any other case. So his choice is described by a deterministic function c : W × Y → {1, . . . , k} such that D = c(W B , Y ) returns the message that when combined with the information Y will allow him to check if W A = W B . We note that any non-interactive protocol that uses the F k 1 −OT functionality once has to be of this form. Since it is non-interactive the user Alice can use the functionality of F k 1 −OT once and at most send some additional information Y . On the other hand the user Bob receives information Y and can interact with the F k 1 −OT functionality by inputing his choice D, that can at most depend on both Y and his password choice W B . Finally, he can, at most, use W B , D, X D and Y as inputs to some function g to evaluate the equality function. Protocol 5.3. Non-interactive identification protocol with inputs W A and W B , the passwords of user Alice and user Bob respectively : 1. The user Alice uses the non-interactive k 1 −OT functionality, F k 1 −OT , with inputs X 1 , X 2 , . . . 
, X k ∈ X and sends additional information Y to Bob. She chooses the inputs and additional information uniformly at random from a joint distribution P X 1 ...X k Y |W A . 2. The user Bob inputs his choice D = c(W B , Y ) to the k 1 −OT and receives the message X D . 3. The user Bob then computes and outputs the acceptance predicate, G: G = g(W B , X D , Y ) = 0, if he rejects, 1, if he accepts. (5.1) A non-interactive identification protocol in this k 1 −OT hybrid model is defined by the following ingredients: P Y X 1 ...X k |W A , P D|W B Y , P G|W B DX D Y In order for Protocol 5.3 to be secure it must fulfill the conditions of the security definition Definition 2.16. We consider the special case for = 0 for perfect security of the protocol.We did not study the case that a non-interactive −secure password-based identification protocol can be constructed using oblivious transfer. Although studying the > 0 case remains an interesting problem for future research, we consider the intuition we collect from the following proof (Section 5.2.2) sufficient to emphasise the importance of interaction between Bob and Alice as discussed in Section 5.2.3. This result justifies the use of interaction in the construction of a secure password-based identification protocol, which is the main goal of this thesis. Then for users Alice and Bob that hold X 1 , . . . , X k , Y and D, X D , Y, G respectively, we can formulate the following security definition, that is equivalent to Definition 2.16. Security for Alice: For any dishonest user Bob, for any distribution of W A , there exists a random variable W that is independent of W A and such that: P W A W Y X D |W =W A = P W A ↔W ↔Y X D |W =W A (5.2) SecurityP W B W Y X 1 ...X k |W =W B = P W B ↔W ↔Y X 1 ...X k |W =W B (5.3) The following theorem that states that it is impossible for a protocol that uses one instance of a k 1 −OT functionality to implement the identification functionality securely. Theorem 5.5. 
If Protocol 5.3 is correct and secure for Alice according to Definition 5.4, then it is not secure for Bob. Proof Of Theorem 5.5 We first introduce some lemmas that we will use later to prove Theorem 5.5. Lemma 5.6. If Protocol 5.3 is secure for Alice then for all i ∈ {1, . . . , k} the joint distribution of the random variables X i and Y are independent of W A . Proof. Since Protocol 5.3 is secure for Alice, for all P W A , for all i ∈ {1, . . . , k} there exists W independent of W A such that: P W A W X i Y, W =W A = P W A ↔W ↔X i Y |W =W A . (5.4) Then by definition: P W A |W X i Y, W =W A = P W A |W , W =W A (5.5) We also note that trivially when W = W A , P W A |W X i Y, W A =W = P W A |W , W A =W (5.6) Using the property of the marginal distribution: P W A |W X i Y =P [W = W A ]P W A |W X i Y, W =W A + P [W = W A ]P W A |W X i Y, W =W A (5.7) (5.5),(5.6) = P [W = W A ]P W A |W , W =W A + P [W = W A ]P W A |W , W =W A (5.8) =P W A |W (5.9) Using the fact that W is independent of W A equation (5.9) becomes: P W A |W X i Y = P W A (5.10) From equation (5.10) we observe that W , X i , Y are independent of W A . The next lemma states that if Protocol 5.3 is secure for Alice and correct then the function c(·, y) is injective for all possible y. Lemma 5.7. If Protocol 5.3 is correct and secure for Alice then for the function c : W ×Y → [k] the following holds: ∀y ∈ Y with P Y (y) > 0, c(·, y) is injective. (5.11) Proof. Let us assume that the function c(W B , Y ) is not injective. Then ∃y : P Y (y) > 0 and ∃j, m ∈ W with j = m such that c(j, y) = c(m, y). (5.12) Then clearly, X c(j,y) = X c(m,y) , (5.13) which immediately implies that, g(j, X c(m,y) , y) (5.14) = g(j, X c(j,y) , y). (5.14) Let us assume that W A = j and W B = m. Since Protocol 5.3 is correct, Bob computes: g(m, X c(m,y) , Y ) = 0, (5.15) but also g(j, X c(m,y) , y) (5.14) = g(j, X c(j,y) , y) = 1. 
(5.16) This means that for W A = W B , Bob learns the password of Alice and thus From correctness we expect that for all password inputs w ∈ W thre exists a y ∈ Y and there exists a x ∈ X that Alice can input in the k 1 −OT and will lead Bob to output G = g(w, x, y) = 1. The following lemma states that it must be so for all y ∈ Y with P Y (y) > 0, for all password inputs w ∈ W simultaneously. Intuitively this is so because otherwise a dishonest user Bob would gain some information on the password of Alice from the message Y . He could for example exclude some password choices after seeing Y , making the protocol insecure for Alice. P W A W Y X D |W =W A = P W A ↔W ↔Y X D |W =W A ,(5. Lemma 5.8. If Protocol 5.3 is correct and secure for Alice, then for all w ∈ W, for all y ∈ Y such that P Y (y) > 0 there exists a x ∈ X such that: g(w, x, y) = 1 (5.18) Proof. We will prove this lemma by contraposition. Assume that there exists a w ∈ W and there exists a y ∈ Y with P Y |W A (y|w) > 0 such that for all x ∈ X : g(w, x, y) = 0 (5.19) Let W A = W B = w. Then for all x ∈ X g(w, x, y) = 0, (5.20) which implies that Protocol 5.3 is not correct. So far we have shown that if Protocol 5.3 is correct, then for all w ∈ W, for all y ∈ Y : P Y |W A (y|w) > 0 there exists a x ∈ X such that g(w, x, y) = 1. Furthermore security for Alice implies that P Y |W A = P Y via Lemma 5.6 and thus we can conclude that: For all w ∈ W for all y ∈ Y with P Y (y) > 0, there exists a x ∈ X such that: g(w, x, y) = 1 (5.21) Note that on the one hand, the information Y Alice sends does not give any information about her password input, which is necessary to ensure her security. On the other hand, it also means that Alice does not commit to a password choice by sending the information Y to Bob. We now prove Theorem 5.5 using the above lemmas. 
The intuition behind the following proof is that if the identification Protocol 5.3 is correct and secure for Alice, a dishonest Bob cannot learn anything about the password of Alice from the output of the k 1 −OT X D or the additional information Y alone except for the output G. He also does not learn anything about the other inputs in the k 1 −OT, thus allowing a dishonest user Alice to launch an attack by choosing the inputs to the k 1 −OT, X 1 , . . . , X k , such that each one of them combined with Y will force Bob to accept for all of his password choices W B . Then Protocol 5.3 is clearly not secure for Bob since the dishonest user Alice does not need to choose one password W A but can force Bob to always accept. Proof. If Protocol 5.3 is correct and secure for Alice, a dishonest user Alice can use the following attack to force Bob to accept for all of his password choices. Alice inputs W A = 1, chooses a value for Y honestly and then picks the inputs to the k 1 −OT, X 1 , . . . , X k , such that for every password choice W B of the user Bob, he will obtain X i = X c(W B ,Y ) such that he will output G = 1. Attack Strategy Of Dishonest User Alice 1. Alice chooses Y = y (honestly) according to the distribution P Y |W A =1 , and sends it to Bob. 2. Alice uses the non-interactive k 1 −OT functionality, with inputs X 1 , . . . , X k that she chooses as follows. For every password w ∈ W: (a) Find a x such that: P X j |W A =w,Y =y (x) > 0, with j = c(w, y) (5.22) and G = g(w, x, y) = 1. (5.23) (b) Set input X j = x, with j = c(w, y). Note that step 2 is possible because correctness and security for Alice imply, via Lemma 5.8, that for all possible choices of Y and for all possible password choices w ∈ W there exists a x ∈ X such that G = 1. Furhtermore, Lemma 5.7 implies that the function c(W B , y) is injective for all y ∈ Y. Then once y is chosen, for every w ∈ W there exists only one j ∈ {1, . . . , k}, such that j = c(w, y). 
These two facts allow a dishonest Alice to choose the inputs of the k 1 −OT, such that for every password choice of Bob w ∈ W he retrieves a x ∈ X such that he outputs G = 1. In more detail, after receiving the k 1 −OT and Y = y, the honest user Bob chooses a password W B , inputs his choice D = c(W B , y) to the k 1 −OT and receives the message X D . As described above for every one of his password choices he receives a message X D = x such that g(w, x, y) = 1. He then outputs G = g(W B , X D , Y ) = 1 for any of his password choices, which implies that Protocol 5.3 is not secure for Bob. The Importance Of Interaction Theorem 5.5 shows that a non-interactive protocol using one instance of a k 1 −OT functionality cannot implement the identification functionality securely. We proved that security for Alice and correctness of the protocol allow the attack described above to succeed and we claim that this is true as long as Alice has knowledge of the function g(W B , X D , Y ). This knowledge allows her to choose the inputs to the k 1 −OT such that Bob will accept for all of his passwords, making the protocol insecure. It is then interesting to examine where (interactive) protocols that are known to be secure differ. In the examples of [DFSS07, Protocol Q-ID] and Protocol 5.2 the user Bob sends some information to Alice after receiving the k 1 −OT, in this case a strong 2-universal hash function. This interaction makes the protocol secure for Bob against the above attack, because the (possibly extended) function g(W B , X D , Y ) as defined above, is fixed after Alice has chosen her inputs to the k 1 −OT. Fixing the function g after Alice has commited to her inputs to the k 1 −OT denies her the possibility to choose them in such a way that G = 1 for all passwords. 
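To make the attack concrete, the following toy instantiation (the functions c, g, and H below are hypothetical choices for illustration, not taken from the thesis) shows a dishonest Alice choosing her k-1 OT inputs so that Bob accepts for every password:

```python
import hashlib

K = 8                                   # password space {1, ..., K}

def H(w, y):
    """Toy 'accepting value' for password w and public message y."""
    return hashlib.sha256(f"{w}|{y}".encode()).hexdigest()[:8]

def c(w, y):
    return w                            # an injective choice function (cf. Lemma 5.7)

def g(w, x, y):
    return int(x == H(w, y))            # Bob's acceptance predicate

# Dishonest Alice: for every password w, set the k-1 OT input at slot c(w, y)
# to a value x with g(w, x, y) = 1 (possible by Lemmas 5.7 and 5.8).
y = "public-message"
X = {c(w, y): H(w, y) for w in range(1, K + 1)}

# Honest Bob accepts no matter which password he holds:
assert all(g(wB, X[c(wB, y)], y) == 1 for wB in range(1, K + 1))
```

Because c(·, y) is injective, the K accepting values occupy K distinct OT slots, so a single run of the k-1 OT suffices for the attack.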
Thus we conclude that as long as the function that is used by Bob to determine the acceptance predicate is fixed before Alice chooses her inputs to the k 1 −OT, she can choose the inputs such that the above attack will work. Extending the non-interactive Protocol 5.3 by allowing multiple uses of the k 1 −OT functionality, or even k b −OT functionalities from Alice to Bob will still be insecure for Bob as the function g(·) is fixed before Alice chooses her inputs to the k 1 −OTs. For the same reason, an interaction from Bob to Alice before she chooses her inputs to the k 1 −OT would not stop the above attack from functioning. Secure Identification From k 1 − ROT With Interaction In Section 5.1 we introduced Protocol 5.2, a password-based identification protocol based on a k 1 −OT functionality. The k 1 −OT construction we showed in Chapter 4 is, however, inefficient as it requires k instances of an 2 1 −OT functionality, where k is the number of passwords. In this section, we show that one can instead use the k 1 − ROT functionality that only requires log k 2 1 −OTs. While weaker than the k 1 −OT or k 1 −ROT functionalities, it is sufficient to achieve security for the password-based identification protocol. We introduce the password-based identification protocol that relies on the security of a k 1 − ROT functionality. A sketch of the protocol is shown in Figure 5.2 Protocol 5.9. Password-based identification protocol with inputs W A , W B ∈ W, the passwords of user Alice and user Bob respectively, where W = {1, . . . , k}. Let H be a family of strong 2-universal hash functions such that h ∈ H and h : W → {0, 1} . Then the protocol between Alice and Bob is the following: 1. The users Alice and Bob employ a k 1 − ROT functionality, F k 1 − ROT that takes no input from Alice and input D ∈ W from Bob. 2. The user Alice receives outputs S 1 , S 2 , . . . , S k ∈ S and Bob receives output string S D ∈ S, where S = {0, 1} . 3. 
Bob chooses a function h ∈ H uniformly at random and sends h to Alice.
4. Alice sends z := S_{W_A} ⊕ h(W_A) to Bob. The user Bob accepts if and only if z = S_D ⊕ h(W_B).

We now introduce a theorem stating that the identification protocol proposed above is secure in the sense of Definition 2.16 if the k-1-ROT functionality used is secure according to Definition 2.13.

Theorem 5.10. If there exists a protocol that implements the k-1-ROT functionality ε-securely according to Definition 2.14, and the min-entropy of the password choices W is H_min(W) ≥ 1, then Protocol 5.9 is ε′-secure in the sense of Definition 2.16, where ε′ = ε + k²/2^ℓ.

In Chapter 4 we showed how to construct a secure k-1-ROT protocol relying on a secure 2-1-ROT. Moreover, in Chapter 3 we showed that a secure 2-1-ROT protocol exists in the isolated qubits model. Taking these results into account, the theorem above states that our password-based identification protocol is secure in the isolated qubits model.

Proof Of Theorem 5.10

Finally, in this section, we prove that if the k-1-ROT used in Protocol 5.9 is secure, then the identification protocol is secure.

Correctness

Proof. Honest users Alice and Bob hold inputs W_A and W_B. Following Protocol 5.9, Bob inputs D = W_B into the k-1-ROT functionality, which is correct, and thus Bob receives output

G_{k-1-ROT} = S_D = S_{W_B},   (5.24)

except with probability ε. Then S_D ⊕ h(W_B) = S_{W_B} ⊕ h(W_B). Since Alice is honest, z = S_{W_A} ⊕ h(W_A), and thus if W_A = W_B the two values coincide and Bob outputs G = 1. Similarly, if W_A ≠ W_B, Bob outputs G = 0 except with probability at most k²/2^ℓ, which upper-bounds the probability that h(W_A) = h(W_B) given W_A ≠ W_B. Thus Protocol 5.9 is correct except with probability ε′ = ε + k²/2^ℓ.

Security For Alice

Proof. For honest user Alice with input W_A and any dishonest user Bob, we can define a random variable W′ = D′, where D′ is Bob's input in the k-1-ROT used in the protocol.
Since Bob has received no information before he decides on his input to the k-1-ROT, D′ and thus W′ are independent of the input W_A that honest Alice holds. Furthermore, W_A is independent of the messages S_1, ..., S_k that Alice receives from the k-1-ROT, since she has chosen it before receiving any output from the k-1-ROT and the latter takes no input from Alice. From the above it is clear that, if W′ ≠ W_A, then, since W′ = D′,

P_{W_A W′ S_{W′} | W′ ≠ W_A} = P_{W_A ↔ W′ ↔ S_{W′} | W′ ≠ W_A}.   (5.25)

Furthermore, from the security of the k-1-ROT, there exists a random variable D′ such that for all I ≠ D′ the following holds:

P_{S_I D′ S_{D′} | D′ ≠ I} ≈_ε P_U · P_{D′ S_{D′} | D′ ≠ I}.   (5.26)

Then, for I = W_A and W_A ≠ W′, equation (5.26) becomes

P_{S_{W_A} W′ S_{W′} | W′ ≠ W_A} ≈_ε P_U · P_{W′ S_{W′} | W′ ≠ W_A}.   (5.27)

Consider the random variable Z = S_{W_A} ⊕ h(W_A) that describes the message Alice sends to Bob after receiving the hash function h. Taking into account that W_A is independent of W′, D′ and S_1, ..., S_k, including S_{W′} and S_{W_A}, conditioned on the event W′ ≠ W_A, and using the properties of modulo-2 addition together with equation (5.27), we have that

P_{Z W_A W′ S_{W′} | W′ ≠ W_A} ≈_ε P_U · P_{W_A W′ S_{W′} | W′ ≠ W_A}.   (5.28)

From equation (5.25) the above can be written as

P_{Z W_A W′ S_{W′} | W′ ≠ W_A} ≈_ε P_U · P_{W_A ↔ W′ ↔ S_{W′} | W′ ≠ W_A}   (5.29)
                              ≈_ε P_{W_A ↔ W′ ↔ Z S_{W′} | W′ ≠ W_A}   (5.30)
                              ≈_{ε′} P_{W_A ↔ W′ ↔ Z S_{W′} | W′ ≠ W_A},   (5.31)

with ε′ = ε + k²/2^ℓ. Then Protocol 5.9 is ε′-secure for Alice.

Security For Bob

Proof. Since the k-1-ROT functionality, implemented by an honest user Bob with input D = W_B and a dishonest user Alice, is secure for Bob, there exist random variables S′_1, ..., S′_k such that

P_{D V S′_1 ... S′_k} ≈_ε P_D · P_{V S′_1 ... S′_k}.   (5.32)

It is clear that V, S′_1, ..., S′_k are independent of W_B. We then define Z_i = S′_i ⊕ h(i) for i ∈ W. Consider the event E that all the Z_i are distinct. Since h is strong 2-universal and also independent of the S′_i, the Z_i are pairwise independent. From the union bound it then follows that the event E occurs except with probability k(k−1)/2 · 1/2^ℓ ≤ k²/2^{ℓ+1}. We define the random variable W′ such that the message sent by Alice is Z = S′_{W′} ⊕ h(W′). If Z ≠ Z_i for all i, then we set W′ = ⊥ and honest Bob always outputs G = 0 regardless of his password choice W_B. In this case a dishonest user Alice learns nothing about W_B.
Similarly, from the way that W′ is defined, Bob will output G = 1 if W′ = W_B. Note that, from the security of the k-1-ROT functionality, and since h is picked uniformly at random, W′ is independent of W_B. This further implies that Z_1, ..., Z_k, Z are also independent of W_B. Moreover, since the event E is determined by the Z_i, it also holds that Z_1, ..., Z_k, Z are independent of W_B conditioned on the event E, and even given W′ conditioned on E and W′ ≠ W_B. Now consider Z_1, ..., Z_k, Z, G: if W′ ≠ W_B and the event E occurs, then Bob outputs G = 0 with probability P[G = 0 | W′ ≠ W_B, E] = 1. Then Z_1, ..., Z_k, Z, G are independent of W_B given W′, conditioned on the event W′ ≠ W_B and E, that is,

P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B, E} ≈_ε P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B, E}.   (5.33)

We then define p = P[E | W′ ≠ W_B] and p̄ = P[¬E | W′ ≠ W_B]. Note that P[¬E] ≤ k²/2^{ℓ+1}. Furthermore, since H_min(W) ≥ 1, it is easy to see that P[W′ = W_B] ≤ 1/2. Then

p̄ = P[¬E | W′ ≠ W_B] = P[¬E] / (1 − P[W′ = W_B]) ≤ 2 P[¬E] ≤ k²/2^ℓ.   (5.34)

Note that p̄ upper-bounds the acceptance probability: P[G = 1 | W′ ≠ W_B] ≤ p̄ ≤ k²/2^ℓ ≤ ε′, where ε′ = ε + k²/2^ℓ, fulfilling the first condition for security. From basic probability theory and using equation (5.33),

P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B} = p · P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B, E} + p̄ · P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B, ¬E}   (5.35)
 ≈_ε p · P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B, E} + p̄ · P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B, ¬E}.   (5.36)

Finally, note that E is independent of W_B and W′, and thus also when conditioned on W′ ≠ W_B; then, from conditional independence,

P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B} = p · P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B, E} + p̄ · P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B, ¬E}.   (5.37)

The distance between two probability distributions is upper-bounded by one by definition. Then so is the distance between P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B, ¬E} and P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B, ¬E}. Thus the distance between P_{W_B W′ Z_1...Z_k Z G | W′ ≠ W_B} and P_{W_B ↔ W′ ↔ Z_1...Z_k Z G | W′ ≠ W_B} is upper-bounded by p̄ + ε ≤ k²/2^ℓ + ε = ε′. Then, since Alice's output V is defined by Z_1, ..., Z_k, Z, G,

P_{W_B W′ V | W′ ≠ W_B} ≈_{ε′} P_{W_B ↔ W′ ↔ V | W′ ≠ W_B}.   (5.38)

Thus Protocol 5.9 is secure for Bob.
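As a sanity check on the message flow of Protocol 5.9, the following toy simulation models the k-1-ROT as an ideal box and uses a standard affine family as an (approximately) strong 2-universal stand-in for H; all parameter values are arbitrary.

```python
import secrets

k, l = 16, 32
P = (1 << 61) - 1  # a Mersenne prime much larger than 2**l

def hash_fn(a, b, w):
    # Affine family h_{a,b}(w) = ((a*w + b) mod P) mod 2^l: an approximately
    # strong 2-universal stand-in for the family H of the protocol.
    return ((a * w + b) % P) % (1 << l)

def run(W_A, W_B):
    # Ideal k-1-ROT: Alice gets S_1..S_k, Bob (choice D = W_B) gets S_D.
    S = [None] + [secrets.randbits(l) for _ in range(k)]
    a, b = secrets.randbits(60), secrets.randbits(60)  # Bob picks h at random
    z = S[W_A] ^ hash_fn(a, b, W_A)                    # Alice's message
    return z == S[W_B] ^ hash_fn(a, b, W_B)            # Bob's output G

assert run(3, 3)                               # matching passwords: accept
assert not any(run(3, 5) for _ in range(100))  # mismatch: rejected (w.h.p.)
```

Matching passwords always accept; mismatched passwords accept only on a hash/pad collision, which occurs with probability about 2^{-ℓ} per run, mirroring the error terms in the proof above.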
Thus Protocol 5.9 is secure. We note that this protocol is more efficient than the previous identification protocols, because the construction of the k-1-ROT is more efficient than that of the k-1-OT protocol presented in Chapter 4. An interesting question that we encountered along the way is whether it is possible to construct non-interactive secure password-based identification protocols. In Section 5.2 we proved that constructing such a protocol based on oblivious transfer is impossible: the interaction from Bob to Alice must define the way he computes his output in order for the protocol to be secure against the attack we presented in Section 5.2. Moreover, we claim that this result is not restricted to the secure evaluation of the equality function, but also applies to more (or even all) secure-function-evaluation problems. For example, in the similar setting of Yao's millionaire problem, where Bob computes a different function, a dishonest user Alice still has the ability to predetermine the output for all of her inputs, as long as Bob's function is not determined after she has committed to her inputs.

Future Work

This thesis studies the construction of a secure string 2-1-OT protocol when the users are restricted to operations on single qubits and classical communication between them, and gives examples of possible applications to construct more complex secure two-party computation protocols, such as password-based identification. This leads, of course, to new questions that remain an open challenge for the future. As we mentioned in the last section, studying whether composability holds in the IQM is likely the most interesting problem that arises from this thesis. If it is shown to hold, then, since we have shown that a secure 2-1-OT construction is possible in the IQM, any secure two-party computation functionality can be implemented.
If, however, composability does not hold in the IQM, then constructing and analysing protocols in this model would prove an exciting challenge in itself. For example, the problem of analysing the security of two parallel 2-1-OTs, and of modelling the measurement strategies of an adversary who is allowed to partially measure qubits from the first and second 2-1-OT and adapt his strategy depending on the partial results of each, seems to be a first challenge for further research. As we have already described in Chapter 1, there are numerous results that prove the impossibility of oblivious transfer in a fully quantum world. Nevertheless, there exist different approaches that restrict the users in a realistic fashion and achieve oblivious transfer. One of the most interesting questions that arises from this train of thought is to find the minimal and most realistic restrictions or assumptions needed to achieve secure 2-1-OTs. For example, Liu has raised the question of allowing a number of entangling operations on the isolated qubits in [Liu15], which could be a possible approach to generalise the isolated qubits model. We have discussed in more detail two approaches to limiting an adversary: restricting him to single-qubit operations, or restricting his qubit-storage capacity. So far, protocols that are secure against one type of adversary are not secure against the other. The question then is: could we construct protocols that combine the power of these two models? For example, one could use a 2-1-OT that is secure in the IQM and one that is secure in the noisy-storage model to construct a single 2-1-OT that is secure in both models, by taking the modulo-2 addition of their outputs. The adversary would then need both a larger qubit-storage capacity, in order to break the security of the noisy-storage-model 2-1-OT, and the ability to perform entangling operations on the qubits he receives, in order to break the security of the isolated-qubits-model 2-1-OT.
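The modulo-2 combiner just described can be sketched as follows, with both OT instances modelled as ideal boxes (a hypothetical toy model, ℓ-bit strings); correctness of the combination is immediate, and intuitively the unchosen string stays hidden as long as either instance hides it.

```python
import secrets

l = 32  # toy string length

def rot():
    """Ideal 2-1-ROT: Alice gets (A0, A1); Bob, with choice d, would get A_d."""
    return secrets.randbits(l), secrets.randbits(l)

def combined_rot(d):
    a = rot()  # e.g. the isolated-qubits-model instance
    b = rot()  # e.g. the noisy-storage-model instance
    alice = (a[0] ^ b[0], a[1] ^ b[1])  # Alice's combined output strings
    bob = a[d] ^ b[d]                   # Bob's combined output string
    return alice, bob

for d in (0, 1):
    alice, bob = combined_rot(d)
    assert bob == alice[d]  # correctness of the XOR combiner
```

Because each combined string is the XOR of one string from each instance, learning it requires breaking both underlying protocols, which is the intuition behind the combined-model question raised above.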
As a further approach to combining these models, Liu has addressed the question of allowing a number of entangling operations on the isolated qubits in [Liu15]. This could be the first step towards defining a more general model and should be investigated further. One further possibility for future endeavours, arising from our impossibility proof in Section 5.2.2, is to examine whether our result indeed applies to more non-interactive two-party protocols. However, there exist results stating that quantum one-time programs can be constructed from one-time memories [? ]. We conjecture that there is a lower bound on the number of one-time memories needed to construct a secure one-time program for password-based identification, so that both results hold. Unfortunately, we were not able to study this in more detail in this thesis and we leave it as an open question. Furthermore, we leave the task of extending the impossibility proof to the error case, as discussed in Section 5.2.1, as an open problem for the future. Our intuition is that the attack described in the proof should still function, since interaction from Bob to Alice seems to be necessary to achieve security for Bob. Nevertheless, formalising this intuition is an interesting extension of the impossibility proof discussed in this thesis.

Appendix A

Probability Theory

A.1 Probability Theory

A.1.1 Random Variables

The probability distribution of a random variable X that takes values x ∈ X is a function P_X : X → [0, 1], defined as

P_X(x) := P[X = x], ∀x ∈ X.   (A.1)

Note that every probability distribution satisfies

Σ_{x∈X} P_X(x) = 1.   (A.2)

The joint probability distribution of two random variables X and Y that take values x ∈ X and y ∈ Y respectively is defined as

P_{XY}(x, y) := P[X = x, Y = y],   (A.3)

and indicates the probability that X takes the value x and Y takes the value y simultaneously. Let P_{XY} be the joint distribution of random variables X and Y. Then the distribution of X can be obtained by marginalising over Y.
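The marginalisation just described, together with the conditional distribution defined in this appendix, can be checked numerically; the joint distribution below is made up for illustration.

```python
# Toy joint distribution P_XY over {x1, x2} x {y1, y2}.
P_XY = {('x1', 'y1'): 0.1, ('x1', 'y2'): 0.3,
        ('x2', 'y1'): 0.2, ('x2', 'y2'): 0.4}
assert abs(sum(P_XY.values()) - 1.0) < 1e-12   # normalisation, cf. (A.2)

# Marginals, cf. (A.4): P_X(x) = sum over y of P_XY(x, y)
P_X = {x: sum(p for (xx, _), p in P_XY.items() if xx == x) for x in ('x1', 'x2')}
P_Y = {y: sum(p for (_, yy), p in P_XY.items() if yy == y) for y in ('y1', 'y2')}

# Conditional, cf. (A.6): P_{X|Y}(x|y) = P_XY(x, y) / P_Y(y)
P_X_given_y1 = {x: P_XY[(x, 'y1')] / P_Y['y1'] for x in ('x1', 'x2')}

assert abs(P_X['x1'] - 0.4) < 1e-12
assert abs(sum(P_X_given_y1.values()) - 1.0) < 1e-12  # conditionals renormalise
```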
The distribution P_X is then called a marginal distribution:

P_X(x) = Σ_{y∈Y} P_{XY}(x, y), ∀x ∈ X.   (A.4)

Let P_{XY} be the joint distribution of random variables X and Y. If X and Y are independent random variables, the joint distribution can be written as

P_{XY}(x, y) = P_X(x) · P_Y(y), ∀x ∈ X, ∀y ∈ Y.   (A.5)

Furthermore, the conditional probability distribution that a random variable X takes the value x ∈ X, given the event that the random variable Y takes the value y ∈ Y, is defined as

P_{X|Y}(x|y) := P_{XY}(x, y) / P_Y(y).   (A.6)

Moreover, we introduce the symbol P_{X↔Y↔Z}, as used in [DFSS07] and [FS09], to denote that the distribution of a random variable X is independent of a random variable Z given a random variable Y:

P_{X|YZ} = P_{X|Y}.   (A.7)

Then we write

P_{XYZ} = P_{X↔Y↔Z}.   (A.8)

This notation is extended to P_{XYZ|E} = P_{X↔Y↔Z|E} to denote that the distribution of a random variable X is independent of a random variable Z given a random variable Y, conditioned on an event E:

P_{X|YZE} = P_{X|YE}.   (A.9)

Boole's Inequality

The union bound, or Boole's inequality, states that the probability that at least one of a collection of events occurs is at most the sum of the probabilities of the individual events. Formally, for a set of events A_1, A_2, ..., the following inequality holds:

P[∪_i A_i] ≤ Σ_i P(A_i).   (A.10)

Finally, the expected value of a random variable X that takes values x ∈ X is defined as

E(X) = Σ_{x∈X} x · P_X(x).   (A.11)

A.1.2 Uniform Distribution

If a random variable X is uniformly distributed, all of its values are equiprobable.

Definition A.1. A random variable X that takes values x ∈ X is uniformly distributed if its distribution P_X is of the following form:

P_X(x) = 1/|X|, ∀x ∈ X.   (A.12)

Then P_X is a uniform distribution over X.

A.1.3 ε-Net

Intuitively, an ε-net is a subset of some normed space such that for every point of the original space there is some point in the ε-net that is ε-close to it. We now introduce the formal definition of an ε-net.

Definition A.2.
Let E′ be a subset of some normed space E, with norm ‖·‖, and let ε > 0. Then E′ is an ε-net for E if E′ ⊂ E and for all x ∈ E there exists some y ∈ E′ such that

‖x − y‖ ≤ ε.   (A.13)

The min-entropy of a random variable X that takes values x ∈ X is H_∞(X) := min_{x∈X} (− log P_X(x)). It is the smallest of the Rényi entropies of order α, and thus the most conservative estimate of the uncertainty in a random variable. This is the reason why it is widely used in cryptography. Similarly, one can define the conditional min-entropy

H_∞(X|Y) := min_{x∈X, y∈Y} (− log P_{X|Y}(x|y)).   (B.5)

B.3 Smoothed Min-Entropy

The smoothed min-entropy can be understood as the entropy of a distribution P_X that is smoothed by cutting a probability mass ε from the largest probabilities. Informally, one can think of it as the maximum min-entropy available in any distribution that is ε-close to the distribution P_X. The smoothed conditional min-entropy is defined analogously; the latter is an important measure in cryptography, as it defines the maximum amount of randomness that is available from X given Y and S, except with probability ε.

Appendix C

Linear Algebra

C.1 Norms

For any matrix A ∈ C^{m×n} and vector x ∈ C^n we define the following norms:

Operator norm: ‖A‖ = max_{‖x‖=1} ‖Ax‖.   (C.1)

Trace norm: ‖A‖_tr = tr(√(A A†)).   (C.2)

Statistical Distance

Let P and Q be two probability distributions of a random variable X that takes values x ∈ X. Then the ℓ_1 distance between them is defined as

‖P − Q‖ = Σ_x |P(x) − Q(x)|.   (C.3)

This is commonly called the statistical distance and is used as a distance measure between the two probability distributions.

Non-degenerate linear functions are functions that depend non-trivially on their inputs and are defined in [DFSS06, Definition 4.2] as follows:

Definition 2.1. A function β : {0,1}^ℓ × {0,1}^ℓ → {0,1} is called a non-degenerate linear function if it is of the form

β : (s_0, s_1) ↦ ⟨u_0, s_0⟩ ⊕ ⟨u_1, s_1⟩   (2.4)

for non-zero u_0 and u_1.

Definition 2.4. Let H be a family of functions h : {1, ..., N} → {1, ..., M}, and let H be a function chosen uniformly at random from H. We call H a family of t-wise independent functions if, for all subsets S ⊂ {1, ..., N} of size |S| ≤ t, where t ≥ 1 is an integer, the random variables {H(x) | x ∈ S} are independent and uniformly distributed in {1, ..., M}.

Functionality 2.8. Upon receiving no input from Alice and the choice bit D ∈ {0,1} from Bob, F_{2-1-ROT} outputs messages A_0, A_1 ∈ A, where A = {0,1}^ℓ, to Alice, and the message A_D to Bob.

Functionality 2.15. Upon receiving strings W_A ∈ W from user Alice, where W := {1, ..., k}, and W_B ∈ W from server Bob, F_ID outputs the bit G indicating whether W_A = W_B.

(a) Let γ_i ∈ {0,1} be the outcome of an independent and fair coin toss.
(b) If γ_i = 0, then prepare the i-th block of log q qubits from the codeword C(s) in the computational basis: |C(s)_i⟩.
(c) If γ_i = 1, then prepare the i-th block of log q qubits from the codeword C(t) in the Hadamard basis: H^{⊗ log q} |C(t)_i⟩.
4. Bob measures every qubit in the basis corresponding to his input D ∈ {0,1} in the following way:

where ε ≤ 2^{−ε_0 k}.   (3.4)

Now assume Alice and Bob use Protocol 3.1, with r-wise independent hash functions F and G. This choice of r is motivated by the union bound; see equation (3.52). Here γ is some universal constant, and the choice of ℓ is motivated by equations (3.99), (3.100), (3.101) and (3.102). Then Protocol 3.1 is a secure 2-1-ROT protocol in the isolated qubits model, in the sense of Definition 3.2. More precisely, for all k ≥ k_0, the following statements hold, except with probability e^{−Ω(k^{2θ})} over the choice of F and G:

1. Alice receives as output from Protocol 3.1 two messages A_0, A_1 ∈ {0,1}^ℓ and uses m qubits, where k ≤ m ≤ k^θ.
2. Correctness: For honest users Alice, with no input, and Bob, with input D, Alice receives A_0 and A_1 and Bob receives A_D, using only LOCC operations.

Lemma 3.6.
Fix any measurement outcome M such that H^ε_∞(S, T | Z = M) ≥ αk. Then there exists an event E, occurring with probability P(E | Z = M) ≥ 1 − ε, such that the following statement holds for all non-degenerate linear functions β : {0,1}^ℓ × {0,1}^ℓ → {0,1}. Let A_0 = F(S) and A_1 = G(T). Then for all λ > 0,

P_{F,G;S,T}(|R_β(M)| ≥ λ) ≤ 8 e^{1/(3r)} √(πr) (8 · 2^{−αk} r / (e² λ²))^{r/4}.   (3.11)

Proof. From H^ε_∞(S, T | Z = M) ≥ αk, there exists a smoothing event E, occurring with probability P(E | Z = M) ≥ 1 − ε, such that

∀ s, t ∈ {0,1}^ℓ: P(E, S = s, T = t | Z = M) ≤ 2^{−αk}.

With this event, R_β(M) takes the form

R_β(M) = Σ_{s,t} (−1)^{⟨u_0, F(s)⟩ + ⟨u_1, G(t)⟩} P(E, S = s, T = t | Z = M).   (3.14)

Firstly, we define a function H : {0,1}^ℓ × {0,1}^ℓ → {0,1}. By definition, F is an r-wise independent hash function if, for all subsets S ⊂ {0,1}^ℓ of size |S| ≤ r, the random variables {F(x) | x ∈ S} are independent and uniformly distributed. Then the random variables {⟨u_0, F(x)⟩ | x ∈ S} are also independent, where ⟨u_0, F(x)⟩ = Σ_i u_{0,i} · F(x)_i. Furthermore, from the fact that all non-degenerate linear functions are 2-balanced (Lemma 2.3), and from the definition of 2-balanced functions (Definition 2.2), we see that, since u_0 is non-zero, the random variables {⟨u_0, F(x)⟩ | x ∈ S} are uniformly distributed. (The same holds for {⟨u_1, G(x)⟩ | x ∈ S}.) With a suitable choice of λ this yields

P_{F,G;S,T}(|R_β(M)| ≥ λ) ≤ 8 e^{1/(3r)} √(πr) (e²/2)^{−r/4}.   (3.38)

Finally, using the union bound, we show that with high probability, for all M̃ ∈ W and all non-degenerate linear functions β, R_β(M̃) is small:

P_{F,G;S,T}(∃ M̃ ∈ W that is δ-non-negligible and ∃ β such that |R_β(M̃)| ≥ λ)   (3.39)

and we conclude

P_{F,G;S,T}(∃ M̃ ∈ W that is δ-non-negligible and ∃ β such that |R_β(M̃)| ≥ λ) ≤ e^{−Ω(k^{2θ})}.
(3.52)

Equation (3.52) implies that, with high probability over F and G,

∀ M̃ ∈ W: (M̃ is δ-non-negligible) ⇒ |R_β(M̃)| ≤ λ.   (3.53)

Thus Protocol 3.1 is secure in the case where the adversary observes any measurement outcome in the set W.

Lemma 3.9 ([Liu15, Lemma 3.6]). Suppose that M, M̃ ∈ (C^{2×2})^{⊗m}, with 0 ⪯ M ⪯ I and 0 ⪯ M̃ ⪯ I. Suppose that M is 2δ-non-negligible, where 0 < δ ≤ 1/2, and tr(M) ≥ 1. Suppose that M̃ satisfies ‖M − M̃‖ ≤ µ, where µ ≤ (2/3) δ · 2^{−m}. Then M̃ is δ-non-negligible.

The next lemma shows that, if the quantity R_β(M̃) is defined in terms of an event Ẽ, we can define the quantity R_β(M) by choosing E such that R_β(M) ≈ R_β(M̃).

Lemma 3.10. Suppose that M, M̃ ∈ (C^{2×2})^{⊗m}, with 0 ⪯ M ⪯ I and 0 ⪯ M̃ ⪯ I. Suppose that M is 2δ-non-negligible, where 0 < δ ≤ 1/2, and ‖M‖ = 1. Suppose that M̃ satisfies ‖M − M̃‖ ≤ µ, where µ ≤ 1/2, and that M̃ is δ-non-negligible. Suppose there exists an event Ẽ, occurring with probability P(Ẽ | Z̃ = M̃), and let R_β(M̃) be defined in terms of Ẽ, as shown in equation (3.10). Then there exists an event E, occurring with probability P(E | Z = M) = P(Ẽ | Z̃ = M̃), such that, if R_β(M) is defined in terms of E, the difference |R_β(M) − R_β(M̃)| is correspondingly small.

Proof. By assumption, there is an event Ẽ, defined by the probabilities P(Ẽ | Z̃ = M̃, S = s, T = t). Let ρ_{st} be the quantum state used to encode the messages (s, t), i.e., the state of the one-time memory conditioned on S = s and T = t. We start by writing R_β(M̃) in a more explicit form. First consider R_β(M̃), and note that A_0 = F(S), A_1 = G(T), and that s, t are chosen uniformly at random.
We can write R_β(M̃) in the form

R_β(M̃) = (1 / P(Z̃ = M̃)) Σ_{s,t ∈ {0,1}^ℓ} (−1)^{β(F(s),G(t))} P(Ẽ, S = s, T = t, Z̃ = M̃)
        = (1 / P(Z̃ = M̃)) Σ_{s,t ∈ {0,1}^ℓ} (−1)^{β(F(s),G(t))} P(Ẽ | Z̃ = M̃, S = s, T = t) tr(M̃ ρ_{st}) 4^{−ℓ}
        = (1 / P(Z̃ = M̃)) tr(M̃ ν),   (3.55)

where we define the matrix ν ∈ (C^{2×2})^{⊗m} as

ν = 4^{−ℓ} Σ_{s,t} (−1)^{β(F(s),G(t))} P(Ẽ | Z̃ = M̃, S = s, T = t) ρ_{st}.   (3.56)

The trace norm of ν is ‖ν‖_tr = tr(√(ν ν†)). Since density operators are self-adjoint, ρ†_{st} = ρ_{st}, and hence

ν† = 4^{−ℓ} Σ_{s,t} (−1)^{β(F(s),G(t))} P(Ẽ | Z̃ = M̃, S = s, T = t) ρ†_{st} = ν.   (3.59)

By the triangle inequality,

‖ν‖_tr ≤ 4^{−ℓ} Σ_{s,t} |(−1)^{β(F(s),G(t))} P(Ẽ | Z̃ = M̃, S = s, T = t)| · ‖ρ_{st}‖_tr.   (3.61)

Since each ρ_{st} is a density operator, tr(ρ_{st}) = 1, and because

P(Ẽ | Z̃ = M̃, S = s, T = t) ≤ 1,   (3.62)

every term of the sum is at most 1, so that

‖ν‖_tr ≤ 4^{−ℓ} · 4^{ℓ} = 1.   (3.65)

Thus ‖ν‖_tr ≤ 1. In addition, we can rewrite P(Z̃ = M̃) as

P(Z̃ = M̃) = tr(M̃ ξ), where ξ := 4^{−ℓ} Σ_{s,t} ρ_{st}.   (3.66)

Taking into account equations (3.55) and (3.66), we can rewrite R_β(M̃) as

R_β(M̃) = tr(M̃ ν) / tr(M̃ ξ).   (3.67)

For a fixed measurement outcome M and for all s, t ∈ {0,1}^ℓ we define

P(E | Z = M, S = s, T = t) := P(Ẽ | Z̃ = M̃, S = s, T = t).   (3.69)

Note that this implies P(E | Z = M) = P(Ẽ | Z̃ = M̃). Using this, we can rewrite the quantity R_β(M) in a similar way as R_β(M̃):
Taking into account the bound shown in equation (3.92) and the fact that P (¬E|Z = M ) ≤ ε, equation (3.93) becomes equations (3.99), (3.100), (3.101) and (3.102), for sufficiently large k P A 1−D A D D Z − P U × P A D D Z ≤ 2 −Ω(k) , (3.104) which completes the proof of Theorem 3.3 section, we introduce a protocol that implements the 2 1 −OT functionality making use of a 2 1 −ROT functionality. A sketch of Protocol 4.1 can be seen inFigure 4.1 Protocol 4.1. A 2 1 −OT protocol between user Alice with inputs A 0 , A 1 ∈ {0, 1} and user Bob with input D ∈ {0, 1}.1. Alice and Bob use a 2 1 −ROT functionality with no input and input D respectively. 2. Alice receives outputs S 0 , S 1 ∈ {0, 1} and Bob receives S D ∈ {0, 1} . Figure 4 . 1 : 41Sketch of the 2 1 −OT Protocol 4.1 using a 2 1 −ROT functionality. Protocol 4.3. A k 1 −OT protocol between user Alice with inputs X 1 , . . . , X k ∈ {0, 1} and user Bob with input D ∈ {1, . . . , k}.1. Alice chooses strings B 1 , . . . , B k ∈ {0, 1} uniformly at random. 2. Alice inputs A 1,0 = B 1 and A 1,1 = X 1 in the first 2 1 −OT. Bob inputs his choice D 1 = δ 1,D and receives string A 1,D 1 . 3. For i = 2, . . . , k: (a) Alice inputs strings A i,0 = B i ⊕ B i−1 and A i,1 = X i ⊕ B i−1 in the i th 2 1 −OT. (b) Bob inputs his i th choice D i = δ i,D and receives the string A i,D i 4. Bob receives output: sketch of the protocol can be seen in Figure 4.2. Protocol 4 . 4 . 44Sender-randomised k 1 − ROT protocol between user Alice with no input and user Bob with input D = {D 1 |D 2 | . . . |D log k } ∈ {1, . . . , k}. 1. For i = 1, . . . , log k: (a) Alice with no input receives strings A i,0 , A i,1 ∈ {0, 1} as outputs of the i th 2 1 −ROT, (b) Bob inputs his i th choice D i and receives the string A i,D i . Figure 4 . 2 : 42Sketch of the k 1 − ROT Protocol 4.4 using log k 2 1 −OT functionalities. 
There are a number of secure password-based identification protocols in the literature; we present [DFSS07, Protocol Q-ID], which achieves secure identification in the bounded quantum storage model. Let c : W → {+, ×}^n be the encoding function, where + is the computational and × is the Hadamard basis.

Protocol 5.1. Interactive password-based identification with inputs W_A and W_B, the passwords of user Alice and user Bob respectively. Let F and H be families of strong 2-universal hash functions [DFSS07]:
1. The user Alice picks x ∈_R {0,1}^n and θ ∈_R {+, ×}^n; she then sends the state |x⟩_θ to Bob.
2. Bob measures |x⟩_θ in the basis D = c(W_B). Let X_D be the outcome.

Figure 5.1: Sketch of the password-based identification protocol (Protocol 5.2) that makes use of a k-1-OT functionality.

Definition 5.4. The non-interactive identification Protocol 5.3 is secure if the following conditions are fulfilled:
Correctness: For honest user Alice and honest user Bob, Bob outputs G = 1 if W_A = W_B.
Security for Bob: For any dishonest user Alice and any distribution of W_B, there exists a random variable W′ independent of W_B such that, if W′ ≠ W_B, then P[G = 1 | W_B ≠ W′] = 0.

Figure 5.2: Sketch of the password-based identification protocol (Protocol 5.9) that makes use of a k-1-ROT functionality.

For a detailed introduction we refer to [KL07].
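The ≈_ε relations used in the security definitions of this chapter compare distributions in statistical distance (Appendix C). A minimal helper, with made-up toy distributions:

```python
def stat_dist(P, Q):
    # L1 distance between two distributions given as value -> probability dicts.
    support = set(P) | set(Q)
    return sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

uniform = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
biased  = {0: 0.40, 1: 0.20, 2: 0.20, 3: 0.20}
assert abs(stat_dist(uniform, biased) - 0.30) < 1e-12
```

Two distributions P and Q satisfy P ≈_ε Q exactly when this quantity is at most ε (using the unnormalised ℓ_1 convention of Appendix C).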
2 Integer factorisation is a widely used computational hardness assumption in cryptographic protocols, for example in RSA [RSA78]. So far there exists no algorithm that can factor a large integer into a product of smaller numbers (usually primes) on a classical computer in polynomial time.
3 For a very enjoyable brief account of these first steps, refer to [Bra06].
The millionaire problem, or Yao's millionaire problem, is a classic secure multi-party computation problem in which two millionaires want to determine who is richer without disclosing any information about their wealth to each other.
In cryptography it is common to make calls to secure functionalities inside a protocol. For example, one could use a series of n single-bit commitment functionalities to commit to an n-bit string; one then argues that, since every single bit is committed securely, the same holds for the concatenation of these bits. Composability of protocols allows one to use a modular design to construct and prove the security of complex protocols.

There exists a constant k_0 ≥ 1 such that the following holds. Suppose we have a "leaky" 2-1-ROT protocol in the isolated qubits model, such as Protocol 2.17, indexed by a security parameter k ≥ k_0. More precisely, suppose that for all k ≥ k_0:
1. Alice receives as output from Protocol 2.17 two messages s, t ∈ {0,1}^ℓ, where ℓ ≥ k, and uses m qubits, where k ≤ m ≤ k^θ.

2 This protocol can be implemented using k − 1 instances of the 2-1-OT if, in the last 2-1-OT, Alice inputs A_{k,0} = B_{k−1} ⊕ X_{k−1} and A_{k,1} = B_{k−1} ⊕ X_k.

In this final chapter, we summarise our main results and conclusions. Then we open the discussion of these results in relation to current knowledge. Finally, we pose some of the questions that arise from this discussion and propose possible future steps.

Conclusions & Discussion

In Chapter 3 we presented a secure string 2-1-ROT in the isolated qubits model, using the "leaky" OTM presented in [Liu14b].
We note that our proof follows a path similar to the one used in [Liu15], but makes use of the notion of non-degenerate linear functions coupled with the results of [DFSS06]. The resulting proof is simpler than the original and allows us to construct a secure string 2-1-ROT in the IQM. This comes at a cost: by using Theorem 3.5, security is achieved with an error 2^{2ℓ+1} times larger than the one presented in [Liu15]. Fortunately, as shown in equation (3.96), this factor does not influence the security result. However, Theorem 3.3 implies that the security parameter k has to be of the order of ℓ in order for the protocol to achieve security.

In Chapter 4 we made a first attempt to study more complex two-party functionalities in this model. We propose secure 2-1-OT, k-1-OT and k-1-ROT protocols that rely on the security and composability of a 2-1-ROT functionality. In order to guarantee composability of the 2-1-ROT protocol, we restricted the users to measuring at the end of each protocol. These protocols can then be implemented in the isolated qubits model using a 2-1-ROT protocol that is secure in that model, such as Protocol 3.1 presented in Chapter 3. The question that then arises is whether the aforementioned assumption is realistic. Since the composability of protocols has not been studied in the isolated qubits model, this question remains an open problem for further study, and we briefly discuss it in the next section. Following that, in Chapter 5 we address an interesting problem of secure two-party computation: the evaluation of the equality function. We present a protocol for secure password-based identification that uses a k-1-ROT functionality, motivated by the protocols proposed in [DFSS07].
However, the results of Chapter 4 and Chapter 5 are not limited to this specific cryptographic model: they can be implemented in any model in which there exists a protocol that implements the 2-1-ROT functionality securely and in a composable way.

Appendix B

Measures of Uncertainty

B.1 Rényi Entropy

Definition B.1. For a random variable X that takes values x ∈ X, and for α ∈ R with α ≥ 0 and α ≠ 1, the Rényi entropy of order α is defined as

H_α(X) = (1/(1 − α)) log( Σ_{x∈X} P_X(x)^α ).

We note that the Rényi entropy is a generalised entropy: for α → 1 we obtain the Shannon entropy, and for α = 0 we obtain the max-entropy.

Bibliography

C. H. Bennett and G. Brassard. Quantum cryptography: Public key distribution and coin tossing. In IEEE International Conference on Computers, Systems and Signal Processing, pages 175-179, 1984.

C. H. Bennett, G. Brassard, C. Crépeau, and M. H. Skubiszewska. Practical Quantum Oblivious Transfer. Lect. Notes Comput. Sci., 576: 351-366, 1992. DOI: 10.1007/3-540-46766-1_29.

D. J. Bernstein, J. Buchmann, and E. Dahmen. Post-Quantum Cryptography, volume 5299. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. DOI: 10.1007/978-3-540-88702-7.

C. H. Bennett, G. Brassard, and A. K. Ekert. Quantum Cryptography, 1992. DOI: 10.1038/scientificamerican1092-50.

G. Brassard and C. Crépeau. Quantum Bit Commitment and Coin Tossing Protocols.
In Advances in Cryptology -CRYPT0 1990, LNCS, pages 49-61. Springer, 1991. DOI: 10.1007/3-540-38424-3 4. A quantum bit commitment scheme provably unbreakable by both parties. G Brassard, C Crépeau, R Jozsa, D Langlois, 10.1109/SFCS.1993.36685134th Annual Foundations of Computer Science -FOCS 1993. IEEEG. Brassard, C. Crépeau, R. Jozsa, and D. Langlois. A quantum bit commitment scheme provably unbreakable by both parties. In 34th Annual Foundations of Com- puter Science -FOCS 1993, pages 362-371. IEEE, 1993. DOI: 10.1109/SFCS.1993.366851. Information theoretic reductions among disclosure problems. G Brassard, C Crepeau, J.-M Robert, 10.1109/SFCS.1986.26Symp. Found. Comput. Sci. 27th AnnuG. Brassard, C. Crepeau, and J.-M. Robert. Information theoretic reductions among disclosure problems. 27th Annu. Symp. Found. Comput. Sci. (sfcs 1986), 1986. DOI: 10.1109/SFCS.1986.26. Complete Insecurity of Quantum Protocols for Classical Two-Party Computation. H Buhrman, M Christandl, C Schaffner, 10.1103/PhysRevLett.109.160501Phys. Rev. Lett. 10916160501H. Buhrman, M. Christandl, and C. Schaffner. Complete Insecurity of Quantum Protocols for Classical Two-Party Computation. Phys. Rev. Lett., 109(16): 160501, 2012. DOI: 10.1103/PhysRevLett.109.160501. Brief History of Quantum Cryptography: A Personal Perspective. G Brassard, arXiv:quant-ph/060407214G. Brassard. Brief History of Quantum Cryptography: A Personal Perspective. page 14, 2006. arXiv: quant-ph/0604072. Impossibility of secure two-party classical computation. R Colbeck, 10.1103/physreva.76.062308Physical Review A. 766R. Colbeck. Impossibility of secure two-party classical computation. Physical Review A, 76(6), 2007. DOI: 10.1103/physreva.76.062308. Equivalence Between Two Flavours of Oblivious Transfers. C Crépeau, 10.1007/3-540-48184-2_30Lect. Notes Comput. Sci. 293C. Crépeau. Equivalence Between Two Flavours of Oblivious Transfers. In Lect. Notes Comput. Sci., volume 293, pages 350-354. 1988. 
DOI: 10.1007/3-540-48184-2 30. Quantum Oblivious Transfer. C Crépeau, 10.1080/0950034941455229141C. Crépeau. Quantum Oblivious Transfer. 41(12): 2445-2454, 1994. DOI: 10.1080/09500349414552291. Oblivious Transfer and Linear Functions. I B Damgård, S Fehr, L Salvail, C Schaffner, 10.1007/11818175_26In Adv. Cryptol. -CRYPTO. 4117Lect. Notes Comput. Sci.I. B. Damgård, S. Fehr, L. Salvail, and C. Schaffner. Oblivious Transfer and Linear Functions. In Adv. Cryptol. -CRYPTO 2006, Lect. Notes Comput. Sci., volume 4117, pages 427-444. 2006. DOI: 10.1007/11818175 26. Secure Identification and QKD in the Bounded-Quantum-Storage Model. I B Damgård, S Fehr, L Salvail, C Schaffner, 10.1007/978-3-540-74143-5_19Advances in Cryptology -CRYPTO 2007. SpringerI. B. Damgård, S. Fehr, L. Salvail, and C. Schaffner. Secure Identification and QKD in the Bounded-Quantum-Storage Model. In Advances in Cryptology -CRYPTO 2007, LNCS, pages 342-359. Springer, 2007. DOI: 10.1007/978-3-540-74143-5 19. Cryptography in the Bounded-Quantum-Storage Model. I B Damgård, S Fehr, L Salvail, C Schaffner, 10.1137/060651343SIAM Journal on Computing. 376I. B. Damgård, S. Fehr, L. Salvail, and C. Schaffner. Cryptography in the Bounded- Quantum-Storage Model. SIAM Journal on Computing, 37(6): 1865-1890, 2008. DOI: 10.1137/060651343. A randomized protocol for signing contracts. S Even, O Goldreich, A Lempel, 10.1145/3812.381828S. Even, O. Goldreich, and A. Lempel. A randomized protocol for signing contracts. 28(6): 637-647, 1985. DOI: 10.1145/3812.3818. Quantum cryptography based on bell's theorem. A K Ekert, 10.1103/PhysRevLett.67.661Physical Review Letters. 67A. K. Ekert. Quantum cryptography based on bell's theorem. Physical Review Letters, 67: 661-663, 1991. DOI: 10.1103/PhysRevLett.67.661. Composing Quantum Protocols in a Classical Environment. S Fehr, C Schaffner, 10.1007/978-3-642-00457-5_21Theory Cryptogr., volume. Berlin Heidelberg; Berlin, HeidelbergSpringer5444S. Fehr and C. Schaffner. 
Composing Quantum Protocols in a Classical Environment. In Theory Cryptogr., volume 5444 LNCS, pages 350-367. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. DOI: 10.1007/978-3-642-00457-5 21. The Codebreakers: The story of secret writing. D Kahn, ScribnerNew YorkD. Kahn. The Codebreakers: The story of secret writing. Scribner, New York, 1996. Online: https://www.goodreads.com/book/show/29608.The_Codebreakers. Founding Crytpography on Oblivious Transfer. J Kilian, 10.1145/62212.62215Proc. 20th Annu. ACM Symp. Theory Comput. 20th Annu. ACM Symp. Theory ComputJ. Kilian. Founding Crytpography on Oblivious Transfer. Proc. 20th Annu. ACM Symp. Theory Comput., pages 20-31, 1988. DOI: 10.1145/62212.62215. Introduction to Modern Cryptography. J Katz, Y Lindell, 10.1080/10658989509342477J. Katz and Y. Lindell. Introduction to Modern Cryptography. 2007. DOI: 10.1080/10658989509342477. Unconditional security from noisy quantum storage. R König, S Wehner, J Wullschleger, 10.1109/TIT.2011.2177772IEEE Transactions on Information Theory. 583R. König, S. Wehner, and J. Wullschleger. Unconditional security from noisy quan- tum storage. IEEE Transactions on Information Theory, 58(3): 1962-1984, 2012. DOI: 10.1109/TIT.2011.2177772. Is Quantum Bit Commitment Really Possible?. H.-K Lo, H Chau, 10.1103/PhysRevLett.78.3410Physical Review Letters. 7817H.-K. Lo and H. Chau. Is Quantum Bit Commitment Really Possible? Physical Review Letters, 78(17): 3410-3413, 1997. DOI: 10.1103/PhysRevLett.78.3410. Building one-time memories from isolated qubits. Y.-K Liu, 10.1145/2554797.25548235th Conference on Innovations in Theoretical Computer Science -ITCS 2014. New York, New York, USAACMY.-k. Liu. Building one-time memories from isolated qubits. In 5th Conference on Innovations in Theoretical Computer Science -ITCS 2014, pages 269-286, New York, New York, USA, 2014. ACM. DOI: 10.1145/2554797.2554823. Single-Shot Security for One-Time Memories in the Isolated Qubits Model. 
Y.-K Liu, 10.1007/978-3-662-44381-1_2Advances in Cryptology -CRYPTO 2014. Springer8617Y.-K. Liu. Single-Shot Security for One-Time Memories in the Isolated Qubits Model. In Advances in Cryptology -CRYPTO 2014, volume 8617 PART 2 of LNCS, pages 19-36. Springer, 2014. DOI: 10.1007/978-3-662-44381-1 2. Privacy Amplification in the Isolated Qubits Model. Y.-K Liu, 10.1007/978-3-662-46803-6_26Advances in Cryptology -EUROCRYPT 2015. E. Oswald and M. FischlinSpringer9057Y.-K. Liu. Privacy Amplification in the Isolated Qubits Model. In E. Oswald and M. Fischlin, editors, Advances in Cryptology -EUROCRYPT 2015, volume 9057 of LNCS, pages 785-814. Springer, 2015. DOI: 10.1007/978-3-662-46803-6 26. Insecurity of quantum secure computations. H.-K Lo, 10.1103/PhysRevA.56.1154Physical Review A. 56H.-K. Lo. Insecurity of quantum secure computations. Physical Review A, 56: 1154- 1162, 1997. DOI: 10.1103/PhysRevA.56.1154. D Mayers, arXiv:quant-ph/9603015The Trouble with Quantum Bit Commitment. 12arXiv preprintD. Mayers. The Trouble with Quantum Bit Commitment. arXiv preprint quant- ph/9603015, page 12, 1996. arXiv: quant-ph/9603015. Quantum Computation and Quantum Information. M Nielsen, I Chuang, Cambridge University PressM. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cam- bridge University Press, 2000. How To Exchange Secrets with Oblivious Transfer. M O Rabin, Aiken Comput. Lab, Harvard Univ.Tech. Rep. TR-81M. O. Rabin. How To Exchange Secrets with Oblivious Transfer. Tech. Rep. TR-81, Aiken Comput. Lab, Harvard Univ., pages 1-5, 1981. Online: http://dm.ing.unibs.it/giuzzi/corsi/Support/ papers-cryptography/187.pdf. A method for obtaining digital signatures and public-key cryptosystems. R L Rivest, A Shamir, L Adleman, 10.1145/359340.359342Communications of the ACM. 212R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2): 120-126, 1978. DOI: 10.1145/359340.359342. 
Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model. C Schaffner, 10.1103/PhysRevA.82.032308Physical Review A. 82332308C. Schaffner. Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model. Physical Review A, 82(3): 032308, 2010. DOI: 10.1103/PhysRevA.82.032308. Communication Theory of Secrecy Systems. C E Shannon, C. E. Shannon. Communication Theory of Secrecy Systems. 1949. Algorithms for quantum computation: discrete logarithms and factoring. P W Shor, 10.1109/SFCS.1994.36570035th Annual Symposium on Foundations of Computer Science -FOCS 1994. IEEEP. W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In 35th Annual Symposium on Foundations of Computer Science -FOCS 1994, pages 124-134. IEEE, 1994. DOI: 10.1109/SFCS.1994.365700. S Wiesner, 10.1145/1008908.1008920Conjugate coding. SIGACT News. 15Originally written c. 1970 but unpublishedS. Wiesner. Conjugate coding. SIGACT News, 15(1): 78-88, 1983. DOI: 10.1145/1008908.1008920. Originally written c. 1970 but unpublished. Coding theorem and strong converse for quantum channels. A Winter, 10.1109/18.796385IEEE Trans. Inf. Theory. 457A. Winter. Coding theorem and strong converse for quantum channels. IEEE Trans. Inf. Theory, 45(7): 2481-2485, 1999. DOI: 10.1109/18.796385. Cryptography from Noisy Storage. S Wehner, C Schaffner, B M , 10.1103/PhysRevLett.100.220502Physical Review Letters. 10022220502S. Wehner, C. Schaffner, and B. M. Terhal. Cryptography from Noisy Storage. Physical Review Letters, 100(22): 220502, 2008. DOI: 10.1103/PhysRevLett.100.220502. Protocols for secure computations. A C Yao, 10.1109/SFCS.1982.3823rd Annual Symposium on Foundations of Computer Science -FOCS 1982. IEEEA. C. Yao. Protocols for secure computations. In 23rd Annual Symposium on Foun- dations of Computer Science -FOCS 1982, pages 160-164. IEEE, 1982. DOI: 10.1109/SFCS.1982.38.
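Definition B.1 in Appendix B can be checked numerically. The sketch below (plain Python, no dependencies; the example distribution is arbitrary and not taken from the text) evaluates the Rényi entropy for several orders and verifies the Shannon (α → 1) and max-entropy (α = 0) special cases:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha of a probability distribution p (base-2 logs).

    H_alpha(X) = 1/(1-alpha) * log2(sum_x p(x)^alpha) for alpha >= 0, alpha != 1;
    the limit alpha -> 1 recovers the Shannon entropy, alpha = 0 the max-entropy.
    """
    support = [px for px in p if px > 0]
    if alpha == 1:  # Shannon entropy as the limiting case
        return -sum(px * math.log2(px) for px in support)
    if alpha == 0:  # max-entropy: log of the support size
        return math.log2(len(support))
    return math.log2(sum(px ** alpha for px in support)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]            # example distribution
h_shannon = renyi_entropy(p, 1)          # 1.75 bits
h_max = renyi_entropy(p, 0)              # log2(4) = 2 bits
h_collision = renyi_entropy(p, 2)        # smaller than the Shannon entropy
print(h_shannon, h_max, h_collision)
```

Note that H_α is non-increasing in α, so the max-entropy (α = 0) upper-bounds all the other orders.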
Efficient Interfacial Solar Steam Generator with Controlled Macromorphology Derived from Flour via "Dough Figurine" Technology

Zhengtong Li, Chengbing Wang* ([email protected]), Zeyu Li, Lin Deng, Jinbu Su, Jing Shi, Meng An* ([email protected])

School of Materials Science and Engineering, Shaanxi Key Laboratory of Green Preparation and Functionalization for Inorganic Material, Shaanxi University of Science and Technology, Xi'an 710021, Shaanxi, China
College of Mechanical and Electrical Engineering, Shaanxi University of Science and Technology, Xi'an 710021, China
Solar-driven interfacial steam generation (SISG) is one of the most promising technologies for seawater desalination and wastewater purification. A shape- and size-controlled, low-cost, eco-friendly solar-absorber material is urgently desired for the practical application of SISG. Herein, we propose a facile, sustainable and scalable approach to produce tailored SISG devices with controlled macromorphology derived from flour via the "dough figurine" technology, which originated in the Chinese Han Dynasty. Three kinds of self-floating flour-based absorbers, i.e. near-cylindrical (integrated), near-spherical (loose packing) and powdery (dense packing) absorbers used as SISGs, were discussed. We found that the macromorphology significantly influences the water transport and interfacial thermal management of the SISG; the integrated absorber has an overwhelming advantage, possessing a high evaporation efficiency of 71.9% under normal solar illumination. The proposed "dough figurine" technology breaks the limitations of the inherent geometry of reported biomass-based SISG devices, providing important guidance for SISG use in remote and impoverished areas.
DOI: 10.1002/ente.201900406. arXiv: 1903.02339.
* To whom correspondence should be addressed.
Keywords: flour; interfacial solar steam generation; stacking forms; dough figurine
Introduction

The discovery of solar-driven interfacial steam generation (SISG) with ultrahigh evaporation performance has sparked a wave of vigor and excitement due to its critical role in ubiquitous related applications, such as seawater desalination, 1,2 power generation 3-5 and steam sterilization. 6 Indeed, SISG overcomes many inherent obstacles of conventional solar-driven steam generation, i.e. the limited efficiency of the solar-energy receiver, poor thermal management, high production costs and complex preparation processes. 7-9 Recently, various artificial materials for SISG equipment have been designed based on different novel mechanisms, such as plasmonic metal particle absorbers, 10-12 carbon-based material absorbers 13-15 and heat confinement layers. 16,17
For example, in such plasmonic absorbers, plasmonic noble-metal particles are deposited on alumina templates (AAO) or wood blocks, where the plasmonic effect of the particles and the light traps induced by the porous substrate (e.g. AAO, wood blocks) are utilized to enhance the absorption of solar energy. 10,11,18,19 Such porous substrates can transport bulk water to the photothermal conversion interface by capillary force and provide channels for steam to escape; meanwhile, their low thermal conductivity effectively reduces heat-conduction losses. Analogously, many porous substrates combined with traditional synthetic carbon materials have also been developed into efficient solar distillation equipment, such as graphene, 20 reduced graphene oxide, 21,22 graphene alkyne, 23 carbon nanotubes 24 and carbon powder. 25 However, previous SISG materials with high evaporation efficiency often entail complex fabrication processes and high cost and are difficult to recycle, which limits their application domain. Therefore, the challenge remains to develop a cost-effective, robust, environmentally friendly SISG material that reaches high photothermal conversion efficiency under ambient solar illumination. Recently, biomass materials with naturally ubiquitous micro- and macrostructures, such as mushroom, 26 lotus, 27 cotton, 28 wood, 29 daikon 30 and lotus leaf, 31 have been developed into efficient solar steam generation devices through a series of simple processing steps, i.e. cutting, freeze-drying and high-temperature carbonization (Figure S1). Such simple treatments of biomass materials yield advantages unmatched by synthetic materials (Tables S1 and S2). 32 For example, a mushroom with an umbrella-shaped structure (i.e. a positive solid-cone geometry) can reach a high evaporation efficiency of 78% under 1-sun irradiation. 26 Natural wood has also been constructed into a high-performance SISG system.
The evaporation efficiency of the carbonized poplar-wood-based SISG device can reach 86.7% at 10 kW m-2, which is a fascinating discovery for the application of biomass materials in SISG systems. 33 Owing to the different natural mesoporous structures in natural wood, wood has also been used to investigate the effect of water transport and heat transfer on evaporation efficiency. Carbonized lotus seedpods achieve a high evaporation efficiency (86.5% at 1 sun) mainly through their unique macroscopic cone shape and hierarchical meso- and macropore structures. 27 Moldy bread, carbonized by a barbecue-like method, achieves 71.4% evaporation efficiency at a high humidity level (70%). 34 In our recent study, an arched-structure bamboo charcoal was developed as a high-performance absorber, possessing a larger area for absorbing sunlight and a smaller area for water absorption. 1 These studies of biomass-material-based SISG provide a marvelous opportunity for lower cost and high evaporation efficiency. However, large-scale fabrication is quite difficult due to the inherent geometric structures of biomass materials. Therefore, a shape- and size-controlled, low-cost, eco-friendly biomass-based absorber is urgently desired for the practical application of SISG.

Flour, prepared by mechanical grinding of wheat, is the main raw material for cooking steamed bread and noodles. Wheat production all over the world is very large, and plenty of expired flour (>10 million tons in China every year) is generated due to excessive production. 35-38 Therefore, it is desirable to explore novel value-added applications. After two basic processes, i.e. shaping and dehydration, flour can be made into several typical macrostructures (Figure S2) with a large surface area beneficial for solar-light absorption, as well as inner porous structures that achieve better water transport.
After carbonization, the solar absorption capacity of the flour-based absorber with near-columnar geometry is greatly improved to 92.4% across the entire solar spectrum without affecting its mechanical properties. During preparation, only commercial deionized water, without any expensive or toxic chemicals, is added as a binder/templating agent. Inspired by the Chinese traditional craft called "Dough Figurine (DF)", which dates back to the Han Dynasty, flour mixed with water forms a dough, and through several simple operations such as pinching, rubbing and pushing, various macroscopic geometries can be prepared. More importantly, the macroscopic morphology of the flour-based materials is preserved without significant change after dehydration and carbonization. These advantages indicate that flour is a promising candidate material for solar absorbers and for the search for high-performance evaporation structures. In this study, we utilized flour to create solar-driven interfacial steam conversion equipment. Herein, inspired by the craft of DF, we prepared flour blocks with different shapes (near-columnar, cup-shaped, umbrella-shaped and near-spherical), which were freeze-dried and carbonized at a low temperature (300 °C). The resulting porous carbon foam is easily prepared and satisfies all the essential requirements of an interfacial steam generation system. In detail, it floats freely on the water surface due to its low mass density, and it remains self-floating for a long time (more than 720 hours in seawater). Meanwhile, it has a light absorbance as high as 92.4% from 200 nm to 2500 nm.
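A figure such as the quoted 92.4% is a solar-spectrum-weighted average of the spectral absorptance. A minimal sketch of that bookkeeping is below; the wavelength grid, reflectance values and spectral weights are made-up placeholders, not the measured data, and an opaque sample (transmittance ≈ 0, so A(λ) = 1 − R(λ)) is assumed:

```python
def solar_weighted_absorptance(wavelengths, reflectance, solar_irradiance):
    """Weight spectral absorptance A(lam) = 1 - R(lam) (opaque sample) by the
    solar spectrum, using trapezoidal integration over the wavelength grid."""
    absorptance = [1.0 - r for r in reflectance]

    def trapz(y, x):  # simple trapezoidal rule, no external dependencies
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
                   for i in range(len(x) - 1))

    weighted = trapz([a * s for a, s in zip(absorptance, solar_irradiance)],
                     wavelengths)
    return weighted / trapz(solar_irradiance, wavelengths)

# Illustrative (not measured) values over the 200-2500 nm band
wl   = [200, 500, 1000, 1500, 2000, 2500]     # nm
refl = [0.06, 0.07, 0.08, 0.09, 0.10, 0.12]   # fraction reflected
sun  = [0.1, 1.4, 0.7, 0.3, 0.1, 0.05]        # relative AM1.5-like weights
print(solar_weighted_absorptance(wl, refl, sun))
```

In practice the reflectance comes from an integrating-sphere measurement and the weights from a tabulated reference solar spectrum.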
We firstly discussed in detail the impact of different stacking forms of the same materials on optical absorption, water transport, and thermal management of SISG, we found that the macromorphology significantly influences the evaporation efficiency of SISG, the integrated absorber has an overwhelming advantage, which possesses a high evaporation efficiency 71.9% at normal solar illumination. The proposed "dough figurine" technology breaks the limitations of the inherent geometry of reported biomass based SISG, which provides an important guidance for SISG use in remote and impoverished areas. Result and discussion Similar with the common morphology of solar absorber in SISG system, the carbonized flour-based absorber with near-cylindrical (NC) geometry is fabricated and used to study the evaporation performance of SISG system. 10,14,15,23,39 and after (FFNC) freeze-drying process, where large amounts of nanopores form by adding and removing of water. It is obviously found that tight starch granules piled together on the surface of FFNC. The FFNC is carbonized under an air environment at 300 °C, which is more available compared with that of previously reported studies (Table S1). The easy access to prepare FFNC sample (low carbonization temperature and no gas protection during carbonization) is more conducive to large-scale fabrication of SISG equipment. The water transport of C-FFNC can be enhanced due to its hydrophilicity originating from its special components and carbonization condition. Flour, 17% protein and 25% carbohydrate as the main components, it is easily bonded by stirring after adding deionized water. Also, both carbohydrates and proteins have good hydrophilicity, so the formed dough also has good hydrophilicity which is conducive to the transfer of water from bulk water to air-liquid interface. The degree of carbonization is measured by Xray diffractometry (XRD). 
As shown in Figure 2e, two broad diffraction peaks are observed at 2θ of 21° and 40°, suggesting that the main structure of the C-FFNC sample is amorphous carbon. The stronger peak at 21° is a typical graphene reflection, indicating that a majority graphene-like structure is present in the C-FFNC sample. This is corroborated by the Raman spectra (Figure 2f): the D band, associated with disordered amorphous carbon, and the G band, associated with the sp2 vibration of graphite crystals, are observed at 1361 cm-1 and 1560 cm-1. To analyze the data accurately, we performed peak fitting of the measured Raman spectra; the ratio ID/IG = 2.4 confirms that the C-FFNC sample has a high degree of amorphous carbon. 27,42,43 In addition, we further explored the surface chemical composition of C-FFNC by X-ray photoelectron spectroscopy (XPS) and Fourier-transform infrared spectroscopy (FTIR). The XPS spectra of the sample before (FFNC) and after (C-FFNC) carbonization are shown in Figure 2g; the main elements of both are carbon, nitrogen and oxygen. The relative contents of C 1s, N 1s and O 1s in FFNC are 55.39%, 2.49% and 42.15%, respectively. After carbonization, the relative content of N 1s in C-FFNC increases to 2.67%, corresponding to C-N/N-H. As the FTIR spectra (Figure 2h) show, the C-N/N-H functional groups can still be detected at 1135 cm-1 and 3435 cm-1. The excellent hydrophilicity of C-FFNC is further confirmed by the contact-angle test: the contact angle of its surface is about 34.6° (inset of Figure 2g), attributable to its surface chemistry and rough texture. As mentioned above, C-FFNC floating on the water surface can transport a sufficient amount of water from the bulk to the upper surface of the absorber due to its rich pore structure and hydrophilicity.
32 Besides, excellent optical absorption and effective heat management are critical to enhancing evaporation efficiency, so it is meaningful to further examine the optical capabilities and thermal management. As shown in Figure 2i, before carbonization the FFNC sample has a very high reflection in the visible band; after carbonization, the sample turns black and the reflection is greatly reduced. Based on the formula, … and 71.9%, respectively, which is much higher than that of pure water. Such a high evaporation efficiency of C-FFNC is mainly explained by the evaporation-rate balance: the net evaporation rate $\dot{m}$ can be expressed as
$$\dot{m}\,h = A\alpha q - A\varepsilon\sigma\left(T_1^4 - T_2^4\right) - A h_c\left(T_1 - T_2\right) - A q_{\mathrm{cond}} \qquad (1)$$
where h is the latent heat, A is the surface area of the absorber facing the sun, α is the solar absorption, q is the solar flux, ε is the emittance of the absorbing surface, σ is the Stefan-Boltzmann constant, T1 and T2 are the temperatures of the absorber surface and the environment, h_c is the convective heat-transfer coefficient, and q_cond is the heat flux conducted to the bulk water. Since the three samples are of the same material, their wettability, porosity and optical absorption capacity (Figure S7) are very close. Their difference in evaporation efficiency is mainly caused by their different heat resistance and water transport. The speed of water transport (Figure 5b, details in Figure S8) is linearly negatively correlated with the evaporation efficiency, which further indicates that thermal resistance is the main factor distinguishing the different placement modes of the samples.

Experiment Section

The preparation process of flour dough with specific shape: The water content and mixing time play a critical role in ensuring that the samples have similar microstructures. The flour (purchased from a local supermarket) and deionized water are mixed in a ratio of 2:1; the mixing process takes about 15 minutes. Following the craft of DF, samples with different geometric shapes (near-columnar, umbrella-shaped and near-spherical) were prepared by hand (details in Figure S2).
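The interfacial energy balance of Eq. (1) in the Results section can be illustrated numerically. The sketch below solves the balance per unit absorber area (so A cancels) for the net evaporation mass flux; all parameter values, including the symbol names h_conv and q_cond for the convective and conductive loss terms, are illustrative assumptions rather than measured data:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def evaporation_rate(q_solar, alpha, eps, T_surf, T_amb, h_conv, q_cond,
                     h_lv=2.26e6):
    """Net evaporation mass flux (kg m^-2 s^-1) from the balance
    m_dot*h_lv = alpha*q - eps*sigma*(T1^4 - T2^4) - h*(T1 - T2) - q_cond,
    written per unit absorber area."""
    q_rad = eps * SIGMA * (T_surf**4 - T_amb**4)   # radiative loss
    q_conv = h_conv * (T_surf - T_amb)             # convective loss
    q_net = alpha * q_solar - q_rad - q_conv - q_cond
    return q_net / h_lv

# One-sun illumination, surface ~40 C, ambient ~24 C (assumed values)
m_dot = evaporation_rate(q_solar=1000.0, alpha=0.924, eps=0.9,
                         T_surf=313.0, T_amb=297.0, h_conv=5.0, q_cond=50.0)
print(m_dot * 3600, "kg m^-2 h^-1")  # convert to the usual per-hour units
```

With these assumed inputs the flux comes out on the order of 1 kg m-2 h-1, which is the magnitude typically reported for one-sun interfacial evaporators.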
The preparation process of the absorber material: Here, we take the flour dough with near-columnar shape (FNC) as an example to introduce the carbonization process. First, the FNC sample is frozen at 0 °C in a refrigerator for 10 hours. Then, the frozen FNC sample is dehydrated in a freeze dryer for 24 hours. Finally, the freeze-dried sample (FFNC) is carbonized in a muffle furnace. The carbonization is
Pulverized mercury instrument raw data mapping analysis of FFNC and C-FFNC. Figure S5. TG-DSC curve analysis of FFC samples. Thermogravimetric (TG)-differential scanning calorimetry (DSC) analysis of freezedried samples can explained the substance change during the carbonization process. As shown in the figure, in the 0-60 minutes, the temperature is raised at 5 °C min-1, then it is kept at 300 °C for 60 minutes, and then naturally cooled to room temperature. The TG curve showed two processes of mass reduction, which reduced the total mass by 7.6% and 58.36% respectively. There are two corresponding peaks in the DSC curve. The first peak of DSC curve is an endothermic process with an onset temperature of 41.1°C and a peak temperature of 83.6°C. It can therefore be inferred that the first mass reduction of the TG curve is the evaporation of water inside. The other peak of DSC curve is an exothermic process with an onset temperature of 275.1°C and a peak temperature of 301.1°C. The main starch and protein were incompletely carbonized during this process. 38 Figure S6 The specific heat and thermal conductivity of samples before and after carbonization. Figure S7. The reflection curve of C-FFP, C-FFNS, C-FFNC sample. 39 To further confirm that the optical properties of the different samples are the same, here we test the reflectance of the C-FFNC, C-FFP, and C-FFNS with four different angles, respectively. In the band (400-760 nm), there is no obvious fluctuation for all samples. However, through detailed calculations, the absorption rate of C-FFNC is slighter lower than C-FFNS and C-FFP (about 1%) due to C-FFNS and C-FFP with higher surface area. \ Figure S8.The wetting process of C-FFNC, C-FFNS, C-FFP sample at 0 s, 60 s, 120 s, 180 s. We preheated all samples in an oven (set temperature 110 °C) for 1 min, then placed them inside the chamber and observed a temperature drop of ~4 °C in three minutes. 
The above heating step was repeated, the sample was placed in water, and the change of the surface temperature within three minutes is shown in Figure S8. In the absence of sunlight, effective water transport and natural evaporation mainly drive the cooling of the sample. The surface temperatures of the three samples are close to the water temperature at three minutes, which again proves that water transport is nearly the same for the three samples. Figure 1 shows the schematic diagram of making the flour-based solar absorber. As shown in Figure 1(a) and 1(b), flour is a biomass material obtained by mechanical grinding of wheat. The short growth cycle (one or two harvests a year) and large yield (128.8 × 10^6 t in China in 2016)40 of wheat show an overwhelming advantage over other biomass materials, suggesting that flour can be produced in factories at large scale and low cost. In addition, plenty of flour cannot be stored for a long time owing to environmental factors such as moisture and pests. Herein, we utilized flour, an excellent candidate biomass carbon source, to create SISG equipment. Propelled by the traditional Chinese "dough figurine" (DF) craft, flour-based absorbers with different geometries can be prepared (the details of preparation are given in the experimental section). Figure 1(c) and 1(d) show the macroscopic model of the flour block sample with NC geometry before (FNC) and after freeze-drying (FFNC). Figure 1. Schematic of the preparation process of the near-cylindrical solar absorber based on a flour block. (a) Natural wheat. (b) Commercially available flour. (c) Preparation of a flour block with near-cylindrical (FNC) structure using dough-figurine technology. (d) Freeze-dried (FFNC) sample. (e) The carbonization process of the freeze-dried (C-FFNC) sample. (f) Schematic picture of the solar-driven interfacial steam generation (SISG) device. Inset: the localized evaporation of C-FFNC can be enhanced.
(g) Infrared photo of the SISG device under one-sun irradiation at 1 min. The carbonized freeze-dried flour block with near-cylindrical structure, abbreviated C-FFNC, is shown in Figure 1e. During fabrication, C-FFNC retains a large mechanical strength (Figure S3) owing to the absence of activators (such as Na2CO3, K2CO3 and KOH). Without the formation of large amounts of other gases (CO2 or H2), the inner walls of the tunnels remain thick35, 36 (Figure 2c). The C-FFNC sample as an SISG device can form localized evaporation (Figure 1f and inset picture): water is pumped spontaneously through its micro-scale channels under the synergistic assistance of capillary force and the absorptive nature of the hydrophilic starch, and vapor for fresh water is generated under natural solar illumination.27 As shown in Figure 1g, the infrared image of the SISG device under one-sun radiation after one minute, the localized hot region can be clearly observed. The microstructure plays a critical role in the water transport and solar absorption of the SISG system.41 Figure 2a shows the outer surface of the FFNC sample, which consists of starch granules of different sizes. After carbonization, multi-scale holes are formed on the inner and outer surfaces of the C-FFNC samples, as shown in Figures 2b and 2c. In addition, the porosity of the samples before and after carbonization is evaluated using a mercury intrusion instrument, and the average pore diameters are analyzed from the measured data (Figure S4). The pores of the FFNC samples are mainly micropores and mesopores (≤50 nm), induced by dehydration during the freeze-drying process. During carbonization, the pore structures of the C-FFNC samples collapse and reorganize owing to the burning of organics (Figure S5). Therefore, meso- and macropores (≥50 nm) become dominant.
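The comparison of the samples' reflection curves (Figure S7) rests on a solar-weighted absorbance, i.e. the reflectance-weighted integral over the solar spectrum. A minimal sketch of that calculation, using a flat placeholder spectrum and reflectance rather than the measured AM 1.5 and spectrophotometer data (all numbers here are illustrative assumptions):

```python
# Solar-weighted absorbance A = integral(E(l) * (1 - R(l))) / integral(E(l)),
# evaluated on a coarse illustrative wavelength grid by trapezoidal integration.
def solar_weighted_absorbance(wavelengths_nm, irradiance, reflectance):
    """Spectrum-weighted absorbance of an absorber from its reflectance curve."""
    def trapz(ys, xs):
        # Trapezoidal rule over an arbitrary grid.
        return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
                   for i in range(len(xs) - 1))
    absorbed = [e * (1.0 - r) for e, r in zip(irradiance, reflectance)]
    return trapz(absorbed, wavelengths_nm) / trapz(irradiance, wavelengths_nm)

# Placeholder data: flat irradiance, flat 4.5 % reflectance in the visible band,
# chosen to mimic the ~95.5 % visible absorbance reported for C-FFNC.
wl = [400, 500, 600, 700, 760]   # nm
E = [1.0] * len(wl)              # arbitrary units
R = [0.045] * len(wl)
A = solar_weighted_absorbance(wl, E, R)
print(round(A, 3))  # -> 0.955
```

With the measured AM 1.5 irradiance and the Cary 5000 reflectance curve substituted in, the same weighting yields the 92.4% (0.2-2.5 μm) and 95.5% (visible) figures quoted in the text.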
The porosity decreases from 80.3% to 59.8% while the corresponding average pore diameter increases from 5.4 nm to 173.0 nm. The macro-pore surface of C-FFNC (shown in Figure 1c) provides channels for the generated steam to escape, preventing the steam from condensing in its original position, which would reduce the evaporation efficiency. The internal structure of C-FFNC (Figure 1d) shows channels with different pore diameters, which can transport water from the bulk to the hot region by capillary force. Further, these internal tunnel structures provide a rich heat-exchange area that effectively reduces heat loss to the bulk water. The thermal conductivities of FFNC and C-FFNC are 0.12 W/m-K and 0.09 W/m-K (Figure S6), respectively. Figure 2. (a) SEM image of the surface of the freeze-dried sample. (b) SEM image of the surface of the C-FFNC sample. (c) SEM image of the inner structure of the C-FFNC sample. (d) Porosity data statistics for the FFNC and C-FFNC samples. (e) XRD diffraction pattern of C-FFNC. (f) Raman spectra of C-FFNC fitted by Gaussian functions. (g) XPS spectra of FFNC and C-FFNC; inset: contact-angle test of C-FFNC. (h) FTIR spectra of FFNC and C-FFNC. (i) Solar irradiance (AM 1.5 G) (blue, left axis) and reflection (right axis) of FFNC and C-FFNC. The absorbance is calculated as A = ∫E(λ)(1 − R(λ))dλ / ∫E(λ)dλ, where E(λ) is the solar radiant energy density at different wavelengths of the AM 1.5 atmospheric-mass spectrum and R(λ) is the reflectance of the solar absorber at different wavelengths, usually taken from 0.2 to 2.5 μm. Therefore, the absorbance of the C-FFNC sample is calculated to be about 92.4%, and it even reaches 95.5% in the visible range (400-760 nm). As mentioned earlier, the stacking of SISG devices is also an important topic that has to be considered (i.e., under the same conditions, whether an integrated material works as well as multiple materials stacked in a certain way). So samples of different shapes were carbonized (i.e.
near-cylindrical (integrated), near-spherical (loose packing) and powdery (dense packing)) to examine the effect of placement on evaporation efficiency. Integrated samples, loosely packed samples and densely packed samples are three common and important ways of stacking (Figure 4a). The C-FFNC is a sample of the monolithic material. We then prepared near-spherical (NS) samples with a diameter of ~1 cm, and obtained loosely packed samples after freeze-drying and carbonization (called C-FFNS). Further, we pulverized some C-FFNS samples and passed them through a standard sieve to obtain a powder particle size of ~450 μm as densely packed samples (called C-FFP). Unlike stacks of nanoscale samples, these three kinds of macroscopic sample stackings are similar in terms of optical absorption, hydrophilicity and mechanical strength.44, 45 We conclude that the important factor affecting the evaporation efficiency of these three different stacking modes is their different thermal management. The mass changes of C-FFNC, C-FFNS and C-FFP differ considerably (Figure 3a). To systematically evaluate the evaporation performance of C-FFNC, the mass change of water under normal solar illumination (1 sun) was recorded (Figure 3a). The evaporation rate and efficiency are 1.0 kg m⁻² h⁻¹ and ~71.9%, respectively. In the energy balance of Eq. (1), σ is the Stefan-Boltzmann constant (i.e., 5.67 × 10⁻⁸ W m⁻² K⁻⁴),46 T is the surface temperature of the absorber, Tenvironment is the temperature of the adjacent environment, h is the convection heat-transfer coefficient, and q is the heat flux to the underlying water, including conduction and radiation. The second and third terms on the right side of Eq. (1) denote the heat loss to the ambient by heat radiation and convection. According to Eq.
(1), the evaporation rate can be enhanced by increasing heat-energy absorption and decreasing heat loss (heat convection, heat radiation and heat conduction). For the C-FFNC absorber, the solar light trap formed by the surface pores, together with its carbonized, black nature, yields a high optical absorption (95.5% in the visible range), which provides a sufficient solar energy source. The thermal conductivity of the flour-based carbon foam is 0.09 W/m-K, one order of magnitude lower than that of pure water (0.60 W/m-K at room temperature).47, 48 The low thermal conductivity of C-FFNC enables the captured solar energy to be confined to the air-liquid surface, which reduces the heat-conduction loss to the bulk water and promotes the formation of a localized hot area, as shown in Figure 1(g). Moreover, the rich pore structure (porosity 59.79%) and hydrophilicity of C-FFNC facilitate water transport from the bulk water to the air-liquid interface. In addition, previous studies suggested that the enthalpy of confined water is reduced compared with that of pure water; in the C-FFNC absorber, the rich micro-scale pores favor a reduced vaporization enthalpy, further enhancing the evaporation rate. Admittedly, precisely controlling the size and shape of an SISG device, especially in a harsh environment, is challenging and subject to large uncertainties. Interestingly, our proposed flour-based absorber can be prepared in different geometries (shown in Figure S2), which promises to expand the applications of SISG devices and to make existing applications more advantageous and cost-effective. Moreover, this advantage makes flour an optimal candidate material for exploring the effect of absorber placement on evaporation rate. To systematically analyze the evaporation performance of the flour-based absorber with different stacking ways, the mass change of water under normal solar illumination (1 sun) was recorded (Figure 3a).
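The radiative and convective loss terms of the Eq. (1)-style surface energy balance can be estimated numerically. A minimal sketch, assuming an emissivity of 0.95 and a convection coefficient of 10 W m⁻² K⁻¹ (both illustrative values, not reported in the text):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heat_losses(T_surface_C, T_env_C, emissivity=0.95, h_conv=10.0):
    """Radiative and convective loss terms of the surface energy balance (W m^-2).

    emissivity and h_conv are assumed illustrative parameters.
    """
    Ts = T_surface_C + 273.15   # convert to kelvin
    Te = T_env_C + 273.15
    q_rad = emissivity * SIGMA * (Ts**4 - Te**4)   # radiation to ambient
    q_conv = h_conv * (Ts - Te)                    # convection to ambient
    return q_rad, q_conv

# C-FFNC steady-state surface (49.9 C) against the 24 C room temperature
q_rad, q_conv = heat_losses(49.9, 24.0)
```

Under these assumptions the two loss terms together take a substantial fraction of the nominal 1000 W m⁻² input, which is why suppressing conduction into the bulk water matters so much for the overall efficiency.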
The evaporation efficiency is calculated as η = ṁ hLV / (Copt qi), where ṁ represents the evaporation rate (kg m⁻² h⁻¹) after subtraction of the evaporation rate without light illumination, hLV is the total enthalpy of the liquid-vapor phase change (sensible heat + phase-change enthalpy), Copt represents the optical concentration and qi represents the nominal solar illumination (1 kW m⁻²). Figure 3. (a) Mass change of C-FFNC, C-FFNS and C-FFP, as well as water, in the dark field and under 1-sun illumination. (b) Evaporation rate (left axis) and evaporation efficiency (right axis) of C-FFNC, C-FFNS and C-FFP. The calculated energy efficiencies of evaporation for C-FFNC, C-FFNS and C-FFP are presented in Figure 3b. It is clearly observed that the energy efficiency of C-FFNC can reach 71.9%, much higher than those of C-FFNS (~53.1%) and C-FFP (~44.7%). In other words, the C-FFNC absorber is the most effective of the three kinds of samples for enhancing the energy efficiency. We will explore the potential mechanism for this trend through three main factors: heat localization, heat loss and water transport. Figure 4. (a) Optical image and infrared image of the C-FFNC / C-FFNS / C-FFP sample surfaces after 1 min under one-sun irradiation. The radii of the three samples are about 5 cm, 1 cm and 425 μm for C-FFNC, C-FFNS and C-FFP, respectively. (b) Statistical distribution of the surface temperatures extracted from the infrared images in (a). The heat-transfer behavior of the flour-based absorber with different sample diameters is systematically evaluated (Figure 4). The surface temperatures of C-FFNC, C-FFNS and C-FFP under 1-sun illumination are carefully measured via an IR camera. Figure 4(a) plots the surface-temperature images from the IR camera, from which we calculated the temperature distribution of the sample surfaces. Obviously, the average steady-state temperatures of C-FFNC, C-FFNS and C-FFP under 1-sun illumination reach 49.9 °C, 41.2 °C and 44.4 °C, respectively (Figure 4b).
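The ~71.9% efficiency reported for C-FFNC is consistent with the definition η = ṁ hLV / (Copt qi). A minimal sketch, assuming a total liquid-vapor enthalpy of ~2.59 MJ/kg (an assumed value of hLV, not stated in the text):

```python
def evaporation_efficiency(m_dot_kg_m2_h, h_lv_J_kg, c_opt=1.0, q_i_W_m2=1000.0):
    """eta = m_dot * h_LV / (C_opt * q_i).

    m_dot is given per hour and converted to kg m^-2 s^-1 so the
    numerator and denominator are both in W m^-2.
    """
    m_dot_SI = m_dot_kg_m2_h / 3600.0
    return m_dot_SI * h_lv_J_kg / (c_opt * q_i_W_m2)

# 1.0 kg m^-2 h^-1 under 1 sun, with an assumed h_LV of 2.59 MJ/kg
eta = evaporation_efficiency(1.0, 2.59e6)
print(f"{eta:.1%}")  # -> 71.9%
```

With the dark-field evaporation rate subtracted from ṁ, the same one-liner reproduces the C-FFNS and C-FFP efficiencies from their respective mass-change slopes.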
These surface temperatures are much higher than that of pure water (23 °C). Compared with the thermal conductivity (0.09 W/m-K) of the carbonized flour-based materials (details in Figure S6), the larger thermal conductivity of bulk water causes a considerable proportion of the absorbed energy to heat the underlying water instead of driving evaporation, which reduces the energy efficiency. More interestingly, the surface temperature of C-FFNC is higher than those of the C-FFNS and C-FFP samples. Although the intrinsic porosities of samples with different diameters are close, stacking with different sample diameters leads to a decreasing trend of water content inside the carbonized flour-based absorber: C-FFP, C-FFNS, C-FFNC. The more water, the more of the absorbed energy is used to heat the underlying water. Therefore, owing to heat localization in the near-adiabatic material (C-FFNC), only the water on the C-FFNC sample surface is heated and the underlying water does not cause a large heat-conduction loss. That is, almost all the heat is localized at the air-liquid interface, which results in the high surface temperature and fast evaporation. Figure 5. (a) Schematic diagram of the thermal resistance between solid-liquid interfaces for samples floating on the bulk water. (b) The wetting process of C-FFNC, C-FFNS and C-FFP at 0 s and 120 s. To further understand the high evaporation performance of C-FFNC, the heat energy dissipated through the interfacial thermal resistance is qualitatively analyzed.49-51 The total thermal resistance of the flour-based absorber, including the flour-based material and the water inside, is defined by introducing the interfacial thermal resistance Ri, as shown in Eq. (3): Rtotal = Rw + Rs + Ri, where Rw and Rs are the intrinsic thermal resistances of the bulk water and the flour-based sample, and Ri is the interfacial thermal resistance between water and the flour-based sample, which is related to the interfacial properties of the component materials and the interface area. According to Eq.
(3), among the three samples (i.e., C-FFNC, C-FFNS and C-FFP), the interfacial properties are almost the same while the total interface areas are obviously different owing to the different diameters of the flour-based absorbers. The diameters of C-FFNC, C-FFNS and C-FFP are about 5.0 cm, 1.2 cm and 1.5 × 10⁻⁴ cm, respectively. Specifically, the ratio of their interface areas is 1 : 4.16 : 3.32 × 10⁴. Thus, the heat energy inside the C-FFP absorber is severely dissipated through the interfacial thermal resistance, leading to a low evaporation rate. As shown in Figure 5b, we explore the water-transport capability of the samples with different stacking ways. C-FFNC, C-FFNS and C-FFP samples are heated in an oven and then put directly into water under the lab environment (details in the Supporting Information). The samples are cooled mainly by natural evaporation of water: the faster the cooling rate, the stronger the water-transport capacity. Experiments show that the water-transport capacity decreases from C-FFP to C-FFNS to C-FFNC in turn. As aforementioned, the relationships of evaporation efficiency (E.E.), thermal resistance (H.R.) and water transport (W.T.) are: E.E.C-FFNC > E.E.C-FFNS > E.E.C-FFP; H.R.C-FFNC > H.R.C-FFNS > H.R.C-FFP; W.T.C-FFNC < W.T.C-FFNS < W.T.C-FFP. In summary, by the technique of dough figurine, we developed a new method for designing biomass-based SISG devices with controlled micromorphology derived from flour. The self-floating carbonized flour-based absorber offers excellent mechanical properties, high solar absorption, low thermal conductivity and a rich pore structure, fulfilling all the requirements of SISG for efficient localized evaporation. This flour-based SISG device with a common structure (near-cylindrical) can achieve a high evaporation efficiency of about 71.93% under normal solar illumination.
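The interface-area ratio quoted above can be reproduced with a simple sphere-packing estimate: for particles of diameter d filling a fixed total volume V, the total solid-liquid interface area scales as 6V/d, i.e. proportionally to 1/d. A sketch of that estimate (the sphere approximation is an assumption for illustration):

```python
# Total interface area of particles packing a fixed volume V:
#   N = V / (pi d^3 / 6)  spheres, each of area pi d^2,
#   so A_total = N * pi * d^2 = 6 V / d, i.e. proportional to 1/d.
diameters_cm = {"C-FFNC": 5.0, "C-FFNS": 1.2, "C-FFP": 1.5e-4}

inv_d = {name: 1.0 / d for name, d in diameters_cm.items()}
base = inv_d["C-FFNC"]
ratios = {name: v / base for name, v in inv_d.items()}
# ratios -> roughly 1 : 4.17 : 3.33e4, matching the reported
# 1 : 4.16 : 3.32e4 within rounding of the input diameters.
```

The 1/d scaling makes explicit why the densely packed powder, with four orders of magnitude more interface area, dissipates so much more heat through the interfacial thermal resistance.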
Furthermore, we first investigated the effects of different macroscopic accumulations of a uniform absorber on evaporation efficiency, and explored in detail the effects of the different accumulation modes on optical absorption, water transport and thermal management. This study provides important guidance for the large-scale use of SISG devices in remote and impoverished areas. The sample morphology was characterized via a high-resolution field-emission scanning electron microscope (SEM, FEI Verios 460, USA), an environmental scanning electron microscope (FEI Q45 + EDAX Octane Prime), and contact-angle measurement through video optical contact-angle measurement with an accuracy of 0.1°. The characteristic spectral reflection of the samples was measured by an ultraviolet-visible-near-infrared spectrophotometer (Cary 5000). The sample surface temperature was measured via an IR camera (Fotric) with a model LS5.5B4−0038 lens, and FTIR spectra were obtained on a VERTEX 70 spectrometer. The porosity was measured by a fully automatic mercury porosimeter (AutoPore IV 9510). Phase testing was performed via an X-ray diffractometer (XRD, D8 Advance, Bruker, Germany). Raman spectra were measured via a micro-Raman spectrometer (Renishaw inVia Reflex, UK) with a CCD detector at room temperature. The elemental valence states were determined by X-ray photoelectron spectroscopy (XPS, ThermoFisher ESCALAB 250Xi, monochromatic Al Kα). Thermal decomposition was characterized by synchronous thermal analysis (TG-DSC, TA/NETZSCH). The thermal conductivity of the samples was measured using the hot-disk method (TPS 2500S), and a standard sieve (US standard) was used for powder filtration. Solar steam generation experiments: The C-FFNC was placed in a beaker wrapped with commercial expanded polyethylene in order to reduce ambient effects on the experimental results. Salt water (3.5 wt% salt, the same salt concentration as seawater) is placed in the beaker.
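The 3.5 wt% brine used in the evaporation tests is mixed by simple mass fractions; a minimal sketch (the 500 g batch size is an arbitrary example, not a reported quantity):

```python
def brine_masses(total_g, salt_wt_pct=3.5):
    """Masses of salt and water (in grams) for a brine of the given
    total mass and salt weight percentage."""
    salt = total_g * salt_wt_pct / 100.0
    return salt, total_g - salt

# e.g. 500 g of seawater-strength (3.5 wt%) brine
salt, water = brine_masses(500.0)
print(salt, water)  # -> 17.5 482.5
```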
The samples were illuminated at one-sun intensity with a xenon lamp (CEL-HXF300, AM 1.5 filter). The mass during the evaporation process was measured by an electronic microbalance (AR224CN) with an accuracy of 0.0001 g, and the data were transmitted to a personal computer. Figure S2. Flour-block preparation of different shapes and volumes by dough figurine. (a) Solid cone, hollow cylinder, solid cylinder (from left to right). (b) Samples of different sizes. Figure S3. Mechanical performance test of the C-FFNC sample. (a) The sample placed on the leaves of a campus plant. (b) A 1 kg weight placed on top of the sample. Supporting Information for: Efficient Interfacial Solar Steam Generator with Controlled Macromorphology Derived from Flour via "Dough Figurine" Technology. Zhengtong Li 1, Chengbing Wang 1,*, Zeyu Li 1, Lin Deng 1, Jinbu Su 1, Jing Shi 2, Meng An 2,*. 1 School of Materials Science and Engineering, Shaanxi Key Laboratory of Green Preparation and Functionalization for Inorganic Material, Shaanxi University of Science and Technology, Xi'an, Shaanxi 710021, China. 2 College of Mechanical and Electrical Engineering, Shaanxi University of Science and Technology, Xi'an 710021, China. * To whom correspondence should be addressed. E-mail: [email protected] (C. Wang), [email protected] (M. An). Table S1. Comparison of important parameters of commercially available materials for use as SISG equipment.
| Material | Carbonization temperature (°C) | Carbonization condition | Evaporation efficiency | Experiment temperature (°C) | Experiment humidity (%) | Ref. |
|---|---|---|---|---|---|---|
| mushrooms | 500 | Ar@12h | 78%@1sun | 28 | 41 | 1 |
| basswood | 500 | Hot plate | 57.3%@1sun | constant | constant | 2 |
| basswood | * | Flame treatment | 72%@1sun | 26 | 40 | 3 |
| sponge | 500/700/900 | N2/2h | *(2.5-fold) | 22.5 | * | 4 |
| wood | 500 | Hot plate/60s | 86.7%@10sun | * | * | 5 |
| wood | 500 | * | 74%@1sun | * | * | 6 |
| melamine foams | 400/550/700 | N2/2h | 87.3%@1sun | 25 | 50 | 7 |
| daikon | 750 | N2/2h | 85.9%@1sun | 28 | * | 8 |
| lotus | 500 | N2 | 86.5%@1sun | 28 | * | 9 |
| bread | 400 | Homemade furnace | 71.4%@1sun | 21 | 70 | 10 |
| flour | 300 | Air/2h | 71.4%@1sun | 24 | 30 | Our work |

Acknowledgement: This work is financially supported by the National Natural Science Foundation of China.

References
Li, Z.; Wang, C.; Lei, T.; Ma, H.; Su, J.; Ling, S.; Wang, W. Arched Bamboo Charcoal as Interfacial Solar Steam Generation Integrative Device with Enhanced Water Purification Capacity. Advanced Sustainable Systems 2019, 3 (1), 1800144.
Peng, G.; Ding, H.; Sharshir, S. W.; Li, X.; Liu, H.; Ma, D.; Wu, L.; Zang, J.; Liu, H.; Yu, W. Low-cost high-efficiency solar steam generator by combining thin film evaporation and heat localization: Both experimental and theoretical study. Applied Thermal Engineering 2018, 143, 1079-1084.
Zhu, L.; Gao, M.; Peh, C. K. N.; Wang, X.; Ho, G. W. Self-Contained Monolithic Carbon Sponges for Solar-Driven Interfacial Water Evaporation Distillation and Electricity Generation. Advanced Energy Materials 2018, 8 (16), 1702149.
Xue, G.; Xu, Y.; Ding, T.; Li, J.; Yin, J.; Fei, W.; Cao, Y.; Yu, J.; Yuan, L.; Gong, L.; Chen, J.; Deng, S.; Zhou, J.; Guo, W. Water-evaporation-induced electricity with nanostructured carbon materials. Nature Nanotechnology 2017, 12 (4), 317-321.
Hou, B.; Cui, Z.; Zhu, X.; Liu, X.; Wang, G.; Wang, J.; Mei, T.; Li, J.; Wang, X. Functionalized carbon materials for efficient solar steam and electricity generation. Materials Chemistry and Physics 2019, 222, 159-164.
Li, J.; Du, M.; Lv, G.; Zhou, L.; Li, X.; Bertoluzzi, L.; Liu, C.; Zhu, S.; Zhu, J. Interfacial Solar Steam Generation Enables Fast-Responsive, Energy-Efficient, and Low-Cost Off-Grid Sterilization. Advanced Materials 2018, 30 (49), e1805159.
Deng, Z.; Zhou, J.; Miao, L.; Liu, C.; Peng, Y.; Sun, L.; Tanemura, S. The emergence of solar thermal utilization: solar-driven steam generation. Journal of Materials Chemistry A 2017, 5 (17), 7691-7709.
Wang, Z.; Liu, Y.; Tao, P.; Shen, Q.; Yi, N.; Zhang, F.; Liu, Q.; Song, C.; Zhang, D.; Shang, W.; Deng, T. Bio-inspired evaporation through plasmonic film of nanoparticles at the air-water interface. Small 2014, 10 (16), 3234-3239.
Sharshir, S. W.; Peng, G.; Elsheikh, A.; Edreis, E. M.; Eltawil, M. A.; Abdelhamid, T.; Kabeel, A.; Zang, J.; Yang, N. Energy and exergy analysis of solar stills with micro/nano particles: a comparative study. Energy Conversion and Management 2018, 177, 363-375.
Zhu, M.; Li, Y.; Chen, F.; Zhu, X.; Dai, J.; Li, Y.; Yang, Z.; Yan, X.; Song, J.; Wang, Y.; Hitz, E.; Luo, W.; Lu, M.; Yang, B.; Hu, L. Plasmonic Wood for High-Efficiency Solar Steam Generation. Advanced Energy Materials 2018, 8 (4), 1701028.
Zhou, L.; Tan, Y.; Wang, J.; Xu, W.; Yuan, Y.; Cai, W.; Zhu, S.; Zhu, J. 3D self-assembly of aluminium nanoparticles for plasmon-enhanced solar desalination. Nature Photonics 2016, 10 (6), 393-398.
Wang, X.; He, Y.; Liu, X.; Shi, L.; Zhu, J. Investigation of photothermal heating enabled by plasmonic nanofluids for direct solar steam generation. Solar Energy 2017, 157, 35-46.
Wang, Y.; Wang, C.; Song, X.; Megarajan, S. K.; Jiang, H. A facile nanocomposite strategy to fabricate a rGO-MWCNT photothermal layer for efficient water evaporation. Journal of Materials Chemistry A 2018, 6 (3), 963-971.
Li, X.; Xu, W.; Tang, M.; Zhou, L.; Zhu, B.; Zhu, S.; Zhu, J. Graphene oxide-based efficient and scalable solar desalination under one sun with a confined 2D water path. Proceedings of the National Academy of Sciences of the United States of America 2016, 113 (49), 13953-13958.
Hu, X.; Xu, W.; Zhou, L.; Tan, Y.; Wang, Y.; Zhu, S.; Zhu, J. Tailoring Graphene Oxide-Based Aerogels for Efficient Solar Steam Generation under One Sun. Advanced Materials 2017, 29 (5).
Ghasemi, H.; Ni, G.; Marconnet, A. M.; Loomis, J.; Yerci, S.; Miljkovic, N.; Chen, G. Solar steam generation by heat localization. Nature Communications 2014, 5, 4449.
Cooper, T. A.; Zandavi, S. H.; Ni, G. W.; Tsurimaki, Y.; Huang, Y.; Boriskina, S. V.; Chen, G. Contactless steam generation and superheating under one sun illumination. Nature Communications 2018, 9 (1), 5086.
Zhou, L.; Zhuang, S.; He, C.; Tan, Y.; Wang, Z.; Zhu, J. Self-assembled spectrum selective plasmonic absorbers with tunable bandwidth for solar energy conversion. Nano Energy 2017, 32, 195-200.
Chen, C.; Zhou, L.; Yu, J.; Wang, Y.; Nie, S.; Zhu, S.; Zhu, J. Dual functional asymmetric plasmonic structures for solar water purification and pollution detection. Nano Energy 2018, 51, 451-456.
Ito, Y.; Tanabe, Y.; Han, J.; Fujita, T.; Tanigaki, K.; Chen, M. Multifunctional Porous Graphene for High-Efficiency Steam Generation by Heat Localization. Advanced Materials 2015, 27 (29), 4302-4307.
Zhang, Y.; Zhao, D.; Yu, F.; Yang, C.; Lou, J.; Liu, Y.; Chen, Y.; Wang, Z.; Tao, P.; Shang, W.; Wu, J.; Song, C.; Deng, T. Floating rGO-based black membranes for solar driven sterilization. Nanoscale 2017, 9 (48), 19384-19389.
Wang, G.; Fu, Y.; Guo, A.; Mei, T.; Wang, J.; Li, J.; Wang, X. Reduced Graphene Oxide-Polyurethane Nanocomposite Foam as a Reusable Photoreceiver for Efficient Solar Steam Generation. Chemistry of Materials 2017, 29 (13), 5629-5635.
Gao, X.; Ren, H.; Zhou, J.; Du, R.; Yin, C.; Liu, R.; Peng, H.; Tong, L.; Liu, Z.; Zhang, J. Synthesis of Hierarchical Graphdiyne-Based Architecture for Efficient Solar Steam Generation. Chemistry of Materials 2017, 29 (14), 5777-5781.
Jiang, F.; Liu, H.; Li, Y.; Kuang, Y.; Xu, X.; Chen, C.; Huang, H.; Jia, C.; Zhao, X.; Hitz, E.; Zhou, Y.; Yang, R.; Cui, L.; Hu, L. Lightweight, Mesoporous, and Highly Absorptive All-Nanofiber Aerogel for Efficient Solar Steam Generation. ACS Applied Materials & Interfaces 2018, 10 (1), 1104-1112.
Liu, Z.; Song, H.; Ji, D.; Li, C.; Cheney, A.; Liu, Y.; Zhang, N.; Zeng, X.; Chen, B.; Gao, J.; Li, Y.; Liu, X.; Aga, D.; Jiang, S.; Yu, Z.; Gan, Q. Extremely Cost-Effective and Efficient Solar Vapor Generation under Nonconcentrated Illumination Using Thermally Isolated Black Paper. Global Challenges 2017, 1 (2), 1600003.
Xu, N.; Hu, X.; Xu, W.; Li, X.; Zhou, L.; Zhu, S.; Zhu, J. Mushrooms as Efficient Solar Steam-Generation Devices. Advanced Materials 2017, 29 (28).
Fang, J.; Liu, J.; Gu, J.; Liu, Q.; Zhang, W.; Su, H.; Zhang, D. Hierarchical Porous Carbonized Lotus Seedpods for Highly Efficient Solar Steam Generation. Chemistry of Materials 2018, 30 (18), 6217-6221.
Wu, X.; Wu, L.; Tan, J.; Chen, G. Y.; Owens, G.; Xu, H. Evaporation above a bulk water surface using an oil lamp inspired highly efficient solar-steam generation strategy. Journal of Materials Chemistry A 2018, 6 (26), 12267-12274.
Xue, G.; Liu, K.; Chen, Q.; Yang, P.; Li, J.; Ding, T.; Duan, J.; Qi, B.; Zhou, J. Robust and Low-Cost Flame-Treated Wood for High-Performance Solar Steam Generation. ACS Applied Materials & Interfaces 2017, 9 (17), 15052-15057.
Zhu, M.; Yu, J.; Ma, C.; Zhang, C.; Wu, D.; Zhu, H. Carbonized daikon for high efficient solar steam generation. Solar Energy Materials and Solar Cells 2019, 191, 83-90.
Liao, Y.; Chen, J.; Zhang, D.; Wang, X.; Yuan, B.; Deng, P.; Li, F.; Zhang, H. Lotus leaf as solar water evaporation devices. Materials Letters 2019, 240, 92-95.
Li, Z.; Wang, C.; Su, J.; Ling, S.; Wang, W.; An, M. Fast-Growing Field of Interfacial Solar Steam Generation: Evolutional Materials, Engineered Architectures, and Synergistic Applications. Solar RRL 2019, 3 (1), 1800206.
Jia, C.; Li, Y.; Yang, Z.; Chen, G.; Yao, Y.; Jiang, F.; Kuang, Y.; Pastel, G.; Xie, H.; Yang, B.; Das, S.; Hu, L. Rich Mesostructures Derived from Natural Woods for Solar Steam Generation. Joule 2017, 1 (3), 588-599.
Zhang, Y.; Ravi, S. K.; Vaghasiya, J. V.; Tan, S. C. A Barbeque-Analog Route to Carbonize Moldy Bread for Efficient Steam Generation. iScience 2018, 3, 31-39.
Tian, W.; Zhang, H.; Sun, H.; Tadé, M. O.; Wang, S. One-step synthesis of flour-derived functional nanocarbons with hierarchical pores for versatile environmental applications. Chemical Engineering Journal 2018, 347, 432-439.
Wu, X.; Jiang, L.; Long, C.; Fan, Z. From flour to honeycomb-like carbon foam: Carbon makes room for high energy density supercapacitors. Nano Energy 2015, 13, 527-536.
A Novel Sustainable Flour Derived Hierarchical Nitrogen-Doped Porous Carbon/Polyaniline Electrode for Advanced Asymmetric Supercapacitors.
P Yu, Z Zhang, L Zheng, F Teng, L Hu, X Fang, Advanced Energy Materials. 6201601111Yu, P.; Zhang, Z.; Zheng, L.; Teng, F.; Hu, L.; Fang, X., A Novel Sustainable Flour Derived Hierarchical Nitrogen-Doped Porous Carbon/Polyaniline Electrode for Advanced Asymmetric Supercapacitors. Advanced Energy Materials 2016, 6 (20), 1601111. Multifunctional Stiff Carbon Foam Derived from Bread. Y Yuan, Y Ding, C Wang, F Xu, Z Lin, Y Qin, Y Li, M Yang, X He, Q Peng, Y Li, ACS Appl Mater Interfaces. 826Yuan, Y.; Ding, Y.; Wang, C.; Xu, F.; Lin, Z.; Qin, Y.; Li, Y.; Yang, M.; He, X.; Peng, Q.; Li, Y., Multifunctional Stiff Carbon Foam Derived from Bread. ACS Appl Mater Interfaces 2016, 8 (26), 16852-61. Recycled waste black polyurethane sponges for solar vapor generation and distillation. S Ma, C P Chiu, Y Zhu, C Y Tang, H Long, W Qarony, X Zhao, X Zhang, W H Lo, Y H Tsang, Applied Energy. 206Ma, S.; Chiu, C. P.; Zhu, Y.; Tang, C. Y.; Long, H.; Qarony, W.; Zhao, X.; Zhang, X.; Lo, W. H.; Tsang, Y. H., Recycled waste black polyurethane sponges for solar vapor generation and distillation. Applied Energy 2017, 206, 63-69. 1Comparative advantage and spatial distribution of wheat in China from 1997 to 2016. Z Tan, X Gao, Journal of Henan Agricultural University. 5205Tan, Z.; Gao, X., 1Comparative advantage and spatial distribution of wheat in China from 1997 to 2016. Journal of Henan Agricultural University. 2018, 52 (05), 825-838. Recent progress in solar-driven interfacial water evaporation: Advanced designs and applications. L Zhu, M Gao, C K N Peh, G W Ho, Nano Energy. 57Zhu, L.; Gao, M.; Peh, C. K. N.; Ho, G. W., Recent progress in solar-driven interfacial water evaporation: Advanced designs and applications. Nano Energy 2019, 57, 507-518. Carbon nanocomposites with high photothermal conversion efficiency. Q Zhang, W Xu, X Wang, Science China Materials. 7Zhang, Q.; Xu, W.; Wang, X., Carbon nanocomposites with high photothermal conversion efficiency. 
Science China Materials 2018, 61 (7), 905-914. Super black and ultrathin amorphous carbon film inspired by anti-reflection architecture in butterfly wing. Q Zhao, T Fan, J Ding, D Zhang, Q Guo, M Kamada, Carbon. 493Zhao, Q.; Fan, T.; Ding, J.; Zhang, D.; Guo, Q.; Kamada, M., Super black and ultrathin amorphous carbon film inspired by anti-reflection architecture in butterfly wing. Carbon 2011, 49 (3), 877-883. . H Wang, L Miao, S Tanemura, Morphology Control of Ag Polyhedron. 28Wang, H.; Miao, L.; Tanemura, S., Morphology Control of Ag Polyhedron 28 Nanoparticles for Cost-Effective and Fast Solar Steam Generation. Solar RRL. 20171600023Nanoparticles for Cost-Effective and Fast Solar Steam Generation. Solar RRL 2017, 1 (3-4), 1600023. Diameter effect of gold nanoparticles on photothermal conversion for solar steam generation. A Guo, RSC AdvancesY Fu, RSC AdvancesG Wang, RSC AdvancesX Wang, RSC Advances7Guo, A.; Fu, Y.; Wang, G.; Wang, X., Diameter effect of gold nanoparticles on photothermal conversion for solar steam generation. RSC Advances 2017, 7 (8), 4815- 4824. Rich Mesostructures Derived from Natural Woods for Solar Steam Generation. C Jia, Y Li, Z Yang, G Chen, Y Yao, F Jiang, Y Kuang, G Pastel, H Xie, B Yang, L Hu, Joule. 20173Jia, C.; Li, Y.; Yang, Z.; Chen, G.; Yao, Y.; Jiang, F.; Kuang, Y.; Pastel, G.; Xie, H.; Yang, B.; Hu, L., Rich Mesostructures Derived from Natural Woods for Solar Steam Generation. Joule 2017, 1 (3), 588-599. Thermal transport in soft PAAm hydrogels. N Tang, Z Peng, R Guo, M An, X Chen, X Li, N Yang, J J P Zang, 2017688Tang, N.; Peng, Z.; Guo, R.; An, M.; Chen, X.; Li, X.; Yang, N.; Zang, J. J. P., Thermal transport in soft PAAm hydrogels. 2017, 9 (12), 688. Predictions of Thermo-Mechanical Properties of Cross-Linked Polyacrylamide Hydrogels Using Molecular Simulations. Advanced Theory and Simulations. M An, B Demir, X Wan, H Meng, N Yang, T R Walsh, 21800153An, M.; Demir, B.; Wan, X.; Meng, H.; Yang, N.; Walsh, T. 
R., Predictions of Thermo-Mechanical Properties of Cross-Linked Polyacrylamide Hydrogels Using Molecular Simulations. Advanced Theory and Simulations 2019, 2 (1), 1800153. Nanoscale energy transport and conversion: a parallel treatment of electrons, molecules, phonons, and photons. G Chen, Oxford University PressChen, G., Nanoscale energy transport and conversion: a parallel treatment of electrons, molecules, phonons, and photons. Oxford University Press: 2005. . D G Cahill, P V Braun, G Chen, D R Clarke, S Fan, K E Goodson, P Keblinski, W P King, G D Mahan, A Majumdar, H J Maris, S R Phillpot, E Pop, L Shi, Nanoscale thermal transport. II. 2014111305Applied Physics ReviewsCahill, D. G.; Braun, P. V.; Chen, G.; Clarke, D. R.; Fan, S.; Goodson, K. E.; Keblinski, P.; King, W. P.; Mahan, G. D.; Majumdar, A.; Maris, H. J.; Phillpot, S. R.; Pop, E.; Shi, L., Nanoscale thermal transport. II. 2003-2012. Applied Physics Reviews 2014, 1 (1), 011305. Intercalated water layers promote thermal dissipation at bio-nano interfaces. Y Wang, Z Qin, M J Buehler, Z Xu, Nature Communications. 712854Wang, Y.; Qin, Z.; Buehler, M. J.; Xu, Z., Intercalated water layers promote thermal dissipation at bio-nano interfaces. Nature Communications 2016, 7, 12854. Mushrooms as Efficient Solar Steam-Generation Devices. N Xu, X Z Hu, W C Xu, X Q Li, L Zhou, S N Zhu, J Zhu, Advanced Materials. 29281606762Xu, N.; Hu, X. Z.; Xu, W. C.; Li, X. Q.; Zhou, L.; Zhu, S. N.; Zhu, J., Mushrooms as Efficient Solar Steam-Generation Devices. Advanced Materials 2017, 29 (28), 1606762. Tree-Inspired Design for High-Efficiency Water Extraction. M Zhu, Y Li, G Chen, F Jiang, Z Yang, X Luo, Y Wang, S D Lacey, J Dai, C Wang, C Jia, J Wan, Y Yao, A Gong, B Yang, Z Yu, S Das, L Hu, Advanced Materials. 4429Zhu, M.; Li, Y.; Chen, G.; Jiang, F.; Yang, Z.; Luo, X.; Wang, Y.; Lacey, S. 
D.; Dai, J.; Wang, C.; Jia, C.; Wan, J.; Yao, Y.; Gong, A.; Yang, B.; Yu, Z.; Das, S.; Hu, L., Tree- Inspired Design for High-Efficiency Water Extraction. Advanced Materials 2017, 29 (44). Robust and Low-Cost Flame-Treated Wood for High-Performance Solar Steam Generation. G Xue, K Liu, Q Chen, P Yang, J Li, T Ding, J Duan, B Qi, J Zhou, Xue, G.; Liu, K.; Chen, Q.; Yang, P.; Li, J.; Ding, T.; Duan, J.; Qi, B.; Zhou, J., Robust and Low-Cost Flame-Treated Wood for High-Performance Solar Steam Generation. . ACS Applied Materials Interfaces. 201717ACS Applied Materials Interfaces 2017, 9 (17), 15052-15057. Self-Contained Monolithic Carbon Sponges for Solar-Driven Interfacial Water Evaporation Distillation and Electricity Generation. L Zhu, M Gao, C K N Peh, X Wang, G W Ho, Advanced Energy Materials. 816Zhu, L.; Gao, M.; Peh, C. K. N.; Wang, X.; Ho, G. W., Self-Contained Monolithic Carbon Sponges for Solar-Driven Interfacial Water Evaporation Distillation and Electricity Generation. Advanced Energy Materials 2018, 8 (16). Rich Mesostructures Derived from Natural Woods for Solar Steam Generation. C Jia, Y Li, Z Yang, G Chen, Y Yao, F Jiang, Y Kuang, G Pastel, H Xie, B Yang, L Hu, Joule. 20173Jia, C.; Li, Y.; Yang, Z.; Chen, G.; Yao, Y.; Jiang, F.; Kuang, Y.; Pastel, G.; Xie, H.; Yang, B.; Hu, L., Rich Mesostructures Derived from Natural Woods for Solar Steam Generation. Joule 2017, 1 (3), 588-599. High-Performance Solar Steam Device with Layered Channels: Artificial Tree with a Reversed Design. H Liu, C Chen, G Chen, Y Kuang, X Zhao, J Song, C Jia, X Xu, E Hitz, H Xie, S Wang, F Jiang, T Li, Y Li, A Gong, R Yang, S Das, L Hu, Advanced Energy Materials. 881701616Liu, H.; Chen, C.; Chen, G.; Kuang, Y.; Zhao, X.; Song, J.; Jia, C.; Xu, X.; Hitz, E.; Xie, H.; Wang, S.; Jiang, F.; Li, T.; Li, Y.; Gong, A.; Yang, R.; Das, S.; Hu, L., High- Performance Solar Steam Device with Layered Channels: Artificial Tree with a Reversed Design. 
Advanced Energy Materials 2018, 8 (8), 1701616. Integrative solar absorbers for highly efficient solar steam generation. X Lin, J Chen, Z Yuan, M Yang, G Chen, D Yu, M Zhang, W Hong, X Chen, Journal of Materials Chemistry A. 61141Lin, X.; Chen, J.; Yuan, Z.; Yang, M.; Chen, G.; Yu, D.; Zhang, M.; Hong, W.; Chen, X., Integrative solar absorbers for highly efficient solar steam generation. Journal of Materials Chemistry A 2018, 6 (11), 4642-4648. 41 Carbonized daikon for high efficient solar steam generation. M Zhu, J Yu, C Ma, C Zhang, D Wu, H Zhu, Solar Energy Materials and Solar Cells. 191Zhu, M.; Yu, J.; Ma, C.; Zhang, C.; Wu, D.; Zhu, H., Carbonized daikon for high efficient solar steam generation. Solar Energy Materials and Solar Cells 2019, 191, 83- 90. Hierarchical Porous Carbonized Lotus Seedpods for Highly Efficient Solar Steam Generation. J Fang, J Liu, J Gu, Q Liu, W Zhang, H Su, D Zhang, Chemistry of Materials. 3018Fang, J.; Liu, J.; Gu, J.; Liu, Q.; Zhang, W.; Su, H.; Zhang, D., Hierarchical Porous Carbonized Lotus Seedpods for Highly Efficient Solar Steam Generation. Chemistry of Materials 2018, 30 (18), 6217-6221. . Y Zhang, S K Ravi, J V Vaghasiya, S C Tan, Barbeque, Analog Route toZhang, Y.; Ravi, S. K.; Vaghasiya, J. V.; Tan, S. C., A Barbeque-Analog Route to . Carbonize Moldy Bread for Efficient Steam Generation. 3Carbonize Moldy Bread for Efficient Steam Generation. iScience 2018, 3, 31-39. A 3D Photothermal Structure toward Improved Energy Efficiency in Solar Steam Generation. Y Shi, R Li, Y Jin, S Zhuo, L Shi, J Chang, S Hong, K.-C Ng, P Wang, Joule. 20186Shi, Y.; Li, R.; Jin, Y.; Zhuo, S.; Shi, L.; Chang, J.; Hong, S.; Ng, K.-C.; Wang, P., A 3D Photothermal Structure toward Improved Energy Efficiency in Solar Steam Generation. Joule 2018, 2 (6), 1171-1186. Solar Evaporator with Controlled Salt Precipitation for Zero Liquid Discharge Desalination. 
Y Shi, C Zhang, R Li, S Zhuo, Y Jin, L Shi, S Hong, J Chang, C Ong, P Wang, Environmental Science & Technology. 5220Shi, Y.; Zhang, C.; Li, R.; Zhuo, S.; Jin, Y.; Shi, L.; Hong, S.; Chang, J.; Ong, C.; Wang, P., Solar Evaporator with Controlled Salt Precipitation for Zero Liquid Discharge Desalination. Environmental Science & Technology 2018, 52 (20), 11822-11830. Enhancement of Interfacial Solar Vapor Generation by Environmental Energy. X Li, J Li, J Lu, N Xu, C Chen, X Min, B Zhu, H Li, L Zhou, S Zhu, T Zhang, J Zhu, Joule. 20187Li, X.; Li, J.; Lu, J.; Xu, N.; Chen, C.; Min, X.; Zhu, B.; Li, H.; Zhou, L.; Zhu, S.; Zhang, T.; Zhu, J., Enhancement of Interfacial Solar Vapor Generation by Environmental Energy. Joule 2018, 2 (7), 1331-1338. Reduced graphene oxide-polyurethane nanocomposite foams as a reusable photo-receiver for efficient solar steam generation. G Wang, Y Fu, A Guo, T Mei, J Wang, J Li, X Wang, Chemistry of Materials. 29135629Wang, G.; Fu, Y.; Guo, A.; Mei, T.; Wang, J.; Li, J.; Wang, X., Reduced graphene oxide-polyurethane nanocomposite foams as a reusable photo-receiver for efficient solar steam generation. Chemistry of Materials 2017, 29 (13), 5629. Improved light-harvesting and thermal management for efficient solar-driven water 42 evaporation using 3D photothermal cones. Y Wang, C Wang, X Song, M Huang, S K Megarajan, S F Shaukat, H Jiang, Journal of Materials Chemistry A. 621Wang, Y.; Wang, C.; Song, X.; Huang, M.; Megarajan, S. K.; Shaukat, S. F.; Jiang, H., Improved light-harvesting and thermal management for efficient solar-driven water 42 evaporation using 3D photothermal cones. Journal of Materials Chemistry A 2018, 6 (21), 9874-9881. Nature-Inspired, 3D Origami Solar Steam Generator toward Near Full Utilization of Solar Energy. S Hong, Y Shi, R Li, C Zhang, Y Jin, P Wang, ACS Appl Mater Interfaces. 
1034Hong, S.; Shi, Y.; Li, R.; Zhang, C.; Jin, Y.; Wang, P., Nature-Inspired, 3D Origami Solar Steam Generator toward Near Full Utilization of Solar Energy. ACS Appl Mater Interfaces 2018, 10 (34), 28517-28524. Three-dimensional water evaporation on a macroporous vertically aligned graphene pillar array under one sun. P Zhang, Q Liao, H Yao, H Cheng, Y Huang, C Yang, L Jiang, L Qu, Journal of Materials Chemistry A. 631Zhang, P.; Liao, Q.; Yao, H.; Cheng, H.; Huang, Y.; Yang, C.; Jiang, L.; Qu, L., Three-dimensional water evaporation on a macroporous vertically aligned graphene pillar array under one sun. Journal of Materials Chemistry A 2018, 6 (31), 15303-15309.
[]
[ "Trends in the magnetic properties of Fe, Co and Ni clusters and monolayers on Ir(111), Pt(111) and Au(111)" ]
[ "S Bornemann \nDepartment Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany\n", "O Šipr \nInstitute of Physics\nASCR v. v. i\nCukrovarnická 10CZ-162 53PragueCzech Republic\n", "S Mankovsky \nDepartment Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany\n", "S Polesya \nDepartment Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany\n", "J B Staunton \nDepartment of Physics\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom\n", "W Wurth \nInstitut für Experimentalphysik\nUniversität Hamburg and Centre for Free-Electron Laser Science\n22761HamburgGermany\n", "H Ebert \nDepartment Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany\n", "J Minár \nDepartment Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany\n" ]
[ "Department Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany", "Institute of Physics\nASCR v. v. i\nCukrovarnická 10CZ-162 53PragueCzech Republic", "Department Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany", "Department Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany", "Department of Physics\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom", "Institut für Experimentalphysik\nUniversität Hamburg and Centre for Free-Electron Laser Science\n22761HamburgGermany", "Department Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany", "Department Chemie und Biochemie\nLudwig-Maximilians-Universität München\n81377MünchenGermany" ]
[]
We present a detailed theoretical investigation of the magnetic properties of small single-layered Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). For this, a fully relativistic ab initio scheme based on density functional theory has been used. We analyse the element-, size- and geometry-specific variations of the atomic magnetic moments and their mutual exchange interactions as well as the magnetic anisotropy energy in these systems. Our results show that the atomic spin magnetic moments in the Fe and Co clusters decrease almost linearly with coordination on all three substrates, while the corresponding orbital magnetic moments appear to be much more sensitive to the local atomic environment. The isotropic exchange interaction among the cluster atoms is always very strong for Fe and Co, exceeding the values for bulk bcc Fe and hcp Co, whereas the anisotropic Dzyaloshinski-Moriya interaction is in general one or two orders of magnitude smaller than the isotropic one. The magnetic properties of Ni clusters can show quite a different behaviour, and we find in this case a strong tendency towards noncollinear magnetism.
10.1103/physrevb.86.104436
[ "https://arxiv.org/pdf/1204.6230v1.pdf" ]
59,422,867
1204.6230
dfb901f7624f8aa79eb9aca74e22ad991f799204
Trends in the magnetic properties of Fe, Co and Ni clusters and monolayers on Ir(111), Pt(111) and Au(111)

S. Bornemann, S. Mankovsky, S. Polesya, H. Ebert and J. Minár: Department Chemie und Biochemie, Ludwig-Maximilians-Universität München, 81377 München, Germany
O. Šipr: Institute of Physics, ASCR v. v. i., Cukrovarnická 10, CZ-162 53 Prague, Czech Republic
J. B. Staunton: Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom
W. Wurth: Institut für Experimentalphysik, Universität Hamburg and Centre for Free-Electron Laser Science, 22761 Hamburg, Germany

27 Apr 2012 (Dated: May 10, 2014)
PACS numbers: 75.75.-c, 75.75.Lf, 75.70.Tj, 75.70.Ak

We present a detailed theoretical investigation of the magnetic properties of small single-layered Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). For this, a fully relativistic ab initio scheme based on density functional theory has been used. We analyse the element-, size- and geometry-specific variations of the atomic magnetic moments and their mutual exchange interactions as well as the magnetic anisotropy energy in these systems. Our results show that the atomic spin magnetic moments in the Fe and Co clusters decrease almost linearly with coordination on all three substrates, while the corresponding orbital magnetic moments appear to be much more sensitive to the local atomic environment.
The isotropic exchange interaction among the cluster atoms is always very strong for Fe and Co, exceeding the values for bulk bcc Fe and hcp Co, whereas the anisotropic Dzyaloshinski-Moriya interaction is in general one or two orders of magnitude smaller than the isotropic one. The magnetic properties of Ni clusters can show quite a different behaviour, and we find in this case a strong tendency towards noncollinear magnetism.

I. INTRODUCTION

The magnetism of surface-supported clusters has been the subject of intense research activities over the last few years, as such systems often show peculiar and unexpected magnetic behaviour. These exceptional magnetic properties arise from the reduced dimensionality in combination with spin-orbit coupling (SOC), which can cause complex interactions among the atomic magnetic moments. In this context, clusters of magnetic 3d transition metal elements deposited on 5d noble metal substrates are very interesting, as for these systems spin-orbit driven effects mediated by substrate atoms with large SOC are most prominent. With technical or chemical applications in focus, there is a growing need to understand the trends and principles behind the manifold of magnetic properties for different cluster and substrate materials, as only this will make it possible to anticipate which magnetic properties may result from a particular cluster/substrate combination. In previous experimental and theoretical investigations on the magnetism of atomic clusters on surfaces it was already demonstrated that their magnetic properties differ strongly from those of the corresponding bulk materials, and that this has its main origin in the reduced atomic coordination of cluster sites, which has a strong impact on the local spin and orbital magnetic moments [1-3].
More recently it was also shown that for 3d clusters or monolayers on 5d metal surfaces, SOC-induced effects on the spin configurations also play an important role, causing various noncollinear magnetic structures [4-6]. This SOC-induced noncollinear magnetism is, however, intrinsically different from the spin frustrations that may arise, e.g., from a competition between ferro- and antiferromagnetism, or that may be present in systems where the magnetic and geometric symmetries are incompatible [7,8]. Unfortunately, each of the theoretical studies published so far was aimed at only one or two combinations of cluster and substrate materials, and often only very few cluster sizes and shapes were investigated. In addition, many theoretical investigations have focused only on some selected magnetic properties, for instance the magnetic moments and exchange interactions, while leaving out important information concerning the magnetic anisotropy energy (MAE). Moreover, due to limitations present in all theoretical schemes, it is often also problematic to compare results obtained for different systems by different groups using different methods. Thus, in order to obtain a more complete picture of the trends in the magnetic properties of deposited clusters, one needs a sufficiently large self-contained set of results for interrelated systems obtained by the same method. This motivated us to calculate a large spectrum of the magnetic properties for sets of Fe, Co, and Ni clusters of 1-7 atoms on Ir(111), Pt(111), and Au(111) surfaces, within a unified fully relativistic Green's function formalism. Moreover, we also studied complete monolayers as reference systems for the sequences with increasing cluster size. This enables us to analyse a large pool of data which are directly comparable because they were obtained by the same procedure.
We found that the magnetism of Fe and Co clusters on all investigated surfaces follows common patterns that can be understood by considering the coordination numbers of atoms in the clusters and the polarisability of the substrate. For Ni clusters the situation is more complicated and some of the systematic trends observed for Fe and Co clusters are absent.

II. COMPUTATIONAL FRAMEWORK

The calculations for the investigated cluster and monolayer systems were done within the framework of spin density functional theory (SDFT) using the local spin density approximation (LSDA) with the parametrisation given by Vosko, Wilk and Nusair for the exchange and correlation potential [9]. The electronic structure has been determined in a fully relativistic way on the basis of the Dirac equation for spin-polarised potentials, which was solved using the Korringa-Kohn-Rostoker (KKR) multiple scattering formalism [10]. The calculations for surface-deposited clusters consist of two steps. First the host surface is calculated self-consistently with the tight-binding or screened version of the KKR method [11], using layers of empty sites to represent the vacuum region. This step is then followed by treating the deposited clusters as a perturbation to the clean surface, with the Green's function for the new system being obtained by solving the corresponding Dyson equation [12]. This technique avoids the spurious interactions between clusters which may occur if a supercell approach is used instead [18]. For all systems discussed below the cluster atoms were assumed to occupy ideal lattice sites in the first vacuum layer and no effects of structure relaxation were included. The substrates were simulated by finite slabs which contained 37 atomic layers, and we used lattice parameters of 3.84 Å, 3.92 Å and 4.08 Å for Ir(111), Pt(111) and Au(111), respectively. The surface calculations were converged with respect to the k-point integration.
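The embedding step described above can be summarised by the real-space Dyson equation of multiple scattering theory; the following is a generic textbook form (the notation here is mine, not taken verbatim from the paper):

\[
G_{nn'}(E) \;=\; g_{nn'}(E) \;+\; \sum_{k} g_{nk}(E)\,\Delta V_{k}\,G_{kn'}(E)\,,
\]

where \(g\) is the Green's function of the clean host surface, \(\Delta V_k\) is the change of the potential at a cluster site \(k\) (including the formerly empty vacuum sites now occupied by Fe, Co or Ni atoms), and \(G\) is the Green's function of the surface with the deposited cluster. Since \(\Delta V_k\) is nonzero only in the finite cluster region, the equation can be solved exactly there, without any artificial periodic repetition of the cluster.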
For the surface Brillouin zones a regular k-mesh of 100 × 100 points was used, which corresponds to 1717 k-points in the irreducible part of the Brillouin zone. The effective potentials were treated within the atomic sphere approximation (ASA). The occurring energy integrals were evaluated by contour integration on a semicircular path within the complex energy plane using a logarithmic mesh of 32 points. The multipole expansion of the Green's function was truncated at an angular momentum cutoff of ℓ_max = 2. For selected surface and cluster systems calculations with ℓ_max = 3 were also performed, which showed that this causes a more-or-less uniform increase of the local spin moments by 3-5% and of the local orbital moments by 3-10%. This indicates that the systematic trends in the spin and orbital magnetic moments are well described by ℓ_max = 2. For the representation of the interatomic exchange interactions we made use of the rigid spin approximation [13] and mapped the magnetic energy landscape E({ê_k}) onto an extended classical Heisenberg model for all atomic magnetic moment directions {ê_k}. The corresponding extended Heisenberg Hamiltonian has the form [14,15]:

\[
H = -\frac{1}{2}\sum_{i,j\,(i \neq j)} J_{ij}\,\hat{e}_i \cdot \hat{e}_j
    -\frac{1}{2}\sum_{i,j\,(i \neq j)} \hat{e}_i\,\underline{J}^{S}_{ij}\,\hat{e}_j
    -\frac{1}{2}\sum_{i,j\,(i \neq j)} \vec{D}_{ij} \cdot \left[\hat{e}_i \times \hat{e}_j\right]
    +\sum_i K_i(\hat{e}_i)\,, \tag{1}
\]

where the exchange interaction tensor has been decomposed into its conventional isotropic part J_ij, its traceless symmetric part J^S_ij and its antisymmetric part J^A_ij, which is given in terms of the Dzyaloshinski-Moriya (DM) vector D_ij. We calculated the J_ij coupling parameters and DM vectors D_ij following the scheme by Udvardi et al. [15]. The anisotropy constants K_i(ê_i) account for the on-site magnetic anisotropy energy associated with each individual magnetic moment oriented along ê_i.
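For clarity, the decomposition of the 3×3 exchange tensor entering Eq. (1) can be written out explicitly. The relations below follow the common convention found in the literature; they are a sketch in my own notation (sign conventions differ between authors), not copied from the paper:

\[
\underline{J}_{ij} = J_{ij}\,\underline{1} + \underline{J}^{S}_{ij} + \underline{J}^{A}_{ij},
\qquad
J_{ij} = \tfrac{1}{3}\,\mathrm{Tr}\,\underline{J}_{ij},
\]
\[
J^{S,\alpha\beta}_{ij} = \tfrac{1}{2}\bigl(J^{\alpha\beta}_{ij} + J^{\beta\alpha}_{ij}\bigr) - J_{ij}\,\delta_{\alpha\beta},
\qquad
D^{\gamma}_{ij} = \tfrac{1}{2}\sum_{\alpha\beta}\varepsilon_{\gamma\alpha\beta}\,J^{\alpha\beta}_{ij},
\]

so that the antisymmetric contribution satisfies \(\hat{e}_i\,\underline{J}^{A}_{ij}\,\hat{e}_j = \vec{D}_{ij}\cdot(\hat{e}_i \times \hat{e}_j)\), which is exactly the DM term appearing in the Hamiltonian of Eq. (1).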
The magnetic anisotropy energy ΔE is usually split into two parts, the SOC-induced magnetocrystalline anisotropy ΔE_soc and the so-called shape anisotropy ΔE_dd caused by magnetic dipole-dipole interactions, i.e.

\[
\Delta E = \Delta E_{\mathrm{soc}} + \Delta E_{\mathrm{dd}}\,. \tag{2}
\]

ΔE_dd can be determined classically by a lattice summation over the magnetostatic energy contributions of the individual magnetic moments, or in an ab initio way by using a Breit Hamiltonian [16]. Here, we used the classical approach to calculate ΔE_dd for the full monolayers, while we found that for clusters containing just a few magnetic atoms ΔE_dd is negligible. The magnetocrystalline anisotropy energy ΔE_soc was extracted from magnetic torque calculations which are described in more detail in Refs. [17-19]. As discussed recently by Šipr et al. [18], the approximations and truncations mentioned in this Section result in a limited accuracy concerning in particular the values of ΔE_soc. However, this does not hinder our analysis of the general trends of ΔE_soc with respect to cluster geometries as well as different cluster/substrate combinations.

III. RESULTS AND DISCUSSION

A. Magnetic moments

Fig. 1 shows the considered cluster geometries together with the calculated values of the local spin (µ_spin) and orbital (µ_orb) magnetic moments for Fe clusters of 1-7 atoms as well as full Fe monolayers deposited on Ir(111), Pt(111) and Au(111). For identical Co and Ni clusters the corresponding data are presented in Figs. 2 and 3. In addition, these figures also show the induced spin magnetic moments of the respective substrate atoms that are adjacent to cluster atoms. One can see that in some cases there are considerable variations of µ_spin and µ_orb between the different sites of the deposited clusters. The magnetic moments depend not only on the position of a site with respect to other Fe, Co or Ni atoms but also on its position with respect to the underlying substrate atoms.
This can be observed, for example, when inspecting the two differently located compact trimers or the cross-shaped five-atom clusters in Figs. 1, 2 and 3. Clusters supported by Pt(111) have the largest µ_spin when compared with Ir(111) and Au(111), while the µ_orb values increase from Ir to Pt to Au. There is a big difference between the induced magnetic moments in the Ir(111) and Pt(111) substrates on the one hand and the Au(111) substrate on the other hand. Ir and Pt atoms which are nearest neighbours of any Fe or Co atom have a relatively large µ_spin of up to 0.15 µ_B, while the corresponding Au atoms always have a small negative µ_spin, not larger than 0.03 µ_B in absolute value. Substrate atoms with a larger number of Fe, Co or Ni neighbours usually have a larger µ_spin than substrate atoms with a smaller number of neighbouring cluster atoms. However, this is not a general rule, as seen for the substrate atoms adjacent to the central atom of the cross-shaped Fe_5 and Co_5 or to the differently located compact Fe_3 and Co_3 clusters on Ir(111) and Pt(111). The orbital magnetic moments induced in the substrate atoms are always small: they can reach up to 0.03 µ_B for Fe and Co on Pt(111), while being smaller than 0.007 µ_B for Ir(111) and smaller than 0.004 µ_B for Au(111). Except for the Au(111) substrate atoms, µ_orb is found to be always parallel to µ_spin. The finding that Pt is the most polarisable of the three elements and that Au is less polarisable than Ir is consistent with earlier theoretical [20,21] and experimental [22-25] works for multilayer systems. This high spin polarisability of Pt can be ascribed to its high spin susceptibility, which in turn is caused by its relatively large density of states at the Fermi level leading to a large Stoner product (see below). By plotting the local magnetic moments as a function of the coordination number one can visualise the site-dependence of µ_spin and µ_orb. Such plots are shown in the insets of Fig. 4, where only neighbouring cluster atoms are considered in defining the coordination number. Sites with a lower coordination number generally have larger µ_spin and µ_orb than sites with a higher coordination number, with Ni on Ir(111) being the only exception. For Ni on Ir(111) the magnetic moments quasi-oscillate strongly with changing cluster size or shape, and we find that µ_spin for the adatom (0.38 µ_B) is smaller than for a Ni atom in the full monolayer (0.49 µ_B). For all other cluster/substrate systems considered in our study a quasi-linear relationship between µ_spin and coordination number is found. Interestingly, increasing the coordination number for the atoms of such small clusters leads to a stronger reduction of µ_spin when compared to equally coordinated atoms in larger clusters or full monolayers. For example, the central atom of a compact 7-atom cluster always has a lower µ_spin than a monolayer atom. One can also see that the corresponding orbital magnetic moments are much more sensitive to coordination than the spin magnetic moments. While the insets in Fig. 4 show a strong decay of µ_orb with increasing coordination for Fe and Co clusters on all three substrates, the orbital magnetism in Ni clusters behaves non-monotonically. An analysis of the average spin and orbital magnetic moments as a function of cluster size is shown in Fig. 4. For the three- and five-atom clusters the lower µ_spin and µ_orb values correspond to the compact clusters. All clusters have the largest µ_spin when deposited on Pt(111), followed by the Au(111) substrate. The lowest µ_spin values are obtained for deposition on Ir(111). The highest values of µ_orb, however, are found for clusters deposited on Au(111), where the interaction between cluster and substrate atoms is weak and the lattice constant is largest. Concerning the trend of µ_spin for the three different substrates, there are two competing effects that must be considered.
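The coordination-number counting used above (only neighbouring cluster atoms count; substrate atoms are excluded) is easy to reproduce. The sketch below contrasts a compact trimer, a chain trimer and a compact 7-atom cluster on a triangular (111) lattice; the cluster shapes and helper names are illustrative assumptions for this example, not geometries taken from the paper's figures.

```python
# Coordination numbers of cluster atoms on a (111) triangular lattice.
# Sites are given in axial (hexagonal) coordinates; the six offsets
# below define the nearest-neighbour shells of the triangular lattice.

NEIGHBOUR_OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def coordination(cluster):
    """Number of nearest cluster neighbours for each site (substrate excluded)."""
    sites = set(cluster)
    return {s: sum((s[0] + dq, s[1] + dr) in sites for dq, dr in NEIGHBOUR_OFFSETS)
            for s in sites}

compact_trimer = [(0, 0), (1, 0), (0, 1)]      # triangle: every atom has 2 neighbours
chain_trimer   = [(0, 0), (1, 0), (2, 0)]      # row: the ends have 1, the centre has 2
compact_7      = [(0, 0)] + NEIGHBOUR_OFFSETS  # hexagon plus centre: the centre has 6

for name, cl in [("compact trimer", compact_trimer),
                 ("chain trimer", chain_trimer),
                 ("compact 7-atom", compact_7)]:
    print(name, sorted(coordination(cl).values()))
```

The central atom of the compact 7-atom cluster reaches coordination 6, the same as an atom in a full monolayer, which is why the paper can compare the two directly, while the open chain geometries keep the coordination (and hence the moment reduction) low.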
At first there is an increase in the lattice constant when going from Ir (a=3.84Å) to Pt (a=3.92Å) to Au (a=4.08Å), i.e. as the atoms of the clusters occupy ideal lattice sites, their distance from the substrate is largest in the case of Au. This means also that among the discussed substrates, the interaction between adatoms and substrate is smallest for Au and one would therefore expect that clusters deposited on Au(111) would have the largest µ spin values. On the other hand, hybridisation of the electronic states between adatoms and substrate leads to a small charge transfer of minority 3d-electrons from the cluster atoms into empty 5d-states of adjacent substrate atoms, thereby increasing µ spin for the clusters. This, however, happens only for the spatially extended 5d-states of Ir and Pt, with their 5d-states having an appreciable energetic overlap with the minority 3d-states of cluster atoms. This can be clearly seen from the density of states curves which are presented in Fig. 5. In contrast to this there is almost no or only little interaction between the minority 3d-states of cluster atoms with the energetically low-lying 5d-states of Au. Besides, the hybridisation between the cluster-derived and substrate-derived states leads to an energy lowering of the Fe, Co and Ni 4p-states, and this lowering is again more pronounced for the Ir and Pt substrates than for the Au substrate. This causes an additional charge redistribution within the cluster atoms, i.e. 4p-states become occupied, again at the cost of minority 3d-states. In this way Fe 1 deposited on Pt(111) ends up with about 0.1 electrons less in the minority 3d-orbitals and thus a slightly larger spin magnetic moment when compared to the deposition on Au(111), with this effect becoming stronger for Co and Ni. Šipr et al. 2 have demonstrated this behaviour for Co clusters on Pt(111) and Au(111) by performing calculations for Co clusters on an Au(111) substrate using the lattice constant of Pt.
This also showed that the observed increase in µ orb can be solely attributed to the larger lattice constant of Au.

B. Density of states

In Fig. 5 we show the spin-resolved density of states (DOS) for the adatoms, dimers and the central atoms of the 7-atom clusters as well as the corresponding full monolayers. The DOS of the respective topmost atomic layer of the clean substrates is also shown by the brown areas together with the respective Bloch spectral function A B ( k, E) along the high symmetry lines Γ-K-M-Γ in the two-dimensional Brillouin zone presented in the first column. A B ( k, E) can be interpreted as a k-resolved DOS revealing the detailed features in the electronic structure of the three different substrates. The large grey-shaded regions in the A B ( k, E) diagrams in the first column of Fig. 5 correspond to electronic bulk states of the underlying substrates, while the sharp black lines represent surface states localised within the topmost atomic layer of the clean surfaces. The blue and red regions arise from hybridisations between surface and bulk states. Clearly visible are for instance the Rashba split surface states around Γ for Pt(111) and Au(111). For Ir and Pt there is an appreciable energetic overlap between electronic states located at the substrate and states located at cluster sites resulting in hybridisation with a prominent broadening in the cluster DOS. The situation is different for the Au substrate, where the energetically low-lying states of Au can only hybridise with the majority states of Fe while for minority states of Fe as well as for all states of Co and Ni, there are no energetically close Au states to hybridise with and hence very distinct atomic-like features prevail in the DOS of cluster atoms in that case. With increasing number of cluster atoms a complex fine structure appears in the DOS which also broadens appreciably with increasing coordination numbers of the cluster atoms. Moreover, the presented DOS curves for the central atom of the 7-atom cluster demonstrate that the DOS of deposited clusters acquires very quickly the main features which are present in the DOS of a complete monolayer.
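The connection between such spin-resolved DOS curves and the local spin moments is the integral µ spin = ∫^{E_F} [n↑(E) − n↓(E)] dE. The following is a toy numerical sketch with a made-up pair of exchange-split Gaussian model d-bands, not the computed curves of Fig. 5:

```python
import math

def gaussian_dos(E, center, width, weight):
    """Model a d-band as a Gaussian with integrated weight `weight` (states)."""
    z = (E - center) / width
    return weight * math.exp(-0.5 * z * z) / (width * math.sqrt(2.0 * math.pi))

def spin_moment(e_fermi, lo=-10.0, n_steps=20000):
    """mu_spin = integral up to E_F of [n_up(E) - n_down(E)] dE (trapezoidal rule)."""
    h = (e_fermi - lo) / n_steps
    total = 0.0
    for k in range(n_steps + 1):
        E = lo + k * h
        # Hypothetical bands: majority centred 2.5 eV below minority, 5 d-states each.
        diff = gaussian_dos(E, -3.0, 1.0, 5.0) - gaussian_dos(E, -0.5, 1.0, 5.0)
        total += (0.5 if k in (0, n_steps) else 1.0) * diff * h
    return total

print(f"mu_spin ~ {spin_moment(0.0):.2f} mu_B")
```

With these placeholder band positions the majority band lies almost entirely below E_F while the minority band is only partially filled, giving a moment of roughly 1.5 µ B ; in an actual calculation the computed DOS would be integrated in the same way.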
However, as the DOS at the Fermi level of such small clusters varies strongly with the number of atoms, so do the corresponding chemical and magnetic properties in this finite size regime. The decreasing overlap between states located in the substrate and located in the clusters when going from Ir to Au explains the finding that µ orb is largest for clusters on Au where the atomic-like character of the DOS prevails. In this context, however, the size of µ spin cannot always be directly related to the overlap between cluster and substrate DOS, as this overlap is smaller for Au(111) than for Pt(111) and yet µ spin is largest for clusters on the Pt(111) substrate. In the same way also the induced magnetisation within the substrate depends on this mutual energetic overlap between 3d and 5d-states, which explains the very small induced magnetic moments in the case of Au(111). But also here, one should keep in mind that the polarisability of the substrate atoms is determined by the Stoner product I · N F with the exchange integral I and the number of states at the Fermi level N F (I · N F = 0.29 for Ir, 0.59 for Pt and 0.05 for Au) 26 .

C. Exchange Coupling

The calculated isotropic exchange coupling constants J ij are presented for all clusters (except for the cross-shaped 5-atom and 6-atom ones) in Tab. I. Positive and negative values of J ij correspond to ferromagnetic and antiferromagnetic coupling, respectively. In Tab. II we show the sum of all couplings related to a particular atom at site i:

J i = Σ_{j ≠ i} J ij .  (3)

The effective exchange field J i can be seen as the total strength by which the magnetic moment at site i is held along its direction by all other atoms. From the data given in Tab. I one can see that Fe and Co clusters show a strong ferromagnetic nearest neighbour coupling in the range of 40-140 meV while for Ni clusters the J ij values are often much smaller.
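Eq. (3) is simply a row sum over the coupling constants. A minimal sketch with hypothetical J ij values for a 3-atom cluster (illustrative numbers, not the computed ones of Tab. I):

```python
# Hypothetical symmetric coupling constants J_ij in meV for a 3-atom cluster;
# only one ordering of each pair is stored, and an atom does not couple to itself.
J = {(0, 1): 100.0, (0, 2): 40.0, (1, 2): 100.0}

def coupling(i, j):
    """Return J_ij, exploiting the symmetry J_ij = J_ji."""
    if i == j:
        return 0.0
    return J.get((i, j), J.get((j, i), 0.0))

def effective_exchange_field(i, n_atoms):
    """Eq. (3): J_i = sum over j != i of J_ij."""
    return sum(coupling(i, j) for j in range(n_atoms) if j != i)

for i in range(3):
    print(f"J_{i} = {effective_exchange_field(i, 3):.1f} meV")
```

The central atom (site 1 here) collects two strong couplings and thus the largest effective field, mirroring the trend in Tab. II that J i grows with coordination.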
In the case of Fe and Co the couplings between nearest neighbouring atoms are about one order of magnitude larger than couplings between more distant atoms, i.e. the coupling strength falls off very rapidly with increasing interatomic distance. For Ni clusters, however, and especially for Ni on Ir(111) where the couplings are very weak, this trend is less pronounced. These results also show that there is an occasional weak anti-ferromagnetic coupling between more distant atoms for Fe clusters, which, however, gives only an insignificant contribution to the cumulative J i of each respective atom. As each J ij contains by definition (see the Heisenberg Hamiltonian in Eq. (1)) the product between the involved spin magnetic moments µ i spin and µ j spin , the coupling is, naturally, largest for Fe and smallest for Ni clusters. Moreover, the nearest neighbour exchange coupling among the Fe, Co and Ni cluster atoms is larger than our corresponding values for standard bcc Fe (37.8 meV), hcp Co (26.3 meV) and fcc Ni (4.8 meV). Apart from the magnitude of the spin magnetic moments, also atomic coordination as well as substrate effects play an important role. Especially for Fe clusters the J ij values between low coordinated cluster atoms are often much larger when compared to atoms with higher coordination. Nevertheless, the effective exchange field J i increases monotonically with increasing coordination, i.e. given a fixed number of Fe, Co or Ni atoms the most compact structure will form the most stable ferromagnetic state. As one can clearly see from the data in Table I in combination with the cluster geometries given in Figs. 1-3, the isotropic exchange coupling is also affected by the arrangement of cluster atoms with respect to the underlying surface sites. Looking at the two different compact Fe and Co trimers on Ir(111) and Pt(111), for instance, we find that the coupling values differ by about 8-10%.
For the 7-atom Fe cluster on Ir(111) and Pt(111), however, J ij for nearest neighbouring edge atoms varies by as much as 20-45%, respectively, whereas the corresponding couplings for clusters on Au(111) in general do not exhibit such a pronounced dependence on the atomic position with respect to the substrate atoms. The latter seems also to be the case for Fe and Co 7-atom clusters with an identical configuration on Cu(111). This was studied in detail by Mavropoulos et al. 3 and the Cu substrate atoms also do not seem to participate in the exchange coupling of the Fe and Co cluster atoms. Therefore, we ascribe this substrate effect to the large spin-polarisation within the Ir and Pt surface atoms while the weak induced magnetism in Cu and Au causes only minor variations in the exchange coupling of equidistant cluster atoms. These irregularities in the couplings underline that transferring J ij coupling constants obtained from bulk calculations to low-dimensional finite nanostructures will lead in general to unreliable results. For Fe and Co clusters the magnitude of the isotropic exchange interaction is quite similar for all three investigated substrates. Ni clusters, on the other hand, have comparable nearest neighbour J ij values only when being deposited on Pt(111) and Au(111) while deposition on Ir(111) reduces the coupling strength to just a few meV. This results in a quite small effective exchange field J i per Ni atom of the order of about 10 meV. As the exchange interaction is so small for these Ni clusters, there is a pronounced tendency that their magnetic ground state deviates strongly from a collinear configuration (see below). The coupling of magnetic cluster atoms to the induced magnetic moments in the substrate is always very small. J ij is about 2 meV between Fe or Co cluster atoms and topmost layer atoms of an Ir or Pt surface. The small induced moments in the Au(111) substrate couple antiferromagnetically to the cluster atoms.
Here the nearest neighbour J ij 's are only of the order of 0.1 meV, being of similar magnitude as the ferromagnetic coupling of Ni cluster atoms to Ir or Pt surface sites. In addition to the isotropic J ij coupling constants, Tab. III shows the complementary data for the anisotropic exchange interaction. For clarity we present here only the magnitude of the DM vector | D ij |, which can be seen as a measure of the driving force towards a non-collinear spin configuration. Given the fact that the SOC strength is comparable in Ir, Pt and Au, one can see from the data in Tab. III that for any given cluster there are often strong variations (without any clear trends) in | D ij | upon deposition onto different substrates. As discussed above for the J ij values, we find here an even more pronounced dependence of | D ij | on the position of cluster atoms with respect to the underlying substrate atoms, and the results show in addition that the relative decay of the DM interaction with increasing interatomic distance is much weaker when compared to the corresponding isotropic exchange coupling. Although | D ij | is one to two orders of magnitude smaller than the isotropic exchange coupling, it is not negligible. For Fe n on Ir(111) we obtain a relatively strong DM interaction which is in accordance with the recent findings of Heinze et al. 6 and of von Bergmann et al. 27 as well as Deák et al. 28 for Fe/Ir(001), which all demonstrate that these systems show a strong tendency towards non-collinear magnetism. Moreover, our results also show large | D ij | values for Fe n and Co n clusters, from which we conjecture that this may also lead to complex magnetic structures within extended Fe and Co nanostructures on these substrates. In fact the sometimes experimentally observed, unexpectedly low magnetic moments in Fe- and Co-Pt(111) systems may be caused by this mechanism 29,30 .
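A common rough indicator for the tendency toward canted spin structures is the ratio |D ij |/J ij of the DM magnitude to the isotropic coupling. A sketch with illustrative pair values (hypothetical numbers in the range quoted above, not the tabulated ones):

```python
import math

def dm_magnitude(D):
    """|D_ij| from the Cartesian components of the DM vector (meV)."""
    return math.sqrt(sum(c * c for c in D))

# Illustrative pairs: a strongly coupled Fe pair vs. a weakly coupled Ni pair.
pairs = {
    "Fe-Fe": {"J_ij": 120.0, "D_ij": (1.0, 6.5, 1.5)},  # |D|/J << 1: collinear order robust
    "Ni-Ni": {"J_ij": 5.0, "D_ij": (0.3, 1.2, 0.4)},    # |D|/J = O(0.1-1): canting more likely
}
for name, p in pairs.items():
    ratio = dm_magnitude(p["D_ij"]) / p["J_ij"]
    print(f"{name}: |D|/J = {ratio:.2f}")
```

Even with a smaller absolute |D ij |, the weakly coupled Ni pair ends up with the larger ratio, which is the sense in which the DM interaction is "very important" for Ni clusters below.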
For Ni clusters the DM interaction is always very important with respect to the isotropic exchange coupling as both quantities are often of the same order of magnitude. Thus, one can expect the presence of non-collinear magnetic ordering in Ni clusters on all three substrates. It should be stressed that this non-collinearity will not be a consequence of the frustration between the magnetic and geometric order but rather will follow from the influence of spin-orbit effects on the exchange coupling, as manifested by the DM interaction.

D. Magnetic Anisotropy Energy

The magnetic anisotropy energies (MAE) per atom for all investigated clusters are compiled in Table IV. Positive MAE values denote an out-of-plane anisotropy while negative MAE values correspond to an in-plane magnetic easy axis. Fe clusters on Pt(111) and Au(111) always show an out-of-plane MAE whereas all other cluster/substrate systems exhibit a rather nonuniform behaviour of their MAE with varying cluster size or geometry. This complex behaviour arises from the fact that already tiny changes in the electronic structure can cause large changes in the MAE. This can be seen again for example in the case of the compact trimers, where one can observe a dramatic dependence of the MAE on their position with respect to the substrate, i.e. depending on whether a substrate atom is underneath the cluster centre or not. All dimers and linear trimers with in-plane MAE have their magnetic easy axis fixed along the cluster axis, which is a result of the strong azimuthal MAE in these systems of the order of 1-4 meV per atom. For compact symmetric clusters as well as for the full monolayers there remains only a very small azimuthal MAE of the order of µeV, which is thus negligible. When evaluating the MAE by means of the torque method, contributions stemming from all individual atomic sites of the system are added together.
One can therefore technically identify which portion of the MAE comes from the adsorbed atoms and which portion comes from the substrate atoms. We found that the contribution coming from the substrate is negligible in the case of clusters while it can be significant in the case of complete monolayers (e.g. up to 45% of the total value for the Co monolayer on the Pt(111) substrate). This is plausible given the fact that for monolayers, the substrate atoms are subject to interaction with a larger number of adsorbed atoms, meaning that their spin-polarisation will be stronger and more robust than in the case of small clusters, contributing thereby more significantly to the MAE. At the same time, one has to bear in mind that energy is not an extensive quantity and that any decomposition of the MAE into parts has only a limited significance. Concerning the dipole-dipole or shape MAE contribution, for clusters it is negligible while for complete monolayers it attains appreciable values of -0.19 meV and -0.09 meV per atom for Fe and Co monolayers, respectively. In summary, both the substrate and the dipole-dipole contributions to the MAE are negligible for clusters, whereas for monolayers both contributions are much more important.

E. Comparison with other works

As already mentioned in the introduction, it is not always straightforward to directly compare theoretical LSDA results obtained by different computational ab-initio implementations due to differences in the truncation of the wavefunction or Green's function etc. as well as different technical issues and approaches such as, for example, the implementation of spin-orbit coupling as perturbation, the use of a supercell vs. embedding techniques, or approximations in the description of the effective potentials and so forth. All this can affect the obtained numerical results, especially for sensitive magnetic quantities like for instance orbital magnetic moments and magnetic anisotropy energies.
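The sign convention used in Tab. IV (positive MAE = out-of-plane easy axis) together with the additive dipolar term can be sketched as follows; the dipolar values for the Fe and Co monolayers are the ones quoted above, while the SOC values are placeholders:

```python
def easy_axis(e_soc_meV, e_dd_meV=0.0):
    """Total MAE per atom = E_soc + E_dd; positive means an out-of-plane easy
    axis, negative an in-plane one (the sign convention of Tab. IV)."""
    total = e_soc_meV + e_dd_meV
    return total, ("out-of-plane" if total > 0.0 else "in-plane")

# Cluster: the dipolar (shape) term is negligible, so the SOC term decides alone.
print(easy_axis(0.10))  # hypothetical SOC value in meV
# Monolayer: the quoted dipolar contributions (-0.19 meV for the Fe monolayer,
# -0.09 meV for Co) can flip a weak out-of-plane SOC anisotropy to in-plane.
print(easy_axis(0.15, e_dd_meV=-0.19))  # hypothetical SOC value in meV
```

The second call illustrates why the shape term matters only for the extended systems: a few tenths of a meV of dipolar anisotropy is enough to reverse a weak magnetocrystalline preference.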
Among the cluster/substrate systems discussed in this article, only Fe 1 and Co n on Pt(111) have been studied extensively by other groups and we find for these systems that our spin magnetic moments agree quantitatively well with the corresponding µ spin values given in Refs. 1,31-36 using identical geometries. The same is true for Fe 3 on Pt(111) 5 , Fe 1 and Co 1 on Ir(111) 31 as well as for the monolayer systems Fe/Ir(111) 27,31 , Co/Ir(111) 31 , Fe/Pt(111), Co/Pt(111) 31 and Co/Au(111) 37 . Regarding the values of µ orb and the MAE, however, the agreement is in general less good, i.e. only qualitative or worse, for the above mentioned reasons. As already analysed by Šipr et al. 38 , methods which rely on a supercell approach 34-36 always produce significantly higher induced spin magnetic moments within the substrate atoms when compared to methods which apply embedding techniques 1,31-33 .

IV. SUMMARY AND CONCLUSIONS

The evolution of the spin and orbital magnetic moments of the investigated 3d transition metal clusters on 5d noble metal surfaces mostly follows common trends and patterns that can be understood by considering the coordination numbers of atoms in the clusters and the polarisability of the substrate. The average µ spin values decrease nearly monotonously with the number of atoms in the cluster, being at variance with trends observed for free clusters 39 . Our results show that µ orb may strongly depend on the position of the cluster with respect to the surface atoms, as demonstrated in particular for the triangular 3-atom clusters on Pt(111) and Au(111). The magnetic moments for Ni clusters on Ir are smaller than one would expect judging from the trends for the other cluster/substrate combinations. Moreover, they depend wildly on the number of atoms in the cluster, and their smallness is compatible with the fact that the peak in the minority DOS is below E F .
Apart from Ni n /Ir(111), all clusters show a strong ferromagnetic isotropic exchange coupling exceeding the corresponding bulk values of standard bcc Fe, hcp Co and fcc Ni. In addition, there are also strong anisotropic DM interactions present revealing the intrinsic tendencies towards non-collinear magnetism in these systems. Finally, the magnetic anisotropy energies can be very large for some cluster/substrate or surface/substrate combinations, but unfortunately, there are no clear trends visible that would allow any straightforward anticipation of this sensitive quantity.

FIG. 3: Cluster geometries for Ni clusters of 1-7 atoms supported by Ir(111), Pt(111) and Au(111). The local spin and orbital magnetic moments at Ni sites are given by the upper and lower numbers, respectively. The spin magnetic moments for nearest neighbour substrate sites are also shown. The data presented within frames give the corresponding monolayer values.

FIG. 4: Average spin (top row) and orbital (bottom row) magnetic moments per atom for Fe, Co and Ni clusters and monolayers (ml) on Ir(111), Pt(111) and Au(111), respectively. The insets present µ spin and µ orb vs. the number of nearest neighbouring cluster atoms (n.n.).

FIG. 5: Spin projected density of states (DOS) for Fe, Co and Ni monomers (second column), dimers (third column) as well as the central atom of a 7 atom cluster (fourth column) and the corresponding full monolayers (rightmost column) deposited on the (111) surfaces of Ir (top row), Pt (middle row) and Au (bottom row). The brown areas represent the DOS for unperturbed surface atoms of clean substrates.
Corresponding Bloch spectral functions A B ( k, E) for surface atoms of clean substrates are presented in the leftmost column along the Γ-K-M-Γ line of the two-dimensional Brillouin zone.

TABLE II: Effective exchange field J i in meV for Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). The icons in the left column indicate the corresponding cluster geometry as well as the cluster site i. The last line gives J i for the full monolayer (ml).

FIG. 2: Cluster geometries for Co clusters of 1-7 atoms supported by Ir(111), Pt(111) and Au(111). The local spin and orbital magnetic moments at Co sites are given by the upper and lower numbers, respectively. The spin magnetic moments for nearest neighbour substrate sites are also shown. The data presented within frames give the corresponding monolayer values.
TABLE I: Isotropic exchange coupling constants J ij in meV for Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). The icons in the left column indicate the corresponding cluster geometry as well as the cluster sites i and j, respectively. The last line gives J ij for nearest neighbouring sites within the full monolayer (ml).
TABLE III: Anisotropic exchange coupling parameter | D ij | in meV for Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). The icons in the left column indicate the corresponding cluster geometry as well as the cluster sites i and j, respectively. The last line gives | D ij | for nearest neighbouring sites within the full monolayer (ml).

TABLE IV: Magnetic anisotropy energy (MAE) per atom for Fe, Co and Ni clusters deposited on Ir(111), Pt(111) and Au(111). The icons in the left column indicate the corresponding cluster geometry. The positive (negative) values of the MAE (in meV) correspond to an out-of-plane (in-plane) magnetic easy axis.
For the full monolayers the total MAE ∆E is decomposed into its dipolar part ∆E dd and its magnetocrystalline part ∆E soc . The latter is further decomposed into contributions that originate from the monolayer (∆E 3d soc ) and the substrate (∆E 5d soc ), respectively. For the deposited clusters we found ∆E dd ≈ 0 and ∆E soc ≈ ∆E 3d soc .

Acknowledgments

Financial support by the Bundesministerium für Bildung und Forschung (BMBF) Verbundprojekt Röntgenabsorptionsspektroskopie (05K10WMA and 05K10GU5), Deutsche Forschungsgemeinschaft (DFG) via SFB668 and by the Grant Agency of the Czech Republic (108/11/0853) is gratefully acknowledged.

* Electronic address: [email protected]

References

[1] P. Gambardella, S. Rusponi, M. Veronese, S. S. Dhesi, C. Grazioli, A. Dallmeyer, I. Cabria, R. Zeller, P. H. Dederichs, K. Kern, C. Carbone, and H. Brune, Science 300, 1130 (2003).
[2] O. Šipr, S. Bornemann, J. Minár, S. Polesya, V. Popescu, A. Simunek, and H. Ebert, J. Phys.: Condensed Matter 19, 096203 (2007).
[3] P. Mavropoulos, S. Lounis, and S. Blügel, phys. stat. sol. (b) 247, 1187 (2010).
[4] A. Antal, B. Lazarovits, L. Udvardi, L. Szunyogh, B. Ujfalussy, and P. Weinberger, Phys. Rev. B 77, 174429 (2008).
[5] P. Ruiz-Díaz, R. Garibay-Alonso, J. Dorantes-Dávila, and G. M. Pastor, Phys. Rev. B 84, 024431 (2011).
[6] S. Heinze, K. Bergmann, M. Menzel, J. Brede, A. Kubetzka, R. Wiesendanger, G. Bihlmayer, and S. Blügel, Nature Physics 7, 713 (2011).
[7] S. Lounis, P. Mavropoulos, R. Zeller, P. H. Dederichs, and S. Blügel, Phys. Rev. B 75, 174436 (2007).
[8] P. Ruiz-Díaz, J. L. Ricardo-Chávez, J. Dorantes-Dávila, and G. M. Pastor, Phys. Rev. B 81, 224431 (2010).
[9] S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys. 58, 1200 (1980).
[10] H. Ebert, D. Ködderitzsch, and J. Minár, Rep. Prog. Phys. 74, 096501 (2011).
[11] R. Zeller, P. H. Dederichs, B. Újfalussy, L. Szunyogh, and P. Weinberger, Phys. Rev. B 52, 8807 (1995).
[12] S. Bornemann, J. Minár, S. Polesya, S. Mankovsky, H. Ebert, and O. Šipr, Phase Transitions 78, 701 (2005).
[13] V. P. Antropov, M. I. Katsnelson, B. N. Harmon, M. van Schilfgaarde, and D. Kusnezov, Phys. Rev. B 54, 1019 (1996).
[14] V. P. Antropov, M. I. Katsnelson, and A. I. Liechtenstein, Physica B 237-238, 336 (1997).
[15] L. Udvardi, L. Szunyogh, K. Palotas, and P. Weinberger, Phys. Rev. B 68, 104436 (2003).
[16] S. Bornemann, J. Minár, J. Braun, D. Ködderitzsch, and H. Ebert, Solid State Communications 152, 85 (2012).
[17] S. Mankovsky, S. Bornemann, J. Minár, S. Polesya, H. Ebert, J. B. Staunton, and A. I. Lichtenstein, Phys. Rev. B 80, 014422 (2009).
[18] O. Šipr, S. Bornemann, J. Minár, and H. Ebert, Phys. Rev. B 82, 174414 (2010).
[19] J. B. Staunton, L. Szunyogh, A. Buruzs, B. L. Gyorffy, S. Ostanin, and L. Udvardi, Phys. Rev. B 74, 144411 (2006).
[20] G. Y. Guo and H. Ebert, Phys. Rev. B 51, 12633 (1995).
[21] R. Tyer, G. van der Laan, W. M. Temmerman, Z. Szotek, and H. Ebert, Phys. Rev. B 67, 104409 (2003).
[22] F. Wilhelm, P. Poulopoulos, G. Ceballos, H. Wende, K. Baberschke, P. Srivastava, D. Benea, H. Ebert, M. Angelakeris, N. K. Flevaris, D. Niarchos, A. Rogalev, and N. B. Brookes, Phys. Rev. Lett. 85, 413 (2000).
[23] F. Wilhelm, P. Poulopoulos, H. Wende, A. Scherz, K. Baberschke, M. Angelakeris, N. K. Flevaris, and A. Rogalev, Phys. Rev. Lett. 87, 207202 (2001).
[24] F. Wilhelm, M. Angelakeris, N. Jaouen, P. Poulopoulos, E. T. Papaioannou, C. Mueller, P. Fumagalli, A. Rogalev, and N. K. Flevaris, Phys. Rev. B 69, 220404 (2004).
[25] V. V. Krishnamurthy, D. J. Singh, N. Kawamura, M. Suzuki, and T. Ishikawa, Phys. Rev. B 74, 064411 (2006).
[26] M. M. Sigalas and D. A. Papaconstantopoulos, Phys. Rev. B 50, 7255 (1994).
[27] K. von Bergmann, S. Heinze, M. Bode, E. Y. Vedmedenko, G. Bihlmayer, S. Blügel, and R. Wiesendanger, Phys. Rev. Lett. 96, 167203 (2006).
[28] A. Deák, L. Szunyogh, and B. Ujfalussy, Phys. Rev. B 84, 224413 (2011).
[29] J. Honolka, T. Y. Lee, K. Kuhnke, D. Repetto, V. Sessi, P. Wahl, A. Buchsbaum, P. Varga, S. Gardonio, C. Carbone, S. R. Krishnakumar, P. Gambardella, M. Komelj, R. Singer, M. Fähnle, K. Fauth, G. Schütz, A. Enders, and K. Kern, Phys. Rev. B 79, 104430 (2009).
[30] V. Sessi, K. Kuhnke, J. Zhang, J. Honolka, K. Kern, A. Enders, P. Bencok, S. Bornemann, J. Minár, and H. Ebert, Phys. Rev. B 81, 195403 (2010).
[31] C. Etz, J. Zabloudil, P. Weinberger, and E. Y. Vedmedenko, Phys. Rev. B 77, 184425 (2008).
[32] B. Lazarovits, L. Szunyogh, and P. Weinberger, Phys. Rev. B 67, 024415 (2003).
[33] T. Balashov, T. Schuh, A. F. Takács, A. Ernst, S. Ostanin, J. Henk, I. Mertig, P. Bruno, T. Miyamachi, S. Suga, and W. Wulfhekel, Phys. Rev. Lett. 102, 257203 (2009).
[34] R. F. Sabiryanov, K. Cho, M. I. Larsson, W. D. Nix, and B. M. Clemens, J. Magn. Magn. Materials 258-259, 365 (2003).
[35] A. M. Conte, S. Fabris, and S. Baroni, Phys. Rev. B 78, 014416 (2008).
[36] P. Blonski and J. Hafner, J. Phys.: Condensed Matter 21, 426001 (2009).
[37] B. Újfalussy, L. Szunyogh, P. Bruno, and P. Weinberger, Phys. Rev. Lett. 77, 1805 (1996).
[38] O. Šipr, S. Bornemann, J. Minár, and H. Ebert, Phys. Rev. B 82, 174414 (2010).
[39] O. Šipr, J. Minár, and H. Ebert, Cent. Eur. J. Phys. 7, 257 (2009).
[]
[]
[ "Samuel M Corson " ]
[]
[]
For certain uncountable cardinals κ we produce a group of cardinality κ which is freely indecomposable, strongly κ-free, and whose abelianization is free abelian of rank κ. The construction takes place in Gödel's constructible universe L. This strengthens an earlier result of Eklof and Mekler[4].
10.1515/jgth-2019-0102
[ "https://arxiv.org/pdf/1903.03334v1.pdf" ]
119,172,006
1903.03334
bbcd833c68be9ab702d4133d83509add2f768ef2
8 Mar 2019. Samuel M. Corson. FREELY INDECOMPOSABLE ALMOST FREE GROUPS WITH FREE ABELIANIZATION. 2010 Mathematics Subject Classification: Primary 03E75, 20E05; Secondary 20E06. Key words and phrases: free group, almost free group, indecomposable group.

Introduction

We produce examples of groups which exhibit some properties enjoyed by free groups but which in other ways are very far from being free. We recall some definitions before stating the main result. Given a group G and subgroup H ≤ G we say H is a free factor of G provided there exists another subgroup K ≤ G such that G = H * K in the natural way (that is, the map H * K → G induced by the inclusions of H and K is an isomorphism). We call such a writing G = H * K a free decomposition of G and say that G is freely indecomposable provided there does not exist a free decomposition of G via two nontrivial free factors. Given a cardinal κ we say G is κ-free if each subgroup of G generated by fewer than κ elements is a free group. Historically a κ-free group of cardinality κ is called almost free [7]. By the theorem of Nielsen and Schreier every free group is κ-free for every cardinal κ. A subgroup H of a κ-free group G is κ-pure if H is a free factor of any subgroup ⟨H ∪ X⟩ where X ⊆ G is of cardinality < κ. A κ-free group G is strongly κ-free provided each subset X ⊆ G with ∣X∣ < κ is included in a κ-pure subgroup of G generated by fewer than κ elements. Let ZFC denote the Zermelo-Fraenkel axioms of set theory including the axiom of choice, and V = L denote the assertion that every set is constructible. The theory ZFC + V = L is consistent provided ZFC is consistent [6].
The set theoretic concepts in the following statement will be reviewed in Section 3 but the reader can, for example, let κ be any uncountable successor cardinal (e.g. ℵ 1 , ℵ 2 , ℵ ω+1 ): Theorem 1.1. (ZFC + V = L) Let κ be an uncountable regular cardinal that is not weakly compact. There exists a group G of cardinality κ for which (1) G is freely indecomposable; (2) G is strongly κ-free; (3) the abelianization G/G′ is free abelian of cardinality κ. The hypotheses on the cardinal κ cannot be dropped since a κ-free group of cardinality κ must be free when κ is singular or weakly compact (see respectively [15] and [3]). A group as in the conclusion seems unusual since on a local level it is free, on a global level it is quite unfree and in fact indecomposable, but the abelianization is as decomposable as possible. Theorem 1.1 minus condition (3) was proved in [4] and the construction apparently does not have free abelianization; indeed, the proof that their groups are freely indecomposable involves abelianizing. A non-free ℵ 1 -free group of cardinality ℵ 1 which abelianizes to a free abelian group was produced by Bitton [1] using only ZFC, and Theorem 1.1 can be considered a constructible universe strengthening of his result. The first construction of a nonfree almost free group of cardinality ℵ 1 was given by Higman [7] without any extra set theoretic assumptions; a strongly ℵ 1 -free group of cardinality ℵ 1 produced only from ZFC was given by Mekler [13]. The reader can find other results related to almost free (abelian) groups in such works as [5], [12], [2]. We note that it is not possible to produce a group G of cardinality ≥ κ whose every subgroup of cardinality κ satisfies conditions (1)-(3) of Theorem 1.1. This is because for every uncountable locally free group G there exists a free subgroup H ≤ G with ∣H∣ = ∣G∣ [14, Theorem 1.1].
Finally, we mention that the construction used in proving Theorem 1.1 allows one, as in [4], to construct 2^κ many pairwise non-isomorphic groups of this description using [16].

Some Group Theoretic Lemmas

The following appears as Lemma 1 in [7]: Lemma 2.1. If K is a free factor of G and H is a subgroup of G then H ∩ K is a free factor of H. We will use the following construction in our proof of Theorem 1.1: Construction 2.2. Suppose that we have a free group F a with free decomposition F a = F 0 * F 1 * F 2,a with F 1 nontrivial and F 2,a freely generated by {t n } n∈ω . Let F b be a free group with free decomposition F b = F 0 * F 1 * F 2,b where F 2,b is freely generated by {z n } n∈ω . Let y be an element of a free generating set for F 1 . Define φ ∶ F a → F b so that φ ↾ F 0 * F 1 is the identity and φ(t n ) = yz −1 n+1 z n z n+1 . Property (iv) of the following lemma compares to [1, Lemma 2.9 (3) & (4)]: Lemma 2.3. Let F 2,a,n = ⟨t 0 , . . . , t n−1 ⟩. The map φ satisfies the following: (i) φ is a monomorphism; (ii) z n ∉ φ(F a ) for all n ∈ ω; (iii) φ(F a ) is not a free factor of F b but φ(F 0 * F 1 * F 2,a,n ) is a free factor for every n ∈ ω; (iv) the equality (φ(F a )) ′ = φ(F a ) ∩ F ′ b holds and the natural induced map φ ∶ F a /F ′ a → F b /F ′ b is an isomorphism. Proof. Fix a possibly empty set of free generators X for F 0 and a possibly empty set Y such that the disjoint union Y ⊔ {y} freely generates F 1 . Now φ(F a ) is the subgroup ⟨X ∪ Y ∪ {y} ∪ {yz −1 n+1 z n z n+1 } n∈ω ⟩ = ⟨X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω ⟩ and the generators X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω freely generate φ(F a ) since they satisfy the Nielsen property (see [11, I.2] and [8, Example 2.8 (iii)]). The set X ∪ Y ∪ {y} ∪ {y −1 t n } n∈ω is a free generating set for F a and maps bijectively under φ to the free generating set X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω for φ(F a ). Thus φ is a monomorphism and we have (i).
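The defining relation of Construction 2.2 can be sanity-checked mechanically. The following Python sketch (an illustration of ours, not part of the paper; the word encoding and the names `reduce_word`, `phi_t`, `recover_z` are our own) freely reduces words over generator/exponent pairs and verifies that z n is recovered inside F b as z n = z n+1 · y −1 · φ(t n ) · z −1 n+1 .

```python
# Free-group words as lists of (generator, exponent) pairs with exponent ±1.
def inv(word):
    # Inverse of a word: reverse the letters and negate exponents.
    return [(g, -e) for (g, e) in reversed(word)]

def reduce_word(word):
    # Free reduction via a stack: cancel adjacent mutually inverse letters.
    out = []
    for g, e in word:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()
        else:
            out.append((g, e))
    return out

def phi_t(n):
    # The word phi(t_n) = y * z_{n+1}^{-1} * z_n * z_{n+1} in F_b.
    zn, zn1 = f"z{n}", f"z{n+1}"
    return [("y", 1), (zn1, -1), (zn, 1), (zn1, 1)]

def recover_z(n):
    # Inside F_b one has z_n = z_{n+1} * y^{-1} * phi(t_n) * z_{n+1}^{-1}.
    zn1 = [(f"z{n+1}", 1)]
    return reduce_word(zn1 + [("y", -1)] + phi_t(n) + inv(zn1))
```

For instance `recover_z(0)` reduces to the single letter `[("z0", 1)]`, matching the algebra `z_1 (y^{-1} t_0) z_1^{-1} = z_0` used in the proof of Lemma 2.4.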
Moreover since X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω satisfies the Nielsen property it is clear that z n ∉ φ(F a ) for all n ∈ ω and we have (ii). For (iii) we notice that ⟨⟨φ(F a )⟩⟩ = F b since the free generators listed for F b are conjugate in F b to free generators of φ(F a ). Since φ(F a ) is a proper subgroup of F b this means that φ(F a ) cannot be a free factor of F b . On the other hand we have φ(F 0 * F 1 * F 2,a,n ) generated by X ∪ Y ∪ {y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n } and we claim that X ∪ Y ∪ {y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n } ∪ {z n , z n+1 , . . .} is a free generating set for F b . It is plain that ⟨{y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n } ∪ {z n }⟩ = ⟨{y} ∪ {z 0 , . . . , z n }⟩ and since finitely generated free groups are Hopfian we get that {y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n } ∪ {z n } is a free generating set of the free factor ⟨{y} ∪ {z 0 , . . . , z n }⟩ of F b . Thus indeed X ∪ Y ∪ {y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n } ∪ {z n , z n+1 , . . .} is a free generating set for F b and so φ(F 0 * F 1 * F 2,a,n ) = ⟨X ∪ Y ∪ {y} ∪ {yz −1 1 z 0 z 1 , . . . , yz −1 n z n−1 z n }⟩ is a free factor of F b and we have shown (iii). For condition (iv) certainly the inclusion (φ(F a )) ′ ⊆ φ(F a ) ∩ F ′ b holds. Moreover a word w in (X ∪ Y ∪ {y} ∪ {z n } n∈ω ) ±1 represents an element of F ′ b if and only if the sum of the exponents of each element in X ∪ Y ∪ {y} ∪ {z n } n∈ω is 0. By treating each element of {z −1 n+1 z n z n+1 } n∈ω as an unreducing letter, a word in (X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω ) ±1 represents an element of (φ(F a )) ′ if and only if the sum of the exponents of each element of X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω is 0 . This is clearly equivalent to having the sum of the exponents of each letter in X ∪Y ∪{y}∪{z n } n∈ω be 0, so we have (φ(F )) ′ = φ(F ) ∩ F ′ . Thus the map φ ∶ F a F ′ a → F b F ′ b is injective. 
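The exponent-sum criterion used in the proof of (iv) is easy to try on examples. The short Python sketch below (our illustration, not the paper's; the names are ours) computes the image of a word in the abelianization and confirms that commutators have all exponent sums 0, while φ(t 0 ) = y z −1 1 z 0 z 1 maps to y + z 0 .

```python
from collections import Counter

# Words as lists of (generator, exponent) pairs with exponent ±1.
def exponent_sums(word):
    # Image of the word in the abelianization: total exponent per generator.
    sums = Counter()
    for g, e in word:
        sums[g] += e
    return {g: s for g, s in sums.items() if s != 0}

def inv(word):
    return [(g, -e) for (g, e) in reversed(word)]

def commutator(a, b):
    # [a, b] = a b a^{-1} b^{-1}; all exponent sums cancel.
    return a + b + inv(a) + inv(b)

# phi(t_0) = y z_1^{-1} z_0 z_1 has exponent sums y: 1, z0: 1, z1: 0.
phi_t0 = [("y", 1), ("z1", -1), ("z0", 1), ("z1", 1)]
```

So a word lies in the commutator subgroup exactly when `exponent_sums` returns the empty dictionary, which is the criterion the proof applies on both generating sets.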
Moreover since each element of X ∪ Y ∪ {y} ∪ {z n } n∈ω is conjugate in F b to an element of X ∪ Y ∪ {y} ∪ {z −1 n+1 z n z n+1 } n∈ω the map φ is onto as well. We recall some notions for free products of groups (see [11,IV.1]. Suppose that we have a free product L 0 * L 1 . We call the nontrivial elements in L 0 ∪ L 1 letters. Each element g ∈ L 0 * L 1 can be expressed uniquely as a product of letters g = h 0 h 1 ⋯h n−1 such that h i ∈ L 0 if and only if h i+1 ∈ L 1 for all 0 ≤ i < n − 1 (this is the reduced or normal form of the element). We call the number n the length of g, denoted Len(g). Thus the identity element has length 0 and a nontrivial element has length 1 if and only if it is a letter. Given a writing of an element of L 0 * L 1 as a product of nontrivial elements h 0 ⋯h n−1 in L 0 ∪ L 1 it is easy to determine the normal form of the element by taking a consecutive pair h i h i+1 for which h i , h i+1 are both in L 0 or both in L 1 and performing the group multiplication in the appropriate group. This either gives the trivial element, in which case we remove the pair h i h i+1 from the expression, or it gives a nontrivial element g i and we replace h i h i+1 with g i in the expression. This process reduces the number of letters in the writing by at least 1 every time and so the process must eventually terminate, and it terminates at the normal form. We will generally consider an element of L 0 * L 1 as a word (the normal form) in the letters. For words w 0 and w 1 in the letters we will use w 0 ≡ w 1 to represent that w 0 and w 1 are the same word letter-for-letter when read from left to right. We say an element of L 0 * L 1 is cyclically reduced if its reduced form is either of length 0 or 1 or begins with a letter in L j and ends with a letter in L 1−j . 
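The reduction process for free products just described can be made concrete. In the Python sketch below (an illustration with L 0 = L 1 = Z for concreteness; the encoding and names are ours, not the paper's) a letter is a pair (factor, n), `normal_form` performs exactly the described merging and cancellation of adjacent same-factor letters, and `cyclic_reduce` conjugates by the inverse of the first letter until the word is cyclically reduced.

```python
# A letter of L0 * L1 is (factor, n) with factor in {0, 1} and n a nonzero
# integer (taking L0 = L1 = Z). Words are lists of letters.
def normal_form(word):
    # Multiply adjacent letters from the same factor (here: add the integers),
    # delete trivial results, repeat until the word alternates.
    out = []
    for f, n in word:
        if n == 0:
            continue
        if out and out[-1][0] == f:
            m = out[-1][1] + n
            out.pop()
            if m != 0:
                out.append((f, m))
        else:
            out.append((f, n))
    return out

def fp_inv(word):
    return [(f, -n) for (f, n) in reversed(word)]

def length(word):
    # Len(g) of the text: number of letters in the normal form.
    return len(normal_form(word))

def cyclic_reduce(word):
    # While the first and last letters lie in the same factor, conjugate by
    # the inverse of the first letter (rotate it to the end) and reduce.
    w = normal_form(word)
    while len(w) >= 2 and w[0][0] == w[-1][0]:
        w = normal_form(w[1:] + [w[0]])
    return w

def is_cyclic_shift(u, v):
    return len(u) == len(v) and any(v[i:] + v[:i] == u for i in range(max(len(v), 1)))
```

For example, conjugating a cyclically reduced word of length 2 by any word and then cyclically reducing returns a cyclic shift of it, which is the fact about conjugacy classes invoked in the proof of Lemma 2.4.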
A cyclically reduced element is of minimal length in its conjugacy class and if two cyclically reduced elements are conjugate to each other then the normal form of one is a cyclic shift of the other (w is a cyclic shift of u if we can write w as a concatenation w ≡ v 0 v 1 such that u ≡ v 1 v 0 ). Each element g of L 0 * L 1 is conjugate to a cyclically reduced element h, and we call Len(h) the cyclic length of g. Lemma 2.4. If ψ ∶ F a → L 0 * L 1 is a monomorphism such that ψ(F 1 ) = ψ(F a ) ∩ L 0 and ψ(F 0 * F 2,a ) = ψ(F a ) ∩ L 1 then there does not exist a monomorphism θ ∶ F b → L 0 * L 1 for which θ ○ φ = ψ. Proof. Suppose on the contrary that such a θ exists. We will treat F a as a subgroup of F b since φ is a monomorphism and treat F a and F b as subgroups of L 0 * L 1 such that F 1 = F a ∩ L 0 and F 0 * F 2,a = F a ∩ L 1 . We have that t n = yz −1 n+1 z n z n+1 for all n ∈ ω, and so y −1 t n = z −1 n+1 z n z n+1 . Since y −1 ∈ L 0 and t n ∈ L 1 we see that y −1 t n is cyclically reduced and Len(y −1 t n ) = 2. Therefore for each n ∈ ω we know z n is of cyclic length 2, so Len(z n ) ≥ 2, and any cyclic reduction of z n must be a cyclic shift of y −1 t n . We claim that Len(z n ) ≥ Len(z n+1 ) + 1 for all n ∈ ω. This immediately gives Len(z 0 ) ≥ Len(z n ) + n for all n ∈ ω which is a contradiction. It remains to prove that Len(z n ) ≥ Len(z n+1 ) + 1. To economize on writing subscripts we will show that Len(z 0 ) ≥ Len(z 1 ) + 1 and the same proof will work for general n by adding n to the subscripts of t 0 , z 0 , t 1 , z 1 . Suppose to the contrary that Len(z 0 ) ≤ Len(z 1 ). We have z 1 y −1 t 0 z −1 1 = z 0 . Since z 1 is nontrivial it must end with a letter of L 0 or a letter of L 1 . Case A: z 1 ends with a letter of L 0 . 
In this case it must be that z 1 ends with y since otherwise we readily see from the reduced form for z 0 = z 1 y −1 t 0 z −1 1 that Len(z 0 ) = (2 Len(z 1 ) + 2) − 1 = 2 Len(z 1 ) + 1 > Len(z 1 ) contrary to our assumption that Len(z 0 ) ≤ Len(z 1 ). Also, the second-to-last letter of z 1 must be t −1 0 since otherwise L(z 0 ) = (2 Len(z 1 ) + 2) − 3 = 2 Len(z 1 ) − 1 > Len(z 1 ) since Len(z 1 ) ≥ 2. Thus we may write z 1 as a reduced word z 1 = w(t −1 0 y) k with k ≥ 1 and maximal. Notice that w is nonempty, for otherwise z 1 = (t −1 0 y) k and z 0 = y −1 t 0 and therefore z 0 and z 1 commute instead of generating a free subgroup of rank 2. The word w must end with a letter from L 0 since w(t −1 0 y) k is reduced. Moreover Len(w) ≥ 2 since otherwise w = g ∈ L 0 and z 1 is conjugate to (t −1 0 y) k−1 t −1 0 yg and cyclically reducing this word cannot produce a cyclic shift of y −1 t 1 . Then the second-to-last letter of w is an element from L 1 . If the last letter of w is y then the second-to-last letter of w is not t −1 0 by maximality of k and we get Len(z 0 ) = (2 Len(z 1 ) + 2) − 4k − 2 − 1 = 2 Len(z 1 ) − 1 − 4k. If, on the other hand, the last letter of w is g ∈ L 0 ∖ {1, y} then we see that Len(z 0 ) = (2 Len(z 1 ) + 2) − 4k − 1 = 2 Len(z 1 ) − 4k + 1 so in either case we know Len(z 0 ) ≥ 2 Len(z 1 ) − 1 − 4k. Since we are assuming Len(z 1 ) ≥ Len(z 0 ) we have Len(z 1 ) ≤ 4k + 1. Write z 1 = (y −1 t 0 ) m u(t −1 0 y) k with m ≥ 0 maximal. Certainly u is nontrivial since otherwise z 1 would commute with z 0 . Also u must begin and end with a letter from L 0 . If m > k then Len(z 1 ) = 2m + 2k + Len(u) > 4k + 1. If k = m then since 4k + 1 ≥ Len(z 1 ) = 2m + 2k + Len(u) = 4k + Len(u) we get Len(u) = 1. Then z 1 is conjugate to an element of length 1, contradicting the fact that z 1 is of cyclic length 2. Thus k > m. 
If k > m+1 then by maximality of m we conjugate z 1 to a word u(t −1 0 y) k−m−1 t −1 0 y where u might or might not start with y −1 but if it starts with y −1 the second letter of u would not be t 0 . Thus whether or not u starts with y −1 we know that u(t −1 0 y) k−m−1 t −1 0 y has cyclic length at least 3 despite being conjugate to z 1 , a contradiction. Thus we know precisely that k = m + 1 and so z 1 is conjugate to ut −1 0 y. If u does start with y −1 , say u ≡ y −1 v for some word v, then we get z 1 conjugate to vt −1 0 . Since the cyclic length of z 1 is 2 we know that v is nonempty, must start with a letter from L 1 and that letter must not be t 0 by the maximality of m. Then v ≡ tv 0 with t ∈ L 1 ∖ {1, t −1 0 } and v 0 begins and ends with a letter in L 0 . Thus z 1 is conjugate to v 0 (t −1 0 t) which is a cyclically reduced word. Since z 1 has cyclic length 2 and cyclic reduction y −1 t 1 we obtain that v 0 = y −1 and t −1 0 t = t 1 . But now z 1 = (y −1 t 0 ) k−1 u(t −1 0 y) k = (y −1 t 0 ) k−1 y −1 (t 0 t 1 )y −1 (t −1 0 y) k so that z 1 is expressed as a product of elements in F a , contradicting Lemma 2.3 part (ii). Therefore u must start with some g ∈ L 0 ∖ {1, y −1 }, say u ≡ gv 0 . Now we conjugate ut −1 0 y ≡ gv 0 t −1 0 y to v 0 t −1 0 (yg) which is cyclically reduced (by considering (yg) as a single letter in L 0 ). But v 0 t −1 0 (yg) cannot possibly be a cyclic shift of y −1 t 1 , regardless of whether v 0 is empty or not. This finishes the proof of Case A. Case B: z 1 ends with a letter of L 1 . The reasoning in this case follows that in the other case more or less and we give the sketch. Arguing as before we see that z 1 must end with t 0 and the second-to-last letter must be y −1 . Write z 1 ≡ w(y −1 t 0 ) k with k ≥ 1 maximal. As before, w is nonempty. Again we have Len(z 0 ) ≥ 2 Len(z 1 ) − 1 − 4k from which Len(z 1 ) ≤ 4k + 1. Writing z 1 ≡ (t −1 0 y) m u(y −1 t 0 ) k with m ≥ 0 maximal we again see that u is nonempty and that k = m + 1.
Also, u must begin and end with letters from L 1 . Therefore z 1 is conjugate to uy −1 t 0 . If u ≡ t −1 0 v for some v then we get z 1 conjugate to vy −1 and as m was maximal the word v starts with an element of L 0 which is not y. Then v ≡ gv 0 with g ∈ L 0 ∖{1, y} and v 0 nontrivial since u, and therefore v, must end in a letter from L 1 . Conjugating vy −1 to the cyclically reduced word v 0 (y −1 g) we must have that this word is a cyclic shift of y −1 t 1 . Then v 0 = t 1 but y −1 g cannot be y −1 since g was nontrivial. Therefore it must be that u ≡ tv for some t ∈ L 1 ∖ {1, t 0 }. Conjugating uy −1 t 0 = tvy −1 t 0 to the cyclically reduced word vy −1 (t 0 t), it must be that this word is a cyclic shift of y −1 t 1 . Then v is empty and t 0 t = t 1 . Therefore z 1 = (t −1 0 y) k−1 u(y −1 t 0 ) k = (t −1 0 y) k−1 (t −1 0 t 1 ) (y −1 t 0 ) k and we have z 1 ∈ F a , which contradicts Lemma 2.3 part (ii). This completes the proof of Case B and of the lemma. Proof of the Main Theorem The groups that we produce for Theorem 1.1 will follow the induction used in [4, Theorem 2.2], using Construction 2.2 at the key stages. We will use combinatorial principles which follow from ZFC + V = L to rule out any possible free decomposition while ensuring strong κ-freeness and free abelianization. We first review some concepts from set theory. Definitions 3.1. (see [9]) Recall that a cardinal number is naturally considered as an ordinal number which cannot be injected into a proper initial subinterval of itself. A subset E of an ordinal α is bounded in α if there exists β < α which is an upper bound on E. The cofinality of an ordinal is the least cardinal κ for which there exists an unbounded E ⊆ α of cardinality κ. An infinite cardinal κ is regular if the cofinality of κ is κ. A subset C of ordinal α is club if it is unbounded in α and closed under the order topology in α. The intersection of two club sets in an uncountable regular cardinal is again a club set. 
A subset E of ordinal α is stationary if it has nonempty intersection with every club subset of α. The intersection of a club set and a stationary set in a regular cardinal is again stationary. We mention that weakly compact cardinals are inaccessible and it is therefore consistent to assume that a universe of set theory does not contain any (see [9, Chapters 9, 17]). The following is a theorem of Solovay (see [16] or [9, Theorem 8.10]): Theorem 3.2. If κ is an uncountable regular cardinal then each stationary subset of κ can be decomposed as the disjoint union of κ many stationary subsets of κ. We quote Jensen's ◇ κ (E) principle (see [10, Lemma 6.5] or [9, 27.16]) together with an easy consequence of it, stated as Theorem 3.3 and Remark 3.4 below. To verify the remark, give the product {0, 1} × κ the lexicographic order; there is an order isomorphism f ∶ {0, 1} × κ → κ under which f (0, α) = α for every limit ordinal α < κ. Let f i ∶ κ → κ be given by f i (α) = f (i, α) for each i ∈ {0, 1}, so f 0 has disjoint image from f 1 . Notice that for each limit ordinal α < κ and X ⊆ κ we have f i (X ∩ α) = f i (X) ∩ α. Letting E 0 be the intersection of E with the set of limit ordinals below κ we have that E 0 is stationary as the intersection of a stationary with a club. For α ∈ E 0 define T 0 α = f −1 0 (S α ) and T 1 α = f −1 1 (S α ). Given L 0 , L 1 ⊆ κ we let J = f 0 (L 0 ) ⊔ f 1 (L 1 ) and notice that {α ∈ E 0 ∣ L 0 ∩ α = T 0 α and L 1 ∩ α = T 1 α } = {α ∈ E 0 ∣ f 0 (L 0 ∩ α) ⊔ f 1 (L 1 ∩ α) = S α } = {α ∈ E 0 ∣ (f 0 (L 0 ) ∩ α) ⊔ (f 1 (L 1 ) ∩ α) = S α } = {α ∈ E 0 ∣ J ∩ α = S α } is stationary in κ. Letting T 0 α = ∅ = T 1 α for α ∈ E ∖ E 0 gives the desired sequence of ordered pairs. Theorem 3.5. (ZFC + V = L) If κ is an uncountable regular cardinal which is not weakly compact then there is a stationary subset E ⊆ κ for which each element of E is a limit ordinal of cofinality ω and for each limit ordinal α < κ the set E ∩ α is not stationary in α. Proof. (of Theorem 1.1) Let κ be as in the hypotheses of Theorem 1.1. We inductively define a group structure on κ and this will serve as our group G. Let γ 0 = ω, γ α+1 = γ α + γ α and γ β = ⋃ α<β γ α for limit ordinal β < κ.
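For orientation, two standard computations consistent with the definitions above (our illustration, not from the paper), written in LaTeX:

```latex
% 1. The limit ordinals below \omega_1 form a club set: they are closed under
%    suprema, and unbounded since \sup_n(\beta+n) is a limit above any \beta.
% 2. For the sequence \gamma_\alpha of the proof, ordinal arithmetic gives
\gamma_0=\omega,\qquad
\gamma_{n+1}=\gamma_n+\gamma_n=\gamma_n\cdot 2,\qquad
\gamma_n=\omega\cdot 2^{\,n}\ (n<\omega),\qquad
\gamma_\omega=\bigcup_{n<\omega}\omega\cdot 2^{\,n}=\omega^2 .
% Each difference \gamma_{\alpha+1}\setminus\gamma_\alpha has the same
% cardinality as \gamma_\alpha, which the successor step of the
% construction uses to pick the bijections below.
```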
The set {γ α } α<κ is obviously a club set in κ. We will define a group structure on each γ α so that γ α will be a subgroup of γ β whenever α < β and thus the group structure on κ = ⋃ α<κ γ α will be well defined. Select stationary E 0 ⊆ κ satisfying the conclusion of Theorem 3.5 and such that κ ∖ E 0 is stationary in κ (using Theorem 3.2). It is easy to check that E = {γ α } α∈E0 is also stationary in κ. Let {(T 0 γα , T 1 γα )} γα∈E be a sequence as in Remark 3.4. The following properties will hold on the groups for all α < β < κ: (i) γ α is free of infinite rank; (ii) γ α is a proper subgroup of γ β ; (iii) γ α is a free factor of γ β if and only if γ α ∉ E; (iv) γ ′ α = γ ′ β ∩ γ α . We let γ 0 be free of countably infinite rank. Suppose that we have defined the group structure on γ α for all α < β < κ such that the above conditions hold. If β = δ + 1 and γ δ ∉ E then we give γ β the group structure obtained by any bijection f ∶ γ β → γ δ * Z such that f ↾ γ δ is the identity map. Such a bijection exists since ∣γ β ∖ γ δ ∣ = ∣γ δ ∣. Conditions (i), (ii) and (iv) obviously hold. Notice also that for α < β we have γ α a free factor of γ β if and only if γ α is a free factor of γ δ (this uses Lemma 2.1) and so (iii) also holds. If β = δ + 1 and γ δ ∈ E then we consider two subcases. Firstly suppose that there exists a strictly increasing sequence {α n } n∈ω for which • γ αn ∉ E; • ⋃ n∈ω α n = δ; • γ αn = (γ αn ∩ T 0 γ δ ) * (γ αn ∩ T 1 γ δ ) with both free factors nontrivial for all n ∈ ω. From this it follows immediately that γ δ = (γ δ ∩ T 0 γ δ ) * (γ δ ∩ T 1 γ δ ). Since {γ αn } n∈ω is a strictly increasing sequence of sets we know for each n ∈ ω that it is either the case that γ αn+1 ∩ T 0 γ δ ⊋ γ αn ∩ T 0 γ δ or that γ αn+1 ∩ T 1 γ δ ⊋ γ αn ∩ T 1 γ δ . By choosing a subsequence we can assume without loss of generality that γ αn+1 ∩ T 0 γ δ ⊋ γ αn ∩ T 0 γ δ for all n ∈ ω.
Since each γ αn ∉ E we know by our induction that γ αn is a free factor of γ δ and also of γ αn+1 . Then by Lemma 2.1 we know that γ αn ∩ T 0 γ δ is a free factor of γ δ ∩ T 0 γ δ and of γ αn+1 ∩ T 0 γ δ . Inductively select a free basis Q of γ δ ∩ T 0 γ δ such that Q∩γ αn ∩T 0 γ δ is a free generating set for γ αn ∩T 0 γ δ for each n ∈ ω. For each n ∈ ω select t n ∈ Q ∩ γ αn+1 ∩ T 0 γ δ ∖ γ αn and let X = Q ∖ {t n } n∈ω . Let {y} ⊔ Y be a free generating set for γ δ ∩ T 1 γ δ . Let γ β be a group freely generated by X ∪ {z n } n∈ω ∪ {y} ∪ Y such that the inclusion map ι ∶ γ δ → γ β is the map φ from Construction 2.2. Certainly conditions (i) and (ii) hold, and condition (iv) holds by Lemma 2.3 (iv). Also we know γ δ is not a free factor of γ β by Lemma 2.3 (iii). Notice that for each n ∈ ω we have ⟨X ∪ {t 0 , . . . , t n−1 } ∪ {y} ∪ Y ⟩ is a free factor of γ β by Lemma 2.3 (iii). For each n ∈ ω we know γ αn is a free factor of ⟨X ∪ {t 0 , . . . , t n−1 } ∪ {y} ∪ Y ⟩ and for each α < δ there exists n ∈ ω for which α < α n . It follows by Lemma 2.1 that for α < δ we have γ α a free factor of γ β if and only if γ α ∉ E, and condition (iii) holds. On the other hand suppose β = δ + 1 and γ δ ∈ E and no such increasing sequence {α n } n∈ω exists. Since δ ∈ E 0 is of cofinality ω and E 0 ∩ δ is not stationary in δ we may select a strictly increasing sequence α n ∉ E 0 such that ⋃ n∈ω α n = δ. Then γ αn ∉ E. As each γ αn is a free factor of γ δ and the α n are strictly increasing we may select by induction a free generating set Q for γ δ such that Q ∩ γ αn is a free generating set for γ αn . Pick y ∈ γ α0 ∩Q and t n ∈ γ αn+1 ∩Q∖γ αn and letting X = ∅ and Y = Q∖({t n } n∈ω ∪{y}) we let γ β be a group freely generated by X ∪{z n } n∈ω ∪{y}∪Y such that the inclusion map ι ∶ γ δ → γ β is the map φ from Construction 2.2. The check that the induction conditions still hold is as in the other subcase. 
When β < κ is a limit ordinal the binary operation on γ β = ⋃ α<β γ α is defined by that on the γ α for α < β. By how E 0 was chosen we know E 0 ∩ β is not stationary in β and so we select a club set C ⊆ β for which C ∩ E 0 = ∅. By induction we know that for α, δ ∈ C with α < δ we have γ α is a proper free factor of γ δ and since C is closed we have γ δ = ⋃ α<δ,α∈C γ α for any δ ∈ C which is a limit under the ordering of κ restricted to C. Then by induction on C we can select a free generating set Q for γ β = ⋃ α∈C γ α for which Q ∩ γ α is a free generating set for γ α for each α ∈ C. Then (i) and (ii) hold. For α < β it is clear by Lemma 2.1 that γ α is a free factor of γ β if and only if γ α is a free factor of γ δ for some δ ∈ C with δ > α, and since γ δ is a free factor of γ β for each δ ∈ C condition (iii) holds. Condition (iv) follows by induction since γ ′ β = ⋃ α<β γ ′ α . This completes the construction of the group structure on κ. We verify that conditions (1)-(3) of the statement of Theorem 1.1 hold. Imagine for contradiction that κ = L 0 * L 1 with L 0 , L 1 nontrivial subgroups of κ. Letting C = {α < κ γ α = (γ α ∩ L 0 ) * (γ α ∩ L 1 )} it is straightforward to verify that C is club in κ. Since the set κ ∖ E 0 is stationary in κ we know D = C ∖ E 0 is stationary and therefore unbounded in κ. Then the closure D is club in κ, and so is {γ α } α∈D . Then there exists γ δ ∈ E with δ ∈ D and L 0 ∩ γ δ = T 0 γ δ and L 1 ∩ γ δ = T 1 γ δ . Then δ ∈ E 0 and so δ ∈ E 0 ∩ D. As δ ∈ E 0 we know that δ has cofinality ω in κ. Certainly δ ∉ D = C ∖ E 0 and so there exists a strictly increasing sequence {α n } n∈ω with α n ∈ C ∖ E 0 such that ⋃ n∈ω α n = δ. Then γ αn = (γ αn ∩ L 0 ) * (γ αn ∩ L 1 ) = (γ αn ∩ T 0 γ δ ) * (γ αn ∩ T 1 γ δ ). By our construction we know that γ δ includes into γ δ+1 in such a way that γ δ+1 is not a subgroup of L 0 * L 1 (using Lemma 2.4), and this is a contradiction. Thus κ is freely indecomposable and we have part (1). 
For part (2) we let X ⊆ κ with ∣X∣ < κ. By the regularity of κ select α < κ large enough that γ α ⊇ X. Notice that γ α+1 ∉ E. Any subgroup H of κ with ∣H∣ < κ and H ≥ γ α+1 satisfies H ≤ γ β for some β > α by the regularity of κ. Since γ α+1 is a free factor of γ β by our construction, we have by Lemma 2.1 that γ α+1 is a free factor of H. Thus γ α+1 is κ-pure and the group κ is strongly κ-free and we have verified (2). For part (3) we notice that since γ ′ α = γ α ∩ γ ′ β for each α < β and since κ ′ = ⋃ β<κ γ ′ β the equality γ ′ α = γ α ∩ κ ′ holds for all α < κ. In particular the homomorphism induced by the inclusion map γ α /γ ′ α → κ/κ ′ is injective and so the abelianization of κ is the increasing union of free abelian subgroups γ α /γ ′ α . In our construction, when β = δ + 1 and γ δ ∈ E the map induced by inclusion γ δ /γ ′ δ → γ β /γ ′ β is an isomorphism by Lemma 2.3 (iv). When β = δ + 1 and γ δ ∉ E we have γ δ /γ ′ δ as a proper direct summand of γ β /γ ′ β . As well we have γ β /γ ′ β = ⋃ α<β γ α /γ ′ α for limit β < κ. Thus we may inductively select a free abelian basis for the abelianization of κ, and the free basis will be of cardinality κ since there are κ many β for which β = δ + 1 with γ δ ∉ E. We have verified (3) and finished the proof of the theorem.

Theorem 3.3. (ZFC + V = L; see [10, Lemma 6.5] or [9, 27.16]) If κ is an uncountable regular cardinal and E ⊆ κ is stationary in κ there exists a sequence {S α } α∈E such that S α ⊆ α and for any J ⊆ κ the set {α ∈ E ∣ J ∩ α = S α } is stationary in κ.

Remark 3.4. (an easy consequence of Theorem 3.3, see [4, page 97]) From a sequence given by Theorem 3.3 one obtains a sequence of ordered pairs {(T 0 α , T 1 α )} α∈E for which T 0 α , T 1 α ⊆ α and given any subsets L 0 , L 1 ⊆ κ the set {α ∈ E ∣ L 0 ∩ α = T 0 α and L 1 ∩ α = T 1 α } is stationary in κ. Theorem 3.5 above is one more result of Jensen (see [10, Theorem 5.1], [4, Theorem 1.3]).
References

[1] C. Bitton, The abelianization of almost free groups, Proc. Amer. Math. Soc. 129 (2000), 1799-1803.
[2] P. Eklof, On the existence of κ-free abelian groups, Proc. Amer. Math. Soc. 47 (1975), 65-72.
[3] P. Eklof, Methods of logic in abelian group theory, Abelian Group Theory, Springer Verlag Lecture Notes in Mathematics 616 (1977), 251-269.
[4] P. Eklof, A. Mekler, On constructing indecomposable groups in L, J. Algebra 49 (1977), 96-103.
[5] P. Eklof, A. Mekler, Almost free modules, set-theoretic methods, North-Holland, Amsterdam, 1990.
[6] K. Gödel, The consistency of the axiom of choice and of the generalized continuum hypothesis, Ann. Math. Studies 3 (1940).
[7] G. Higman, Almost free groups, Proc. London Math. Soc. 1 (1951), 284-290.
[8] G. Higman, Some countably free groups, Proceedings of the Singapore conference on group theory, Walter de Gruyter, Berlin, 1990, 129-150.
[9] T. Jech, Set Theory: The Third Millenium Edition, Revised and Expanded, Springer-Verlag, 2006.
[10] R. Jensen, The fine structure of the constructible hierarchy, Ann. Math. Logic 4 (1972), 229-308.
[11] R. Lyndon, P. Schupp, Combinatorial Group Theory, Springer-Verlag, New York 1977, 2001.
[12] M. Magidor, S. Shelah, When does almost free imply free? (for groups, transversals, etc.), J. Amer. Math. Soc. 7 (1994), 769-830.
[13] A. Mekler, How to construct almost free groups, Can. J. Math. 32 (1980), 1206-1228.
[14] T. Nishinaka, Uncountable locally free groups and their group rings, J. Group Theory 21 (2018), 101-105.
[15] S. Shelah, A compactness theorem for singular cardinals, free algebras, Whitehead problem and transversals, Israel J. Math. 21 (1975), 319-349.
[16] R. Solovay, Real-valued measurable cardinals, in Axiomatic Set Theory (D. S. Scott, ed.), Proc. Sympos. Pure Math., Vol. XIII, Part I, Providence, Rhode Island, 1971.

Ikerbasque - Basque Foundation for Science and Matematika Saila, UPV/EHU, Sarriena S/N, 48940, Leioa - Bizkaia, Spain
E-mail address: [email protected]
[]
[ "A Security Architecture for Railway Signalling", "A Security Architecture for Railway Signalling" ]
[ "Christian Schlehuber [email protected] \nDB Netz AG\nGermany\n", "Markus Heinrich \nDept of Computer Science\nDarmstadtTUGermany\n", "Tsvetoslava Vateva-Gurova \nDept of Computer Science\nDarmstadtTUGermany\n", "Stefan Katzenbeisser \nDept of Computer Science\nDarmstadtTUGermany\n", "Neeraj Suri [email protected] \nDept of Computer Science\nDarmstadtTUGermany\n" ]
[ "DB Netz AG\nGermany", "Dept of Computer Science\nTU Darmstadt\nGermany", "Dept of Computer Science\nTU Darmstadt\nGermany", "Dept of Computer Science\nTU Darmstadt\nGermany", "Dept of Computer Science\nTU Darmstadt\nGermany" ]
[]
We present the proposed security architecture Deutsche Bahn plans to deploy to protect its trackside safety-critical signalling system against cyber-attacks. We first present the existing reference interlocking system that is built using standard components. Next, we present a taxonomy to help model the attack vectors relevant for the railway environment. Building upon this, we present the proposed "compartmentalized" defence concept for securing the upcoming signalling systems.
10.1007/978-3-319-66266-4_21
[ "https://arxiv.org/pdf/2009.04207v1.pdf" ]
40,250,820
2009.04207
a245d97690a61acbec9eacf3d73ffb634948548a
A Security Architecture for Railway Signalling

Christian Schlehuber [email protected], DB Netz AG, Germany
Markus Heinrich, Dept of Computer Science, TU Darmstadt, Germany
Tsvetoslava Vateva-Gurova, Dept of Computer Science, TU Darmstadt, Germany
Stefan Katzenbeisser, Dept of Computer Science, TU Darmstadt, Germany
Neeraj Suri [email protected], Dept of Computer Science, TU Darmstadt, Germany

We present the proposed security architecture Deutsche Bahn plans to deploy to protect its trackside safety-critical signalling system against cyber-attacks. We first present the existing reference interlocking system that is built using standard components. Next, we present a taxonomy to help model the attack vectors relevant for the railway environment. Building upon this, we present the proposed "compartmentalized" defence concept for securing the upcoming signalling systems.

Introduction

The state of the art in safety-critical railway signalling typically entails the use of monolithic interlocking systems that are often proprietary, expensive and not easily exchangeable. Consequently, a transition to more cost-effective and growth-oriented open networks is desired that can also utilize commercial off-the-shelf (COTS) hardware and software, provided the safety requirements are met. These drivers have led Deutsche Bahn (DB) to explore transforming its signalling infrastructure using open networks and COTS to reduce cost and maintenance overhead. At the same time, the risk of cyber-attacks introduced by open networks and COTS needs to be explicitly addressed to avoid any compromise of safety. This work documents DB's ongoing experience in developing new signalling architectures that by design decouple safety and security functionalities. In this context, we first present a taxonomy of attacks outlining the potential cyber-threats relevant to protecting a railway signalling system.
Consequently, utilizing the actual layout of the currently used German railway command and control system, we propose a security architecture that explicitly delineates safety and security, and will be deployed by DB in Germany's new interlocking systems (ILS) to address security concerns. The architecture is compartmentalized into zones and conduits following IEC 62443 [6]. Thereby we regard the German pre-standard DIN VDE V 0831-104 [2], which is a guideline for applying IEC 62443 to the railway signalling domain with respect to the very strict safety requirements.

Current Interlocking Network Architecture

The reference architecture, as currently deployed by DB, is divided into three layers: the Operational Layer, the Interlocking Layer and the Field Element Area. The Operational Layer (upper blocks of Fig. 1) consists of an Operating Center and a Security Center. The Operating Center is responsible for the central monitoring and controlling of the system and is equipped with central switching points. The Security Center provides security services to the system such as security monitoring of certain communication channels and management of the Public Key Infrastructure (PKI). As depicted in Fig. 1, the communication between the Operational Layer and the Interlocking Layer of the reference architecture is encrypted. The Security Center has the same or higher security requirements compared to the rest of the components. The Interlocking Layer (middle blocks in Fig. 1) provides the safety logic of the system. The main components of the Interlocking Layer are the Technology Center and the interface to the European Train Control System (ETCS), as depicted in Fig. 1. The Technology Center comprises the ILS and auxiliary systems (e.g., needed for documenting the actions of the ILS). The ILS plays a central role in the reference architecture by ensuring the system's safety, given its critical role of controlling signals and switches and preventing any conflicting train movements.
The Field Element Area (FEA) (lowest blocks in Fig. 1) provides the interface to the actual trackside signalling elements called field elements. These are signals, points, and train detection systems, amongst others, that are steered by Object Controllers (OC). Communication across the components of the Operational and Interlocking layers takes place over a Wide Area Network (WAN) through the use of Standard Communication Interfaces. Typically, the Rail Safe Transport Application (RaSTA) Protocol [3] is used as a unified communication protocol for all the defined interfaces. RaSTA aims at guaranteeing safety in the communication of railway systems. Each RaSTA network is assigned a network identification number which is unique within the given transport layer. A safety code is used to guarantee the integrity of the transmitted messages. Required redundancy for the system's high availability is omitted in Fig. 1 to reduce complexity. As can be seen in Fig. 1, only the communication between the Operational and Interlocking Layers of the reference architecture is encrypted. This is insufficient from a security perspective, and naturally the entire communication chain across the Technology Center, the FEAs and the linking communication interfaces needs to be protected. However, enhancing the presented architecture in terms of security is not a trivial task, as various operational and compatibility constraints make introducing innovations to the interlocking system rather cumbersome. A complicating factor is ensuring that no safety violations are introduced by any security-related changes (i.e., proving freedom of interference). In a normal computational environment, addressing security issues might require rapid patching and frequent updates.
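The framing sketched above — a unique network identification number plus a safety code over each message — can be illustrated with a small toy. This is not the actual RaSTA wire format: RaSTA defines its own safety-code algorithms, and CRC32 is used here only as a stand-in integrity check; the function names and the frame layout are hypothetical.

```python
import struct
import zlib

def wrap(network_id: int, payload: bytes) -> bytes:
    """Prepend the network id and append a CRC32 stand-in for the safety code."""
    body = struct.pack(">I", network_id) + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

def unwrap(frame: bytes, expected_network_id: int) -> bytes:
    """Verify the safety code and the network id; raise on any mismatch."""
    body, (code,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) & 0xFFFFFFFF != code:
        raise ValueError("safety code mismatch: message corrupted")
    (network_id,) = struct.unpack(">I", body[:4])
    if network_id != expected_network_id:
        raise ValueError("wrong RaSTA network id")
    return body[4:]

frame = wrap(42, b"switch point 7 to diverging")
assert unwrap(frame, 42) == b"switch point 7 to diverging"
```

A receiver that detects a mismatched safety code or a foreign network id discards the frame; that reject-on-mismatch behaviour is what the safety layer relies on for message integrity.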
However, in the safety-critical railway environment, any change to critical infrastructure such as the signalling system that might affect the safety of the system requires explicit approval by the National Safety Authority. This can take significant time and hampers timely reaction to security risks. In addition, the limited hardware resources in the signalling system do not allow deploying widely-used security solutions that are computationally intensive. Moreover, it is expected that deployed systems are used over a long operational lifetime (typically decades) and also provide strong timeliness response guarantees. All of these constraints need to be explicitly addressed when proposing a security-oriented signalling system architecture. Given the physically large spatial scattering of the railway infrastructure, it is infeasible to install physical protection comparable to a limited-area factory premise. Access control and plant security, as important elements in a factory's security concept, do not apply to the full extent across the railway system. Only some parts -for example the interlocking computer -reside in a building that offers physical perimeter protection, while others (e.g., the field elements) lie unprotected along the rail tracks. In addition, we need to ensure safety and high availability of railway signalling systems. This is tightly coupled with the timeliness requirements of critical communication between network entities. In cases where we cannot preclude attacks, it is necessary to install monitoring systems that can detect ongoing attacks. For setting up a proper security concept, we first need to define the capabilities of the attacker we want to defend the system against. In the railway signalling community it is widely recognized that some security incidents are already covered by the established safety functions.
The design of DB's security architecture follows the standard IEC 62443 [6] and the German pre-standard DIN VDE V 0831-104 [2]. They classify the strength of attackers according to their (financial) resources, their motivation, and their knowledge. With the attacker strength in mind, we capture attacks that can be performed in a taxonomy that scopes the applicable security measures. A taxonomy can facilitate enhancing security, as it can represent the diverse attack scenarios that threaten the railway signalling system, and also allows consideration of future threats. While sophisticated attack scenarios have been considered by the taxonomies in [4,5,7-9] as well, most of them go beyond attack vectors and include information on the targeted system [4,8] that can be as detailed as software versions. However, unlike contemporary taxonomies built on full information access, we consider the systems from the operator perspective and do not know beforehand which technology the vendors use to meet the requirements. Thus, we are constrained to only model generic requirements of the systems. Figure 2 outlines our approach to categorize threats. On the top level we distinguish between directed and undirected attacks. This is justified by the following assumption: it is impossible for undirected attacks to cause an unsafe state in the signalling system, as they will typically not circumvent the existing safety measures. However, this class of attacks may affect the availability of the system. Since casualties could be the consequence, we consider impersonation as the most severe attack (i.e., an attacker being able to forge authentic messages of a network entity such as an OC or the ILS computer itself). As in any other network that comprises standard components, all known and unknown vulnerabilities pose a threat to the system in case they are exploited. Thus, vulnerabilities must be regarded in an attack model.
Due to the scattered physical layout of the network it is prone to many kinds of information-gathering attempts, and network entities like the field elements are difficult to protect against physical tampering. Although confidentiality is not an important target of signalling security, some information like cryptographic keys that are used to protect entities and communication channels, as well as account credentials, needs to be kept secret. A compromised key would enable more severe attacks on the system, for example impersonation. This interconnection shows that a holistic approach is needed to secure railway signalling and neither perimeter protection nor isolated solutions will suffice. Orthogonal to the presented threats are denial-of-service attacks, where no comprehensive countermeasure exists. The signalling systems mitigate this threat by utilizing redundancy and avoiding single points of failure. We do not explicitly depict redundancy in Figs. 1 and 3, though all signalling-relevant communication is performed over at least two separate channels provided by RaSTA. Entities such as the Security Center (from Fig. 1) also exist redundantly.

New Security Architecture for Interlocking Systems

For safety-related railway systems, the dominant requirements are integrity, timely delivery of critical messages and system availability. To ensure this, a Reliability, Availability, Maintainability, and Safety (RAMS) lifecycle has been introduced by EN 50126 [1] to make the current signalling systems resilient to internal faults and human error. However, EN 50126 does not consider attackers or malware that constitute a growing threat to all industrial control systems, including railway signalling systems. Thus, enhanced security mechanisms are needed, provided their potential to detrimentally affect safety and availability is explicitly delineated.
This makes it infeasible to introduce standard "commercial" anti-malware and anti-virus systems into an ILS network, as the side effects are not easily discernible to be controlled. Based on the developed attack model taxonomy, a security architecture for the new interlocking technology was engineered. The security engineering process is based on the standard IEC 62443-3-3 [6] with guidance taken from DIN VDE V 0831-104 [2]. According to the general system design, the signalling system has been partitioned into functional blocks, e.g., Object Controllers (OC) and the ILS (see Fig. 3). The reference architecture is additionally divided into zones and conduits, where each zone is logically or physically defined [6]. According to IEC 62443, each object within the architecture, be it hardware, software, user, etc., is assigned to exactly one zone or to exactly one conduit. A zone (colored areas) is a grouping of assets that have common security requirements, which is expressed as a Security Level (SL) that is assigned to each zone. Conduits are the communication channels between zones with both the same and different security requirements. A risk analysis yielded SLs of 2 or 3 for every zone. Based on these SLs, the security requirements were defined for every component of the system to ensure the fulfilment of a defence-in-depth concept. The requirements range from password-changing abilities, over cryptographic functions, to a set of requirements that support the later detection and analysis of attacks, e.g., logging capabilities. After the zones have been provided with security measures, the conduits between them remain a vulnerable point. In contrast to the zones, IEC 62443-3-3 does not contain guidance on how to secure conduits. Through our requirements and taxonomy process, two types of conduits have been identified, namely: (a) conduits connecting zones of equal SL, and (b) conduits connecting zones of different SLs.
Conduits which only have unidirectional data flow could also be considered, but these are only a subtype of one of the formerly described conduits. The system layout of Fig. 1 has been extended to secure the zones and conduits, as shown in Fig. 3. Again, redundancy is omitted. The FEA is provided in more detail to show the security application. Multiple OCs are presented as there are a number of field elements to steer in a single FEA. For redundancy, they are organized in a ring topology with switches (angular boxes) and routers (round boxes). The relation between OC and field element is usually one-to-one. Security boxes have been added to every OC (depicted as locks) in the FEA within a junction box (labelled FeAk). They provide the system with encryption capabilities and the possibility for basic filtering and DoS prevention rules. These capabilities are required for securing conduits between zones with equal SLs. The boxes are based on a ruggedized and hardened hardware platform. As they are completely separated from the safety functionality, they can be applied as a replacement of switch components in the interlocking network and even be introduced during system upgrades. The security terminates in the security box; thus the safety hardware needs to be protected by physical measures. The FEA junction boxes are thus physically protected by "housing alerts" that trigger an alarm to prohibit attackers from tampering with the system. In the Technology Center, a termination point for the field element encryption has been introduced. Also, several zones with different SLs have to be connected; e.g., the interlocking system has to be connected to the maintenance and data management subsystem (MDM) with a different SL. To tackle this challenge, an application layer gateway (ALG) has been introduced as a central entity of the Technology Center. This device is configured to only allow desired connections between zones.
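The zone/conduit policy described above can be made concrete in a small toy model. This is not DB's implementation, just an illustrative sketch of the stated rules: conduits between zones of equal SL are secured by the security boxes, while conduits crossing SL boundaries must match the ALG's white-list; all names and the `conduit_allowed` helper are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    security_level: int  # SL from the IEC 62443 risk analysis (2 or 3 here)

@dataclass
class ALG:
    # white-list of allowed (source zone, destination zone, service) triples
    whitelist: set = field(default_factory=set)

    def permits(self, src: "Zone", dst: "Zone", service: str) -> bool:
        return (src.name, dst.name, service) in self.whitelist

def conduit_allowed(src: Zone, dst: Zone, service: str, alg: ALG) -> bool:
    # (a) equal SLs: the encrypted conduit (security boxes) suffices
    if src.security_level == dst.security_level:
        return True
    # (b) different SLs: traffic must match the ALG's white-list
    return alg.permits(src, dst, service)

ils = Zone("interlocking", 3)
mdm = Zone("maintenance/MDM", 2)
fea = Zone("field-element-area", 3)
alg = ALG({("maintenance/MDM", "interlocking", "diagnostics")})

assert conduit_allowed(ils, fea, "rasta", alg)             # equal SL
assert conduit_allowed(mdm, ils, "diagnostics", alg)       # white-listed
assert not conduit_allowed(mdm, ils, "remote-shell", alg)  # filtered
```

The white-list default-deny stance mirrors the text: connections across SL boundaries are dropped unless explicitly configured, and the rule set can be tightened at runtime to quarantine a zone.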
Via packet inspection mechanisms malicious code can be identified. If zones of different SL are connected, the allowed communication can be limited via white-list filtering on different layers. If anomalous behaviour is detected, the ALG reports this to the Security Operation Center (SOC), where an operator can decide what actions have to be taken. In certain cases the separation of a zone from the rest of the network (quarantine) may be needed, which then can be realized by the ALG. Upon the detection of new attack scenarios, the operator also has the possibility to change the rule set and filtering of the ALG to mitigate the new attack. On the operational layer the SOC has been extended by a Security Information and Event Management (SIEM) system besides elements for system management, such as PKI, domain name service, network time server, and a directory service. The SIEM system aggregates information from every component and analyses it for possible attacks. If it detects a possible attack, the security operator is informed, starts further investigation of the issue, and finally takes action to resolve it. As the provisioning of security requires the application of tools and methods on a sustained basis, a process-based approach is implemented to ensure a constant level of security. For this a patch management process has been developed. Changes to components are first checked in a simulated environment for quality assurance before they are applied to the operational components. For a rapid reaction to attacks, the rule sets of the ALG and security boxes can be altered to mitigate the vulnerability until a patch can be applied. Furthermore, processes for incident management and an Information Security Management System (ISMS) have been implemented. Upon the detection of an anomaly it is checked against a database of known incidents and relevant actions are applied. For unclassified anomalies, forensics are performed to determine the relevant reaction.
After solving the incident, the findings are used as input for the ISMS to enhance the security processes. By having added security features to the communication channel of the safety building blocks, the architecture allows one to verify that strict safety requirements such as availability and timeliness are still met. The communication channel is transparent to the safety system such that the security blocks can be updated independently and without affecting the safety homologation process. The decoupling of safety and security still requires making the physical gap between them as small as possible (e.g., on the same circuit board), to avoid attacks just behind the security component.

Conclusion

The existing interlocking architecture provides insufficient security against cyber-attacks. To overcome this, DB plans to deploy the presented security architecture in Germany's new ILS to mitigate security risks without detrimentally impacting the system's safety. The presented security concept includes monitoring and information systems as well as basic security building blocks such as cryptography support and filtering. It ensures security not only across the Operational and Interlocking layers but also provides security functions for the Technology Center and the Field Element Areas. In addition, processes are established to ensure the correct handling of incidents, and functional requirements are defined for each building block in order to help build security-enabled components.

Fig. 2. A Railway Attack Taxonomy.
Fig. 3. Proposed Security Architecture for interlocking systems of DB.

Railway Security Assessment

In order to propose a security architecture, this section presents the prerequisites needed for defending signalling infrastructure and also elucidates the capabilities of attackers against which the signalling system needs to be protected. To systematically tackle the problem of enhancing security in interlocking systems, we first provide a taxonomy of the attacks relevant for the railway environment.

Acknowledgements. Research supported in part by EC CIPSEC GA 700378. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-66266-4_21.

References

1. CENELEC: EN 50126: Railway applications - The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS) (1999)
2. DKE: Elektrische Bahn-Signalanlagen - Teil 104: Leitfaden für die IT-Sicherheit auf Grundlage der IEC 62443 (DIN VDE V 0831-104) (2014)
3. DKE: Electric signalling systems for railways - Part 200: Safe transmission protocol according to DIN EN 50159 (DIN VDE V 0831-200) (2015)
4. Hansman, S., Hunt, R.: A taxonomy of network and computer attacks. Computers & Security 24(1), 31-43 (2005), https://doi.org/10.1016/j.cose.2004.06.011
5. Howard, J.D., Longstaff, T.A.: A common language for computer security incidents. Tech. Rep. SAND98-8667, Sandia Natl Lab (1998), https://doi.org/10.2172/751004
6. Intl. Electrotechnical Commission: IEC 62443 Industrial communication networks - Network and system security. IEC 62443 (Nov 2010)
7. Meyers, C., Powers, S., Faissol, D.: Taxonomies of cyber adversaries and attacks: a survey of incidents and approaches. Lawrence Livermore Natl Lab (2009), https://doi.org/10.2172/967712
8. Simmons, C., Shiva, S., Bedi, H., Dasgupta, D.: AVOIDIT: A cyber attack taxonomy. In: Annual Symposium on Information Assurance, pp. 2-12 (2014)
9. Weber, D.J.: A taxonomy of computer intrusions. Ph.D. thesis, MIT (1998)
[]
[]
[ "Ajay Patwardhan \nSt Xavier's College\nMahapalika Marg, Mumbai, India\nVisitor, Institute of Mathematical Sciences\nChennai, India" ]
[ "St Xavier's College\nMahapalika Marg, Mumbai, India\nVisitor, Institute of Mathematical Sciences\nChennai, India" ]
[]
This paper explores the connection between dynamical system properties and statistical physics of ensembles of such systems. Simple models are used to give novel phase transitions, particularly for finite N particle systems with many physically interesting examples. 1. INTRODUCTION The developments in dynamical systems and in Statistical Mechanics have occurred for over a century. The ergodic hypothesis of J W Gibbs and Boltzmann's H theorem were re-investigated as integrable and chaotic dynamical systems were found. Poincare, Birkhoff, Krylov formulated these problems. The Kolmogorov, Arnold and Moser theorem gave a detailed description of phase space as 'islands of integrable and sea of chaotic regions'. From chaotic billiards to integrable Toda lattices the range of the systems includes mixed types of systems. Work of Ruelle, Benettin, Galgani, Casati, Gallavotti and others clarified some properties of classical and quantum systems. Recent work of E.G.D. Cohen, L Casetti and others has given a rigorous basis for a Riemannian space based description of dynamics of Hamiltonian systems and K entropy. However the Statistical Mechanics of dynamical systems remains to be formulated. Ensembles with model Hamiltonians showing chaos transitions and topological transitions have become significant as simulation and empirical work has grown. A partition function inclusive of the Kolmogorov entropy and the Euler characteristic has been defined. Nano clusters of particles show properties dependent on the boundaries, number, energy and interaction parameters. Strongly interacting systems in condensed matter, nucleons and quarks are also possible applications. This requires that any dynamical system model, such as with maps, or differential equations for the 'units' in the physical system, will have consequences for the statistical mechanics of their ensemble. It leads to novel phase transitions and new interpretations of thermodynamic phase transitions.
This has been shown in this paper for some simple models. The Baker transform is used to model kinetics of melting in two dimensions. The Henon Heiles like models are used to model ergodic channels with correlated charge densities in superconductors. The gas of molecules with a Henon Heiles Hamiltonian is shown to have a 'phase' transition dependent on the chaotic transition in the molecule. Quantum chaos also creates a transition in the ensemble of such systems, and the Poisson, GOE, GUE and Husimi distributions are an example of Wigner distributions on phase space. These ideas could be generalised to more complex model maps or Hamiltonians that are used in physical systems. Hence a statistical physics of finite (any N) number of particles is expected to have definite properties dependent on
null
[ "https://arxiv.org/pdf/0711.0542v1.pdf" ]
2,691,161
0711.0542
c52453bafb0c935f56443bd2f502effbe43d4fff
4 Nov 2007

Ajay Patwardhan
St Xavier's College, Mahapalika Marg, Mumbai, India
Visitor, Institute of Mathematical Sciences, Chennai, India

STATISTICAL PHYSICS AND DYNAMICAL SYSTEMS: MODELS OF PHASE TRANSITIONS

This paper explores the connection between dynamical system properties and statistical physics of ensembles of such systems. Simple models are used to give novel phase transitions, particularly for finite N particle systems with many physically interesting examples.

1. INTRODUCTION

The developments in dynamical systems and in Statistical Mechanics have occurred for over a century. The ergodic hypothesis of J W Gibbs and Boltzmann's H theorem were re-investigated as integrable and chaotic dynamical systems were found. Poincare, Birkhoff, Krylov formulated these problems. The Kolmogorov, Arnold and Moser theorem gave a detailed description of phase space as 'islands of integrable and sea of chaotic regions'. From chaotic billiards to integrable Toda lattices the range of the systems includes mixed types of systems. Work of Ruelle, Benettin, Galgani, Casati, Gallavotti and others clarified some properties of classical and quantum systems. Recent work of E.G.D. Cohen, L Casetti and others has given a rigorous basis for a Riemannian space based description of dynamics of Hamiltonian systems and K entropy. However the Statistical Mechanics of dynamical systems remains to be formulated. Ensembles with model Hamiltonians showing chaos transitions and topological transitions have become significant as simulation and empirical work has grown. A partition function inclusive of the Kolmogorov entropy and the Euler characteristic has been defined. Nano clusters of particles show properties dependent on the boundaries, number, energy and interaction parameters.
Strongly interacting systems in condensed matter, nucleons and quarks are also possible applications. This requires that any dynamical system model, such as with maps, or differential equations for the 'units' in the physical system, will have consequences for the statistical mechanics of their ensemble. It leads to novel phase transitions and new interpretations of thermodynamic phase transitions. This has been shown in this paper for some simple models. The Baker transform is used to model kinetics of melting in two dimensions. The Henon Heiles like models are used to model ergodic channels with correlated charge densities in superconductors. The gas of molecules with a Henon Heiles Hamiltonian is shown to have a 'phase' transition dependent on the chaotic transition in the molecule. Quantum chaos also creates a transition in the ensemble of such systems, and the Poisson, GOE, GUE and Husimi distributions are an example of Wigner distributions on phase space. These ideas could be generalised to more complex model maps or Hamiltonians that are used in physical systems. Hence a statistical physics of finite (any N) number of particles is expected to have definite properties dependent on the integrable to chaotic transitions in their units. This can be seen in the Toda and Fermi Pasta Ulam like systems. The description of Hamiltonian systems as flows on Riemann spaces gives an intrinsic definition of the geodesic deviation equation and Lyapunov exponents. This has led to defining the connection between statistical mechanics and dynamical systems in a fundamental way.

2. MODEL FOR KINETICS OF MELTING AND FREEZING IN TWO DIMENSIONS, USING BAKER-LIKE TRANSFORMS

The folding property of this transform causes mixing and ergodicity and has a K entropy. Any regular structure of points in the (0, 1) square will after many iterations become 'smeared' all over the square.
A two dimensional crystal, a snow flake, a liquid crystal, a spin or metallic glass has a kinetics of melting and freezing. An order parameter and correlations with a time scale dependent on the rate of cooling and heating are present. Any model for the kinetics of melting, converted into a difference equation with a time step and a folding with two subintervals in a square, is like a Baker transform. Consider this process modeled by a Baker transform with a time step for iteration and a 'unit cell square'. A point (x_n, y_n) goes to (x_{n+1}, y_{n+1}) using the matrix mapping ((1, 1), (1, 2)); this 'cat map' has eigenvalues (3 ± √5)/2 = exp(±σ). This gives the Lyapunov exponent σ. Starting from any initial point in the square, n iterates will give the randomised distribution over an ordering length scale l_max = τ^(−1/σ), where τ is the time step and σ is the K entropy. A more general model would use different maps on the half intervals: the diagonal matrix diag(2, 0.5) on the lower half 0 < x_n < 0.5, and the same matrix acting on (x_n, y_n) with (−1, 0.5) deducted on the interval 0.5 < x_n < 1. More generally, the Baker-like transforms can be taken on the subintervals 0 < x_n < α and α < x_n < 1. Then x_{n+1} = x_n/α, y_{n+1} = λ_a y_n on the first subinterval and, respectively, x_{n+1} = (x_n − α)/(1 − α), y_{n+1} = λ_b y_n + 0.5 on the second, for α, λ_a, λ_b all between 0 and 1. This gives a number of adjustable parameters to model a variety of melting and freezing in two dimensions. Two point correlations can be found; ⟨φ(ω_1)φ(ω_2)⟩ = ⟨φ(ω_1)⟩⟨φ(ω_2)⟩ for two regions ω_1 and ω_2 in the square. These can give a parametrisation in terms of experimentally observed values. From exact crystalline symmetry to a random network of bonds, the iterated map over the time interval of melting or freezing gives a model for partial mixing. Consider each unit cell modelled by the Baker-like transform, and a range of parameters that can vary across the sample.
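The cat-map iteration described above is easy to check numerically. A minimal sketch in plain Python: the eigenvalue (3 + √5)/2 of the matrix ((1, 1), (1, 2)) satisfies λ² − 3λ + 1 = 0 (trace 3, determinant 1), its logarithm gives σ, and iterating the map mod 1 smears an initially ordered cluster of points over the unit square. The generalised Baker-like transform with parameters α, λ_a, λ_b follows the formulas in the text; the specific parameter values are illustrative.

```python
import math

# Eigenvalues of ((1, 1), (1, 2)): trace 3, determinant 1, so λ² − 3λ + 1 = 0.
lam = (3 + math.sqrt(5)) / 2
sigma = math.log(lam)  # Lyapunov exponent / K entropy of the cat map

def cat_map(x, y):
    """One iteration of (x, y) -> ((1,1),(1,2))·(x, y), taken mod 1."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def baker_like(x, y, alpha=0.4, lam_a=0.6, lam_b=0.7):
    """Generalised Baker-like transform (0 < alpha, lam_a, lam_b < 1)."""
    if x < alpha:
        return x / alpha, lam_a * y
    return (x - alpha) / (1.0 - alpha), lam_b * y + 0.5

# An ordered cluster of nearby points gets smeared over the square.
pts = [(0.5 + 1e-6 * i, 0.5) for i in range(5)]
for _ in range(30):
    pts = [cat_map(x, y) for x, y in pts]
```

With λ ≈ 2.618 the separation of neighbouring points grows roughly by exp(σ) ≈ 2.6 per step, which is the mechanism behind the ordering length scale l_max quoted in the text.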
Coordination clusters, ionic mobility, cooling or heating rates and correlation lengths are all measured quantities which are related to the parameters of the Baker-like transforms. The dynamical phase transitions can create a variety of configurations. Equilibrium partition functions are defined at the beginning and end stages. But rapid cooling or heating leads to multiple energy minima and entropy maxima, with the statistical entropy (Kolmogorov entropy) playing a role in the thermodynamic entropy. Ordered and disordered states are formed at intermediate times, with transitions among them. The basic quantity, the Lyapunov exponent of the map or dynamical system, is connected to the basic quantity of the condensed matter system, the correlation length. The ensemble of 'cells' with the Baker mapping iterated on each is a model of the kinetics of melting and freezing in two dimensions.

3. MODEL FOR ERGODIC CHANNELS WITH CORRELATIONS, IN HENON HEILES LIKE SYSTEMS, AND SUPERCONDUCTIVITY

The phase space has the picture of islands of integrability and a chaotic sea. Ergodic channels can form on repeated or periodic lattice structures that create connected regions of the chaotic sea. In these regions two point correlations can be non zero. Consider a 'unit cell' with ergodic regions that are connected to those in neighbouring 'unit cells'. Then over some order parameter scale there is a continuous connected chaotic region. In this ergodic channel, across the sample, there are non zero correlations. This could represent a model of the axial and planar degrees in a unit cell of a high temperature superconductor modeled by a Henon Heiles type of Hamiltonian for the electron. The Henon Heiles (HH) like Hamiltonian is H = (1/2)(p_1² + p_2²) + (1/2)(q_1² + q_2²) + α q_1² q_2 − (β/3) q_2³, with α = 1 and β = 1 for the original HH case. For H = E with E < E_c1 = 1/12, the phase space is mostly periodic or quasi-periodic motion.
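The critical energies quoted here can be checked directly from the potential. A small sketch, assuming α = β = 1 as in the original HH case: the saddle point of the potential at (q_1, q_2) = (0, 1) lies exactly at the escape energy E_c2 = 1/6, while E_c1 = 1/12 is the empirical threshold from the text below which the motion is mostly regular.

```python
def V(q1, q2, alpha=1.0, beta=1.0):
    """Henon-Heiles potential: ½(q1² + q2²) + α q1² q2 − (β/3) q2³."""
    return 0.5 * (q1 * q1 + q2 * q2) + alpha * q1 * q1 * q2 - (beta / 3.0) * q2 ** 3

def H(p1, p2, q1, q2, alpha=1.0, beta=1.0):
    """Full Hamiltonian: kinetic term plus the potential above."""
    return 0.5 * (p1 * p1 + p2 * p2) + V(q1, q2, alpha, beta)

E_c2 = V(0.0, 1.0)   # saddle energy: 1/2 − 1/3 = 1/6
E_c1 = 1.0 / 12.0    # mostly-regular threshold quoted in the text (empirical)
assert abs(E_c2 - 1.0 / 6.0) < 1e-12
```

The saddle at (0, 1) is what makes E_c2 = 1/6 the natural boundary of mostly chaotic, possibly escaping motion, while below E_c1 the orbits stay close to the integrable harmonic part.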
For E > E_c2 = 1/6 it is mostly chaotic, and for in-between energies it has mixed chaotic and integrable subspaces. The single connected ergodic region has a Lyapunov exponent σ = 0.03. Consider the axial and planar direction coordinates for the electron to be q_1 and q_2 for the unit cell in an orthorhombic structure, as occurring in high-T_c superconductors. While the phase space is mostly integrable the electron is bound in the cell; however, if the mixed form occurs the electron can traverse the chaotic sea component, which increases in its volume fraction as the chaotic transition occurs. The contiguous ergodic channels in neighbouring unit cells connect to form a sample-wide ergodic region. A correlation length scale or order parameter can be found. For an intermediate energy E = 0.125 the phase space has half its volume integrable and half chaotic. The area fraction, computed numerically as a function of energy, is a straight line. The broken or partial ergodicity on phase space requires a weighted average over a disjoint union of regions with different ergodic properties, such as the Kolmogorov entropy. Oseledec and Pesin's work gives the entropy definition as a sum of K entropies, and an invariant measure for integration. Conductivity arises in this model by electron motion in chaotic sea channels in the classical case, and in the connected Husimi probability distribution in the quantum case. Hence the charge and current densities are correlated at two points in the sample. If this ergodic channel extends over the whole sample and is continuous, then the correlation length scale allows an effectively resistance-less transfer of charge and current fluctuations from any point in the sample to any other; hence superconductivity occurs. The property is seen as a consequence of the nonlinearity rather than the Cooper pairing mechanism.
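The Henon-Heiles dynamics referred to above (with α = β = 1) can be integrated numerically; a minimal sketch with a fourth-order Runge-Kutta step, where the initial condition is chosen only to place the orbit exactly on the mixed-phase-space energy surface E = 0.125:

```python
def hh_energy(x, y, px, py):
    # H = 0.5(px^2 + py^2) + 0.5(x^2 + y^2) + x^2*y - y^3/3   (alpha = beta = 1)
    return 0.5 * (px * px + py * py) + 0.5 * (x * x + y * y) + x * x * y - y ** 3 / 3.0

def hh_rhs(state):
    x, y, px, py = state
    # Hamilton's equations: dq/dt = p, dp/dt = -dV/dq
    return (px, py, -x - 2.0 * x * y, -y - x * x + y * y)

def rk4_step(state, dt):
    k1 = hh_rhs(state)
    k2 = hh_rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = hh_rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = hh_rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# orbit on the E = 0.125 surface (half-integrable, half-chaotic region)
state = (0.0, 0.0, 0.5, 0.0)          # E = 0.5 * 0.5^2 = 0.125 exactly
e0 = hh_energy(*state)
for _ in range(2000):
    state = rk4_step(state, 0.01)
drift = abs(hh_energy(*state) - e0)   # energy conservation check
```

Running many such orbits and classifying them (e.g. by a finite-time Lyapunov estimate) is how the chaotic area fraction quoted in the text is computed.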
⟨ρ_ω1 ρ_ω2⟩ is non-zero and ⟨(dρ_ω1/dt)(dρ_ω2/dt)⟩ is non-zero for the probability density ρ in the two regions ω_1 and ω_2; dρ/dt = [ρ, H] brings in the explicit model dependence from the Hamiltonian. The condition for this transition from conductivity to superconductivity to occur can be obtained in terms of the Jacobian J(ω_1, ω_2) by expanding ρ to first order in ω. This should leave the non-zero correlation condition unchanged, that is, the variation in the two-point correlation is zero. Then the correlation length order parameter ζ can be defined in terms of the Jacobian. A Fokker-Planck like equation in the ergodic channel for a two-particle distribution for ρ can be defined. The small spread in T_c in the resistance versus temperature data in high-T_c superconductors could be attributed to the variation and number of ergodic channels available in parallel in the sample. The mixed state, 'dirty' or granular superconductors depend on the details of the microstructure to obtain the coherence length, whereas long range correlations are introduced by ergodicity in this approach. However the T_c, critical magnetic fields, energy gap and currents are not easily obtained in terms of the nonlinearity or chaotic transition energy surfaces in this model. Charge and current correlations rather than transport, in specialised regions in classical or quantum phase space, and their projection onto real space for the mechanism of conductivity and superconductivity, need further work. Model Hamiltonians with energy and parameter ranges creating ergodicity occur frequently in dynamical systems; this will have consequences for the statistical physics of condensed matter.

4. QUANTUM CHAOS AND STATISTICAL PHYSICS

The phase space version of quantum mechanics gives Wigner distributions. Many computed systems are known with a chaotic transition to Husimi distributions, with connected and disconnected regions and probability measures on them.
This implies that for models which have ensembles of such dynamical systems, the statistical physics must involve averaging over the distributions, which show a chaotic transition dependent on parameters. Hence a phase transition for the whole system depends critically on this chaotic transition in its constituents. In the case of energy level spectra, the distribution of these energy levels is given by a GOE or GUE distribution in the chaotic case and a Poisson or Wigner-like one in the integrable case. The energy distribution probability is given generally by P(E) = a (E/⟨E⟩)^α exp(−(E/⟨E⟩)^β) in the chaotic case, and it goes to α = 0 and β = 1 in the integrable Poisson or Wigner case. ⟨E⟩ is the average energy, a is positive, and α and β are positive exponents, typically equal to two in the chaotic GOE, GUE cases. The phase transition depends on the parameter that switches the energy distribution from the chaotic to the integrable case. Any partition function statistical physics average should include an additional multiplicative weight or measure that assigns probabilities for accessing the energy levels, as given by these distributions. The parametric transition to the chaotic distribution will then be reflected in the partition function and its partial derivatives that give thermodynamic quantities. Whether this is a phase transition of a new kind, or an alternate interpretation of the usual phase transitions, is debatable. This is an intrinsic effect, not dependent on the thermal reservoir, but dependent on the Hamiltonian and its parameters. In an ensemble of systems with variation in the Hamiltonian and its parameters allowed, this effect will be seen. Often the precise Hamiltonian, its parameters and its form are not known for a sample, and hence it is essential to take an ensemble of these and average over them. Whether this entails a modified description of the canonical and grand canonical ensembles, or a new ensemble, remains an open question.
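The distribution P(E) above can be normalized numerically and checked against the closed form ∫₀^∞ x^α exp(−x^β) dx = Γ((α+1)/β)/β; a minimal sketch, with the two parameter sets (integrable α = 0, β = 1 and chaotic α = β = 2) taken from the text and everything else illustrative:

```python
import math

def p_unnorm(e, e_avg, alpha, beta):
    """Unnormalized P(E) = (E/<E>)^alpha * exp(-(E/<E>)^beta)."""
    x = e / e_avg
    return x ** alpha * math.exp(-(x ** beta))

def integral(e_avg, alpha, beta, e_max=50.0, n=200000):
    # trapezoidal rule on [0, e_max]; the tail beyond e_max is negligible here
    h = e_max / n
    s = 0.5 * (p_unnorm(0.0, e_avg, alpha, beta) + p_unnorm(e_max, e_avg, alpha, beta))
    for i in range(1, n):
        s += p_unnorm(i * h, e_avg, alpha, beta)
    return s * h

e_avg = 1.0
# integrable (Poisson) limit: alpha = 0, beta = 1  ->  integral = <E>
z_poisson = integral(e_avg, 0.0, 1.0)
# chaotic case: alpha = beta = 2  ->  integral = <E> * Gamma(3/2)/2 = <E> * sqrt(pi)/4
z_chaotic = integral(e_avg, 2.0, 2.0)
```

The normalization constant a is then 1/z for each parameter set, and sweeping (α, β) between the two limits traces the parametric transition discussed above.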
If a bulk system is a collection of nano systems, then such an ensemble may be defined to relate nano to bulk sample properties. Possibly the best examples for these distributions arise in quantum optics, where radiation-matter interactions occur. A non-thermal-like radiation spectrum will result if the P(E) function is used in averaging; it will also show a parametric transition. Anharmonic oscillators in equilibrium with radiation can be experimentally observed to show this behaviour. Condensed matter examples, in which the density of states function is modified by this P(E) multiplying the usual one, can show a parametric rather than thermal effect in the chaotic transition. In extended and localised states, the density of states function will have this P(E) function as a multiplier.

5. TODA AND FERMI PASTA ULAM LIKE MODELS

Toda models have near-neighbour interactions with an exponential potential. They are known to be integrable. Fermi Pasta Ulam models have polynomial potentials (typically quartic) and show a variety of phenomena dependent on energy, particle number and parameters. The dynamics of a chain of coupled masses with a general potential V(x) = Σ_n a_n x^n can, in the limit, show a Boussinesq-like equation, and using the method of quadratures give a solitary wave solution. A typical potential will be polynomial plus exponential if it is of Fermi Pasta Ulam plus Toda type. Simulations of the dynamics of such a chain can be shown to have a chaotic transition for a reasonably small number of particles (< 20) and energies. This is true for a range of parameters, such as the interparticle separation and the coupling parameters occurring as coefficients in the potential. This system was a prototype for the question: how does classical mechanics become statistical mechanics? How do bulk and fine properties arise? Is there a thermodynamic limit, and how is it reached? While the Toda lattice will not show equipartition of energy, the FPU system does.
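A chain of the Fermi Pasta Ulam type discussed above can be simulated directly; a minimal sketch of an FPU-β chain (quartic nearest-neighbour potential, fixed ends) integrated with the symplectic velocity-Verlet scheme — the chain length, coupling and initial mode are illustrative choices, not values from the text:

```python
import math

N = 8           # moving particles, fixed walls at both ends
BETA = 0.5      # quartic coupling in V(r) = r^2/2 + BETA * r^4/4

def forces(x):
    # nearest-neighbour spring forces; x[0] and x[N+1] are the fixed walls
    f = [0.0] * (N + 2)
    for i in range(1, N + 1):
        rl = x[i] - x[i - 1]
        rr = x[i + 1] - x[i]
        f[i] = (rr + BETA * rr ** 3) - (rl + BETA * rl ** 3)
    return f

def energy(x, p):
    kin = sum(pi * pi for pi in p) / 2.0
    pot = 0.0
    for i in range(N + 1):
        r = x[i + 1] - x[i]
        pot += r * r / 2.0 + BETA * r ** 4 / 4.0
    return kin + pot

# excite the lowest linear mode
x = [0.0] + [0.5 * math.sin(math.pi * i / (N + 1)) for i in range(1, N + 1)] + [0.0]
p = [0.0] * (N + 2)
e0 = energy(x, p)

dt = 0.01
f = forces(x)
for _ in range(5000):                       # velocity Verlet (symplectic)
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    x = [xi + dt * pi for xi, pi in zip(x, p)]
    x[0] = x[N + 1] = 0.0                   # walls stay fixed
    f = forces(x)
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
drift = abs(energy(x, p) - e0) / e0
```

Monitoring the mode energies of such runs over long times is what exhibits (or fails to exhibit) the equipartition contrasted above between Toda and FPU chains.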
The combined system should have a statistical mechanics for any number N of particles. Taking a canonical Gibbsian ensemble and partition function is possible, but evaluating the integrals may not be easy. In the region where the Toda part is significant, by going to the integrals in involution, the exponent in the Gibbs density is replaced by these integrals. However, for the FPU part the integrals will have to be evaluated on the energy surfaces that have partial ergodicity, with the weight function on micropartitions exp(K), where K is the Kolmogorov entropy. The statistical physics of such non-exactly-solvable systems is not known. However, there are a variety of applications of such systems in polymer chains. The dynamical system itself has interpretations in coupled osmotic cells, corrosive sequences of spots, magnetic and spin lattices, etc.

6. PARTITION FUNCTION FOR HENON HEILES LIKE SYSTEMS AND SPECIFIC HEATS

An easier system to evaluate, partly analytically and partly numerically, are the Henon-Heiles like models, which are taken as model Hamiltonians for molecules of a gas or for two-dimensional domains. A canonical partition function ∫ dω exp(−βH) can be integrated by making a partition into constant energy surfaces. On each such surface the dynamics shows coexisting integrable and chaotic regions. The area fraction for these regions as a function of energy is known from computation: it is a straight line function between E = 0.11 and E = 0.167. The Kolmogorov entropy as a function of energy is known to rise to saturation. This can be put into the additional weight or measure as exp(K) in the integral; K is zero for integrable or KAM torus regions. The sum over all regions for each energy surface, and the integral over all energies, can be evaluated numerically by splitting E = 0 to E = 0.11 for the integrable part, E = 0.11 to E = 0.167 for the mixed regions, and E = 0.167 to E = 1 for the chaotic case.
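The piecewise evaluation described above can be sketched numerically. Here the chaotic area fraction f(E) is taken as the straight line rising from 0 at E = 0.11 to 1 at E = 0.167 quoted in the text, while the saturation value of K, the saturating functional form of K(E), and the flat density of states are illustrative assumptions, not values from the text:

```python
import math

E1, E2 = 0.11, 0.167     # mixed-region boundaries on the energy axis
K_SAT = 0.3              # assumed saturation value of the K entropy (illustrative)

def chaotic_fraction(e):
    # straight-line area fraction between E1 and E2 (from the computed result quoted above)
    if e <= E1:
        return 0.0
    if e >= E2:
        return 1.0
    return (e - E1) / (E2 - E1)

def k_entropy(e):
    # K is zero on integrable/KAM regions and rises to saturation (form assumed)
    return K_SAT * chaotic_fraction(e)

def partition_function(beta, n=20000):
    # Z = int_0^1 dE [ (1 - f) + f * exp(K) ] * exp(-beta * E), flat density of states assumed
    h = 1.0 / n
    z = 0.0
    for i in range(n):
        e = (i + 0.5) * h
        f = chaotic_fraction(e)
        weight = (1.0 - f) + f * math.exp(k_entropy(e))
        z += weight * math.exp(-beta * e) * h
    return z

z_cold = partition_function(beta=20.0)   # kT = 0.05 < E1: integrable regions dominate
z_hot = partition_function(beta=4.0)     # kT = 0.25 > E2: chaotic regions dominate
```

Differentiating such a Z(β) numerically gives the internal energy and specific heat whose discontinuous behaviour is discussed in the next paragraph.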
The general Hamiltonian of Henon-Heiles type is more useful to study the parametric transition to chaos, and hence the dependence of the partition function on these parameters. If an ensemble of these H-H systems is taken, that is, a gas of molecules with their internal Hamiltonians of H-H type, then the chaotic transitions can be generated internally by collisional transfer of energy among molecules. If the average energy per particle is less than the critical energy E = 0.11, then the integrable regions dominate. But as the thermal energy per particle crosses from E = 0.11 to E = 0.167, the chaotic regions dominate. Hence a change from kT < 0.11 to kT > 0.167 will create a discontinuous change in the partition function, and consequently in the total internal energy and specific heat of such a gas. This appears as a different kind of phase transition than the usual first and second order ones in thermodynamics. It may be called an intrinsic phase transition.

ACKNOWLEDGEMENT

In November 1973 I was concerned with the issue of dynamical systems and statistical mechanics, but did not develop my work and paper on ergodic mechanics further, following up on Y. Sinai, KAM, J. Ford and others. I thank W. C. Schieve, I. Prigogine and L. Reichl at the University of Texas at Austin for early interest in this work. The question of defining partition functions inclusive of a K entropy measure remained. Many developments occurred in classical and quantum chaos over the years, but there was still no statistical mechanics based on them. Then in the years from 1987 to 1996 I attempted to work on simple dynamical systems with implications in statistical mechanics. I thank the Physics department, Mumbai university, the School of Physics, Central University, the Birla science center and Prof Mondal at Hyderabad, the Non Linear dynamics group at NCL, Pune, and the Raman Research Institute, Bangalore, for facilities to work and to present the work in those years.
It was still too early to successfully publish any work in this field in a journal, as no subject classification for it existed till 1998. Then in the 1990s there was growing work in nano physics and clusters being published. A renewed interest in the foundations of statistical mechanics based on dynamical systems, and for small or finite N systems, has been seen in publications from 2000 onwards. Hence it is in acknowledgement of these developments, followed over the years, that I am submitting short papers on this topic to arXiv. More work is definitely required in this subject to have a complete and final theory. I thank the Institute of Mathematical Sciences, Chennai for its facilities; its Director and Dr H. Sharatchandra for supporting my visit, and its faculty for discussions.
DOI: 10.1103/PhysRevD.59.093008
arXiv: hep-ph/9810536 (https://arxiv.org/pdf/hep-ph/9810536v2.pdf)
(S)neutrino properties in R-parity violating supersymmetry: I. CP-conserving Phenomena

Yuval Grossman (Stanford Linear Accelerator Center, Stanford University, Stanford, CA 94309)
Howard E. Haber (Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064; Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510)

October 1998 (revised 12 Nov 1998)

R-parity-violating supersymmetry (with a conserved baryon number B) provides a framework for particle physics with lepton number (L) violating interactions. We examine in detail the structure of the most general R-parity-violating (B-conserving) model of low-energy supersymmetry. We analyze the mixing of Higgs bosons with sleptons and the mixing of charginos and neutralinos with charged leptons and neutrinos, respectively. Implications for neutrino and sneutrino masses and mixing and CP-conserving sneutrino phenomena are considered. L-violating low-energy supersymmetry can be probed at future colliders by studying the phenomenology of sneutrinos. Sneutrino-antisneutrino mass splittings and lifetime differences can provide new opportunities to probe lepton number violation at colliders.

I. INTRODUCTION

There is no fundamental principle that requires the theory of elementary particle interactions to conserve lepton number. In the Standard Model, lepton number conservation is a fortuitous accident that arises because one cannot write down renormalizable lepton-number-violating interactions that only involve the fields of the Standard Model [1]. In fact, there are some experimental hints for non-zero neutrino masses [2] that suggest that lepton number is not an exact symmetry. In low-energy supersymmetric extensions of the Standard Model, lepton number conservation is not automatically respected by the most general set of renormalizable interactions.
Nevertheless, experimental observations imply that lepton number violating effects, if they exist, must be rather small. If one wants to enforce lepton number conservation in the tree-level supersymmetric theory, it is sufficient to impose one extra discrete symmetry. In the minimal supersymmetric standard model (MSSM), a multiplicative symmetry called R-parity is introduced, such that the R quantum number of an MSSM field of spin S, baryon number B and lepton number L is given by (−1)^{3(B−L)+2S}. By introducing B−L conservation modulo 2, one eliminates all dimension-four lepton number and baryon number violating interactions. Majorana neutrino masses can be generated in an R-parity-conserving extension of the MSSM involving new ∆L = 2 interactions through the supersymmetric see-saw mechanism [3,4]. In a recent paper [4] (for an independent study see ref. [5]), we studied the effect of such ∆L = 2 interactions on sneutrino phenomena. In this case, the sneutrino (ν̃) and the antisneutrino (its conjugate, ν̃*), which are eigenstates of lepton number, are no longer mass eigenstates. The mass eigenstates are therefore superpositions of ν̃ and ν̃*, and sneutrino mixing effects can lead to a phenomenology analogous to that of K-K̄ and B-B̄ mixing. The mass splitting between the two sneutrino mass eigenstates is related to the magnitude of lepton number violation, which is typically characterized by the size of neutrino masses.^a As a result, the sneutrino mass splitting is expected generally to be very small. Yet, it can be detected in many cases, if one is able to observe the lepton number oscillation [4]. Neutrino masses can also be generated in R-parity-violating (RPV) models of low-energy supersymmetry [7-11]. However, all possible dimension-four RPV interactions cannot be simultaneously present and unsuppressed; otherwise the proton decay rate would be many orders of magnitude larger than the present experimental bound.
One way to avoid proton decay is to impose either B or L conservation separately. For example, if B is conserved but L is not, then the theory would violate R-parity but preserve a Z_3 baryon "triality".

^a In some cases the sneutrino mass splitting may be enhanced by a factor as large as 10^3 compared to the neutrino mass [4,6].

In this paper we extend the analysis of ref. [4] and study sneutrino phenomena in models without R-parity (but with baryon triality). Such models exhibit ∆L = 1 violating interactions at the level of renormalizable operators. One can then generate ∆L = 2 violating interactions, which are responsible for generating neutrino masses. In general, one neutrino mass is generated at tree level via mixing with the neutralinos, and the remaining neutrino masses are generated at one loop. In Section II, we introduce the most general RPV model with a conserved baryon number and establish our notation. In Section III, we obtain the general form for the mass matrix in the neutral fermion sector (which governs the mixing of neutralinos and neutrinos) and in the neutral scalar sector (which governs the mixing of neutral Higgs bosons and sneutrinos). From these results, we obtain the tree-level masses of neutrinos and squared-mass splittings of the sneutrino-antisneutrino pairs. In Section IV, we calculate the neutrino masses and sneutrino-antisneutrino squared-mass splittings generated at one loop. The phenomenological implications of these results are addressed in Section V along with our summary and conclusions. An explicit computation of the scalar potential of the model is presented in Appendix A. For completeness, we present in Appendix B the general form for the mass matrix in the charged fermion sector (which governs the mixing of charginos and charged leptons) and in the charged scalar sector (which governs the mixing of charged Higgs bosons and charged sleptons).
The relevant Feynman rules for the RPV model and the loop function needed for the one-loop computations of Section IV are given in Appendices C and D.

II. R-PARITY VIOLATION FORMALISM

In R-parity-violating (RPV) low-energy supersymmetry, there is no conserved quantum number that distinguishes the lepton supermultiplets L̂_m and the down-type Higgs supermultiplet Ĥ_D. Here, m is a generation label that runs from 1 to n_g = 3. Each supermultiplet transforms as a Y = −1 weak doublet under the electroweak gauge group. It is therefore convenient to denote the four supermultiplets by one symbol L̂_α (α = 0, 1, . . . , n_g), with L̂_0 ≡ Ĥ_D. We consider the most general low-energy supersymmetric model consisting of the MSSM fields that conserves a Z_3 baryon triality. As remarked in Section I, such a theory possesses RPV interactions that violate lepton number. The Lagrangian of the theory is fixed by the superpotential and the soft-supersymmetry-breaking terms (supersymmetry and gauge invariance fix the remaining dimension-four terms). The theory we consider consists of the fields of the MSSM, i.e. the fields of the two-Higgs-doublet extension of the Standard Model plus their superpartners. The most general renormalizable superpotential respecting baryon triality is given by:

W = ǫ_ij [ −µ_α L̂^i_α Ĥ^j_U + (1/2) λ_{αβm} L̂^i_α L̂^j_β Ê_m + λ'_{αnm} L̂^i_α Q̂^j_n D̂_m − h_{nm} Ĥ^i_U Q̂^j_n Û_m ] ,   (2.1)

where Ĥ_U is the up-type Higgs supermultiplet, the Q̂_n are doublet quark supermultiplets, the Û_m [D̂_m] are singlet up-type [down-type] quark supermultiplets, and the Ê_m are the singlet charged lepton supermultiplets.^b Without loss of generality, the coefficients λ_{αβm} are taken to be antisymmetric under the interchange of the indices α and β. Note that the µ-term of the MSSM [which corresponds to µ_0 in eq. (2.1)] is now extended to an (n_g + 1)-component vector, µ_α (while the latin indices n and m run from 1 to n_g).
Then, the trilinear terms in the superpotential proportional to λ and λ' contain lepton number violating generalizations of the down quark and charged lepton Yukawa matrices. Next, we consider the most general set of (renormalizable) soft-supersymmetry-breaking terms. In addition to the usual soft-supersymmetry-breaking terms of the R-parity-conserving MSSM, one must also add new A and B terms corresponding to the RPV terms of the superpotential. In addition, new RPV scalar squared-mass terms also exist. As above, we can streamline the notation by extending the definitions of the coefficients of the R-parity-conserving soft-supersymmetry-breaking terms to allow for an index of type α which can run from 0 to n_g. Explicitly,

V_soft = (M²_Q)_{mn} Q̃^{i*}_m Q̃^i_n + (M²_U)_{mn} Ũ*_m Ũ_n + (M²_D)_{mn} D̃*_m D̃_n + (M²_L)_{αβ} L̃^{i*}_α L̃^i_β + (M²_E)_{mn} Ẽ*_m Ẽ_n + m²_U |H_U|² − (ǫ_ij b_α L̃^i_α H^j_U + h.c.)
   + ǫ_ij [ (1/2) a_{αβm} L̃^i_α L̃^j_β Ẽ_m + a'_{αnm} L̃^i_α Q̃^j_n D̃_m − (a_U)_{nm} H^i_U Q̃^j_n Ũ_m + h.c. ]
   + (1/2) [ M_3 g̃ g̃ + M_2 W̃^a W̃^a + M_1 B̃ B̃ + h.c. ] .   (2.2)

Note that the single B term of the MSSM is extended to an (n_g + 1)-component vector, b_α, the single squared-mass term for the down-type Higgs boson and the n_g × n_g lepton scalar squared-mass matrix are combined into an (n_g + 1) × (n_g + 1) matrix, and the matrix A-parameters of the MSSM are extended in the obvious manner [analogous to the Yukawa coupling matrices in eq. (2.1)]. In particular, a_{αβm} is antisymmetric under the interchange of α and β. It is sometimes convenient to follow the more conventional notation in the literature and define the A and B parameters as follows:

a_{αβm} ≡ λ_{αβm} (A_E)_{αβm} ,   (a_U)_{nm} ≡ h_{nm} (A_U)_{nm} ,
a'_{αnm} ≡ λ'_{αnm} (A_D)_{αnm} ,   b_α ≡ µ_α B_α ,   (2.3)

where repeated indices are not summed over in the above equations. Finally, the Majorana gaugino masses, M_i, are unchanged from the MSSM. The total scalar potential is given by:

^b In our notation, ǫ_{12} = −ǫ_{21} = 1.
The notation for the superfields (extended to allow α = 0 as discussed above) follows that of ref. [12]. For example, (ẽ^−_L)_m [(ẽ^+_R)_m] are the scalar components of L̂²_m [Ê_m], etc.

V_scalar = V_F + V_D + V_soft .   (2.4)

In Appendix A, we present the complete expressions for V_F (which is derived from the superpotential [eq. (2.1)]) and V_D. It is convenient to write out the contribution of the neutral scalar fields to the full scalar potential [eq. (2.4)]:

V_neutral = [m²_U + |µ|²] |h_U|² + [(M²_L)_{αβ} + µ_α µ*_β] ν̃_α ν̃*_β − (b_α ν̃_α h_U + b*_α ν̃*_α h*_U) + (1/8)(g² + g'²) [ |h_U|² − |ν̃_α|² ]² ,   (2.5)

where h_U ≡ H²_U is the neutral component of the up-type Higgs scalar doublet and ν̃_α ≡ L̃¹_α. In eq. (2.5), we have introduced the notation:

|µ|² ≡ Σ_α |µ_α|² .   (2.6)

In minimizing the full scalar potential, we assume that only neutral scalar fields acquire vacuum expectation values:

⟨h_U⟩ = (1/√2) v_u and ⟨ν̃_α⟩ = (1/√2) v_α .

From eq. (2.5), the minimization conditions are:

(m²_U + |µ|²) v*_u = b_α v_α − (1/8)(g² + g'²)(|v_u|² − |v_d|²) v*_u ,   (2.7)

[(M²_L)_{αβ} + µ_α µ*_β] v*_β = b_α v_u + (1/8)(g² + g'²)(|v_u|² − |v_d|²) v*_α ,   (2.8)

where

|v_d|² ≡ Σ_α |v_α|² .   (2.9)

The normalization of the vacuum expectation values has been chosen such that

v ≡ (|v_u|² + |v_d|²)^{1/2} = 2 m_W / g = 246 GeV .   (2.10)

Up to this point, there is no preferred direction in the generalized generation space spanned by the L̂_α. It is convenient to choose a particular "interaction" basis such that v_m = 0 (m = 1, . . . , n_g), in which case v_0 = v_d. In this basis, we denote L̂_0 ≡ Ĥ_D. The down-type quark and lepton mass matrices in this basis arise from the Yukawa couplings to H_D; namely,^c

(m_d)_{nm} = (1/√2) v_d λ'_{0nm} ,   (m_ℓ)_{nm} = (1/√2) v_d λ_{0nm} ,   (2.11)

while the up-type quark mass matrices arise as in the MSSM:
^c As shown in Appendix B, (m_ℓ)_{nm} is not precisely the charged lepton mass matrix, as a result of a small admixture of the charged higgsino eigenstate due to RPV interactions.

(m_u)_{nm} = (1/√2) v_u h_{nm} .   (2.12)

In the literature, one often finds other basis choices. The most common is one where µ_0 = µ and µ_m = 0 (m = 1, . . . , n_g). Of course, the results for physical observables (which involve mass eigenstates) are independent of the basis choice.^d In the calculations presented in this paper, when we need to fix a basis, we find the choice of v_m = 0 to be the most convenient.

III. NEUTRINOS AND SNEUTRINOS AT TREE LEVEL

We begin by recalling the calculation of the tree-level neutrino mass that arises due to the R-parity violation. We then evaluate the corresponding sneutrino mass splitting. In all the subsequent analysis presented in this paper, we shall assume for simplicity that the parameters (M²_L)_{αβ}, µ_α, b_α, the gaugino mass parameters M_i, and v_α are real. In particular, the ratio of vacuum expectation values,

tan β ≡ v_u / v_d ,   (3.1)

can be chosen to be positive by convention [with v_d defined by the positive square root of eq. (2.9)]. That is, we neglect new supersymmetric sources of CP violation that can contribute to neutrino and sneutrino phenomena. We shall address the latter possibility in a subsequent paper [14].

A. Neutrino mass

The neutrino can become massive due to mixing with the neutralinos [7]. This is determined by the (n_g + 4) × (n_g + 4) mass matrix in a basis spanned by the two neutral gauginos B̃ and W̃_3, the higgsinos h̃_U and h̃_D ≡ ν̃_0, and n_g generations of neutrinos, ν_m. The tree-level fermion mass matrix, with rows and columns corresponding to {B̃, W̃_3, h̃_U, ν_β (β = 0, 1, . . .
, n_g)} is given by [8,9]:

M^(n) = [ [ M_1,              0,                m_Z s_W v_u/v,   −m_Z s_W v_β/v ],
          [ 0,                M_2,              −m_Z c_W v_u/v,  m_Z c_W v_β/v  ],
          [ m_Z s_W v_u/v,    −m_Z c_W v_u/v,   0,               µ_β            ],
          [ −m_Z s_W v_α/v,   m_Z c_W v_α/v,    µ_α,             0_{αβ}         ] ] ,   (3.2)

where c_W ≡ cos θ_W, s_W ≡ sin θ_W, v is defined in eq. (2.10), and 0_{αβ} is the (n_g + 1) × (n_g + 1) zero matrix. In a basis-independent analysis, it is convenient to introduce:

cos ξ ≡ Σ_α v_α µ_α / (v_d µ) ,   (3.3)

where µ is defined in eq. (2.6). Note that ξ measures the alignment of v_α and µ_α. It is easy to check that M^(n) possesses n_g − 1 zero eigenvalues. We shall identify the corresponding states with n_g − 1 physical neutrinos of the Standard Model [8], while one neutrino acquires mass through mixing. We can evaluate this mass by computing the product of the five non-zero eigenvalues of M^(n) [denoted below by det′ M^(n)]:^e

det′ M^(n) = m²_Z µ² M_γ̃ cos²β sin²ξ ,   (3.4)

where M_γ̃ ≡ cos²θ_W M_1 + sin²θ_W M_2. We compare this result with the product of the four neutralino masses of the R-parity-conserving MSSM (obtained by computing the determinant of the upper 4 × 4 block of M^(n) with µ_0, v_0 replaced by µ, v_d respectively):

det M^(n)_0 = µ [ m²_Z M_γ̃ sin 2β − M_1 M_2 µ ] .   (3.5)

To first order in the neutrino mass, the neutralino masses are unchanged by the R-parity violating terms, and we end up with [9]

m_ν = det′ M^(n) / det M^(n)_0 = m²_Z µ M_γ̃ cos²β sin²ξ / ( m²_Z M_γ̃ sin 2β − M_1 M_2 µ ) .   (3.6)

Thus, m_ν ∼ m_Z cos²β sin²ξ, assuming that all the relevant masses are at the electroweak scale. Note that a necessary and sufficient condition for m_ν ≠ 0 (at tree level) is sin ξ ≠ 0, which implies that µ_α and v_α are not aligned. This is generic in RPV models. In particular, the alignment of µ_α and v_α is not renormalization group invariant [9,10].

^d For a general discussion of basis independent parameterizations of R-parity violation, see refs. [13] and [11].
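The tree-level mass formula (3.6) is straightforward to evaluate numerically; a minimal sketch, with illustrative electroweak-scale input values (not taken from the text) and the quadratic scaling in sin ξ checked explicitly:

```python
import math

def m_nu_tree(mZ, M1, M2, mu, tan_beta, sin_xi, sw2=0.23):
    """Tree-level neutrino mass from neutrino-neutralino mixing, eq. (3.6)."""
    cw2 = 1.0 - sw2
    M_gamma = cw2 * M1 + sw2 * M2            # M_gamma-tilde = cw^2 M1 + sw^2 M2
    beta = math.atan(tan_beta)
    cos2_beta = math.cos(beta) ** 2
    sin_2beta = math.sin(2.0 * beta)
    num = mZ ** 2 * mu * M_gamma * cos2_beta * sin_xi ** 2
    den = mZ ** 2 * M_gamma * sin_2beta - M1 * M2 * mu
    return num / den

# illustrative electroweak-scale inputs (GeV)
mZ, M1, M2, mu, tb = 91.19, 100.0, 200.0, 200.0, 5.0
m1 = m_nu_tree(mZ, M1, M2, mu, tb, sin_xi=1e-3)
m2 = m_nu_tree(mZ, M1, M2, mu, tb, sin_xi=2e-3)
```

As the text states, the mass vanishes identically for sin ξ = 0 (exact µ_α-v_α alignment) and scales as sin²ξ away from it.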
Thus, exact alignment at the low-energy scale can only be implemented by fine-tuning of the model parameters.

^e To compute this quantity, calculate the characteristic polynomial det(λI − M^(n)) and examine the first non-zero coefficient of λ^n (n = 0, 1, . . .). In the present case, det′ M^(n) is given by the coefficient of λ^{n_g − 1}.

B. Sneutrino mass splitting

In RPV low-energy supersymmetry, the sneutrinos mix with the Higgs bosons. Under the assumption of CP conservation, we may separately consider the CP-even and CP-odd scalar sectors. For simplicity, consider first the case of one sneutrino generation. If R-parity is conserved, the CP-even scalar sector consists of two Higgs scalars (h⁰ and H⁰, with m_{h⁰} < m_{H⁰}) and ν̃_+, while the CP-odd scalar sector consists of the Higgs scalar A⁰, the Goldstone boson (which is absorbed by the Z), and one sneutrino, ν̃_−. Moreover, the ν̃_± are mass degenerate, so that the standard practice is to define eigenstates of lepton number: ν̃ ≡ (ν̃_+ + iν̃_−)/√2 and the antisneutrino ν̃*. When R-parity is violated, the sneutrinos in each CP sector mix with the corresponding Higgs scalars, and the mass degeneracy of ν̃_+ and ν̃_− is broken. We expect the RPV interactions to be small; thus, we can evaluate the concomitant sneutrino mass splitting in perturbation theory. For n_g > 1 generations of sneutrinos, one can consider non-trivial flavor mixing among sneutrinos (or antisneutrinos) in addition to the n_g sneutrino-antisneutrino mass splittings.

The CP-even and CP-odd scalar squared-mass matrices are most easily derived as follows: insert the field expansions into eq. (2.5) and call the resulting expression V_even + V_odd. The CP-even squared-mass matrix is obtained from V_even, which is identified by replacing the scalar fields in eq. (2.5) by their corresponding real vacuum expectation values (or equivalently by setting a_u = a_α = 0 in V_even + V_odd).
Then, Insert h U = 1 √ 2 (v u +ia u ) andν α = 1 √ 2 (v α +ia α ) into eq.V even = 1 2 m 2 uu v 2 u + 1 2 m 2 αβ v α v β − b α v u v α + 1 32 (g 2 + g ′2 ) v 2 u − v 2 d 2 , (3.7) V odd = 1 2 m 2 uu a 2 u + 1 2 m 2 αβ a α a β + b α a u a α + 1 32 (g 2 + g ′2 ) (a 2 u − a 2 d ) 2 + 2(a 2 u − a 2 d )(v 2 u − v 2 d ) , (3.8) where m 2 uu ≡ (m 2 U + µ 2 ) and m 2 αβ ≡ (M 2 L ) αβ + µ α µ β . The minimization conditions dV even /dv p = 0 (p = u, α) yield eqs. (2.7) and (2.8), with all parameters assumed to be real. In particular, it is convenient to rewrite eq. (2.8). First, we introduce the generalized (n g + 1) × (n g + 1) sneutrino squared-mass matrix: (M 2 νν * ) αβ ≡ (M 2 L ) αβ + µ α µ β − 1 8 (g 2 + g ′2 )(v 2 u − v 2 d )δ αβ . (3.9) Then, eq. (2.8) assumes a very simple form: (M 2 νν * ) αβ v β = v u b α . (3.10) From this equation, we can derive the necessary and sufficient condition for sin ξ = 0 (corresponding to the alignment of µ α and v α ). If there exist some number c such that (M 2 νν * ) αβ µ β = c b α ,(3.11) then it follows that µ α and v α are aligned. f To prove that eq. (3.11) implies the alignment of µ α and v α , simply insert eq. (3.11) into eq. (3.10) [thereby eliminating b α ], and note that f It is interesting to compare this result with the one obtained in ref. [8], where it was shown that µ α and v α are aligned if two conditions hold: (i) b α ∝ µ α and (ii) µ α is an eigenvector of (M 2 L ) αβ . From eq. (3.11), we see that these two conditions are sufficient for alignment [since conditions (i) and (ii) imply the existence of a constant c in eq. (3.11)], but are not the most general. (M 2 νν * ) αβ must be non-singular [otherwise eq. (3.10) would not yield a unique non-trivial solution for v α ]. Naively, one might think that if µ α and v α are aligned, so that all tree-level neutrino masses vanish, then one would also find degenerate sneutrino-antisneutrino pairs at treelevel. This is not generally true. 
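The sufficiency of condition (3.11) is easy to verify numerically: if $M^2_{\tilde\nu\tilde\nu^*}\mu = c\,b$ for some constant $c$, then the solution $v_\alpha$ of eq. (3.10) is exactly proportional to $\mu_\alpha$, i.e. $\sin\xi = 0$. A minimal sketch (Python/NumPy; the random positive-definite matrix and the constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                  # n_g + 1 flavors (n_g = 2)

A = rng.normal(size=(n, n))
Msq = A @ A.T + n * np.eye(n)          # random positive-definite M^2_{nu nu*}
mu = rng.normal(size=n)                # mu_alpha
c, vu = 2.5, 170.0                     # arbitrary constants
b = Msq @ mu / c                       # impose condition (3.11): M^2 mu = c b

v = np.linalg.solve(Msq, vu * b)       # solve eq. (3.10): M^2 v = v_u b
cosxi = (v @ mu) / (np.linalg.norm(v) * np.linalg.norm(mu))
print(cosxi)                           # alignment: cos(xi) = +/-1
```

Indeed $v = (v_u/c)\,\mu$ follows immediately by substitution, which is the alignment statement in the text.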
Instead, the absence of degenerate sneutrino-antisneutrino pairs is controlled by the alignment of b α and v α . To see how this works, note that eq. (3.10) implies that b α and v α are aligned if v β is an eigenvector of (M 2 νν * ) αβ . In this case, one can rotate to a basis in which v m = b m = 0 (where m = 1, . . . , n g ). In this basis the matrix elements (M 2 νν * ) 0m = (M 2 νν * ) m0 = 0, which implies that there is no mixing between Higgs bosons and sneutrinos. Thus, although some RPV effects still remain in the theory, the CP-even and CP-odd sneutrino mass matrices are identical. Consequently, the conditions for the absence of tree-level neutrino masses (alignment of µ α and v α ) and the absence of sneutrino-antisneutrino mass splitting at tree-level (alignment of b α and v α ) are different. To compute the tree-level sneutrino-antisneutrino mass splittings, we must calculate the CP-even and CP-odd scalar spectrum. The CP-even scalar squared-mass matrix is given by (M 2 even ) pq = d 2 V even dv p dv q . (3.12) After using the minimization conditions of the potential, we obtain the following result for the CP-even squared-mass matrix M 2 even = 1 4 (g 2 + g ′2 )v 2 u + b ρ v ρ /v u − 1 4 (g 2 + g ′2 )v u v β − b β − 1 4 (g 2 + g ′2 )v u v α − b α 1 4 (g 2 + g ′2 )v α v β + (M 2 νν * ) αβ ,(3.13) where (M 2 νν * ) αβ is constrained according to eq. (3.10). The CP-odd scalar squared-mass matrix is determined from (M 2 odd ) pq = d 2 V odd da p da q ap=0 ,(3.14) where V odd is given by eq. (3.8). The resulting CP-odd squared-mass matrix is then M 2 odd = b ρ v ρ /v u b β b α (M 2 νν * ) αβ . (3.15) Note that the vector (−v u , v β ) is an eigenvector of M 2 odd with zero eigenvalue; this is the Goldstone boson that is absorbed by the Z. One can check that the following tree-level sum rule holds: Tr M 2 even = m 2 Z + Tr M 2 odd . (3.16) This result is a generalization of the well known tree-level sum rule for the CP-even Higgs masses of the MSSM [see eq. 
(3.21)]. Eq. (3.16) is more general in that it also includes contributions from the sneutrinos which mix with the neutral Higgs bosons in the presence of RPV interactions. To complete the computation of the sneutrino-antisneutrino mass splitting, one must evaluate the non-zero eigenvalues of M 2 even and M 2 odd , and identify which ones correspond to the sneutrino eigenstates. To do this, one must first identify the small parameters characteristic of the RPV interactions. We find that a judicious choice of basis significantly simplifies the analysis. Following the discussion at the end of Section II, we choose a basis such that v m = 0 (which implies that v d = v 0 ). To illustrate our method, we exhibit the calculation in the case of n g = 1 generation. In the basis where v 1 = 0, eq. (3.10) implies that (M 2 νν * ) α0 = b α tan β (α = 0, 1). Then the squared-mass matrices eqs. (3.13) and (3.15) reduce to: M 2 even =    b 0 cot β + 1 4 (g 2 + g ′2 )v 2 u −b 0 − 1 4 (g 2 + g ′2 )v u v d −b 1 −b 0 − 1 4 (g 2 + g ′2 )v u v d b 0 tan β + 1 4 (g 2 + g ′2 )v 2 d b 1 tan β −b 1 b 1 tan β m 2 νν *    ,(3.17) and M 2 odd =    b 0 cot β b 0 b 1 b 0 b 0 tan β b 1 tan β b 1 b 1 tan β m 2 νν *    , (3.18) where m 2 νν * ≡ (M 2 νν * ) 11 = (M 2 L ) 11 + µ 2 1 − 1 8 (g 2 + g ′2 )(v 2 u − v 2 d ) . (3.19) In the R-parity-conserving limit (b 1 = µ 1 = 0), one obtains the usual MSSM tree-level masses for the Higgs bosons and the sneutrinos. In both squared-mass matrices [eqs. (3.17) and (3.18)], b 1 ≪ m 2 Z is a small parameter that can be treated perturbatively. We may then compute the sneutrino mass splitting due to the small mixing with the Higgs bosons. Using second order matrix perturbation theory to compute the eigenvalues, we find: m 2 ν + = m 2 νν * + b 2 1 cos 2 β sin 2 (β − α) (m 2 νν * − m 2 H 0 ) + cos 2 (β − α) (m 2 νν * − m 2 h 0 ) , m 2 ν − = m 2 νν * + b 2 1 (m 2 νν * − m 2 A 0 ) cos 2 β . 
(3.20) Above, we employ the standard notation for the MSSM Higgs sector observables [15]. Note that at leading order in b 2 1 , it suffices to use the values for the Higgs parameters in the R-parity-conserving limit. In particular, the (tree-level) Higgs masses satisfy: m 2 h 0 + m 2 H 0 = m 2 Z + m 2 A 0 ,(3.21)m 2 h 0 m 2 H 0 = m 2 Z m 2 A 0 cos 2 2β ,(3.22) while the (tree-level) CP-even Higgs mixing angle satisfies: cos 2 (β − α) = m 2 h 0 (m 2 Z − m 2 h 0 ) m 2 A 0 (m 2 H 0 − m 2 h 0 ) . (3.23) After some algebra, we end up with the following expression at leading order in b 2 1 for the sneutrino squared-mass splitting, ∆m 2 ν ≡ m 2 ν + − m 2 ν − : ∆m 2 ν = 4 b 2 1 m 2 Z m 2 νν * sin 2 β (m 2 νν * − m 2 H )(m 2 νν * − m 2 h )(m 2 νν * − m 2 A ) . ( 3.24) We now extend the above results to more than one generation of sneutrinos. In a basis where v m = 0 (m = 1, . . . , n g ), the resulting CP-even and CP-odd squared mass matrices are obtained from eqs. (3.17) and (3.18) by replacing b 1 with the n g -dimensional vector b m and m 2 νν * by the n g ×n g matrix, (M 2 νν * ) mn . In general, (M 2 νν * ) mn need not be flavor diagonal. In this case, the theory would predict sneutrino flavor mixing in addition to the sneutrinoantisneutrino mixing exhibited above. The relative strength of these effects depends on the relative size of the RPV and flavor-violating parameters of the model. To analyze the resulting sneutrino spectrum, we choose a basis in which (M 2 νν * ) mn is diagonal: (M 2 νν * ) mn = (m 2 νν * ) m δ mn . (3.25) In this basis b m is also suitably redefined. (We will continue to use the same symbols for these quantities in the new basis.) The CP-even and CP-odd sneutrino mass eigenstates will be denoted by (ν + ) m and (ν − ) m respectively. g It is a simple matter to extend the perturbative analysis of the scalar squared-mass matrices if the (m 2 νν * ) m are non-degenerate. We then find that (∆m 2 ν ) m ≡ (m 2 ν + ) m − (m 2 ν − ) m is given by eq. 
(3.24), with the replacement of b 1 and m 2 νν * by b m and (m 2 νν * ) m , respectively. That is, while in general only one neutrino is massive, all the sneutrino-antisneutrino pairs are generically split in mass. h If we are prepared to allow for special choices of the parameters µ α and b α , then these results are modified. The one massive neutrino becomes massless if µ m = 0 for all m (in the basis where v m = 0). In contrast, the number of sneutrino-antisneutrino pairs that remain degenerate in mass is equal to the number of the b m that are zero. (Of course, all these tree-level results are modified by one loop radiative corrections as discussed in Section IV.) If some of the (m 2 νν * ) m are degenerate, the analysis becomes significantly more complicated. We will not provide the corresponding analytic expressions (although they can be obtained using degenerate second order perturbation theory). However, one can show that for two or more generations if n deg of the (m 2 νν * ) m are equal (by definition, n deg ≥ 2), and if b m = 0 for all m then only n g − n deg + 2 of the CP-even/CP-odd sneutrino pairs are split in mass. The remaining n deg − 2 sneutrino pairs are exactly mass-degenerate at tree-level. Additional cases can be considered if some of the b m vanish. IV. ONE-LOOP EFFECTS In Section III, we showed that in the three generation model for a generic choice of RPV-parameters, mass for one neutrino flavor is generated at tree-level due to mixing with the neutralinos, while mass splittings of three generations of sneutrino-antisneutrino pairs at tree level are a consequence of mixing with the Higgs bosons. Special choices of the RPV parameters can leave all neutrinos massless at tree-level and/or less than three sneutrinoantisneutrino pairs with non-degenerate tree-level masses. Masses for the remaining massless neutrinos and mass splittings for the remaining degenerate sneutrino-antisneutrino pairs will be generated by one loop effects. 
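Before turning to the loop computations, the tree-level Higgs–sneutrino results of Section III can be cross-checked numerically for n_g = 1: the sum rule (3.16), the Goldstone mode of the CP-odd matrix, and the perturbative splitting (3.24). The sketch below builds eqs. (3.17)–(3.18) with illustrative inputs, with b_1 chosen small so that perturbation theory applies:

```python
import numpy as np

# Illustrative inputs; masses-squared in GeV^2, b1 is the small RPV parameter.
mZ2 = 91.19**2
vu, vd = 170.0, 120.0
v2 = vu**2 + vd**2
k = mZ2 / v2                        # (g^2 + g'^2)/4
tb, ctb = vu / vd, vd / vu          # tan(beta), cot(beta)
b0, b1 = 3.0e4, 10.0
mnn2 = 9.0e4                        # m^2_{nu nu*}: a 300 GeV sneutrino

# CP-even and CP-odd squared-mass matrices, eqs. (3.17) and (3.18).
M_even = np.array([[b0 * ctb + k * vu**2, -b0 - k * vu * vd, -b1],
                   [-b0 - k * vu * vd, b0 * tb + k * vd**2, b1 * tb],
                   [-b1, b1 * tb, mnn2]])
M_odd = np.array([[b0 * ctb, b0, b1],
                  [b0, b0 * tb, b1 * tb],
                  [b1, b1 * tb, mnn2]])

# Sum rule (3.16): Tr M_even = m_Z^2 + Tr M_odd (exact).
sum_rule_gap = np.trace(M_even) - np.trace(M_odd) - mZ2

# RPC-limit Higgs spectrum from eqs. (3.21)-(3.22).
mA2 = b0 * (tb + ctb)
s, p = mZ2 + mA2, mZ2 * mA2 * ((vd**2 - vu**2) / v2) ** 2
mh2 = 0.5 * (s - np.sqrt(s**2 - 4.0 * p))
mH2 = 0.5 * (s + np.sqrt(s**2 - 4.0 * p))

# Numerical sneutrino eigenvalues: the ones closest to mnn2 in each sector.
e_even = np.linalg.eigvalsh(M_even)
e_odd = np.linalg.eigvalsh(M_odd)
m2_plus = e_even[np.argmin(np.abs(e_even - mnn2))]
m2_minus = e_odd[np.argmin(np.abs(e_odd - mnn2))]
dm2_num = m2_plus - m2_minus

# Perturbative splitting, eq. (3.24).
sb2 = vu**2 / v2
dm2_pert = 4.0 * b1**2 * mZ2 * mnn2 * sb2 / (
    (mnn2 - mH2) * (mnn2 - mh2) * (mnn2 - mA2))
print(sum_rule_gap, dm2_num, dm2_pert)
```

The sum rule holds exactly by construction, the CP-odd matrix has the massless Goldstone mode along (−v_u, v_d, 0), and the numerical splitting matches eq. (3.24) at leading order in b_1^2.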
Moreover, in some cases, the radiative corrections to the tree-level generated masses and mass splittings can be significant (and may actually dominate the corresponding tree-level results). As a concrete example, consider a model in which RPV interactions are introduced only through the superpotential λ and λ ′ couplings [eq. (2.1)]. In this case, µ α , b α and v α are all trivially aligned and no tree-level neutrino masses nor sneutrino mass splittings are generated. In a realistic model, soft-supersymmetry-breaking RPV-terms will be generated radiatively in such models, thereby introducing a small non-alignment among µ α , v α and b α . However, the resulting tree-level neutrino masses and sneutrino-antisneutrino mass splittings will be radiatively suppressed, in which case the tree-level and one loop radiatively generated masses and mass splittings considered in this section would be of the same order of magnitude. In this section, we compute the one loop generated neutrino mass and sneutrinoantisneutrino mass splitting generated by the RPV interactions. However, there is another effect that arises at one loop from R-parity conserving effects. Once a sneutrinoantisneutrino squared-mass splitting is established, its presence will contribute radiatively to neutrino masses through a one loop diagram involving sneutrinos and neutralinos (with R-parity conserving couplings). Similarly, a non-zero neutrino mass will generate a one loop sneutrino-antisneutrino mass splitting. In ref. [4], we considered these effects explicitly. The conclusion of this work was that 10 −3 < ∼ ∆mν m ν < ∼ 10 3 . (4.1) This result is applicable in all models in which there is no unnatural cancellation between the tree-level and one loop contribution to the neutrino mass or to the sneutrino-antisneutrino mass splitting. A. One-loop Neutrino mass At one loop, contributions to the neutrino mass are generated from diagrams involving charged lepton-slepton loop (shown in Fig. 
1) and an analogous down-type quark-squark loop [7]. We first consider the contribution of the charged lepton-slepton loop. We shall work in a specific basis, in which v m = 0 (i.e., v 0 ≡ v d ) and the charged lepton mass matrix is diagonal. In this basis, the distinction between charged sleptons and Higgs bosons is meaningful. Nevertheless, in a complete calculation, we should keep track of charged slepton-Higgs boson mixing and the charged lepton-chargino mixing which determine the actual mass eigenstates that appear in the loop. For completeness, we write out in Appendix B the relevant mass matrices of the charged fermion and scalar sectors. In order to simplify the computation, we shall simply ignore all flavor mixing (this includes mixing between charged Higgs bosons and sleptons). However, we allow for mixing between the L-type and R-type charged sleptons separately in each generation, since this is necessary in order to obtain a non-vanishing effect. It therefore suffices to consider the structure of a 2 × 2 (LR) block of the charged slepton squared-mass matrix corresponding to one generation. The corresponding charged slepton mass eigenstates are given by: ℓ i = V i1 ℓ L + V i2 ℓ R , i = 1, 2 ,(4.2) where V = cos φ ℓ sin φ ℓ − sin φ ℓ cos φ ℓ . (4.3) The mixing angle φ ℓ can be found by diagonalizing the charged slepton squared-mass matrix M 2 slepton = L 2 + m 2 ℓ Am ℓ Am ℓ R 2 + m 2 ℓ ,(4.4) where L 2 ≡ (M 2 L ) ℓℓ + (T 3 − e sin 2 θ W )m 2 Z cos 2β, R 2 ≡ (M 2 E ) ℓℓ + (e sin 2 θ W )m 2 Z cos 2β, with T 3 = −1/2 and e = −1 for the down-type charged sleptons, and A ≡ (A E ) 0ℓℓ − µ 0 tan β. In terms of these parameters, the mixing angle is given by sin 2φ ℓ = 2Am ℓ (L 2 − R 2 ) 2 + 4A 2 m 2 ℓ . (4.5) The two-point amplitude corresponding to Fig. 1 can be computed using the Feynman rules given in Appendix C. 
The result is given by iM qm = ℓ,p i=1,2 d 4 q (2π) 4 (−iλ qℓp )C −1 P L V i2 i(q / + m ℓ ) q 2 − m 2 ℓ (iλ mpℓ )P L V i1 i (q − p) 2 − M 2 p i , (4.6) where m ℓ is the lepton mass, M p i are the sleptons masses and the V ij are the slepton mixing matrix elements [eq. (4.3)]. The charge conjugation matrix C appears according to the Feynman rules given in Appendix D of ref. [17]. The integral above can be expressed in terms of the well known one loop-integral B 0 (defined in Appendix D). The corresponding contributions to the one loop neutrino mass matrix is obtained via: (m ν ) qm = −M qm (p 2 = 0). The end result is (m ν ) (ℓ) qm = 1 32π 2 ℓ,p λ qℓp λ mpℓ m ℓ sin 2φ ℓ B 0 (0, m 2 n , M 2 p 1 ) − B 0 (0, m 2 n , M 2 p 2 ) ≃ 1 32π 2 ℓ,p λ qℓp λ mpℓ m ℓ sin 2φ ℓ ln M 2 p 1 M 2 p 2 ,(4.7) where the superscript (ℓ) indicates the contribution of Fig. 1. As expected, the divergences cancel and the final result is finite. In the last step, we simplified the resulting expression under the assumption that m ℓ ≪ M p 1 , M p 2 . The quark-squark loop contribution to the one loop neutrino mass may be similarly computed. Employing the same approximations as described above, the final result can be immediately obtained from eq. (4.7) with the following adjustments: (i) multiply the result by a color factor of N c = 3; (ii) replace the Yukawa couplings λ with λ ′ and the lepton mass m ℓ by the corresponding down-type quark mass m d ; (iii) replace the slepton mixing angle φ ℓ by the corresponding down-type squark mixing angle φ d . Note that φ d is computed using eqs. ν m (p) ν q (p) ℓ − n (q) ℓ − p (q − p)(m ν ) (d) qm ≃ 3 32π 2 d,r λ ′ qdr λ ′ mrd m d sin 2φ d ln M 2 r 1 M 2 r 2 . (4.8) The final result for the neutrino mass matrix is the sum of eqs. (4.7) and (4.8). Clearly, for generic choices of the λ and λ ′ couplings, all neutrinos (including those neutrinos that were massless at tree-level) gain a one loop generated mass. B. 
One-loop sneutrino-antisneutrino mass splitting We next consider the computation of the one-loop contributions to the sneutrino masses under some simplifying assumptions (which are sufficient to illustrate the general form of these corrections). Since the total R-parity conserving contribution to the sneutrino and antisneutrino mass is equal and large (of order the supersymmetry breaking mass), it is sufficient to evaluate the one loop corrections to the ∆L = 2 sneutrino squared-masses. Flavor non-diagonal contributions are significant only if sneutrinos of different flavors are mass-degenerate. The one loop generated mass splitting is relevant only when the tree level contributions vanish or are highly suppressed. In the simplest case, for one generation of sneutrinos and without tree-level sneutrino-antisneutrino splitting, we get (∆m 2 ν ) n = 2 M nn (p 2 = m 2 ν ) ,(4.9) where iM nm is the sum of all contributing one loop Feynman diagrams computed below and mν is the R-parity-conserving tree-level sneutrino mass. In the more complicated case, where there are n deg flavors of mass-degenerate sneutrinos, sneutrino/antisneutrino masseigenstates are obtained by diagonalizing the 2n deg × 2n deg sneutrino squared-mass matrix: (4.10) where m, n = 1, . . . , n deg and p, q = n deg +1, . . . , 2n deg . In the case that there are small masssplittings between sneutrinos of different flavor, we can treat such effects perturbatively by simply including such flavor non-degeneracies in the diagonal blocks above. Likewise, a small tree-level splitting of the sneutrino and antisneutrino can be accommodated perturbatively by an appropriate modification of the off-diagonal blocks above. As discussed in Section IV.A, we need only consider in detail the contribution of lepton and slepton loops. (In particular, we neglect flavor mixing, but allow for mixing between the L-type and R-type charged sleptons separately in each generation.) 
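The loop function entering eq. (4.7) is the standard two-point function B0; for a light fermion running in the loop, the finite difference of two B0's collapses to the logarithm of the heavy-slepton mass ratio used there. A sketch (Python; the MS-bar finite part of B0 at p² = 0 is written out explicitly with the renormalization scale set to 1, and the overall sign of the difference is convention dependent):

```python
import numpy as np

def b0_finite(m0sq, m1sq):
    # Finite part of B0(p^2 = 0, m0^2, m1^2) in MS-bar, scale mu_R = 1:
    # B0 = 1 - (m0^2 ln m0^2 - m1^2 ln m1^2)/(m0^2 - m1^2).
    return 1.0 - (m0sq * np.log(m0sq) - m1sq * np.log(m1sq)) / (m0sq - m1sq)

m_ell = 1.777                       # light loop fermion (the tau), GeV
M1sq, M2sq = 300.0**2, 500.0**2     # heavy slepton eigenmasses squared

diff = b0_finite(m_ell**2, M1sq) - b0_finite(m_ell**2, M2sq)
log_ratio = np.log(M1sq / M2sq)     # the logarithm appearing in eq. (4.7)
print(diff, log_ratio)
```

The divergent (and scale-dependent) pieces cancel in the difference, and for m_ℓ ≪ M the remainder equals ±ln(M₁²/M₂²) up to O(m_ℓ²/M²) corrections.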
The corresponding contributions of the quark and squark loops are then easily obtained by appropriate substitution of parameters. The relevant graphs with an intermediate lepton and slepton loops are shown in Figs. 2 and 3 respectively. Using the Feynman rules of Appendix C (including a minus sign for the fermion loop), the contribution of the lepton loop (Fig. 2) is given by M 2 sneutrino = m 2 ν δ mn M mp (p 2 = m 2 ν ) M * qn (p 2 = m 2 ν ) m 2 ν δ qp ,ν p (p)ν q (p) ℓ − m (q) ℓ − n (q − p)ν p (p)ν q (p) ℓ − m (q) ℓ − n (q − p)iM (f ) pq = − m,n λ pmn λ qnm d 4 q (2π) 4 Tr [(q / + m m )P L (p / + q / + m n )P L ] [q 2 − m 2 m ][(q + p) 2 − m 2 n ] (4.11) = −i 8π 2 m,n λ pmn λ qnm m m m n B 0 (p 2 , m 2 m , m 2 n ). The contribution of the slepton loop (Fig. 3) contains two distinct pieces. In the absence of LR slepton mixing, we have LL and RR contributions in the loop proportional to the λ Yukawa couplings. When we turn on the LR slepton mixing, we find additional contributions proportional to the corresponding A-terms. First, consider the contributions proportional to Yukawa couplings. For simplicity, we neglect the LR slepton mixing in this case. As before, we work in a basis where v m = 0 (i.e., v 0 ≡ v d ) and we choose a flavor basis corresponding to the one where the charged lepton mass matrices are diagonal. Then, the contribution of the slepton loop (Fig. 3), summing over i =L,R type sleptons is given by iM (λ) pq = i,m,n λ pmn λ qnm m m m n d 4 q (2π) 4 1 [q 2 − M 2 m i ][(q + p) 2 − M 2 n i ] (4.12) = i 16π 2 mn λ pmn λ qnm m m m n B 0 (p 2 , M 2 m R , M 2 n R ) + B 0 (p 2 , M 2 m L , M 2 n L ) , where the m n are lepton masses, and M m i are slepton masses. It is easy to check that the divergences cancel from the sum iM (f ) pq + iM (λ) pq , which results in a finite correction to the sneutrino mass. This serves as an important check of the calculation. If LR slepton mixing is included, the above results are modified. The corrections to eq. 
(4.12) in this case are easily obtained, but we shall omit their explicit form here. In addition, new slepton loop contributions arise that are proportional to the A-parameters (defined in eq. (2.2)). We quote only the final result: iM (A) pq = i 64π 2 m,n a pmn a qnm sin 2φ m sin 2φ n (4.13) × B 0 (p 2 , M 2 m 1 , M 2 n 1 ) + B 0 (p 2 , M 2 m 2 , M 2 n 2 ) − B 0 (p 2 , M 2 m 1 , M 2 n 2 ) − B 0 (p 2 , M 2 m 2 , M 2 n 1 ) , where φ n is the slepton mixing angle of the nth generation, and the corresponding slepton eigenstate masses are M n 1 and M n 2 . This result is manifestly finite. Note that this contribution vanishes when the LR mixing is absent. The total contribution of the lepton and slepton loops are given by the sum of eqs. (4.11), (4.12) and (4.13): iM (ℓ) pq = iM (f ) pq + iM (λ) pq + iM (A) pq . (4.14) Finally, one must add the contributions of the quark and squark loops. The results of this subsection can be used, with the substitutions described in Section IV.A to derive the final expressions. Once again, we see that for generic choices of the λ, A, λ ′ and A ′ parameters, all sneutrino-antisneutrino pairs (including those pairs that were mass-degenerate at tree-level) are split in mass by one loop effects. V. PHENOMENOLOGICAL CONSEQUENCES The detection of a non-vanishing sneutrino-antisneutrino mass splitting would be a signal of lepton number violation. In particular, it serves as a probe of ∆L = 2 interactions, which also contributes to the generation of neutrino masses. Thus, sneutrino phenomenology at colliders may provide access to physics that previously could only be probed by observables sensitive to neutrino masses. Some proposals for detecting the sneutrino-antisneutrino mass splitting were presented in ref. [4]. If this mass splitting is large (more then about 1 GeV) one may hope to be able to reconstruct the two masses in sneutrino pair-production, and measure their difference. 
In an RPV theory with L-violation, resonant production of sneutrinos become possible [18] and the sneutrino mass splitting may be detected either directly [19] or by using tau-spin asymmetries [20]. If the mass splitting is much smaller than 1 GeV, sneutrino-antisneutrino oscillations can be used to measure ∆mν. In analogy with B-B mixing, a same sign lepton signal will indicate that the two sneutrino mass eigenstates are not mass-degenerate. In practice, one may only be able to measure the ratio xν ≡ ∆mν/Γν. In order to be able to observe the oscillation two conditions must by satisfied: (i) xν should not be much smaller than 1; and (ii) the branching ratio into a lepton number tagging mode should be significant. The sneutrino-antisneutrino mass splitting is proportional to the RPV parameters b m (for tree-level mass splitting) and λ, A, λ ′ and A ′ (for loop-induced mass splitting). Generally speaking, these parameters can be rather large, and the strongest bounds on them come from the limits on neutrino masses. In the following discussion, we will consider the possible values of the relevant parameters: (i) the ratio of the sneutrino-antisneutrino mass splitting to the neutrino mass (r ν ≡ ∆mν/m ν ); (ii) the sneutrino width (Γν); and (iii) the branching ratio of the sneutrino into a lepton number tagging mode. A. Order of magnitude of ∆mν/m ν To determine the order of magnitude of ∆mν/m ν , we shall take all R-parity-conserving supersymmetric parameters to be of order m Z . In the one generation model, the neutrino acquires a mass of order m ν ∼ µ 2 1 cos 2 β/m Z via tree-level mixing, where we have used sin ξ = µ 1 /µ in a basis where v 1 = 0. The tree-level mass splitting of the sneutrinoantisneutrino pair is obtained from eq. (3.24), and we find ∆m 2 ν ∼ b 2 1 sin 2 β/m 2 Z . Using ∆m 2 ν = 2mνν * ∆mν, it follows that r ν ≡ ∆mν m ν ∼ b 2 1 tan 2 β m 2 Z µ 2 1 . (5.1) To appreciate the implications of this result, we note that eq. 
(3.10) in the v 1 = 0 basis yields b 1 = [(M 2 L ) 10 + µ 1 µ 0 ] cot β . (5.2) The natural case is the one where all terms in eq. (5.2) are of the same order. Then b 1 ∼ O(m Z µ 1 cot β), and it follows that r ν ∼ O(1). On the other hand, it is possible to have r ν ≫ 1 if, e.g., (M 2 L ) 10 ≫ µ 1 µ 0 . The upper bound, r ν < ∼ 10 3 [see eq. (4.1)] still applies in the absence of unnatural cancellations between the tree-level and the one-loop contributions to m ν . We do not discuss here any models that predict the relative size of the relevant RPV parameters. We only note that while we are not familiar with specific one-generation models that lead to r ν ≫ 1, we are aware of models that lead to r ν ∼ 1. One such example is a class of models based on horizontal symmetry [8]. In the three generation model, there is at most one tree-level non-zero neutrino mass, while all sneutrino-antisneutrino pair masses may be split. This provides far greater freedom for the possible values of (∆mν) m ∼ b 2 m sin 2 β/m 3 Z , since in many cases these are not constrained by the very small neutrino masses. In general, significant regions of parameter space exist in which r ν ≫ 1 for at least n g − 1 generations of neutrinos and sneutrinos. Consider next the implications of the RPV one loop corrections. These are proportional to different RPV parameters as compared to those that control the tree-level neutrino masses and sneutrino-antisneutrino mass splittings. Thus, one may envision cases where the RPV one loop results are either negligible, of the same order, or dominant with respect to the treelevel results. If the RPV one loop results are negligible, then the discussion above applies. In particular, in the three generation model with generic model parameters, one typically expects r ν ∼ O(1) for one of the generations, while r ν ≫ 1 for the other two generations. 
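The "natural" estimate can be made concrete: if both terms of eq. (5.2) are comparable, b₁ = O(m_Z μ₁ cot β) and eq. (5.1) gives r_ν = O(1), while a hierarchy (M²_L)₁₀ ≫ μ₁μ₀ drives r_ν ≫ 1. A sketch with illustrative numbers:

```python
import numpy as np

mZ = 91.19
tb = 3.0                        # tan(beta), illustrative
mu0, mu1 = 100.0, 1.0e-3        # GeV; mu1 is the small RPV bilinear

# "Natural" case: (M_L^2)_{10} ~ mu1*mu0, so both terms of eq. (5.2) comparable.
b1 = (mu1 * mu0 + mu1 * mu0) / tb             # eq. (5.2) with (M_L^2)_{10} = mu1*mu0
r_natural = b1**2 * tb**2 / (mZ**2 * mu1**2)  # eq. (5.1)

# Hierarchical case: (M_L^2)_{10} >> mu1*mu0 gives r_nu >> 1.
b1_big = (1.0e3 * mu1 * mu0 + mu1 * mu0) / tb
r_big = b1_big**2 * tb**2 / (mZ**2 * mu1**2)
print(r_natural, r_big)
```

Note that in the natural case r_ν reduces to 4μ₀²/m_Z², which is of order unity for electroweak-scale μ₀, independent of the small parameter μ₁.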
In contrast, if the RPV one loop corrections are dominant, then the results of Section IV imply that r ν ∼ O(1) for all three generations, for generic model parameters. B. Sneutrino width and branching ratios Besides their effect on the sneutrino-antisneutrino mixing, the RPV interactions also modify the sneutrino decays. This can happen in two ways. First, the presence of the λ and λ ′ coupling can directly mediate sneutrino decay to quark and/or lepton pairs. Second, the sneutrinos can decay through their mixing with the Higgs bosons (which would favor the decay into the heaviest fermion or boson pairs that are kinematically allowed). These decays are relevant if the sneutrino is the lightest supersymmetric particle (LSP), or if the R-parityconserving sneutrino decays are suppressed (e.g., if no two-body R-parity-conserving decays are kinematically allowed). Consider two limiting cases. First, suppose that the RPV decays of the sneutrino are dominant (or that the sneutrino is the LSP). Then, in the absence of CP-violating effects, the sneutrino and antisneutrino decay into the same channels with the same rate. Moreover, the RPV sneutrino decays violate lepton number by one unit. Hence, one cannot identify the decaying (anti)sneutrino state via a lepton tag, as in ref. [4]. However, oscillation phenomena may still be observable if there is a significant difference in the CP-even and CP-odd sneutrino lifetimes. For example, if the RPV sneutrino decays via Higgs mixing dominate, then for sneutrino masses between 2m W and 2m t , the dominant decay channels for the CP-even scalar would be W + W − , ZZ and h 0 h 0 , while the CP-odd scalar would decay mainly into bb. In this case, the ratio of sneutrino lifetimes would be of order m 2 Z /m 2 b . Adding up all channels, one finds a ratio of lifetimes of order 10 3 . 
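The condition that x_ν not be much smaller than unity can be made quantitative with the standard time-integrated mixing probability familiar from B–B̄ oscillations, χ = x²/[2(1 + x²)] (neglecting any lifetime difference between the two mass eigenstates). This closed form is the generic two-state result, not specific to ref. [4]; the sketch below checks it against a direct numerical integration of the decay-time distributions:

```python
import numpy as np

def chi_integrated(x, tmax=40.0, n=400001):
    # Time-integrated wrong-sign fraction for x = dm/Gamma (units Gamma = 1):
    # P_mix(t) ~ e^{-t} sin^2(x t/2), P_unmix(t) ~ e^{-t} cos^2(x t/2).
    t = np.linspace(0.0, tmax, n)
    w = np.exp(-t)
    def trap(y):                       # trapezoid rule, version-portable
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    mix = trap(w * np.sin(0.5 * x * t) ** 2)
    unmix = trap(w * np.cos(0.5 * x * t) ** 2)
    return mix / (mix + unmix)

for x in (0.1, 1.0, 10.0):
    print(x, chi_integrated(x), x**2 / (2.0 * (1.0 + x**2)))
```

For x = 0.1 the wrong-sign fraction is below one per cent (unobservable), while for x ≳ 1 it approaches its maximal value of 1/2, which is why x_ν ≳ 1 is required.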
Moreover, the overall lifetimes are suppressed by small RPV parameters, so one can imagine cases where an LSP sneutrino would decay at colliders with a displaced vertex. Oscillation phenomena similar to that of the K-K system would then be observable for the sneutrino-antisneutrino system. Including all three generations of sneutrinos would lead to a very rich phenomenology that would provide a precision probe of the underlying lepton-number violation of the theory. Second, suppose that the R-parity-conserving decays of the sneutrino are dominant. Then, the considerations of ref. [4] apply. In particular, in most cases, there are leptonic final states in sneutrino decays that tag the initial sneutrino state. Thus, the like-sign dilepton signal of ref. [4] can be used to measure xν = ∆mν/Γν. Since only values of xν > ∼ 1 are practically measurable, the most favorable case corresponds to very small Γν. In typical models of R-parity-conserving supersymmetry, the sneutrino decays into two body final states with a width of order 1 GeV. This result can be suppressed somewhat by chargino/neutralino mixing angle and phase space effects, but the suppression factor is at most a factor of 10 4 in rate (assuming that the tagging mode is to be observable). If the LSP is the τ ± , then supersymmetric models can be envisioned where two-body sneutrino decays are absent, and the three-body sneutrino decaysν ℓ →τ R ν τ ℓ can serve as the tagging mode. In ref. [4], we noted that an LSPτ R is strongly disfavored by astrophysical bounds on the abundance of stable heavy charged particles [21]. In R-parity-violating supersymmetry, this is not an objection, since the LSPτ R would decay through an RPV interaction. Three-body sneutrino decay widths can vary typically between 1 eV and 1 keV, depending on the supersymmetric parameters. Thus, in this case, the like-sign dilepton signature can also provide a precision probe of the underlying lepton-number violation of the theory. C. 
Conclusions R-parity violating low-energy supersymmetry with baryon number conservation provides a framework for particle physics with lepton-number violation. Recent experimental signals of neutrino masses and mixing may provide the first glimpse of the lepton-number violating world. The search for neutrino masses and oscillations is a difficult one. Even if successful, such observations will provide few hints as to the nature of the underlying lepton number violation. In supersymmetric models that incorporate lepton number violation, the phenomenology of sneutrinos may provide additional insight to help us unravel the mystery of neutrino masses and mixing. Sneutrino flavor mixing and sneutrino-antisneutrino oscillations are analogous to neutrino flavor mixing and Majorana neutrino masses, respectively. Crucial observables at future colliders include the sneutrino-antisneutrino mass splitting, sneutrino oscillation phenomena, and possible long sneutrino and antisneutrino lifetimes. In this paper, we described CP-conserving sneutrino phenomenology that can probe the physics of lepton number violation. In a subsequent paper, we will address the implications of CPviolation in the sneutrino system. The observation of such phenomena at future colliders would have a dramatic impact on the pursuit of physics beyond the Standard Model. Department of Energy under contract DE-FG03-92ER40689 and in part by a Frontier Fellowship from Fermi National Accelerator Laboratory. APPENDIX A: THE SCALAR POTENTIAL In softly-broken supersymmetric theories, the total scalar potential is given by eq. (2.4), where V F and V D originate from the supersymmetry-preserving sector, while V soft contains the soft-supersymmetry-breaking terms. V F is obtained from the superpotential W by first replacing all chiral superfields by their leading scalar components and then computing V F = Φ dW dΦ 2 ,(A1) where the sum is taken over all contributing scalar fields, Φ. For the superpotential in eq. 
(2.1) we obtain: dW dD m = λ ′ αnm L i α Q j n ǫ ij ,(A2)dW dU m = −h nm H i U Q j n ǫ ij , dW dQ j m = λ ′ αnm L i α D m − h nm H i U U m ǫ ij , dW dE m = 1 2 λ αβm L i α L j β ǫ ij , dW dL i α = λ αβm L j β E m + λ ′ αnm Q j n D m − µ α H j U ǫ ij , dW dH U = h nm Q i n U m − µ α L i α ǫ ij . Inserting these results into eq. (A1), one ends up with: V F = λ ′ αnm λ ′ * γkm L i α Q j n L i * γ Q j * k − L j * γ Q i * k + h nm h * km H i U Q j n H i * U Q j * k − H j * U Q i * k (A3) +λ ′ αnm λ ′ * γnk L i α L i * γ D m D * k + h nm h * nk |H U | 2 U m U * k −(h nm λ ′ * γnk H i U L i * γ U m D * k + h.c.) + 1 2 λ αβm λ * γδm L i α L i * γ L j β L j * δ +λ αβm λ * αγk L i β L i * γ E m E * k + λ ′ αnm λ ′ * αpk Q i n Q i * p D m D * k +|µ α | 2 |H U | 2 + (λ αβm λ ′ * αpk L i β Q i * p E m D * k + h.c.) −(µ α λ * αγk H i U L i * γ E * k + h.c.) − (µ α λ ′ * αpk H i U Q i * p D * k + h.c.) +µ α µ * β L i α L i * β + h nm h * pq U m U * q Q i n Q i * p − (µ α h * pq L i α Q i * p U * q + h.c.) . V D is obtained from the following formula V D = 1 2 D a D a + (D ′ ) 2 ,(A4) where D a = 1 2 g H i * U σ a ij H j U + m Q i * m σ a ij Q j m + α L i * α σ a ij L j α (A5) D ′ = 1 2 g ′ |H U | 2 − α | L α | 2 + 2 m | E m | 2 + 1 3 m | Q m | 2 − 4 3 m | U m | 2 + 2 3 m | D m | 2 . Then, V D = 1 8 g 2 |H U | 2 − α | L α | 2 − m | Q m | 2 2 − 2 α =β |ǫ ij L i α L j β | 2 + 4 α |H i * U L i α | 2 (A6) −2 m =n |ǫ ij Q i m Q j n | 2 + 4 m |H i * U Q i m | 2 − 4 αm |ǫ ij L i α Q i m | 2 + 1 8 g ′2 |H U | 2 − α | L α | 2 + 2 m | E m | 2 + 1 3 m | Q m | 2 − 4 3 m | U m | 2 + 2 3 m | D m | 2 2 . Finally, the soft-supersymmetry-breaking contribution to the scalar potential has already been given in eq. (2.2). APPENDIX B: THE CHARGED FERMION AND SCALAR SECTORS Using the same techniques discussed in Section III, one can evaluate the tree-level masses of charged fermions and scalars. 
For completeness, we include here the results for the general R-parity-violating, baryon-triality-preserving model exhibited in Section II. (For related results in a minimal RPV model in which µ_m is the only RPV parameter, see ref. [22].) First, we consider the sector of charged fermions. The charginos and charged leptons mix, so we must diagonalize an (n_g + 2) × (n_g + 2) matrix, for n_g generations of leptons. Following the notation of ref. [23], we assemble the two-component fermion fields as follows:
$$\psi^+ = (-i\lambda^+,\ \psi^+_{H_U},\ \psi^+_{E_k})\,, \qquad \psi^- = (-i\lambda^-,\ \psi^-_{L_\alpha})\,, \tag{B1}$$
where −iλ± are the two-component wino fields, and the remaining fields are the fermionic components of the indicated scalar field. As before, m = 1, ..., n_g and α = 0, 1, ..., n_g, with L_0 ≡ H_D. The mass term in the Lagrangian then takes the form [8,9,24]:
$$\mathcal{L}_{\rm mass} = -\tfrac{1}{2}\,(\psi^+\ \psi^-)\begin{pmatrix} 0 & X^T \\ X & 0 \end{pmatrix}\begin{pmatrix} \psi^+ \\ \psi^- \end{pmatrix} + \mathrm{h.c.}\,, \tag{B2}$$
where
$$X = \begin{pmatrix} M_2 & \tfrac{1}{\sqrt 2}\, g v_u & 0_m \\[2pt] \tfrac{1}{\sqrt 2}\, g v_\alpha & \mu_\alpha & (m_\ell)_{\alpha m} \end{pmatrix}. \tag{B3}$$
(The result given in eq. (B3) corrects a minor error that appears in refs. [8] and [9].) In eq. (B3), 0_m is a row vector with n_g zeros, and
$$(m_\ell)_{\alpha m} \equiv \tfrac{1}{\sqrt 2}\, v_\rho\, \lambda_{\rho\alpha m}\,. \tag{B4}$$
Note that in the basis where v_n = 0, the definition of (m_ℓ)_{nm} reduces to the one given in eq. (2.11). The charged fermion masses are obtained by either diagonalizing X†X (with unitary matrix V) or XX† (with unitary matrix U*), where the two unitary matrices are chosen such that U*XV⁻¹ is a diagonal matrix with the non-negative fermion masses along the diagonal. The following relation is noteworthy:
$$\mathrm{Tr}\,(X^\dagger X) = \mathrm{Tr}\,(X X^\dagger) = |M_2|^2 + |\mu|^2 + 2 m_W^2 + \mathrm{Tr}\,(m_\ell^\dagger m_\ell)\,, \tag{B5}$$
where |µ|² is defined in eq. (2.6). Note that in the R-parity-conserving MSSM, Tr M²_χ ≡ |M_2|² + |µ|² + 2m²_W is the sum of the two chargino squared-masses and m_ℓ is the charged lepton mass matrix. In the presence of RPV interactions, eq. (B5) remains valid despite the mixing between charginos and charged leptons.
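As a consistency check not spelled out in the text, the trace relation (B5) follows directly from the explicit form (B3) of X, assuming the normalization m²_W = ¼g²(|v_u|² + |v_d|²) with |v_d|² ≡ Σ_α |v_α|² and |µ|² ≡ Σ_α |µ_α|²:

```latex
\mathrm{Tr}\,(X^\dagger X)
  = |M_2|^2
  + \tfrac{1}{2}\, g^2 |v_u|^2
  + \tfrac{1}{2}\, g^2 \sum_\alpha |v_\alpha|^2
  + \sum_\alpha |\mu_\alpha|^2
  + \mathrm{Tr}\,(m_\ell^\dagger m_\ell)
  = |M_2|^2 + |\mu|^2 + 2 m_W^2 + \mathrm{Tr}\,(m_\ell^\dagger m_\ell)\,.
```

The first equality is just the sum of the squared moduli of the entries of X; the second uses the two assumed definitions above.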
Of course, m_ℓ no longer corresponds precisely to a mass matrix of physical states. For example, in the v_m = 0 basis,
$$X^\dagger X = \begin{pmatrix}
|M_2|^2 + \tfrac{1}{2} g^2 |v_d|^2 & \tfrac{1}{\sqrt 2}\, g\,(M_2^* v_u + v_d^*\, \mu \cos\xi) & 0_m \\[2pt]
\tfrac{1}{\sqrt 2}\, g\,(M_2 v_u^* + v_d\, \mu^* \cos\xi) & |\mu|^2 + \tfrac{1}{2} g^2 |v_u|^2 & \mu_n^*\, (m_\ell)_{nm} \\[2pt]
0_k & \mu_n\, (m_\ell^*)_{nk} & (m_\ell^\dagger m_\ell)_{km}
\end{pmatrix}, \tag{B6}$$
where cos ξ is defined in eq. (3.3). As expected, if µ_m ≠ 0 (but small), then the physical lepton eigenstates will have a small admixture of the charged higgsino eigenstate. It is amusing to note that in the exact limit of m_ℓ = 0, there are n_g massless fermions (i.e., the charged leptons), in spite of the mixing with the charged higgsinos through the RPV terms.

We next turn to the charged scalar sector. In this case, the charged sleptons mix with the charged Higgs boson and charged Goldstone boson (which is absorbed by the W±). The resulting (2n_g + 2) × (2n_g + 2) squared-mass matrix can be obtained from the scalar potential given by eqs. (A3), (A6) and (2.2). In the {H¹_U, L^{2*}_β, E_m} basis, the charged scalar squared-mass matrix is given by:
$$M_C^2 = \begin{pmatrix}
m^2_{uu} + D & b^*_\beta + D_\beta & \mu^*_\beta\, (m_\ell)_{\beta m} \\[2pt]
b_\alpha + D^*_\alpha & m^2_{\alpha\beta} + (m_\ell m_\ell^\dagger)_{\alpha\beta} + D_{\alpha\beta} & \tfrac{1}{\sqrt 2}\,(a_{\rho\alpha m}\, v_\rho - \mu^*_\rho\, \lambda_{\rho\alpha m}\, v_u^*) \\[2pt]
\mu_\alpha\, (m_\ell^*)_{\alpha k} & \tfrac{1}{\sqrt 2}\,(a^*_{\rho\beta k}\, v^*_\rho - \mu_\rho\, \lambda^*_{\rho\beta k}\, v_u) & (M_E^2)_{km} + (m_\ell^\dagger m_\ell)_{km} + D_{km}
\end{pmatrix}, \tag{B7}$$
where the matrix m_ℓ is defined in eq. (B4) and
$$\begin{aligned}
m^2_{uu} &\equiv m^2_U + |\mu|^2\,, \\
m^2_{\alpha\beta} &\equiv (M_L^2)_{\alpha\beta} + \mu_\alpha \mu^*_\beta\,, \\
D_{\alpha\beta} &\equiv \tfrac{1}{4}\, g^2\, v^*_\alpha v_\beta + \tfrac{1}{8}(g^2 - g'^2)(|v_u|^2 - |v_d|^2)\,\delta_{\alpha\beta}\,, \\
D_{km} &\equiv \tfrac{1}{4}\, g'^2 (|v_u|^2 - |v_d|^2)\,\delta_{km}\,, \\
D_\alpha &\equiv \tfrac{1}{4}\, g^2\, v_\alpha v_u\,, \\
D &\equiv \tfrac{1}{8}(g^2 + g'^2)(|v_u|^2 - |v_d|^2) + \tfrac{1}{4}\, g^2 |v_d|^2\,.
\end{aligned} \tag{B8}$$
As a check of the calculation, we have verified that (−v_u, v*_β, 0) is an eigenvector of M²_C with zero eigenvalue, corresponding to the charged Goldstone boson that is absorbed by the W±. The computation makes use of the minimization conditions of the potential [eqs.
(2.7) and (2.8)] and the antisymmetry of λ_{ρβk} and a_{ρβk} under the interchange of ρ and β. A useful sum rule can be derived in the CP-conserving limit. We find:
$$\mathrm{Tr}\, M_C^2 = m_W^2 + \mathrm{Tr}\, M^2_{\rm odd} + \mathrm{Tr}\, M_E^2 + 2\,\mathrm{Tr}\,(m_\ell^\dagger m_\ell) - \tfrac{1}{4}\, n_g\, m_Z^2 \cos 2\beta\,. \tag{B9}$$
This is the generalization of the well-known sum rule, m²_{H±} = m²_W + m²_A, of the MSSM Higgs sector [15]. The charged sleptons are also contained in the above sum rule. As a check, consider the one-generation R-parity-conserving MSSM limit. Removing the Higgs sum rule contribution from eq. (B9), the leftover pieces are:
$$m^2_{\tilde e_L} + m^2_{\tilde e_R} - m^2_{\tilde\nu} = 2 m_e^2 + M_E^2 - \tfrac{1}{4}\, m_Z^2 \cos 2\beta\,. \tag{B10}$$
The term in eq. (B10) that is proportional to m²_Z is simply the D-term contribution to the combination of slepton squared-masses specified above.

APPENDIX C: FEYNMAN RULES

The fermion-scalar Yukawa couplings take the form:
$$\mathcal{L}_{\rm Yukawa} = -\tfrac{1}{2}\, \frac{\partial^2 W}{\partial \phi_i\, \partial \phi_j}\, \psi_i \psi_j + \mathrm{h.c.}\,, \tag{C1}$$
where superfields are replaced by their scalar components after taking the second derivative of the superpotential W [given in eq. (2.1)], and the ψ_i are two-component fermion fields. Converting to four-component Feynman rules (see, e.g., the appendices of ref. [17]), and defining P_{R,L} ≡ ½(1 ± γ₅), we obtain the Feynman rules listed in Fig. 4. The charge conjugation matrix C appears in fermion-number-violating vertices. The Feynman rules for the cubic scalar interactions can be obtained from the scalar potential [eqs. (A3), (A6) and (2.2)] by putting L¹_α → L¹_α + (1/√2) v_α. The Feynman rules for the interaction of the sneutrinos with slepton pairs are given in Fig. 5, where (m_ℓ)_{γm} is defined in eq. (B4). In Section IV, we have applied the rules of Fig. 5 to the ν̃_p ẽ_m ẽ_n couplings (p, m, n = 1, ..., n_g) in the basis where v_m = 0 and (m_ℓ)_{nm} is diagonal. In this basis, the terms in Fig. 5 proportional to gauge couplings do not contribute.
The ν̃_α couplings to charged-slepton pairs (the vertex factors of Fig. 5) are:

ν̃_α ẽ⁻_{Rn} ẽ⁻_{Rm}: −i λ_{αγn} (m*_ℓ)_{γm} − (i/(2√2)) g′² v*_α δ_{mn}
ν̃_α ẽ⁻_{Lρ} ẽ⁻_{Lβ}: −i λ_{αβk} (m*_ℓ)_{ρk} + (i/(4√2)) [(g² − g′²) v*_α δ_{βρ} − 2g² v*_β δ_{αρ}]
ν̃_α ẽ⁻_{Rn} ẽ⁻_{Lβ}: −i a_{αβn}

APPENDIX D: THE B0 FUNCTION

The B0 function is defined as follows:
$$\frac{i}{16\pi^2}\, B_0(p^2, M^2, m^2) = \int \frac{d^n q}{(2\pi)^n}\, \frac{1}{(q^2 - m^2)\left[(q - p)^2 - M^2\right]}\,.$$
One can express B0 as a one-dimensional integral:
$$B_0(p^2, M^2, m^2) = \Delta - \int_0^1 dx\, \ln\!\left[\frac{m^2 x + M^2 (1 - x) - p^2 x (1 - x)}{\mu^2}\right], \tag{D2}$$
where
$$\Delta \equiv (4\pi)^{\epsilon}\, \Gamma(\epsilon) = \frac{1}{\epsilon} - \gamma + \ln(4\pi) + O(\epsilon)\,, \qquad \epsilon = 2 - \frac{n}{2}\,. \tag{D3}$$
Two limiting cases are useful for the calculations performed in Section IV. In the p² → 0

Fig. 1. One-loop contribution to the neutrino mass.
Fig. 2. Lepton pair loop contribution to the sneutrino-antisneutrino mass splitting.
Fig. 3. Slepton pair loop contribution to the sneutrino-antisneutrino mass splitting.
Fig. 4. Feynman rules for the scalar-fermion interactions.
Fig. 5. Feynman rules for the interactions of the sneutrinos and charged sleptons.

(4.3) and (4.5), after replacing m_ℓ, e = −1, M²_L, M²_E and (A_E)_{0ℓℓ} with m_d, e = −1/3, M²_Q, M²_D, and (A_D)_{0dd}, respectively. Here and below, d [r] labels the generations of down-type quarks [squarks]. Then,

[j] It may seem from eq. (B6) that the charged leptons are unmixed if m_ℓ = 0. But one can show that this is not the case by computing XX†. The mixing originates from µ_m ≠ 0 appearing in the matrix X [eq. (B3)].
[g] The index m labels sneutrino generation, although one should keep in mind that in the presence of flavor violation, the sneutrino mass basis is not aligned with the corresponding mass bases relevant for the charged sleptons, charged leptons, or neutrinos.
[h] This is a very general tree-level result. Consider models with n_g generations of left-handed neutrinos in which some of the neutrino mass eigenstates remain massless. One finds that generically, all n_g sneutrino-antisneutrino pairs are split in mass.
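The one-dimensional form (D2) of B0 can be evaluated numerically and checked against the p² → 0 limit mentioned above. The sketch below is ours, not the paper's: it computes only the finite part B0 − Δ, and the closed-form p² → 0 expression it compares against is the standard one-loop result, assumed here.

```python
import numpy as np
from scipy.integrate import quad

def b0_finite(p2, M2, m2, mu2=1.0):
    """Finite part of B0(p^2, M^2, m^2), i.e. B0 - Delta, from eq. (D2)."""
    integrand = lambda x: np.log((m2 * x + M2 * (1.0 - x) - p2 * x * (1.0 - x)) / mu2)
    val, _ = quad(integrand, 0.0, 1.0)
    return -val

def b0_finite_p2zero(M2, m2, mu2=1.0):
    """Assumed closed form of the p^2 -> 0 limit of b0_finite."""
    if np.isclose(M2, m2):
        return -np.log(M2 / mu2)
    return 1.0 - (m2 * np.log(m2 / mu2) - M2 * np.log(M2 / mu2)) / (m2 - M2)
```

The equal-mass branch is the M² → m² limit of the general expression, obtained by expanding the logarithms.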
For example, in the three generation see-saw model with one right-handed neutrino, only one neutrino is massive, while all three sneutrino-antisneutrino pairs are non-degenerate. (At the one-loop level, the non-degeneracy of the sneutrino-antisneutrino pairs will generate small masses for neutrinos that were massless at tree level [16].)

ACKNOWLEDGMENTS

We thank Yossi Nir for helpful discussions. YG is supported by the U.S. Department of Energy under contract DE-AC03-76SF00515, and HEH is supported in part by the U.S. Department of Energy under contract DE-FG03-92ER40689 and in part by a Frontier Fellowship from Fermi National Accelerator Laboratory.

REFERENCES

[1] S. Weinberg, Phys. Rev. D26, 287 (1982).
[2] Y. Fukuda et al. [Super-Kamiokande Collaboration], Phys. Lett. B436, 33 (1998); Phys. Rev. Lett. 81, 1562 (1998). For a recent review of the evidence for neutrino mass, see B. Kayser, hep-ph/9810513. A recent compilation of experimental evidence for neutrino mass can be found in C. Caso et al. [Particle Data Group], Eur. Phys. J. C3, 1 (1998).
[3] J. Hisano, T. Moroi, K. Tobe, M. Yamaguchi, and T. Yanagida, Phys. Lett. B357, 579 (1995); J. Hisano, T. Moroi, K. Tobe, and M. Yamaguchi, Phys. Rev. D53, 2442 (1996).
[4] Y. Grossman and H.E. Haber, Phys. Rev. Lett. 78, 3438 (1997).
[5] M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Lett. B398, 311 (1997); hep-ph/9701273; Phys. Rev. D57, 1947 (1998); M. Hirsch, H.V. Klapdor-Kleingrothaus, S. Kolb and S.G. Kovalenko, Phys. Rev. D57, 2020 (1998).
[6] L.J. Hall, T. Moroi and H. Murayama, Phys. Lett. B424, 305 (1998).
[7] C. Aulakh and R. Mohapatra, Phys. Lett. B119, 136 (1983); L.J. Hall and M. Suzuki, Nucl. Phys. B231, 419 (1984); I.-H. Lee, Phys. Lett. B138, 121 (1984); Nucl. Phys. B246, 120 (1984); G.G. Ross and J.W.F. Valle, Phys. Lett. B151, 375 (1985); J. Ellis et al., Phys. Lett. B150, 142 (1985); S. Dawson, Nucl. Phys. B261, 297 (1985); A. Santamaria and J.W.F. Valle, Phys. Lett. B195, 423 (1987); K.S. Babu and R.N. Mohapatra, Phys. Rev. Lett. 64, 1705 (1990); R. Barbieri, M.M. Guzzo, A. Masiero, and D. Tommasini, Phys. Lett. B252, 251 (1990); E. Roulet and D. Tommasini, Phys. Lett. B256, 281 (1991); K. Enkvist, A. Masiero, and A. Riotto, Nucl. Phys. B373, 95 (1992); J.C. Romao and J.W.F. Valle, Nucl. Phys. B381, 87 (1992); R.M. Godbole, P. Roy, and X. Tata, Nucl. Phys. B401, 67 (1993); A.S. Joshipura and M. Nowakowski, Phys. Rev. D51, 2421 (1995); Phys. Rev. D51, 5271 (1995); F.M. Borzumati, Y. Grossman, E. Nardi and Y. Nir, Phys. Lett. B384, 123 (1996); M. Nowakowski and A. Pilaftsis, Nucl. Phys. B461, 19 (1996); H.-P. Nilles and N. Polonsky, Nucl. Phys. B484, 33 (1997).
[8] T. Banks, Y. Grossman, E. Nardi and Y. Nir, Phys. Rev. D52, 5319 (1995).
[9] E. Nardi, Phys. Rev. D55, 5772 (1997).
[10] R. Hempfling, Nucl. Phys. B478, 3 (1996); B. de Carlos and P.L. White, Phys. Rev. D54, 3427 (1996).
[11] J. Ferrandis, Valencia preprint IFIC/98-78 (1998) [hep-ph/9810371].
[12] H.E. Haber, in Recent Directions in Particle Theory, Proceedings of the 1992 Theoretical Advanced Study Institute in Elementary Particle Physics, J. Harvey and J. Polchinski, editors (World Scientific, Singapore, 1993), pp. 589-686.
[13] S. Davidson and J. Ellis, Phys. Lett. B390, 210 (1997); Phys. Rev. D56, 4182 (1997); S. Davidson, CERN-TH-98-161 (1998) [hep-ph/9808425].
[14] Y. Grossman and H.E. Haber, preprint in preparation.
[15] J.F. Gunion, H.E. Haber, G. Kane and S. Dawson, The Higgs Hunter's Guide (Addison-Wesley Publishing Company, Redwood City, CA, 1990).
[16] S. Davidson and S.F. King, CERN-TH-98-256 (1998) [hep-ph/9808296].
[17] H.E. Haber and G.L. Kane, Phys. Rep. 117, 75 (1985).
[18] S. Dimopoulos and L.J. Hall, Phys. Lett. B207, 210 (1988); V. Barger, G.F. Giudice and T. Han, Phys. Rev. D40, 2987 (1989); S. Dimopoulos et al., Phys. Rev. D41, 2099 (1990); J. Erler, J.L. Feng and N. Polonsky, Phys. Rev. Lett. 78, 3063 (1997); J. Kalinowski, R. Ruckl, H. Spiesberger and P.M. Zerwas, Phys. Lett. B406, 314 (1997); Phys. Lett. B414, 297 (1997); B.C. Allanach, H. Dreiner, P. Morawitz and M.D. Williams, Phys. Lett. B420, 307 (1998).
[19] J.L. Feng, J.F. Gunion and T. Han, Phys. Rev. D58, 071701 (1998).
[20] S. Bar-Shalom, G. Eilam and A. Soni, Phys. Rev. Lett. 80, 4629 (1998); Riverside preprint UCRHEP-T221 (1998) [hep-ph/9804339].
[21] A. Gould, B.T. Draine, R.W. Romani, and S. Nussinov, Phys. Lett. B238, 337 (1990); G. Starkman, A. Gould, R. Esmailzadeh, and S. Dimopoulos, Phys. Rev. D41, 3594 (1990); T. Memmick et al., Phys. Rev. D41, 2074 (1990); P. Verkerk et al., Phys. Rev. Lett. 68, 1116 (1992).
[22] M.A. Diaz, J.C. Romao and J.W.F. Valle, Nucl. Phys. B524, 23 (1998); A. Akeroyd, M.A. Diaz, J. Ferrandis, M.A. Garcia-Jareno and J.W.F. Valle, Nucl. Phys. B529, 3 (1998).
[23] J.F. Gunion and H.E. Haber, Nucl. Phys. B272, 1 (1986) [E: B402, 567 (1993)].
[24] A.G. Akeroyd, M.A. Diaz, and J.W.F. Valle, Valencia preprint FTUV-98-50 (1998) [hep-ph/9806382].
Face Recognition Algorithms Based on Transformed Shape Features

Sambhunath Biswas, Machine Intelligence Unit, ISI, Kolkata-700108, India
Amrita Biswas, Electronics & Communication Department, Sikkim Manipal Institute of Technology, Majitar-737132, India

Abstract — Human face recognition is, indeed, a challenging task, especially under illumination and pose variations. We examine in the present paper the effectiveness of two simple algorithms, using Coiflet packet and Radon transforms, to recognize human faces from some databases of still gray level images under illumination and pose variations. Both algorithms convert 2-D gray level training face images into their respective depth maps, or physical shape, which are subsequently transformed by the Coiflet packet and Radon transforms to compute energy for feature extraction. Experiments show that such transformed shape features are robust to illumination and pose variations. With the features extracted, training classes are optimally separated through linear discriminant analysis (LDA), while classification for test face images is made through a k-NN classifier, based on the L1 norm and Mahalanobis distance measures. The proposed algorithms are then tested on face images that differ in illumination, expression or pose, obtained from three databases, namely, the ORL, Yale and Essex-Grimace databases. Results so obtained are compared with two different existing algorithms. Performance using Daubechies wavelets is also examined. It is seen that the proposed Coiflet packet and Radon transform based algorithms have significant performance, especially under different illumination conditions and pose variation. Comparison shows the proposed algorithms are superior.

DOI: 10.1049/ic.2011.0123
PDF: https://arxiv.org/pdf/1207.2537v1.pdf
arXiv: 1207.2537
Keywords: Face Recognition; Radon Transform; Wavelet Transform; Linear Discriminant Analysis
Introduction

The face recognition problem has been studied extensively for more than twenty years, but even now it is not fully solved. In particular, the problem persists when illumination and pose vary significantly. Recently, some progress [1] has been made on face recognition under conditions such as small variations in lighting, facial expression or pose. Of the many algorithms for face recognition developed so far, the traditional approaches are based on Principal Component Analysis (PCA). Hyeonjoon Moon et al. [2] implemented a generic modular PCA algorithm in which the numerous design decisions are stated explicitly. They experimented with changing the illumination normalization procedure and studied its effect, along with the effect of compressing images with JPEG and wavelet compression algorithms. For this, they varied the number of eigenvectors in the representation of face images and changed the similarity measure in the classification process. Kamran Etemad and Rama Chellappa, in their discriminant analysis algorithm [3], made an objective evaluation of the significance of visual information in different parts (features) of a face for identifying the human subject. LDA of faces provides a small set of features that carries the most relevant information for classification purposes. The features are obtained through eigenvector analysis of scatter matrices, with the objective of maximizing between-class variations and minimizing within-class variations. The algorithm uses a projection-based feature extraction procedure and an automatic classification scheme for face recognition. A slightly different method, called the evolutionary pursuit method, for face recognition was described by Chengjun Liu and Harry Wechsler [4]. Their method processes images in a lower dimensional whitened PCA subspace.
Directed but random rotations of the basis vectors in this subspace are searched by a genetic algorithm, where evolution is driven by a fitness function defined in terms of performance accuracy and class separation. Many face representation approaches have been introduced to date, including subspace-based holistic features and local appearance features [17]. Typical holistic features include the well-known principal component analysis (PCA) [18], linear discriminant analysis [19], and independent component analysis (ICA) [20]. Recently, information from different domains, such as scale, space and orientation, has been used for representation and recognition of human faces by Zhen et al. [21]; this does not include the effect of illumination change. Subspace-based face recognition under the scenarios of misalignment and/or image occlusion has been published by Shuicheng et al. [22]. We have not considered image occlusions, as our objective is different. The proposed research addresses the face recognition problem with the aim of high recognition performance. The face recognition method of [5], based on curvelet-based PCA, achieves a 96.6% recognition rate on the ORL database using 5 training images, and a 100% recognition rate on the Essex-Grimace database using 8 training images. Another algorithm [6], based on the wavelet transform and using 5 training images from the ORL database, achieves a recognition rate of 99.5%. Still, further improvement is required to ensure that face recognition algorithms are robust, in particular to illumination and pose variation. A face recognition algorithm based mainly on two-dimensional gray level images, in general, exhibits poor performance when exposed to different lighting conditions. This is because the features extracted for classification are not illumination invariant.
To get rid of the illumination problem, we have used the 3-dimensional depth images of the corresponding 2-dimensional gray level face images. This is because the 3-D depth image depicts the physical surface of the face and thus provides the shape of the human face. The primary reason is that such a shape depends on the gradient values of the physical surface of the face, i.e., on the differences of intensity values and not on the absolute values of intensity. As a result, a change in illumination does not affect the feature set, and so the decision also remains unaffected. Such a shape can be obtained using a shape from shading algorithm and subsequently used for feature extraction. 3-D face matching using isogeodesic stripes through a graph, as described in [23], is a different technique for face recognition, but it is computationally expensive; it is also a different area of research. Xiaoyang and Triggs [24], on the other hand, considered texture features for face recognition under difficult lighting conditions. Their method needs to enhance local textures, but how to select the local textures, or which local textures are adequate and need to be considered, is not discussed. The proposed algorithms use the shape from shading algorithm [8], and the Coiflet/wavelet and Radon transforms respectively, to compute energy for feature extraction. It should be noted that the Radon transform provides directional information in an image. Since the PCA eigen axis provides the direction of maximum variance in the data, this axis must rotate with the rotation of the image to maintain optimality in data variance; hence the DFT magnitude of the directional information provided by the Radon transform, computed with respect to the PCA eigen axis, gives features robust to rotation. Note that linear discriminant analysis (LDA) groups the similar classes in an optimal way in the eigen space, and so a k-NN classifier can be used for classification.
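The shape from shading step mentioned above can be sketched roughly as follows. This is a simplified, damped Newton-style iteration only loosely in the spirit of the linearized method of [8]: the light direction, damping, iteration count and all names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_from_shading(E, light=(0.01, 0.01, 1.0), n_iter=30):
    """Simplified, damped Newton-style depth recovery for a Lambertian image E.

    p, q are backward finite-difference gradients of the depth Z, and the
    reflectance R(p, q) is re-linearized in Z at every sweep (sketch only).
    """
    sx, sy, sz = light
    s_norm = np.sqrt(sx * sx + sy * sy + sz * sz)
    Z = np.zeros_like(E, dtype=float)
    for _ in range(n_iter):
        p = Z - np.roll(Z, 1, axis=1)        # discrete dZ/dx
        q = Z - np.roll(Z, 1, axis=0)        # discrete dZ/dy
        g2 = 1.0 + p * p + q * q
        denom = np.sqrt(g2) * s_norm
        R = (sz - p * sx - q * sy) / denom    # Lambertian reflectance map
        f = E - np.clip(R, 0.0, 1.0)
        # dR/dZ, using dp/dZ = dq/dZ = 1 at the current pixel
        dR = -(sx + sy) / denom - R * (p + q) / g2
        step = f / (-dR + 1e-8)
        Z = Z - np.clip(step, -1.0, 1.0)      # damped update keeps the sketch stable
    return Z
```

The clipping of the Newton step is purely for numerical robustness of this toy version; the local, per-pixel character of the update is what makes this family of methods fast.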
We have used the L1 norm and the Mahalanobis distance for classification. With this, the outline of the paper is as follows: in section 2, we briefly review a shape from shading algorithm, and in section 3 the concepts of wavelet packet decomposition, the Radon transform and the LDA methodology are briefly sketched. Section 4 depicts the two proposed algorithms, while experimental results and a comparison of both methods are discussed in section 5. Finally, the conclusion is given in section 6.

2. Extraction of Illumination Dependent Features

The problem of recovering 3-D shape from a single monocular 2-D shaded image was first addressed by B. K. P. Horn [14]. He developed a method connecting the surface gradient (p, q) with the brightness values for Lambertian objects; the result is known as the reflectance map. He therefore computed the surface gradients (p, q) using the reflectance map in order to get the shape, and from (p, q) he also computed the depth Z. Since the orientation of tangent planes is accompanied by the orientation of their normal vectors, say (n_x, n_y, n_z), these can also be effectively used to represent the surface shape. As the reflectance map is, in general, non-linear, it is very difficult to find the gradient values in a straightforward way. Other researchers, such as Bruss [15] and Pentland [16], considered local analysis to simplify the problem and compute the shape. Thus, two different kinds of algorithms, global and local, emerged. In global methods, Horn showed that the shape can be recovered by minimizing a cost function involving constraints such as smoothness; he used a variational calculus approach to compute the shape in the continuous domain, and its iterative discrete version in the discrete domain. Bruss showed that no shape from shading technique can provide a unique solution without additional constraints. Later on, P. S. Tsai and M.
Shah [8] provided a simple method to compute shape through linearization of Horn's nonlinear reflectance map. For our purpose, we have used the shape from shading algorithm described by P. S. Tsai and M. Shah [8] for its simplicity and speed. This approach employs discrete approximations for p and q using finite differences, and linearizes the reflectance in Z(x, y). The method is fast, since each operation is purely local. In addition, it gives good results for spherical surfaces, unlike other linear methods. Note that an illumination change may be due to a change in the position of the source with its strength fixed, or due to a change in the source strength with its position fixed. In either case, the gradient values p and q of the surface do not change, i.e., they can be uniquely determined [14]. Hence, for the linear reflectance map, the illumination has no effect on the depth map. In other words, the depth map is illumination invariant.

Feature Extraction

A number of methods are available to extract facial features. In the proposed methods, we have selected two approaches: one based on the discrete Coiflet packet transform and the other based on the Radon transform. Both transforms use the depth values of face images to extract features for classification.

Discrete Wavelet Transform

The discrete wavelet transform is a powerful technique in signal processing and can be used in different research areas. The wavelet transform has the merits of multi-resolution, multi-scale decomposition, and so on. In the frequency domain, when the facial image is decomposed using the two-dimensional wavelet transform, four subband regions are obtained.
With each level of decomposition, the number of pixels in each subband greatly reduces into packets while the essential features of the underlying image are retained. The low frequency region in decompositions at different levels is the blurred version of the input image, while the high frequency regions contain the finer detail or edge information contained in the input image. To ensure almost nearly rotational invariance,the linear combination of the four subbands can be taken.This combination provides the sum of the different energy bands. For coiflets, this linear combination of four subband coiflet coefficients provides excellent constancy when the subject undergoes rotation. We have examined both coiflet and Daubechies wavelet in the proposed first algorithm. Also,note that coiflet has zero wavelet and zero scaling function moments in addition to one zero moment for orthogonality.As a result, a combination of zero wavelet and zero scaling function moments used with the samples of the signal may give superior results compared to wavelets with only zero wavelet moments [7]. This fact also is reflected in our results. Radon Transform and robust features In the proposed second algorithm, Radon transform is used to derive the linear features. Due to inherent properties of Radon transform, it is a useful tool to capture the directional features of images. It should be noted that the principal axis of PCA for an image rotates when the image rotates. This is done to maintain the maximum variance in data. Therefore,Radon transform, computed with respect to this axis, tenders robust features. The Radon transform of a two dimensional function f(x,y) is defined as , [11]). Jafari-Khouzani and Soltanian-Zadeh [10] showed how the rotation of the texture sample corresponds to a circular shift along θ. Therefore, using a translation invariant wavelet transform along θ, they produced rotation invariant features. 
Their method is very expensive in the sense that they used the Radon transform from 1 degree to 180 degrees to find the principal direction of the texture. In the proposed algorithm, we have used the Radon transform along the principal eigen axis given by the PCA method to compute the projections of all training images; thus, the principal direction here is the direction of the PCA eigen axis. The features so obtained are rotationally robust, because the principal eigen axis given by the PCA method always considers the line that best fits the data cloud. Their DFT magnitudes may then be taken to constitute the feature vectors. Thus, directional facial characteristics are incorporated in the feature values. Explicitly,
$$R(r, \theta)[f(x, y)] = \iint f(x, y)\, \delta(r - x\cos\theta - y\sin\theta)\, dx\, dy\,. \tag{1}$$

Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) is a technique commonly used for data classification and dimensionality reduction. LDA easily handles the case where the within-class frequencies are unequal. It maximizes the ratio of between-class variance to within-class variance in any particular data set, thereby guaranteeing maximal separability. However, in the face recognition problem, one is confronted with the difficulty that the within-class scatter matrix is always singular. This stems from the fact that the number of images in the training set is much smaller than the number of pixels in an image. In order to overcome this problem, the images are projected to a lower dimensional space so that the resulting within-class scatter matrix becomes nonsingular.
The first algorithm uses the discrete coiflet/Daubechies wavelet for feature extraction and is described as follows:

Algorithm 1
Step 1: compute the depths of all the training images using the shape-from-shading algorithm.
Step 2: perform four levels of decomposition of the depth images using the wavelet transform, based on the coiflet mother wavelet, to get the four subband components.
Step 3: take the linear summation of all the wavelet transform coefficients to build up the feature vectors for the training images.
Step 4: perform linear discriminant analysis on the feature vectors.
Step 5: compute the feature vectors for the test images and project them into the LDA subspace.
Step 6: classify the test images using the L1 norm and the Mahalanobis distance measure.
Step 7: stop.

In Algorithm 2, the Radon transform is followed by the Fourier transform to capture the directional features of the depth-map images. The angle θ selected for the Radon transform computation is detected by the principal eigen axis because of its uniqueness.

Algorithm 2
Step 1: compute the depths of all the training images using the shape-from-shading method.
Step 2: compute the Radon transform coefficients of the depth images and take them as feature vectors, or find the DFT magnitude of the transformed data to constitute the feature vectors.
Step 3: perform linear discriminant analysis on the feature vector set.
Step 4: compute the feature vectors for the test images and project them into the LDA subspace.
Step 5: classify the test images using the L1 norm and the Mahalanobis distance measure.
Step 6: stop.

5. Results & Discussion

In order to test the proposed algorithms, we have used the ORL, Essex-Grimace and Yale databases. The ORL (AT&T) database contains 10 different images (92 x 112), each of 40 different subjects. All images were taken against a dark homogeneous background with the subjects in an upright, frontal position with some side movement. Sample images of the dataset are shown in Fig. 5.
The Yale database contains images of 10 subjects (480 x 640) under 64 different lighting conditions. Sample images of this database are shown in Fig. 6. The Essex-Grimace database, on the other hand, contains a sequence of 20 images (180 x 200) for each of 18 individuals, consisting of male and female subjects, taken with a fixed camera. During the sequence, the subject moves his/her head and makes grimaces which get more extreme towards the end of the sequence. Sample images of this database are shown in Fig. 7. The depth map was computed for all the images in the training database assuming the reflectance of the surface to be Lambertian. The obtained depth image has the same size as the original image, i.e., 92 x 112, 180 x 200 and 480 x 640 for the ORL, Essex-Grimace and Yale databases, respectively. The depth image computed by the shape-from-shading algorithm for the first image in the ORL database is shown in Fig. 8. To show the robustness of the features against orientation, we have plotted, for both algorithms (Algorithm 1 and Algorithm 2), the relative error in distance measurement for all ten images in six classes (of the ORL database) from their respective mean images in the LDA space. Note that this distance is almost zero for all the images in a class and maintains excellent constancy. For robustness of the features against illumination, we have also computed this distance using the Yale database. Constancy is found to be preserved in this case as well, but due to space constraints we cannot include this result. Algorithm 1, based on coiflet and Daubechies wavelet packet features, and Algorithm 2, based on directional features through the Radon transform, have both been tested using different numbers of training images. Classification was conducted in the LDA space using a k-NN classifier based on the L1 norm measure and the Mahalanobis distance measure.
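A minimal sketch of this classification step (k-NN with k = 1, using the two distance measures named above) in plain NumPy; the function names are ours:

```python
import numpy as np

def l1_distance(a, b):
    """City-block (L1) distance between two feature vectors."""
    return float(np.sum(np.abs(a - b)))

def mahalanobis_sq(a, b, VI):
    """Squared Mahalanobis distance; VI is the inverse covariance
    matrix estimated from the training features."""
    diff = a - b
    return float(diff @ VI @ diff)

def nearest_neighbor(test_feat, train_feats, labels, metric="l1", VI=None):
    """Assign the label of the closest training feature vector."""
    if metric == "l1":
        dists = [l1_distance(test_feat, t) for t in train_feats]
    else:
        dists = [mahalanobis_sq(test_feat, t, VI) for t in train_feats]
    return labels[int(np.argmin(dists))]
```

In the experiments, `train_feats` would hold the LDA-projected training features and `test_feat` a projected test image.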
The results are shown in Table 1, Table 2 and Table 3. It is seen that the coiflet-based Algorithm 1 and the Radon-transform-based Algorithm 2 have nearly similar performance. Coiflet-based classification is found to be superior to Daubechies-wavelet-based classification for all three databases. The justification is that the coiflet has zero wavelet and zero scaling-function moments in addition to one zero moment for orthogonality. This provides superior results compared to wavelets with only zero wavelet moments. The comparison in Table 4 shows that the proposed methods (Algorithm 1, Algorithm 2) are better than the curvelet-based PCA [1] and the DWT-based [2] method, even with a lower number of training images.

Conclusion

We have proposed two different algorithms with two different kinds of features, and these features are reasonably robust against illumination variation and orientation. These facts are supported by experimental results based on images of the Yale and ORL databases, respectively. The choice of the Yale and ORL databases is due to their large variation in illumination and orientation. Tests on the Essex-Grimace database prove that both proposed algorithms are robust against variations in expression and provide a 100% recognition rate with just 2 training images per class. Tests on the Yale database prove that the proposed algorithms, particularly the second algorithm based on the Radon transform, are robust against major variations in illumination with just 2 training images per class and provide a 100% recognition rate. The ORL database, whose images have minor variations of pose, expression and illumination, has also been tested; a recognition rate of 100% has been achieved using the first algorithm and a recognition rate of 98% using the second algorithm.

Fig. 1 Wavelet packet decomposition

Applying the two-dimensional wavelet transform, we get four regions.
These regions are: one low-frequency region LL (approximate component), and three high-frequency regions, namely LH (horizontal component), HL (vertical component), and HH (diagonal component), respectively.

In (1), r is the distance of the projection line from the origin and θ is the angle formed by the distance vector, as shown in Fig. 2. The Radon transform is defined for an image with unlimited support; in practice, the image is confined to [−L, L] x [−L, L]. According to the Fourier slice theorem, this transformation is invertible. The Fourier slice theorem states that, for a 2-D function, the 1-D Fourier transforms of the Radon transform along r are the radial samples of the 2-D Fourier transform of f(x, y) at the corresponding angles. We know that the Radon transform changes as the image rotates: rotation of the input image corresponds to a translation of the Radon transform along θ ([9], [11]).

Fig. 2 Radon transform

Let the between-class scatter matrix be defined as

S_B = Σ_{i=1}^{c} N_i (µ_i − µ)(µ_i − µ)^T

and the within-class scatter matrix be defined as

S_W = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k − µ_i)(x_k − µ_i)^T,

where µ_i is the mean image of the class X_i and N_i is the number of samples in the class X_i. If S_W is nonsingular, the optimal projection W_opt is chosen as the matrix with orthonormal columns which maximizes the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples, i.e.,

W_opt = arg max_W |W^T S_B W| / |W^T S_W W| = [w_1 w_2 … w_m],

where {w_i | i = 1, 2, …, m} is the set of generalized eigenvectors of S_B and S_W corresponding to the m largest generalized eigenvalues {λ_i | i = 1, 2, …, m}, i.e., S_B w_i = λ_i S_W w_i, i = 1, 2, …, m. Note that there are at most c − 1 nonzero generalized eigenvalues, so an upper bound on m is c − 1, where c is the number of classes.

Fig. 3 Relative error of images from the respective class mean in Algorithm 1.
Fig. 4 Relative error of images from the respective class mean in Algorithm 2.
Fig. 5 Sample images of the ORL database
Fig. 6 Sample images of the Yale database
Fig. 7 Sample images of the Essex-Grimace database
Fig. 8 Image and its depth map

Table 1: Coiflet-based classification (Algorithm 1)

Sl.No.  Database       No. of Tr. Images  Classification  Recognition %
1       ORL            4                  L1 norm         100.0
2       ORL            4                  Mahalanobis     100.0
3       ORL            3                  L1 norm         98.5
4       ORL            3                  Mahalanobis     98.5
5       ORL            2                  L1 norm         97.5
6       ORL            2                  Mahalanobis     95.0
7       Yale           2                  L1 norm         100.0
8       Yale           2                  Mahalanobis     100.0
9       Essex-Grimace  2                  L1 norm         100.0
10      Essex-Grimace  2                  Mahalanobis     100.0
11      Essex-Grimace  1                  L1 norm         100.0
12      Essex-Grimace  1                  Mahalanobis     79.0

Table 2: Daubechies-wavelet-based classification

Sl.No.  Database       No. of Tr. Images  Classification  Recognition %
1       ORL            5                  L1 norm         90.0
2       ORL            5                  Mahalanobis     90.0
3       ORL            4                  L1 norm         93.3
4       ORL            4                  Mahalanobis     95.0
5       ORL            3                  L1 norm         92.8
6       ORL            3                  Mahalanobis     88.5
7       ORL            2                  L1 norm         86.25
8       ORL            2                  Mahalanobis     81.25
9       Yale           2                  L1 norm         100.0
10      Yale           2                  Mahalanobis     100.0
11      Essex-Grimace  2                  L1 norm         100.0
12      Essex-Grimace  2                  Mahalanobis     100.0
13      Essex-Grimace  1                  L1 norm         100.0
14      Essex-Grimace  1                  Mahalanobis     73.3

Table 3: Radon-transform-based classification (Algorithm 2)

Sl.No.  Database       No. of Tr. Images  Classification  Recognition %
1       ORL            5                  L1 norm         98.0
2       ORL            5                  Mahalanobis     100.0
3       ORL            4                  L1 norm         91.6
4       ORL            4                  Mahalanobis     96.3
5       ORL            3                  L1 norm         91.4
6       ORL            3                  Mahalanobis     94.2
7       ORL            2                  L1 norm         81.2
8       ORL            2                  Mahalanobis     86.2
9       Yale           2                  L1 norm         100.0
10      Yale           2                  Mahalanobis     100.0
11      Essex-Grimace  2                  L1 norm         100.0
12      Essex-Grimace  2                  Mahalanobis     100.0

Table 4: Comparison of results

Sambhunath Biswas obtained the M.Sc. degree in Physics and the Ph.D. in Radiophysics and Electronics from the University of Calcutta in 1973 and 2001, respectively. In 1983-85 he completed the M.Tech. (Computer Science) courses at the Indian Statistical Institute, Calcutta. His doctoral thesis is on image data compression.
He was a UNDP Fellow at MIT, USA, where he studied machine vision in 1988-89 and was exposed to research in the Artificial Intelligence Laboratory. He visited the Australian National University at Canberra in 1995 and joined the school on wavelet theory and its applications. In 2001, he became a representative of the Government of India and visited China for talks about collaborative research projects with different Chinese universities. He started his career in the electrical industry, at first as a graduate engineering trainee and then as a design and development engineer. At present, he is a System Analyst, GR-I (in the rank of Professor) at the Machine Intelligence Unit of the Indian Statistical Institute, Calcutta, where he is engaged in research and teaching. He is also an external examiner of the Department of Computer Science at Jadavpur University in Calcutta. He has published several research articles in international journals of repute and is a member of IUPRAI (Indian Unit of Pattern Recognition and Artificial Intelligence). He is the principal author of a book titled Bezier and Splines in Image Processing and Machine Vision, published by Springer, London.

Amrita Biswas obtained the B.Tech. degree in Electronics and Communication Engineering from Sikkim Manipal Institute of Technology in 2004 and the M.Tech. in Digital Electronics and Advanced Communication from Sikkim Manipal University in 2005. She is presently working at SMIT as an Associate Professor and is also pursuing her Ph.D. Her current research areas include pattern recognition and image processing.

Table 4 (Comparison of results):

Sl.No.  Database       No. of Tr. Images  Classification              Recognition %
1       ORL            5                  Curvelet-based PCA [1]      96.6
2       Essex-Grimace  8                  Curvelet-based PCA [1]      100.0
3       ORL            5                  DWT & image comparison [2]  99.5
4       ORL            4                  Proposed Algo-1             100.0
5       Yale           2                  Proposed Algo-1             100.0
6       Essex-Grimace  2                  Proposed Algo-1             100.0
7       ORL            5                  Proposed Algo-2             98.0
8       Yale           2                  Proposed Algo-2             100.0
9       Essex-Grimace  2                  Proposed Algo-2             100.0
References

W. Zhao, R. Chellappa, P. J. Phillips, A. Rosenfeld, "Face Recognition: A Literature Survey", ACM Computing Surveys, vol. 35, no. 4, 2003, pp. 399-458.
Hyeonjoon Moon, P. Jonathon Phillips, "Computational and Performance Aspects of PCA-Based Face Recognition Algorithms", Perception, vol. 30, no. 3, 2001, pp. 303-321.
Kamran Etemad, Rama Chellappa, "Discriminant Analysis for Recognition of Human Face Images", Proc. First Int. Conf. on Audio- and Video-Based Biometric Person Authentication, Crans-Montana, Switzerland, Lecture Notes in Computer Science, vol. 1206, Aug. 1997, pp. 127-142.
Chengjun Liu, Harry Wechsler, "Face Recognition Using Evolutionary Pursuit", Proc. Fifth European Conf. on Computer Vision (ECCV '98), Freiburg, Germany, vol. II, June 1998, pp. 596-612.
Tanaya Mandal, Q. M. Jonathan Wu, "Face Recognition Using Curvelet Based PCA", IEEE, June 2008.
Zheng Dezhong, Cui Fayi, "Face Recognition Based on Wavelet Transform and Image Comparison", Proc. International Symposium on Computational Intelligence and Design, vol. 2, 2008, pp. 24-29.
C. Sydney Burrus, R. A. Gopinath, Haitao Guo, "Introduction to Wavelets and Wavelet Transforms", Prentice Hall, NJ, USA, 1998.
Ping-Sing Tsai, Mubarak Shah, "Shape from Shading Using Linear Approximation", Image and Vision Computing, vol. 12, 1994, pp. 487-498.
Kourosh Jafari-Khouzani, Hamid Soltanian-Zadeh, "Rotation-Invariant Multiresolution Texture Analysis Using Radon and Wavelet Transforms", IEEE Trans. on Signal Processing, vol. 14, no. 6, June 2005.
Kourosh Jafari-Khouzani, Hamid Soltanian-Zadeh, "Radon Transform Orientation Estimation for Rotation Invariant Texture Analysis", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, June 2005.
E. Magli, G. Olmo, L. Lo Presti, "Pattern Recognition by Means of the Radon Transform and the Continuous Wavelet Transform", Signal Processing, vol. 73, Elsevier, 1999, pp. 277-289.
Peter N. Belhumeur, Joao P. Hespanha, David J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. on PAMI, July 1997.
R. C. Gonzalez, R. E. Woods, "Digital Image Processing", Pearson Prentice Hall, Dorling Kindersley, India, 2006.
B. K. P. Horn, "Robot Vision", MIT Press, Cambridge, Massachusetts, USA, 1986.
A. R. Bruss, "The Image Irradiance Equation: Its Solution and Application", Technical Report TR-623, MIT AI Lab, June 1981.
A. P. Pentland, "Local Shading Analysis", IEEE Trans. on PAMI, vol. 6, no. 2, March 1984, pp. 170-187.
S. Z. Li, A. K. Jain, "Handbook of Face Recognition", Springer-Verlag, New York, 2005.
M. A. Turk, A. P. Pentland, "Face Recognition Using Eigenfaces", Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, June 1991, pp. 586-591.
P. Belhumeur, J. Hespanha, D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, Sept. 2004, pp. 1222-1228.
P. Comon, "Independent Component Analysis, a New Concept?", Signal Processing, vol. 36, 1994, pp. 287-314.
Zhen Lei, Shengcai Liao, Matti Pietikainen, S. Z. Li, "Face Recognition by Exploring Information Jointly in Space, Scale and Orientation", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, Jan. 2011, pp. 247-256.
Shuicheng Yan, Jianzhuang Liu, Xiaoou Tang, Thomas S. Huang, "Misalignment-Robust Face Recognition", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 4, April 2010, pp. 1087-1096.
Stefano Berretti, Alberto Del Bimbo, Pietro Pala, "3D Face Recognition Using Isogeodesic Stripes", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, Dec. 2010, pp. 2162-2177.
Xiaoyang Tan, Bill Triggs, "Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 6, June 2010, pp. 1635-1650.
Spreading of fake news, competence, and learning: kinetic modeling and numerical approximation

Jonathan Franceschi, Lorenzo Pareschi

Abstract. The rise of social networks as the primary means of communication in almost every country in the world has simultaneously triggered an increase in the amount of fake news circulating online. This fact became particularly evident during the 2016 U.S. political elections and even more so with the advent of the COVID-19 pandemic. Several research studies have shown how the effects of fake news dissemination can be mitigated by promoting greater competence through lifelong learning and discussion communities, and generally rigorous training in the scientific method and broad interdisciplinary education. The urgent need for models that can describe the growing infodemic of fake news has been highlighted by the current pandemic. The resulting slowdown in vaccination campaigns due to misinformation, and generally the inability of individuals to discern the reliability of information, is posing enormous risks to the governments of many countries. In this research, using the tools of kinetic theory, we describe the interaction between fake news spreading and competence of individuals through multi-population models in which fake news spreads analogously to an infectious disease, with different impact depending on the level of competence of individuals. The level of competence, in particular, is subject to evolutionary dynamics due to both social interactions between agents and external learning dynamics. The results show how the model is able to correctly describe the dynamics of diffusion of fake news and the important role of competence in their containment.

DOI: 10.1098/rsta.2021.0159
arXiv: 2109.14087 (https://arxiv.org/pdf/2109.14087v1.pdf)
September 30, 2021

Keywords: fake news; compartmental models; competence; learning dynamics; interacting agents; socio-economic kinetic models; mean-field models
1 Introduction

With the rise of the Internet, connecting with people has become easier than ever, and so has the availability of information and its accessibility. As such, the Internet is also the source of […] and malicious purposes). Such descriptions are also sensible in terms of matching the model with available data. In this paper we follow this pathway: borrowing ideas from kinetic theory [15, 27], we combine a classical compartmental approach inspired by epidemiology [18, 20] with a kinetic description of the effects of competence [28, 29]. We refer also to the recent work [6] concerning evolutionary models for knowledge. In fact, the first wave of initiatives addressing fake news focused on news production, by trying to limit citizen exposure to fake news. This can be done by fact-checking, labeling stories as fake, and eliminating them before they spread. Unfortunately, this strategy has already proven not to work; it is indeed unrealistic to expect that only high-quality, reliable information will survive. As a result, governments, international organizations, and social media companies have turned their attention to digital news consumers, and particularly children and young adults. From national campaigns in several countries to the OECD, there is a wave of action to develop new curricula, online learning tools, and resources that foster the ability to "spot fake news" [26]. It is therefore of paramount importance to build models capable of describing the interplay between the dissemination of fake news and the creation of competence among the population. To this end, the approach we have followed in this paper falls within the recent socio-economic modeling described by kinetic equations (see [27] for a recent monograph on the subject). More precisely, we adapted the competence model introduced in [28, 29] to a compartmental model describing fake news dissemination.
Such a model allows us to introduce competence not only as a static feature of the dynamics but as an evolutionary component, taking into account both learning through interactions between agents and possible interventions aimed at educating individuals in the ability to identify fake news. Furthermore, in our modeling approach agents may have memory of fake news and as such be permanently immune to it once it has been detected, or the fake news may not have any inherent peculiarities that would make it memorable enough for the population to immunize themselves against it in the future. The approach can easily be adapted to other compartmental models present in the literature, like the ones previously discussed [5, 25, 32]. The rest of the manuscript is organized as follows. In Section 2 we introduce the structured kinetic model describing the spread of fake news in the presence of different competence levels among individuals. The main properties of the resulting kinetic models are also analyzed. Next, Section 3 is devoted to studying the Fokker-Planck approximation of the kinetic model and to deriving the corresponding stationary states in terms of competence. Several numerical results are then presented in Section 4 that illustrate the theoretical findings and the capability of the model to describe transition effects in the spread of fake news due to the interaction between epidemiological and competence parameters. Some concluding remarks are reported in the last section, together with details on the theoretical results and the numerical methods in two separate appendices.

2 Fake news spreading in a socially structured population

In this section, we introduce a structured model for the dissemination of fake news in the presence of different levels of skill among individuals in detecting the actual veracity of information, by combining a compartmental model in epidemiology and rumor-spreading analysis [14, 18] with the kinetic model of competence evolution proposed in [29].
We consider a population of individuals divided into four classes: the oblivious ones, still not aware of the news; the reflecting ones, who are aware of the news and are evaluating how to act; the spreaders, who actively disseminate the news; and the silent ones, who have recognized the fake news and do not contribute to its spread. Terminology for describing these compartmental models is not fully established; however, the dominant one, inspired by epidemiology, refers to the definitions provided by Daley [14] of a population composed of ignorant, spreader and stifler individuals. The class of reflecting agents can be regarded as a group that has a time delay before taking a decision and entering an active compartment [5, 25]. Notation, i.e., the choice of letters to represent the compartments, is even more scattered and somewhat confusing. For the readers' convenience, in Table 1 we have summarized some of the different possible choices of letters and terminology found in the literature. Given the widespread use of epidemiological models compared to fake news models, in order to make the analogies easier to understand, we chose to align with the notation conventionally used in epidemiology. Therefore, in the rest of the paper we will describe the population in terms of susceptible agents (S), who are the oblivious ones; exposed agents (E), who are in the time-delay compartment after exposure and before shifting into an active class; infectious agents (I), who are the spreaders; and finally removed agents (R), who are aware of the news but not actively engaging in its spread. Note that this subdivision of the population does not take into account the actual beliefs of agents about the truth of the news, so that removed agents, for instance, need not actually be skeptics, nor do the spreaders need to actually believe the news.
To simplify the mathematical treatment, as in the original works by Daley and Kendall [13, 14], we ignored the possible 'active' effects of the population of removed individuals interacting with other compartments and producing immunization among the susceptible (the role of skeptic individuals in [5, 25]) and remission among spreaders (the role of stiflers in [32]). Of course, the model easily generalizes to include these additional dynamics. The main novelty in our approach is to consider an additional structure on the population based on the concept of competence of the agents, here understood as the ability to assess and evaluate information. Let us suppose that agents in the system are completely characterized by their competence x ∈ X ⊆ R₊, measured in a suitable unit. We denote by

f_S = f_S(x, t), f_E = f_E(x, t), f_I = f_I(x, t), f_R = f_R(x, t),

the competence distributions at time t > 0 of susceptible, exposed, infectious and removed individuals, respectively. Aside from natality or mortality concerns (i.e., the social network is a closed system: nobody enters or leaves it during the diffusion of the fake news, which is a common assumption, based on the average lifespan of fake news) we therefore have

∫_X [f_S(x, t) + f_E(x, t) + f_I(x, t) + f_R(x, t)] dx = 1,   t > 0,

which implies that we will refer to

S(t) = ∫_X f_S(x, t) dx,  E(t) = ∫_X f_E(x, t) dx,  I(t) = ∫_X f_I(x, t) dx,  R(t) = ∫_X f_R(x, t) dx

as the fractions of the population that are susceptible, exposed, infectious, or removed, respectively.

Table 1: Category names and variable notation in this paper and in related compartmental models.

                 SEIR (this paper)  DK [13,14]  ISR [32]  SEIZR [5]          SEIZ [25]
Category name:   Susceptible        Ignorant    Ignorant  Susceptible        Susceptible
                 Exposed            -           -         Idea incubator     Exposed
                 Infectious         Spreader    Spreader  Idea adopter       Infectious
                 Removed            Stifler     Stifler   Skeptic/Recovered  Skeptic
Variable:        S                  X           I         S                  S
                 E                  -           -         E                  E
                 I                  Y           S         I                  I
                 R                  Z           R         Z/R                Z
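In a numerical implementation, the fractions S(t), E(t), I(t), R(t) are recovered from the sampled densities by quadrature in x. A small sketch (the choice of densities below is ad hoc, purely to illustrate the conservation constraint on a truncated competence domain):

```python
import numpy as np

def trapezoid(fx, x):
    """Composite trapezoidal rule for samples fx on the grid x."""
    dx = np.diff(x)
    return float(np.sum(0.5 * dx * (fx[:-1] + fx[1:])))

# truncated competence domain X = [0, 10], discretized
x = np.linspace(0.0, 10.0, 401)

# illustrative gamma-like profile, scaled so that the total mass is 1
base = x * np.exp(-x)
base /= trapezoid(base, x)
fS, fE, fI, fR = 0.7 * base, 0.1 * base, 0.15 * base, 0.05 * base

S = trapezoid(fS, x)
E = trapezoid(fE, x)
I = trapezoid(fI, x)
R = trapezoid(fR, x)
mS = trapezoid(x * fS, x) / S   # mean competence of the susceptible class
```

By construction S + E + I + R = 1, matching the normalization above.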
We also denote the corresponding mean competences by

m_S(t) = ∫_X x f_S(x, t) dx,  m_E(t) = ∫_X x f_E(x, t) dx,  m_I(t) = ∫_X x f_I(x, t) dx,  m_R(t) = ∫_X x f_R(x, t) dx.

A SEIR model describing fake news dynamics

The fake news dynamics proceeds as follows: a susceptible agent gets to know the news from a spreader. At this point, the now-exposed agent evaluates the piece of information (the reflecting, or delay, stage) and decides whether to share it with other individuals (turning into a spreader themselves) or to keep silent, removing themselves from the dissemination process. When the dynamics are independent of the knowledge of individuals, the model can be expressed by the following system of ODEs:

dS/dt = −βSI + (1 − α)γI,
dE/dt = βSI − δE,
dI/dt = (1 − η)δE − γI,      (1)
dR/dt = ηδE + αγI,

with S + E + I + R = 1, and where β is the contact rate between the class of the susceptible and the class of the infectious, δ is the rate at which agents make their decision about spreading the news or not, 1 − η is the fraction of agents who become infectious, and γ is the rate at which spreaders remove themselves from the compartment due, e.g., to loss of interest in sharing the news or forgetfulness. Finally, α is related to the specificity of the fake news and the probability of individuals remembering it. A probability of 0 means that the fake news does not have any inherent peculiarity (e.g., in terms of content, structure, style, ...) that can make it memorable enough for the population to 'immunize' against it in the future, while a probability of 1 allows the agents to fully avoid falling for that fake news a second time. The various parameters are summarized in Table 2. The diagram of the SEIR model (1) is shown in Figure 1. It is straightforward to notice that when α and η are zero, system (1) reduces to a classic SEIS epidemiological model.
This is consistent with treating the dissemination of non-specific fake news in a population as the spread of a disease with multiple strains, for which a durable immunization is never attained. In this case system (1) has two equilibrium states: a disease-free equilibrium state (1, 0, 0) and an endemic equilibrium state P̃ = (S̃, Ẽ, Ĩ) where

S̃ = 1/R_0,  Ẽ = γ/(γ + δ) (1 − 1/R_0),  Ĩ = δ/(γ + δ) (1 − 1/R_0),   (2)

and R_0 = β/γ is the basic reproduction number. The parameters are collected in Table 2:

Parameter   Definition
β           contact rate between susceptible and infected individuals
1/δ         average decision time on whether or not to spread fake news
η           probability of deciding not to spread fake news
1/γ         average duration of a fake news
α           probability of remembering fake news

It is known [21] that if R_0 > 1 the endemic equilibrium state P̃ of system (1) is globally asymptotically stable. If instead α > 0 or η > 0, there is also the possibility to permanently immunize against fake news with those traits; moreover, both infectious and exposed agents eventually vanish, leaving only the susceptible and removed compartments populated. In the case of maximum specificity of the fake news, i.e., α = 1, the stationary equilibrium state has the form

S(t) → S_∞,  E(t) → 0,  I(t) → 0,  R(t) → R_∞ = 1 − S_∞,   (3)

where S_∞ is the solution of the nonlinear equation

log(S_0/S_∞) = (β/γ)(1 − η)(1 − S_∞),   (4)

in which S_0 is the initial datum S(t = 0). We refer to [5,25,32] for the inclusion of additional interaction dynamics, taking into account counter-information effects due to the removed population interacting with the susceptible and infectious, and for the relative analysis of the resulting equilibrium states.

The interplay with competence and learning

In the following, we combine the evolution of the densities according to the SEIR model (1) with the competence dynamics proposed in [29].
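Before moving on, note that the final-size relation (4) is a scalar equation with a single sign change on the relevant bracket, so it can be solved by bisection. A sketch with illustrative values of β, γ, η and S_0 (none of these numbers come from the paper):

```python
# Sketch: solving the final-size relation (4),
#   log(S0/S_inf) = (beta/gamma) * (1 - eta) * (1 - S_inf),
# by bisection.  Parameter values are illustrative assumptions.
import math

def final_size(beta=0.8, gamma=0.2, eta=0.1, S0=0.99, tol=1e-12):
    r = beta / gamma * (1.0 - eta)                 # effective factor
    g = lambda s: math.log(S0 / s) - r * (1.0 - s)
    lo, hi = 1e-12, S0                             # g(lo) > 0 > g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For these values (effective factor r = 3.6 > 1) the surviving susceptible fraction S_∞ is small, consistent with a large outbreak.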
We refer to the degree of competence that an individual can gain or lose in a single interaction with the background as z ∈ R_+; in what follows we denote by C(z) the bounded-mean distribution of z, satisfying

∫_{R_+} C(z) dz = 1,  ∫_{R_+} z C(z) dz = m_B.

Assuming a susceptible agent has a competence level x and interacts with another one belonging to the various compartments in the population and having a competence level x_*, their levels after the interaction will be given by

x′ = (1 − λ_S(x))x + λ_CJ(x)x_* + λ_BS(x)z + κ_SJ x,
x′_* = (1 − λ_J(x_*))x_* + λ_CS(x_*)x + λ_BJ(x_*)z + κ̃_SJ x_*,  J ∈ {S, E, I, R},   (5)

where λ_S(·) and λ_BS(·) quantify the amount of competence lost by susceptible individuals through the natural process of forgetfulness and the amount gained by susceptible individuals from the background, respectively. λ_CJ, instead, models the competence gained through the interaction with members of the class J, with J ∈ {S, E, I, R}; a possible choice for λ_CJ(x) is λ_CJ(x) = λ_CJ χ(x ≥ x̄), where χ(·) is the characteristic function and x̄ ∈ X a minimum level of competence required of the agents for increasing their own skills through interactions. Finally, κ_SJ and κ̃_SJ are independent and identically distributed zero-mean random variables with the same variance σ(t), accounting for the non-deterministic nature of the competence acquisition process.
The binary interactions involving the exposed agents can be similarly defined:

x′ = (1 − λ_E(x))x + λ_CJ(x)x_* + λ_BE(x)z + κ_EJ x,
x′_* = (1 − λ_J(x_*))x_* + λ_CE(x_*)x + λ_BJ(x_*)z + κ̃_EJ x_*,  J ∈ {S, E, I, R};   (6)

the same holds for the interactions concerning the infectious fraction of the population,

x′ = (1 − λ_I(x))x + λ_CJ(x)x_* + λ_BI(x)z + κ_IJ x,
x′_* = (1 − λ_J(x_*))x_* + λ_CI(x_*)x + λ_BJ(x_*)z + κ̃_IJ x_*,  J ∈ {S, E, I, R},   (7)

and finally we have the interactions regarding the removed agents,

x′ = (1 − λ_R(x))x + λ_CJ(x)x_* + λ_BR(x)z + κ_RJ x,
x′_* = (1 − λ_J(x_*))x_* + λ_CR(x_*)x + λ_BJ(x_*)z + κ̃_RJ x_*,  J ∈ {S, E, I, R}.   (8)

It is reasonable to assume that both the processes of gain and loss of competence in (5)-(8), whether from the interaction with other agents or with the background, are bounded from below by zero. Therefore we suppose that, for J, H ∈ {S, E, I, R}, if λ_J ∈ [λ−_J, λ+_J] with λ−_J > 0 and λ+_J < 1, and λ_CJ(x), λ_BJ(x) ∈ [0, 1], then κ_HJ may, for example, be uniformly distributed in [−1 + λ+_J, 1 − λ+_J]. In order to combine the compartmental SEIR model with the evolution of the competence levels given by equations (5)-(8) we introduce the interaction operators Q_HJ(·, ·) following the standard Boltzmann-type theory [27]. As earlier, we will denote by J a suitable compartment of the population, i.e., H, J ∈ {S, E, I, R}, and we will use the brackets ⟨·⟩ to indicate the expectation with respect to the random variables κ_HJ.
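A single interaction of type (5) is straightforward to sample. The sketch below is an illustrative caricature only: it returns just the susceptible agent's updated level, and the values of lam_S, lam_C, lam_B, the threshold xbar and the noise law are hypothetical choices, not taken from the paper.

```python
# Illustrative caricature of one binary interaction of type (5): only the
# susceptible agent's post-interaction level is returned.  The values of
# lam_S, lam_C, lam_B, the threshold xbar and the noise law are
# hypothetical choices, not taken from the paper.
import random

def interact(x, x_star, lam_S=0.05, lam_C=0.025, lam_B=0.025,
             sig=0.05, xbar=0.5, rng=random):
    lamC = lam_C if x >= xbar else 0.0   # chi(x >= xbar) gating the gain
    z = rng.uniform(0.0, 1.0)            # background draw, mean m_B = 0.5
    kappa = rng.uniform(-sig, sig)       # zero-mean multiplicative noise
    x_new = (1.0 - lam_S) * x + lamC * x_star + lam_B * z + kappa * x
    return max(x_new, 0.0)               # competence stays nonnegative
```

With |κ| ≤ 1 − λ_S, as assumed in the text, the clamping at zero is never actually triggered; it is kept here purely as a safety net.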
Thus, if ψ(x) is an observable function, then the action of Q HJ (f H , f J )(x, t) on ψ(x) is given by R + Q SJ (f S , f J )ψ(x) dx = R 2 + f S (x, t)f J (x * , t) ψ(x ) − ψ(x) dx * dx ,(9) with x defined by (5) R + Q EJ (f E , f J )ψ(x) dx = R 2 + f E (x, t)f J (x * , t) ψ(x ) − ψ(x) dx * dx ,(10) with x defined by (6) R + Q IJ (f I , f J )ψ(x) dx = R 2 + f I (x, t)f J (x * , t) ψ(x ) − ψ(x) dx * dx ,(11) with x defined by (7), R + Q RJ (f R , f J )ψ(x) dx = R 2 + f R (x, t)f J (x * , t) ψ(x ) − ψ(x) dx * dx ,(12) with x defined by (8). All the above operators preserve the total number of agents as the unique interaction invariant, corresponding to ψ(·) ≡ 1. The system then reads:                                  ∂f S (x, t) ∂t = −K(x, t)f S (x, t) + (1 − α(x))γ(x)f I (x, t) + J∈{S,E,I,R} Q SJ (f S , f J )(x, t), ∂f E (x, t) ∂t = K(x, t)f S (x, t) − δ(x)f E (x, t) + J∈{S,E,I,R} Q EJ (f E , f J )(x, t), ∂f I (x, t) ∂t = δ(x)(1 − η(x))f E (x, t) − γ(x)f I (x, t) + J∈{S,E,I,R} Q IJ (f I , f J )(x, t), ∂f R (x, t) ∂t = δ(x)η(x)f E (x, t) + α(x)γ(x)f I (x, t) + J∈{S,E,I,R} Q RJ (f R , f J )(x, t),(13) where the function K(x, t) = R + β(x, x * )f I (x * , t) dx * is responsible for the contagion, β(x, x * ) being the contact rate between agents with competence levels x and x * . In the above formulation we also assumed β, γ, δ, η and α functions of x. Note that, clearly, the most important parameters influenced by individuals' competence are β(x, x * ), since individuals have the highest rates of contact with people belonging to the same social class, and thus with a similar level of competence, δ(x) as individuals with greater competence invest more time in checking the authenticity of information, and η(x), which characterizes individuals' decision to spread fake news. On the other hand, the values of γ and α we may assume to be less influenced by the level of expertise of individuals. 
Properties of the kinetic SEIR model with competence

In this section we analyze some of the properties of the Boltzmann system (13). First let us consider the reproduction ratio in the presence of knowledge. By integrating system (13) against x, and considering only the compartments of individuals which may disseminate the fake news, we have

dE(t)/dt = ∫_X K(x, t) f_S(x, t) dx − ∫_X δ(x) f_E(x, t) dx,
dI(t)/dt = ∫_X δ(x)(1 − η(x)) f_E(x, t) dx − ∫_X γ(x) f_I(x, t) dx.   (14)

In the above derivation we used the fact that the Boltzmann interaction terms describing knowledge evolution among agents preserve the total number of individuals and therefore vanish. Following the analysis in [4], and omitting the details for brevity, we obtain a reproduction number generalizing the classical one:

R_0(t) = ∫_X K(x, t) f_S(x, t) dx / ∫_X γ(x) f_I(x, t) dx.   (15)

Next, following [15], we can prove uniqueness of the solution of (13) in the simplified case of constant parameters:

β(x, x_*) = β > 0,  γ(x) = γ > 0,  δ(x) = δ > 0,  η(x) = η ∈ [0, 1],  α(x) = α ∈ [0, 1].

In this case, exploiting the fact that the interaction operator Q(·, ·) has a natural connection with the Fourier transform by choosing its kernel e^{−ixξ} as test function, we can analyze the system (13) with the Fourier transforms of the densities as unknowns. Indeed, given a function f(x) ∈ L^1(R_+), its Fourier transform is defined as

f̂(ξ) = ∫_R e^{−ixξ} f(x) dx.
The system (13) becomes                                      ∂f S (ξ, t) ∂t = −βI(t)f S (ξ, t) + (1 − α)γf I (ξ, t) + J∈{S,E,I,R} Q SJ (f S ,f J )(ξ, t), ∂f E (ξ, t) ∂t = βI(t)f S (ξ, t) − δf E (ξ, t) + J∈{S,E,I,R} Q EJ (f E ,f J )(ξ, t), ∂f I (ξ, t) ∂t = δ(1 − η)f E (ξ, t) − γf I (ξ, t) + J∈{S,E,I,R} Q IJ (f I ,f J )(ξ, t), ∂f R (ξ, t) ∂t = δηf E (ξ, t) + αγf I (ξ, t) + J∈{S,E,I,R} Q RJ (f R ,f J )(ξ, t),(16) where the operatorsQ HJ (f H ,f J ) are defined in terms of the Fourier transforms of their arguments for J ∈ {S, E, I, R}, so that Q HJ (f H ,f J ) = f H (A HJ ξ − λ BH z, t) f J (λ CJ ξ, t) −f H (ξ, t)J(t), where A HJ , with H, J ∈ {S, E, I, R} is defined as A HJ = 1 − λ H + κ H .(17) We suppose that the parameters satisfy the condition ν = max H,J∈{S,E,I,R} [λ 2 CJ + A 2 JH ] < 1,(18) which will prove useful in the proof. As in [15] we recall a class of metrics which is of natural use in bilinear Boltzmann equations [27]. Let f and g be probability densities. Then, for s > 0 we define d s (f, g) = sup ξ∈R |f (ξ) −ĝ(ξ)| |ξ| s ,(19) which is finite whenever f and g have equal moments up to the integer part of s or to s − 1 if s is an integer. We have the following result. d 2 (f J (x, t), g J (x, t) ≤ J∈{S,E,I,R} d 2 (f J (x, 0), g J (x, 0)e −(1−ν)t . For the details of the proof we refer to Appendix A. Mean-field approximation A highly useful tool to obtain information analytically on the large-time behavior of Boltzmanntype models are scaling techniques; in particular the so-called quasi-invariant limit [27], which allows to derive the corresponding mean-field description of the kinetic model (13). Indeed, let us consider the case in which the interactions between agents produce small variations of the competence. 
We scale the quantities involved in the binary interactions (5)- (8) accordingly λ CJ → ελ CJ , λ BJ → ελ BJ , λ J → ελ J , σ → εσ,(20) where J ∈ {S, E, I, R} and the functions involved in the dissemination of the fake news, as well β(x, x * ) → εβ(x, x * ), γ(x) → εγ(x), δ(x) → εδ(x), η(x) → εη(x).(21) We denote by Q ε HJ (·, ·) the scaled interaction terms. Omitting the dependence on time on mean values and re-scaling time as t → t/ε, we obtain up to O(ε) 1 ε R + Q ε SJ (f S , f J )ψ(x) dx ≈ R + −ψ(x) (λ S xJ − λ CJ m J − λ BS m B J) + σ 2 ψ(x) x 2 J f S (x, t) dx 1 ε R + Q ε EJ (f E , f J )ψ(x) dx ≈ R + −ψ(x) (λ E xJ − λ CJ m J − λ BE m B J) + σ 2 ψ(x) x 2 J f E (x, t) dx 1 ε R + Q ε IJ (f I , f J )ψ(x) dx ≈ R + −ψ(x) (λ I xJ − λ CJ m J − λ BI m B J) + σ 2 ψ(x) x 2 J f I (x, t) dx 1 ε R + Q ε RJ (f R , f J )ψ(x) dx ≈ R + −ψ(x) (λ R xJ − λ CJ m J − λ BR m B J) + σ 2 ψ(x) x 2 J f R (x, t) dx, where we used a Taylor expansion for small values of ε of ψ(x ) = ψ(x) + (x − x)ψ (x) + (x − x) 2 2 ψ (x) + O(ε 2 ) in (9)-(12) and the scaled interaction rules (5)-(8). Stationary solutions of Fokker-Planck SEIR models Let us impose that ε → 0, following [27] from the computations of the previous section we formally obtain the Fokker-Planck system ∂f S (x, t) ∂t = −K(x, t)f S (x, t) + (1 − α(x))γ(x)f I (x, t) + ∂ ∂x [(xλ S − m(t) − λ BS m B )f S (x, t)] + σ 2 ∂ 2 ∂x 2 (x 2 f S (x, t)) (22) ∂f E (x, t) ∂t = K(x, t)f S (x, t) − δ(x)f E (x, t) + ∂ ∂x [(xλ E − m(t) − λ BE m B )f E (x, t)] + σ 2 ∂ 2 ∂x 2 (x 2 f E (x, t)) (23) ∂f I (x, t) ∂t = δ(x)(1 − η(x))f E (x, t) − γ(x)f I (x, t) + ∂ ∂x [(xλ I − m(t) − λ BI m B )f I (x, t)] + σ 2 ∂ 2 ∂x 2 (x 2 f I (x, t)) (24) ∂f R (x, t) ∂t = δ(x)η(x)f E (x, t) + α(x)γ(x)f I (x, t) + ∂ ∂x [(xλ R − m(t) − λ BR m B )f R (x, t)] + σ 2 ∂ 2 ∂x 2 (x 2 f R (x, t))(25) where now m(t) = λ CS m S (t) + λ CE m E (t) + λ CI m I (t) + λ CR m R (t). 
We can consider the mean values system associated to (22)- (25) in the case of constant epidemiological parameters dm S (t) dt = −βI(t)m S (t) + (1 − α)γm I (t) + λS(t)(m(t) − m B )/2 − λm S (t)(26)dm E (t) dt = βI(t)m S (t) − δm E (t) + λE(t)(m(t) − m B )/2 − λm E (t)(27)dm I (t) dt = δ(1 − η)m E (t) − γm I (t) + λI(t)(m(t) − m B )/2 − λm I (t) (28) dm R (t) dt = δηm E (t) + αγm I (t) + λR(t)(m(t) − m B )/2 − λm R (t),(29) with m(t) = m S (t) + m E (t) + m I (t) + m R (t). In the case α > 0 or η > 0, we know that E(t) → 0, I(t) → 0, S(t) → S ∞ and R(t) → R ∞ = 1 − S ∞ due to mass conservation, so that m E (t) → 0 and m I (t) → 0 as well. Thus, adding all the equations together leads us to λ 2 (m ∞ S + m ∞ R ) + λ 2 m B = λ(m ∞ S + m ∞ R ),(30) i.e., m ∞ S + m ∞ R = m B . At this point, adding together equations (22) to (25) gives us 0 = λ ∂ ∂x (x − m B ) J∈{S,R} f ∞ J (x) + σ 2 ∂ 2 ∂x 2 x 2 J∈{S,R} f ∞ J (x) , which has as solution an inverse Gamma density f ∞ (x) = f ∞ S (x) + f ∞ R (x) = k µ Γ(µ) e −k/x x 1+µ ,(31)µ = 1 + 2λ σ , k = (µ − 1)m B . It is straightforward to see that the scaled Gamma densities f ∞ S (x) = S ∞ k µ Γ(µ) e −k/x x 1+µ f ∞ R (x) = (1 − S ∞ ) k µ Γ(µ) e −k/x x 1+µ are solutions of the system (22)- (25). If, instead, α = η = 0, we find again the same solution as (31), but in this case J →J, wherẽ J are defined as in (2). In Figure 2 we report two examples of the stationary solutions where we chose the competence variable z to be uniformly distributed in [0, 1]: in the first case (left) we considered α = η = 0, while in the second case (right) we set α = 0.2 and η = 0.1. Numerical examples In this section we present some numerical tests to show the characteristics of the model in describing the dynamics of fake news dissemination in a population with a competence-based structure. 
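The stationary inverse-Gamma profile (31) obtained above can also be verified numerically: with μ = 1 + 2λ/σ and k = (μ − 1)m_B, a simple quadrature should return unit mass and mean m_B. A sketch with illustrative values of λ, σ and m_B (not the values used in the figures):

```python
# Sketch: numerical check that the inverse-Gamma profile (31) integrates
# to 1 and has mean m_B, with mu = 1 + 2*lam/sigma and k = (mu - 1)*m_B.
# Parameter values are illustrative.
import math

lam, sigma, m_B = 0.1, 0.05, 0.5
mu = 1.0 + 2.0 * lam / sigma              # shape, here mu = 5
k = (mu - 1.0) * m_B                      # scale, here k = 2
norm = k**mu / math.gamma(mu)

def f_inf(x):
    # inverse-Gamma stationary profile (31)
    return norm * math.exp(-k / x) / x**(1.0 + mu)

# composite trapezoid rule on a truncated domain (tails are negligible)
a, b, n = 1e-6, 50.0, 50000
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
w = [h] * (n + 1)
w[0] = w[-1] = 0.5 * h
mass = sum(wi * f_inf(x) for wi, x in zip(w, xs))
mean = sum(wi * x * f_inf(x) for wi, x in zip(w, xs))
```

The mean check is just the textbook identity for the inverse-Gamma distribution, k/(μ − 1) = m_B, holding whenever μ > 1.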
To begin with, we validate the Fokker-Planck model obtained as the quasi-invariant limit of the Boltzmann equation; we do so through a Monte Carlo method for the competence distribution (see [27], Chapter 5, for more details). Next, we approximate the Fokker-Planck system (22)-(25) by generalizing the structure-preserving numerical scheme [30] to explore the interplay between competence and dissemination dynamics in the more realistic case of epidemiological parameters dependent on the competence level (see Appendix B). Lastly, we investigate how the diffusion of fake news impacts differently on different classes of the population, defined in terms of their capabilities of interacting with information.

Test 1: Numerical quasi-invariant limit

In this test we show that the mean-field Fokker-Planck system (22)-(25), obtained under the quasi-invariant scaling (20) and (21), is a good approximation of the Boltzmann model (13) when ε ≪ 1. We do so by using a Monte Carlo method with N = 10^4 particles, starting with a uniform distribution of competence f_0(x) = (1/2) χ(x ∈ [0, 2]), where χ(·) is the indicator function, and performing various iterations until the stationary state is reached; next, the distributions are averaged over the following 500 iterations. We considered constant competence-related parameters λ_CJ = λ_BJ and λ_J = λ_CJ + λ_BJ, as well as a constant variance σ for the random variables κ_HJ. In Figure 3 we plot the results for (λ, σ) = (0.075, 0.150) (circle-solid, teal) and for (λ, σ) = (0.001, 0.002) (square-solid, ochre): those choices correspond to scaling regimes of ε = 0.075 and ε = 0.001, respectively, with µ = 2. Finally, we assumed that m_B = 0.75 (left) and m_B = 1 (right). Directly comparing the equilibrium of the Boltzmann dynamics with the explicit analytic solution of the Fokker-Planck regime shows that, if ε is small enough, the Fokker-Planck asymptotics provide a consistent approximation of the steady states of the kinetic distributions.
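A caricature of the Monte Carlo procedure of Test 1 can be written in a few lines. The sketch below collapses all compartments into one (so it is not the full scheme of [27], and all numerical values are illustrative); it only demonstrates that, under the conservative choice λ_C + λ_B = λ, the mean competence relaxes toward the background mean m_B.

```python
# Illustrative single-compartment caricature of the Monte Carlo method:
# random pairs exchange competence according to a rule of type (5).
# With lam_C + lam_B = lam the ensemble mean relaxes to m_B.
import random

def monte_carlo(N=4000, n_inter=160000, lam=0.1, lam_C=0.05, lam_B=0.05,
                m_B=0.75, sig=0.02, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(0.0, 2.0) for _ in range(N)]   # initial mean 1.0
    for _ in range(n_inter):
        i = rng.randrange(N)
        j = rng.randrange(N)
        if i == j:
            continue
        z1 = rng.uniform(0.0, 2.0 * m_B)            # background, mean m_B
        z2 = rng.uniform(0.0, 2.0 * m_B)
        xi, xj = x[i], x[j]
        x[i] = max((1 - lam) * xi + lam_C * xj + lam_B * z1
                   + rng.uniform(-sig, sig) * xi, 0.0)
        x[j] = max((1 - lam) * xj + lam_C * xi + lam_B * z2
                   + rng.uniform(-sig, sig) * xj, 0.0)
    return sum(x) / N
```

In expectation the mean obeys m ← m + (2λ_B/N)(m_B − m) per interaction, so after 160 000 interactions the initial gap to m_B = 0.75 has shrunk by a factor of roughly e^{−4}.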
Test 2: Learning dynamics and fake news dissemination For this test, we applied the structure-preserving scheme to system (22)- (25) in a more realistic scenario featuring an interaction term dependent on the competence level of the agents, as well as a competence-dependent delay during which agents evaluate the information and decide how to act. In this setting, we refer to the recent Survey of Adult Skills (SAS) made by the OECD [26]: in particular, we focus on competence understood as a set of information-processing skills, especially through the lens of literacy, defined [26] as "the ability to understand, evaluate, use and engage with written texts in order to participate in society". One of the peculiarities that makes the SAS, which is an international, multiple-year spanning effort in the framework of the PIAAC (Programme for the International Assessment of Adult Competencies) by the OECD, interesting in our case is that it was administered digitally to more than 70% of the respondents. Digital devices are arguably the most important vehicle for information diffusion in OECD countries, so that helps to keep consistency. Literacy proficiency was defined through 6 increasing levels; we therefore consider a population partitioned in 6 classes based on the competence level of their occupants, equated to the score of the literacy proficiency test of the SAS, normalized. Thus, we chose a log-normal-like distribution f (x) = 1 (ξ − x)σ √ 2π · e − (log(ξ−x)−μ) 2 2σ 2 , whereξ = 5,μ ≈ 0.85 andσ ≈ 0.22 to make f (x) agree with the empirical findings in [26]. The computational domain is restricted to x ∈ [0, 5] and stationary boundary conditions have been applied as described in Appendix B. Initial distributions for the epidemiological compartments were set as f S (x, 0) = ρ S f (x), f E (x, 0) = ρ E f (x), f I (x, 0) = ρ I f (x), f R (x, 0) = ρ R f (x), with ρ I = 10 −2 , ρ S = 1 − ρ I and ρ E = ρ R = 0. 
The contact rate β(x, x * ) was set as β(x, x * ) = β 0 (1 + x 2 )(1 + x 2 * ) χ(|x − x * | ≤ ∆)(32) with ∆ = 2 on the hypothesis that interactions occur more frequently among people with a similar competence level and are higher for people with lower competence levels. The rate δ at which the information is evaluated by the agents, who therefore exits the exposed class, was set to be δ(x) = δ R + (δ L − δ R ) 1 1 + e a(b−x) ,(33) with δ L = 1, δ R = 5, a = 2 and b = 2.5. Here, we are taking into account that people with higher efficacy at identifying fake news spend significantly more time on conducting their evaluations than people with lower efficacy [23]. In this specific test case the time range for the evaluation of the information spans between 1 day and about 5 hours. The values were purposely chosen rather large compared to realistic values in order to highlight also the behavior of the exposed compartment. Finally, we set γ = 0.2, which correspond to an average fake news duration of 5 days, and α = 0.2, so that individuals have a moderate possibility to remember the fake news and become immune to it, and assume η = 0.1, namely in this test we do not relate the decision to spread or not fake news to the level of competence. We investigate the relation between the dissemination-related component of the model and the competence-related one, which entails that agents can learn, i.e., increase their competence level, both from the background and from direct binary interactions. Under the assumption that λ CJ + λ BJ = λ J , which is a conservative choice: the expected value of the competence gained through interactions cancels out the one lost due to forgetfulness, in this latter process two main parameters are involved: λ = λ J (i.e., all compartments have the same learning rate) and m B , which is the mean of the background competence variable z. 
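The competence-dependent rates (32) and (33) translate directly into code; the sketch below uses the constants quoted in the text (with β_0 = 4, the value later used in Test 3).

```python
# Sketch of the competence-dependent rates used in the tests: contact
# rate (32) and evaluation rate (33).  Constants follow the values
# quoted in the text (beta_0 = 4 is the Test 3 value).
import math

def beta(x, x_star, beta0=4.0, Delta=2.0):
    # more frequent contacts among agents with similar, lower competence
    if abs(x - x_star) > Delta:
        return 0.0
    return beta0 / ((1.0 + x**2) * (1.0 + x_star**2))

def delta(x, delta_L=1.0, delta_R=5.0, a=2.0, b=2.5):
    # logistic transition between the low- and high-competence regimes
    return delta_R + (delta_L - delta_R) / (1.0 + math.exp(a * (b - x)))
```

Note that δ is a rate, so the average decision time is 1/δ: δ ≈ 5 at low competence corresponds to about five hours of reflection, δ ≈ 1 at high competence to one day, matching the time range stated above.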
For what concerns the dissemination-related component, instead, the main factor is the reproduction number R 0 (15). Hence, we measured the differences on the spread of fake news varying these three parameters. In Figure 4 (left) we show the highest portion of spreaders in relation to the mean of background m B and to the reproduction number R 0 ; in the right image λ is opposed to R 0 . To perform the test, we leveraged the structure-preserving numerical scheme [30] whose details are presented for convenience in Appendix B. In both images of Figure 4 we see transition effects: the learning process triggered by the competence dynamics is capable of slowing down the dissemination of fake news in the population, even to the point of preventing it to take place. In the first case, the mean of the background m B , i.e., the mean of the distribution of the background competence variable z, which we assumed uniformly distributed, varies between 0.03125 and 0.25, while the reproduction number R 0 varies between 1.1 and 10. In the second case, we left untouched R 0 , while λ varies between 0.0125 and 0.125 with a background mean m B = 0.125. We can see that the mean of the background has a more pronounced impact on the slowing of the diffusion of fake news, with a steeper transition effect. Test 3: Impact of the different competence levels In this final test we considered how much of an impact the competence level can have on the dissemination of fake news in the population. We simulated the mean-field model (22)-(25) assuming the same competence-dependent contact rate β(x, x * ) of Test 2, in this case with β 0 = 4, as well the same delay rate δ(x) and the same γ, but we additionally assume that the decision to spread or not a fake news is affected by the level of competence. 
This is somewhat controversial in the literature, since other factors also affect this behavior, like the age of individuals (tests carried out on young people have shown independence from competence in the decision to share fake news, in contrast to what happens in older people, see [23,26]). To emphasize this effect we assume

η(x) = 1 − e^{−kx²},   (34)

with k = 0.1. Thus individuals with a high level of competence rarely decide to spread fake news. In Figure 5 we report the time evolution of the distributions of susceptible (top left), exposed (top right), infected (bottom left) and removed (bottom right) agents with competence parameters λ_BJ = λ_CJ = 0.125, λ_J = λ_BJ + λ_CJ and m_B = 0.125, in the case α = 0.1. In Figure 6, instead, we show the evolution with the same parameters except for a larger probability α = 0.9 of remembering the fake news. In Figure 7 we show the relative numbers of susceptible, exposed, infected and removed agents, on the left for α = 0.1 and on the right for α = 0.9. To measure the effects of the competence, we considered the curve of the infected agents depending on their levels, accordingly to [26], for x ∈ [0, 5]:

• below level 1: scoring less than 175/500 (x < 1.75);
• level 1: scoring between 176/500 and 225/500 (x > 1.75 and x < 2.25);
• level 2: scoring between 226/500 and 275/500 (x > 2.25 and x < 2.75);
• level 3: scoring between 276/500 and 325/500 (x > 2.75 and x < 3.25);
• level 4: scoring between 326/500 and 375/500 (x > 3.25 and x < 3.75);
• level 5: scoring more than 375/500 (x > 3.75).

Figure 8 shows clearly that the more competent the individual, the less they contribute to the spread of fake news, in perfect agreement with the transition effects observed in Test 4.2 due to the interplay between competence and the dissemination dynamics.
Moreover, we can see how the probability α of detecting fake news influences its dissemination in the population: a lower probability implies a higher peak of infected agents for each competence level, as well as a slower spread overall.

Concluding remarks

In this paper, we introduced a compartmental model for fake news dissemination that also considers the competence of individuals. In the model, the concept of competence is not introduced as a static feature of the dynamics, but as an evolutionary component that takes into account both learning through interactions between agents and interventions aimed at educating individuals in the ability to detect fake news. From a mathematical viewpoint, the competence dynamics has been introduced as a Boltzmann interaction term in the corresponding system of differential equations. A suitable scaling limit permits recovering the corresponding Fokker-Planck models and then the resulting stationary states in terms of competence. These, in agreement with [29], are given explicitly by inverse Gamma distributions. The numerical results demonstrate the model's ability to correctly describe the interplay between fake news dissemination and individuals' level of competence, highlighting transition phenomena at the level of expertise that allow fake news to spread more rapidly. Future developments of the model will be considered in particular in the case of networks, in order to describe the spread of fake news on social networks and to present plausible scenarios useful to limit the spread of false information. This can be done by following an approach similar to that of kinetic models for opinion formation on networks [1]. Another challenging aspect concerns the matching of the model with realistic data, which requires the introduction of quantitative aspects not always easy to identify [25,36]. One of the main applications will be related to combating misinformation in the vaccination campaign against COVID-19.
A Proof of Theorem 1 We provide here a proof for Theorem 1. The proof is identical to [15]; we develop it here, too, for completeness, only in the case α = η = 0. If H ∈ {S, E, I} we have J∈{S,E,I} Q(f H ,f J )(ξ, t) = J∈{S,E,I} f H (A HJ ξ − λ BH z, t) f J (λ CJ ξ, t) − J∈{S,E,I} f H (ξ, t)J(t) =Q + (f H )(ξ, t) −Ĥ(ξ, t)(35) where the second equality follows from mass conservation. In (35), we have definedQ + (f H )(ξ, t) asQ + (f H )(ξ, t) = J∈{S,E,I} f H (A HJ ξ − λ BH z, t) f J (λ CJ ξ, t), where A HJ has been defined originally in (17). Thus, now system (16) reads                  ∂f S (ξ, t) ∂t +f S (ξ, t) = −βI(t)f S (ξ, t) + γf I (ξ, t) +Q + (f I )(ξ, t) ∂f E (ξ, t) ∂t +f E (ξ, t) = βI(t)f S (ξ, t) − δf E (ξ, t) +Q + (f E )(ξ, t) ∂f I (ξ, t) ∂t +f I (ξ, t) = δf E (ξ, t) − γf I (ξ, t) +Q(f I )(ξ, t),(36) To ensure positivity of all coefficients on the right-hand side (since I(t) < 1 and β, γ, δ < 1 we can addf J (ξ, t) to each side, where J equals S, E and I in the first, second and third equation, respectively, to resort to the equivalent system                  ∂f S (ξ, t) ∂t + 2f S (ξ, t) = (1 − βI(t))f S (ξ, t) + γf I (ξ, t) +Q + (f I )(ξ, t) ∂f E (ξ, t) ∂t + 2f E (ξ, t) = βI(t)f S (ξ, t)(1 − δ)f E (ξ, t) +Q + (f E )(ξ, t) ∂f I (ξ, t) ∂t + 2f I (ξ, t) = δf E (ξ, t)(1 − γ)f I (ξ, t) +Q(f I )(ξ, t),(37) in which all coefficients on the right-hand side are positive. Now, letf J (ξ, t) andĝ(ξ, t) be two solutions of the system (37). We look at the time behavior of the d 2 metric of their difference, where the Fourier-based metric was defined in (19); therefore we define h J (ξ, t) =f J (ξ, t) −ĝ J (ξ, t) |ξ| 2 . 
We see that the metric and h J are related by d 2 (f J , g J ) = h J ∞(38) By its definition, the h J are solutions of                  ∂ĥ S (ξ, t) ∂t + 2ĥ S (ξ, t) = (1 − βI(t))ĥ S (ξ, t) + γĥ I (ξ, t) + L + (f S )(ξ, t) ∂ĥ E (ξ, t) ∂t + 2ĥ E (ξ, t) = βI(t)ĥ S (ξ, t)(1 − δ)ĥ E (ξ, t) + L + (f E )(ξ, t) ∂ĥ I (ξ, t) ∂t + 2ĥ I (ξ, t) = δĥ E (ξ, t) + (1 − γ)ĥ I (ξ, t) + L + (f I )(ξ, t),(39) where, with H ∈ {S, E, I, R} are defined as L + (f H )(ξ, t) =Q + (f H )(ξ, t) −Q + (ĝ H )(ξ, t) |ξ| 2 . We can rewrite L + (f H )(ξ, t) in full: L + (f H )(ξ, t) = J∈{S,E,I,R} f H (A HJ ξ − λ BH z, t)f J (λ CJ ξ, t) −ĝ H (A HJ ξ − λ BH z, t)ĝ J (λ CJ ξ, t) |ξ| 2 ,(40) with the expectation · put outside for convenience. As shown, e.g., in [27], and since f J and g J are solution of the SEIS system for the masses and the mean values, we can profitably bound the addends in the sum on the right-hand side of (40) in terms of the functions h H and h J f H (A HJ ξ − λ BH z, t)f J (λ CJ ξ, t) −ĝ H (A HJ ξ − λ BH z, t)ĝ J (λ CJ ξ, t) |ξ| 2 ≤ |f H (A HJ ξ − λ BH z, t)| f J (λ CJ ξ, t) −ĝ J (λ CJ ξ, t) |λ CJ ξ| 2 λ 2 CJ + |ĝ J (λ J ξ, t)| f H (A HJ ξ − λ BH z, t) −ĝ H (A HJ ξ − λ BH z, t) |A HJ ξ| 2 A 2 HJ ≤ H(t) sup ξ f J (ξ, t) −ĝ J (ξ, t) |ξ| 2 λ 2 CJ + J(t) sup ξ f H (ξ, t) −ĝ H (ξ, t) |ξ| 2 A 2 HJ = λ 2 CJ H(t) h J ∞ + A 2 HJ h H ∞ . So we obtain that L + (f H )(t) ∞ ≤ H(t) J∈{S,E,I} λ 2 CJ h J (t) ∞ + h H (t) ∞ J∈{S,E,I} A 2 HJ J(t) . 
Multiplying both sides of (39) by e 2t we have                ∂[h S (ξ, t)e 2t ] ∂t ≤ (1 − βI(t)) h I (t)e 2t ∞ + γ h I (t)e 2t ∞ + L + (f S )(t)e 2t ∞ ∂[h E (ξ, t)e 2t ] ∂t ≤ βI(t) h I (t)e 2t ∞ + (1 − δ) h E (t)e 2t ∞ + L + (f E )(t)e 2t ∞ ∂[h I (ξ, t)e 2t ] ∂t ≤ δ h E (t)e 2t ∞ + (1 − γ) h I (t)e 2t ∞ + L + (f I )(t)e 2t ∞ , If we integrate from 0 to t and take the suprema we get                  h S (ξ, t)e 2t ∞ ≤ h S (0) ∞ + t 0 (1 − βI(t)) h I (t)e 2t ∞ + γ h I (t)e 2t ∞ + L + (f S )(t)e 2t ∞ ds h E (ξ, t)e 2t ∞ ≤ h E (0) ∞ + t 0 βI(t) h I (t)e 2t ∞ + (1 − δ) h E (t)e 2t ∞ + L + (f E )(t)e 2t ∞ ds h I (ξ, t)e 2t ∞ ≤ h S (0) ∞ + t 0 δ h E (t)e 2t ∞ + (1 − γ) h I (t)e 2t ∞ + L + (f I )(t)e 2t ∞ ds. Thanks to mass conservation we can also write Recalling the relation (38) we obtain the thesis of Theorem 1. B Structure-preserving methods Here we provide some details on the structure-preserving numerical scheme [30] for the general class of nonlinear Fokker-Planck equations of the form    ∂g(x, t) ∂t = ∇ x · B[g](x, t)g(x, t) + ∇ x (D(x)g(x, t)) , g(x, 0) = g 0 (x), where t ≥ 0, x ∈ X ⊆ R d , d ≥ 1 and g(x, t) ≥ 0 is the unknown distribution function. As mentioned above, B[g] is a bounded aggregation operator and D(·) models diffusion. The scheme [30] follows the work of Chang and Cooper [9] to construct a numerical method which can preserve features of the solution such as its large time behavior. 
If we examine system (22)-(25), we notice it has a structure like the following ∂f (x, t) ∂t = ∂F [f ](x, t) ∂x + E (f (x, t)),(42) where f (x, t) = (f S (x, t), f E (x, t), f I (x, t), f R (x, t)) T , E (f (x, t)) is a vector accounting for dissemination dynamics E (f (x, t)) =     −K(x, t)f S (x, t) + (1 − α(x))γ(x)f I (x, t) K(x, t)f S (x, t) − δ(x)f E (x, t) δ(x)(1 − η(x))f E (x, t) − γ(x)f I (x, t) δ(x)η(x)f E (x, t) + α(x)γ(x)f I (x, t),     and F [f ](x, t) is the Fokker-Planck component F [f ](x, t) = ∂ ∂x [(xλ J − m(t) − 4λ BJ m B )f J (x, t)] + σ 2 ∂ 2 ∂x 2 (x 2 f J (x, t)) T J∈{S,E,I,R} . Here we recognize that the J-th entry of F [f ](x, t) is precisely the right side of equation (41) in dimension d = 1, with the choices B[f ](x, t) = λ J x − m x (t) − 4λ BJ m B when α > 0 and D(x) = σ/2x 2 . Hence, if we consider system (22)- (25) in the form (42) we can apply the structure-preserving numerical scheme [30] to it: if we consider a spatially-uniform grid x i ∈ X, such that x i+1 − x i = ∆x, and denoting x i±2 = x i ± ∆x/2, we have that the discretization of the J-th component of (42) can be obtained by [30] df J,i (t) dt = F i+1/2 (t) − F i−1/2 (t) ∆x + E J,i (t), where F i+1/2 = C i+1/2fi+1/2 + D i+1/2 f i+1 − f i δx , having defined C i+1/2 = D i+1/2 ∆x x i+1 x i B[f ](x, t) + ∂ x D(x) D(x) , andf i+1/2 = (1 − δ i+1/2 )f i+1 + δ i+1/2 f i , where δ i+1/2 = 1 λ i+1/2 + 1 1 − exp(λ i+1/2 ) , and finally λ i+1/2 = x i+1 x i B[f ](x, t) + ∂ x D(x) D(x) dx. For what concerns integration with respect to the competence level was performed using a Gauss-Legendre quadrature with 6 points. 
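The Chang-Cooper weight δ_{i+1/2} = 1/λ + 1/(1 − e^λ) used above deserves numerical care, since the closed formula suffers catastrophic cancellation for small λ_{i+1/2}. A sketch (illustrative, with the small-argument branch handled by a Taylor expansion):

```python
# Sketch of the Chang-Cooper weight delta = 1/lam + 1/(1 - exp(lam))
# used by the structure-preserving scheme.  The closed formula cancels
# catastrophically for small lam, so that regime uses a series.
import math

def cc_delta(lam):
    if abs(lam) < 1e-8:
        return 0.5 - lam / 12.0          # series: 1/2 - lam/12 + O(lam^3)
    return 1.0 / lam + 1.0 / (1.0 - math.exp(lam))
```

The limits behave as expected for this class of schemes: δ → 1/2 as λ → 0 (a centered discretization) and δ → 0 or 1 for large |λ| (upwinding), which is what allows the scheme to preserve nonnegativity and the large-time behavior of the solution.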
Notice also that we need to truncate the domain of computation for x > 0: following [30], we impose on the last grid point x_{N+1} the quasi-stationary condition

f_{N+1}(t) / f_N(t) = exp( ∫_{x_N}^{x_{N+1}} ( B[f](x, t) + D′(x) ) / D(x) dx ).

Time integration of (43) is performed using the semi-implicit scheme

f_i^{n+1} = f_i^n + Δt ( F̂_{i+1/2}^{n+1} − F̂_{i−1/2}^{n+1} ) / Δx + Δt E_{J,i}^n,

where

F̂_{i+1/2}^{n+1} = C_{i+1/2}^n [ (1 − δ_{i+1/2}^n) f_{i+1}^{n+1} + δ_{i+1/2}^n f_i^{n+1} ] + D_{i+1/2} ( f_{i+1}^{n+1} − f_i^{n+1} ) / Δx,

which, upon choosing Δt = O(Δx), preserves the nonnegativity of the solution (see [30]).

Figure 1: SEIR diagram with transition rates.

Theorem 1. Let J ∈ {S, E, I, R}, and let f_J(x, t) and g_J(x, t) be two solutions of system (13) with initial values f_J(x, 0) and g_J(x, 0) such that d_2(f_J(x, 0), g_J(x, 0)) is finite. Then condition (18) implies that the Fourier-based distance d_2(f_J(x, t), g_J(x, t)) decays exponentially (in time) to zero.

Figure 2: Exact solutions for the competence distributions at the end of the epidemic (31) for λ = 0.1, μ = 5, m_B = 0.5 (solid), m_B = 3 (dash-dotted), S̃ = 0.5, Ẽ = 0.1, Ĩ = 0.4 (left) and S_∞ = 0.6, R_∞ = 0.4 (right). Left: case α = η = 0; right: case α = 0.2, η = 0.1.

Figure 3: Test 1. Comparison of the competence distributions at the end of the epidemic for system (13) with the explicit Fokker-Planck solution (31), with scaling parameters ε = 0.075, 0.001. We considered the cases m_B = 0.75 (left) and m_B = 1 (right).

Figure 4: Test 2. Interplay between competence levels and dissemination dynamics. Contour plots of the highest number of infected in relation to the reproduction number (15), R_0 ∈ [1.1, 10], and the background competence mean m_B ∈ [0.03125, 0.25] (left) or the learning rate λ ∈ [0.0125, 0.125] (right).

Figure 5: Test 3. Time evolution of the competence distribution for the kinetic model (13) with competence parameters λ_{BJ} = λ_{CJ} = 0.125, λ_J = λ_{BJ} + λ_{CJ}, and a background competence mean of m_B = 0.125. We considered α = 0.1, β_0 = 4, γ = 0.2. Top left: susceptible; top right: exposed; bottom left: infected; bottom right: removed.

Figure 6: Test 3. Time evolution of the competence distribution for the kinetic model (13) with competence parameters λ_{BJ} = λ_{CJ} = 0.125, λ_J = λ_{BJ} + λ_{CJ}, and a background competence mean of m_B = 0.125. We considered α = 0.9, β_0 = 4, γ = 0.2. Top left: susceptible; top right: exposed; bottom left: infected; bottom right: removed.

Figure 7: Test 3. Evolution in time of the densities of susceptible, exposed, infectious and removed agents. Left: α = 0.1; right: α = 0.9.

Figure 8: Test 3. Evolution of the fraction of infectious agents with different levels of competence.

Now we can take advantage of condition (18) to get

Σ_{H ∈ {S,E,I}} ‖L⁺(f_H)(t) e^{2t}‖_∞ ≤ ν Σ_{J ∈ {S,E,I}} ‖h_J(t) e^{2t}‖_∞,

with ν < 1. If we sum the equations of the system, we therefore have

Σ_{J ∈ {S,E,I}} ‖h_J(t) e^{2t}‖_∞ ≤ Σ_{J ∈ {S,E,I}} ‖h_J(0)‖_∞ e^{(1+ν)t},

which is equivalent to

Σ_{J ∈ {S,E,I}} ‖h_J(t)‖_∞ ≤ Σ_{J ∈ {S,E,I}} ‖h_J(0)‖_∞ e^{−(1−ν)t}.

Table 1: Different compartments and notations for some of the models found in the literature.

Table 2: Parameter definitions in the SEIR model (1).

Acknowledgments. This work was partially supported by MIUR (Ministero dell'Istruzione, dell'Università e della Ricerca) PRIN 2017, project "Innovative numerical methods for evolutionary partial differential equations and applications", code 2017KKJP4X.

References

[1] G. Albi, L. Pareschi, and M. Zanella. Opinion dynamics over complex networks: Kinetic modelling and numerical methods. Kinetic & Related Models, 10(1):1-32, 2017.

[2] H. Allcott and M. Gentzkow. Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2):211-36, May 2017.
[3] J. B. Bak-Coleman, M. Alfano, W. Barfuss, C. T. Bergstrom, M. A. Centeno, I. D. Couzin, J. F. Donges, M. Galesic, A. S. Gersick, J. Jacquet, A. B. Kao, R. E. Moran, P. Romanczuk, D. I. Rubenstein, K. J. Tombak, J. J. Van Bavel, and E. U. Weber. Stewardship of global collective behavior. Proceedings of the National Academy of Sciences, 118(27), 2021.

[4] G. Bertaglia and L. Pareschi. Hyperbolic compartmental models for epidemic spread on networks with uncertain data: Application to the emergence of Covid-19 in Italy. Math. Mod. Meth. Appl. Sci., to appear (arXiv:2105.14258), 2021.

[5] L. M. Bettencourt, A. Cintrón-Arias, D. I. Kaiser, and C. Castillo-Chávez. The power of a good idea: Quantitative modeling of the spread of ideas from epidemiological models. Physica A: Statistical Mechanics and its Applications, 364:513-536, 2006.

[6] S. Billiard, M. Derex, L. Maisonneuve, and T. Rey. Convergence of knowledge in a stochastic cultural evolution model with population structure, social learning and credibility biases. Mathematical Models and Methods in Applied Sciences, 30(14):2691-2723, 2020.

[7] L. Brisson, P. Collard, M. Collard, and E. Stattner. Information Dissemination in Scale-Free Networks: Profusion Versus Scarcity. In C. Cherifi, H. Cherifi, M. Karsai, and M. Musolesi, editors, Complex Networks & Their Applications VI, pages 909-920, Cham, 2018. Springer International Publishing.

[8] D. Brody and D. Meier. How to model fake news. ArXiv:1809.0096409, 2018.

[9] J. Chang and G. Cooper. A practical difference scheme for Fokker-Planck equations. Journal of Computational Physics, 6(1):1-16, 1970.

[10] J.-J. Cheng, Y. Liu, B. Shen, and W.-G. Yuan. An epidemic model of rumor diffusion in online social networks. The European Physical Journal B, 86, 01 2013.

[11] N. Conroy, V. Rubin, and Y. Chen. Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1):1-4, 2015.

[12] R. P. Curiel and H. G. Ramírez. Vaccination strategies against COVID-19 and the diffusion of anti-vaccination views. Nature Scientific Reports, 11:6626, 2021.

[13] D. Daley and D. Kendall. Epidemics and rumours. Nature, 204(4963):1118, 1964.

[14] D. J. Daley and J. Gani. Epidemic Modelling: An Introduction. Cambridge Studies in Mathematical Biology. Cambridge University Press, 1999.

[15] G. Dimarco, L. Pareschi, G. Toscani, and M. Zanella. Wealth distribution under the spread of infectious diseases. Phys. Rev. E, 102:022303, Aug 2020.

[16] V. M. Eguiluz and K. Klemm. Epidemic threshold in structured scale-free networks. Phys. Rev. Lett., 89:108701, Aug 2002.

[17] A. Gelfert. Fake news: A definition. Informal Logic, 38(1):84-117, 2018.

[18] H. Hethcote. The mathematics of infectious diseases. SIAM Rev., 42:599-653, 2000.

[19] F. Jin, E. Dougherty, P. Saraf, Y. Cao, and N. Ramakrishnan. Epidemiological modeling of news and rumors on twitter. In Proceedings of the 7th Workshop on Social Network Mining and Analysis, pages 1-9, 2013.

[20] W. O. Kermack and A. McKendrick. Contributions to the mathematical theory of epidemics. II. The problem of endemicity. Proceedings of The Royal Society A: Mathematical, Physical and Engineering Sciences, 138:55-83, 1932.

[21] A. Korobeinikov. Lyapunov functions and global properties for SEIR and SEIS epidemic models. Mathematical Medicine and Biology: A Journal of the IMA, 21:75-83, 07 2004.

[22] M. Kuperman and G. Abramson. Small world effect in an epidemiological model. Phys. Rev. Lett., 86:2909-2912, Mar 2001.

[23] C. Leeder. How college students evaluate and share "fake news" stories. Library & Information Science Research, 41(3):100967, 2019.

[24] S. Loomba, A. de Figueiredo, and S. P. et al. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav., 5:337-348, 2021.

[25] M. Maleki, E. Mead, M. Arani, and N. Agarwal. Using an Epidemiological Model to Study the Spread of Misinformation during the Black Lives Matter Movement. In International Conference on Fake News, Social Media Manipulation and Misinformation (ICFNSMMM), 2021.

[26] OECD. Skills Matter: Additional Results from the Survey of Adult Skills. OECD Skills Studies, OECD Publishing, Paris, 2019.

[27] L. Pareschi and G. Toscani. Interacting multiagent systems. Kinetic equations and Monte Carlo methods. Oxford University Press, 2013.

[28] L. Pareschi and G. Toscani. Wealth distribution and collective knowledge. A Boltzmann approach. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 372, 01 2014.

[29] L. Pareschi, P. Vellucci, and M. Zanella. Kinetic models of collective decision-making in the presence of equality bias. Physica A: Statistical Mechanics and its Applications, 467:201-217, 2017.

[30] L. Pareschi and M. Zanella. Structure preserving schemes for nonlinear Fokker-Planck equations and applications. Journal of Scientific Computing, 74:1575-1600, 03 2018.

[31] R. Pastor-Satorras and A. Vespignani. Epidemic spreading in scale-free networks. Phys. Rev. Lett., 86:3200-3203, Apr 2001.

[32] J. R. Piqueira, M. Zilbovicius, and C. M. Batistela. Daley-Kendal models in fake-news scenario. Physica A: Statistical Mechanics and its Applications, 548:123406, 2020.

[33] N. Ruchansky, S. Seo, and Y. Liu. CSI: A hybrid deep model for fake news detection. In International Conference on Information and Knowledge Management, Proceedings, volume Part F131841, pages 797-806, 2017.

[34] J. Shin, L. Jian, K. Driscoll, and F. Bar. The diffusion of misinformation on social media: Temporal pattern, message, and source. Computers in Human Behavior, 83:278-287, 2018.

[35] G. Shrivastava, P. Kumar, R. Ojha, P. Srivastava, S. Mohan, and G. Srivastava. Defensive modeling of fake news through online social networks. IEEE Transactions on Computational Social Systems, PP:1-9, 08 2020.

[36] K. Shu, D. Mahudeswaran, S. Wang, D. Lee, and H. Liu. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. Big Data, 8(3):171-188, 2020.

[37] K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu. Fake News Detection on Social Media: A Data Mining Perspective. ArXiv:1708.01967, 2017.

[38] T. I. Trammell. Fake News Risk: Modeling Management Decisions to Combat Disinformation. PhD thesis, Stanford University, 2020.

[39] C. Vargo, L. Guo, and M. Amazeen. The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media and Society, 20(5):2028-2049, 2018.

[40] X. Zhang and A. Ghorbani. An overview of online fake news: Characterization, detection, and discussion. Information Processing and Management, 57(2), 2020.

[41] Z. Zhao, J. Zhao, Y. Sano, O. Levy, H. Takayasu, M. Takayasu, D. Li, J. Wu, and S. Havlin. Fake news propagate differently from real news even at early stages of spreading. EPJ Data Sci., 9(7):1-14, 2020.
[ "Hierarchical communities in the walnut structure of the Japanese production network" ]
[ "Abhijit Chakraborty \nGraduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan", "Yuichi Kichikawa \nFaculty of Science\nNiigata University\nNiigata, Japan", "Takashi Iino \nFaculty of Science\nNiigata University\nNiigata, Japan", "Hiroshi Iyetomi", "Hiroyasu Inoue \nGraduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan\n\nFaculty of Science\nNiigata University\nNiigata, Japan", "Yoshi Fujiwara \nGraduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan", "Hideaki Aoyama \nGraduate School of Science\nKyoto University\nKyoto, Japan" ]
[ "Graduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan", "Faculty of Science\nNiigata University\nNiigata, Japan", "Faculty of Science\nNiigata University\nNiigata, Japan", "Graduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan", "Faculty of Science\nNiigata University\nNiigata, Japan", "Graduate School of Simulation Studies\nThe University of Hyogo\nKobe, Japan", "Graduate School of Science\nKyoto University\nKyoto, Japan" ]
This paper studies the structure of the Japanese production network, which includes one million firms and five million supplier-customer links. This study finds that this network forms a tightly-knit structure with a core giant strongly connected component (GSCC) surrounded by IN and OUT components constituting two half-shells of the GSCC, which we call a walnut structure because of its shape. The hierarchical structure of the communities is studied by the Infomap method, and most of the irreducible communities are found to be at the second level. The composition of some of the major communities, including overexpressions regarding their industrial or regional nature, and the connections that exist between the communities are studied in detail. The findings obtained here cause us to question the validity and accuracy of using the conventional input-output analysis, which is expected to be useful when firms in the same sectors are highly connected to each other.
10.1371/journal.pone.0202739
52,119,595
1808.10090
Hierarchical communities in the walnut structure of the Japanese production network

RESEARCH ARTICLE

☯ These authors contributed equally to this work.

Introduction

A macro economy is the aggregation of the dynamic behaviour of agents who interact with each other under diverse external (non-economic) conditions.
Economic agents are numerous and include consumers, workers, firms, financial institutions, government agencies, and countries. The interactions of these agents result in the creation of economic networks, where nodes are economic agents, and links (edges) connect agents that interact with each other. Therefore, there are various kinds of economic networks depending on the nature of the interactions, and together they form an overlapping multi-level network of networks. Thus, any evidence-based scientific investigation of the macro economy must be based on an understanding of the real nature of these interactions and the economic network of networks that they form. This concept also applies to the micro-level perspective of economic agents: without knowing who a firm trades with, how can anyone hope to determine the future of that firm? Therefore, it is highly important to use actual network information when studying economic dynamics with either agent-based modelling/simulations or other means of systematic study such as determining the debt-rank of an economic agent [1][2][3][4][5]. Without this information, it is difficult to establish the validity of the results for the actual economy. In this paper, we study the structure of one of the most important networks, the production network, which is formed by firms (as nodes) and trade relationships (as links) [6][7][8][9]. In the scientific study of both the macro and the micro economy, the production network of the real economic world is a topic of high importance. Before one engages in agent-based model building and simulations, one needs to understand the structure of this network in order to grasp its dynamics and eventually reach into the realm of economic fluctuations, business cycles, systemic crises, as well as firms' growth and decline.
Therefore, in the next Section, we describe the overall statistics and visualization and refer to the unique overall structure of the network as a "walnut" structure. This type of structure is quite different from what is expected from the existence of the IN-giant strongly connected component (GSCC)-OUT components: in the trade network, the flow of materials and goods begins with imported/mined/harvested raw materials such as oil, iron, other metals and food. Firms who engage in this business form the IN component. These materials are then processed into various products such as semiconductors or powdered food by firms belonging to the GSCC, before being made into consumer goods by firms in the OUT component. One might think that the existence of IN-GSCC-OUT components is similar to a web network that has a bow-tie structure [10]. However, the production network is different: ties among the firms form a much tighter network, with an overall structure that does not resemble a bow-tie. Then, we study the community structure and reveal its hierarchical nature using the Infomap method [11,12]. In previous studies [6,8], the modularity maximization technique [13] was used to study the community structure of the Japanese production network. However, modularity maximization cannot capture the dynamic aspects of the network; it reveals a similar type of community partition for both directed and undirected versions of the network. Moreover, it is well known that the modularity maximization algorithm suffers from a resolution limit problem when trying to identify the communities in a large-scale network. The map equation method [11,12] detects communities using the dynamic behaviour of the network.
In a recent study [9], the hierarchical map equation is applied to characterize the level-1 communities in the Japanese production network, and a detailed investigation of the topological properties of intra- and inter-community links is conducted. It also shows that regions and sectors are segregated within the communities. In another study [14], the business cycle correlations of the communities detected by the map equation are studied for the network of firms listed on the Tokyo Stock Exchange. The presence of strong correlations within and between communities is explained by the attributes of both the network topology and the firms. The crucial difference between our paper and [9,14] is that we study not only the top-level communities but also the communities at the other levels, as well as the hierarchical structure itself. Moreover, we determine the compositions of the communities and subcommunities in terms of whether they include upstream and downstream firms, which has not been investigated in previous studies. In our paper, we conduct a level-by-level analysis and identify both communities and "irreducible" communities (communities that are not decomposed into subcommunities at the lower level). We also study the overexpression of some of the major communities to identify both their industrial-sector and regional decomposition. The complex nature of the links that exist between the communities is also studied. A discussion and the conclusion as well as suggestions for future research are provided at the end. Some of the supporting materials are included as Appendices.

Production network data and its basic structure

Our data for the production network are based on a survey conducted by Tokyo Shoko Research (TSR), one of the leading credit research agencies in Tokyo, and were supplied to us through the Research Institute of Economy, Trade and Industry (RIETI).
The data were collected by TSR by asking each firm to report its top five suppliers and top five customers. Although large firms that have many suppliers and customers submitted replies that are necessarily incomplete, these data are supplemented with data from the other side of trade: smaller firms submit replies that include data on large firms, who are important trade partners. By combining all the submissions from both sides of trade into one database, large firms are connected to numerous smaller firms, which provides a good approximation of the real complete picture. One might worry that some trades last for only a short time, or occur only once, such as when a firm seeks a good deal for just one particular occasion, and thus cast doubt on the definition of the trade network. The form of data collection used for this study solves this problem: it is most implausible that replies containing data on a one-time trade are included; instead, data on firms that maintain a certain trade frequency are likely to be listed. In this study, we use two datasets: 'TSR Kigyo Jouhou' (firm information), which contains basic financial information on more than a million firms, and 'TSR Kigyo Soukan Jouhou' (firm correlation information), which includes several million supplier-customer and ownership links and a list of bankruptcies. Both of these datasets were compiled in July 2016. (Some of the earlier studies on the production network include [6][7][8][9].) In this study, i → j denotes a supplier-customer link, where firm i is a supplier for another firm j, or equivalently, j is a customer of i. We extracted only the supplier-customer links for pairs of "active" firms, excluding inactive and failed firms by using an indicator flag when we retrieved the basic information. We then eliminated self-loops and parallel edges (duplicate links recorded in the data) to create a network of firms (as nodes) and supplier-customer links (as edges).
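The preprocessing just described (links between active firms only, no self-loops, no parallel edges) can be sketched as follows; the function and variable names are ours, and the record format is a hypothetical simplification of the TSR data:

```python
def build_network(raw_links, active_firms):
    """Keep supplier -> customer links (i, j) between active firms,
    dropping self-loops and parallel (duplicate) edges."""
    edges = set()
    for i, j in raw_links:
        if i == j:
            continue                    # self-loop
        if i not in active_firms or j not in active_firms:
            continue                    # inactive or failed firm
        edges.add((i, j))               # a set de-duplicates parallel edges
    return edges
```

Storing directed edges in a set makes the duplicate removal automatic while preserving edge direction, which matters for the GSCC/IN/OUT analysis that follows.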
Viewed as an undirected graph, the network has a largest connected component, the giant weakly connected component (GWCC), which includes 1,066,037 nodes (99.3% of all the active firms) and 4,974,802 edges. This study not only analyzes the network but also considers several attributes of each node: financial information on firm size, measured by sales, profit, number of employees and the firm's growth; the major and minor classifications of industrial sectors; details regarding the firm's products; the firm's main banks; the principal shareholders; and miscellaneous other information including geographical location. For the purpose of our study, we focus on two attributes of each firm, namely the industrial sector and the geographical location of the head office. The industrial sectors are hierarchically categorized into 20 divisions, 99 major groups, 529 minor groups and 1,455 industries (Japan Standard Industrial Classification, November 2007, Revision 12). See Table A in S1 Appendix for the number of firms in each division of the industrial sectors. Each firm is classified according to the sector it belongs to, with the primary, secondary and tertiary sectors, if any, identified. The geographical location is converted into one of 47 prefectures or one of 9 regions (Hokkaido, Tohoku, Kanto, Tokyo, Chubu, Kansai, Chugoku, Shikoku, and Kyushu). See Table B in S1 Appendix for the number of firms in each regional area of Japan. Fig 1 depicts a representative supply-chain network of the automobile industry in Japan. For example, Toyota Motor Corporation, the largest car manufacturer in the nation, obtains mechanical parts from suppliers such as Denso and Aisin Seiki. In addition, Toyota is indirectly connected to Denso through Aisin Seiki. One can also go up from Denso to Murata Manufacturing in the figure.
For electronic parts, other key components of cars, Toyota has direct transactions with general electrical manufacturers such as Toshiba and Panasonic, and Toshiba, in turn, obtains parts from Dai Nippon Printing. General trading companies such as Marubeni, Mitsui, and Toyota Tsusho play a key role in the formation of the supply-chain network. In addition, we can observe a circular transaction relation among Toyota Motor, Denso, and Toyota Industries. The existence of such a feedback loop can complicate firms' dynamics in the production network. In terms of the flow of goods and services (and money in the reverse direction), the firms are classified into three categories: the "IN" component, the "GSCC", and the "OUT" component. This structure is called "bow-tie" in a well-known study on the Internet [10]. The GWCC can be decomposed into the parts defined as follows:

GSCC The giant strongly connected component: the largest set of firms in which any firm is reachable from any other via a directed path.

IN The firms from which the GSCC is reachable via a directed path.

OUT The firms that are reachable from the GSCC via a directed path.

TE "Tendrils"; the remainder of the GWCC.

It follows from the definitions that

GWCC = GSCC + IN + OUT + TE.   (1)

We, however, find it far more appropriate to call this structure a "Walnut" structure, as the "IN" and "OUT" components are not as separated as the two wings of a "bow-tie" but are more like the two halves of a walnut shell, surrounding the central GSCC core. This can be explained as follows. The number of firms in each of the GSCC, IN, OUT and TE components is shown in Table 1. Half of the firms are inside the GSCC; 20% of the firms are on the upstream side, or IN, and 26% of them are on the downstream side, or OUT. In contrast with the well-known "bow-tie structure" in the study conducted by [10] (in which the GSCC is less than one-third of the GWCC), the GSCC in the production network occupies half of the system, meaning that most firms are interconnected by small geodesic distances, or shortest-path lengths, in the economy.
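The decomposition in Eq. (1) can be computed directly from a directed edge list. The following sketch is our illustration, using a quadratic-time largest-SCC search that is only suitable for small examples (at the scale of the TSR data one would use a linear-time SCC algorithm such as Tarjan's):

```python
from collections import defaultdict

def walnut_decomposition(edges):
    """Classify nodes of a directed graph into GSCC / IN / OUT / TE,
    following the decomposition GWCC = GSCC + IN + OUT + TE."""
    nodes = set()
    fwd, rev = defaultdict(set), defaultdict(set)
    for u, v in edges:
        nodes |= {u, v}
        fwd[u].add(v)
        rev[v].add(u)

    def reach(starts, adj):
        seen, stack = set(starts), list(starts)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    # SCC of node n = (nodes reachable from n) & (nodes that reach n);
    # keep the largest one found.
    gscc = set()
    for n in nodes:
        scc = reach({n}, fwd) & reach({n}, rev)
        if len(scc) > len(gscc):
            gscc = scc
    in_set = reach(gscc, rev) - gscc      # firms with a directed path INTO the GSCC
    out_set = reach(gscc, fwd) - gscc     # firms reachable FROM the GSCC
    te = nodes - gscc - in_set - out_set  # tendrils: the remainder of the GWCC
    return gscc, in_set, out_set, te
```

On a toy graph with a three-firm cycle, one upstream supplier, one downstream customer, and one tendril node, this reproduces the four components of Eq. (1).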
In fact, by using a standard graph layout algorithm based on a spring-electrostatic model in three-dimensional space [15], we can visualize this tightly-knit structure directly (see Fig 2). Moreover, by examining the shortest-path lengths from the GSCC to IN and OUT, as shown in Table 2, one can observe that the firms on the upstream or downstream sides are mostly located a single step away from the GSCC. This feature of the economic network is different from the bow-tie structure of many other complex networks. For example, a network of hyperlinks between web pages of a similar size (GWCC: 855,802; GSCC: 434,818 (51%); IN: 180,902 (21%); OUT: 165,675 (19%); TE: 74,407 (9%)), studied in [16], has a bow-tie structure such that the maximum distance from the GSCC to either IN or OUT is 17, while more than 10% of the web pages in IN or OUT are located more than a single step away from the GSCC. This observation as well as Fig 2 leads us to say that the production network has a "walnut" structure rather than a bow-tie structure. We depict the schematic diagram in Fig 3. Later, we shall show how each densely connected module or community is located in the walnut structure.

Methods

Community detection

Community detection is widely used to elucidate the structural properties of large-scale networks. In general, real networks are highly non-uniform. Community detection singles out groups of nodes densely connected to each other in a network to divide that network into modules. This process enables us to have a coarse-grained view of the structure of such complicated networks. One of the most popular methods used for community division is maximizing the modularity index [13]. Modularity measures the strength of a partition of a network into communities by comparing the fraction of links inside the given communities with the expected fraction of links if links were randomized while keeping the same degree distribution as the original network.
However, it is well known that the modularity method suffers from a problem called the resolution limit [17] when applied to large networks. That is, optimizing modularity fails to detect small communities even if they are well defined, such as cliques. The map equation method [11] is another method used to detect communities in a network. This method is found to be one of the best performing community detection techniques compared to the others [18]. The map equation method is a flow-based and information-theoretic method depending on the map equation, which is defined as

L(C) = q↷ H(C) + Σ_{i=1}^{m} p_i↻ H(P_i).   (2)

Here, L(C) measures the per-step average description length of the dynamics of a random walker migrating through the links between the nodes of a network with a given node partition C = {C_1, ..., C_ℓ}, and it consists of two parts. The first term arises from the movements of the random walker across communities, where q↷ is the probability that the random walker switches communities, and H(C) is the average description length of the community index codewords given by the Shannon entropy. The second term arises from the movements of the random walker within the communities, where p_i↻ is the percentage of the movements within the community C_i, and H(P_i) is the entropy of the codewords in the module codebook i. If the network has densely connected parts in which a random walker stays a long time, one can compress the description length of the random-walk dynamics by using a two-level codebook for nodes adapted to such a community structure; this is similar to geographical maps in which different cities recycle the same street names, such as "main street" [11]. Therefore, obtaining the best community decomposition in the map equation framework amounts to searching for the node partition that minimizes the average description length L(C).
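For intuition, the two-level map equation of Eq (2) can be evaluated for a given partition of a small undirected, unweighted network. The sketch below is a simplified transcription of the formula (not the full Infomap implementation, and the names are illustrative): node visit rates are degree fractions, and each inter-module link contributes one exit step to both of its endpoint modules.

```python
import math
from collections import defaultdict

def map_equation(edges, partition):
    """Two-level map equation L(C), Eq (2), for an undirected,
    unweighted network; `partition` maps node -> module index."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    two_m = float(sum(degree.values()))
    # stationary visit rates of the random walker
    p = {n: k / two_m for n, k in degree.items()}

    # module exit rates: each inter-module link contributes one
    # "exit step" to both of its endpoint modules
    q = defaultdict(float)
    for u, v in edges:
        if partition[u] != partition[v]:
            q[partition[u]] += 1.0 / two_m
            q[partition[v]] += 1.0 / two_m
    modules = set(partition.values())
    q_tot = sum(q.values())

    def entropy(weights):
        s = sum(weights)
        if s <= 0:
            return 0.0
        return -sum(w / s * math.log2(w / s) for w in weights if w > 0)

    # index codebook term: q↷ H(C)
    L = q_tot * entropy([q[i] for i in modules])
    # module codebook terms: sum_i (q_i + sum of p inside i) * H(P_i)
    for i in modules:
        inside = [p.get(n, 0.0) for n in partition if partition[n] == i]
        L += (q[i] + sum(inside)) * entropy([q[i]] + inside)
    return L
```

On two 4-cliques joined by a single link, splitting the network into the two cliques yields a shorter description length than keeping everything in one module, which is exactly why the minimization recovers the intuitive communities.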
In regard to the resolution limit problem, no two-level community detection algorithm, including the map equation, is able to eliminate the limitation entirely. However, the map equation significantly mitigates the problem, as has been shown by a recent theoretical analysis [19]. In practice, this is true for our network, as will be demonstrated later. Recently, the original map equation method has been extended to networks with multiscale inhomogeneity. A network is decomposed into modules that include their submodules, and then their subsubmodules, and so forth. The hierarchical map equation [12] recursively searches for such a multilevel solution by minimizing the description length over possible hierarchical partitions. The map equation framework for the community detection of networks is thus all the more powerful. Therefore, we analyze the production network using this method. The code of the hierarchical map equation algorithm is available at http://www.mapequation.org. Note that this study exclusively considers hard community assignment for the nodes in our network. That is, each node belongs to a unique community at every hierarchical level. However, such community assignment may be too restrictive for a small number of giant conglomerate firms, such as Hitachi and Toshiba, because of the diversity of their businesses. The map equation is flexible enough to detect an overlapping community structure of a network, in which a node can be a member of multiple communities [20]. However, we use the original algorithm as an initial step toward obtaining a full account of the firm-to-firm transaction data.

Overexpression within communities and subcommunities

Most real-world networks have a community structure [21]. Such communities are formed in a network based on the principle of homophily [22]. This principle indicates that a node has a tendency to connect with other similar nodes.
For example, ethnic and racial segregation are observed in our society [23], biological functions play a key role in the formation of communities in protein-protein interaction networks [24], and the community structure of stock markets is similar to that of their economic sectors [25]. We find that attributes play a crucial role in the formation of the community structure of the production network using the following method. We follow the procedure used in [26] to determine the statistically significant overexpression of different locations and sectors within a community. This method was developed from the statistical validation of the overexpression of genes in specific terms of the Gene Ontology database [27]. In this procedure, a hypergeometric distribution H(X|N, N_C, N_Q) is used to measure the probability that X randomly selected nodes in community C of size N_C will have attribute Q. The hypergeometric distribution H(X|N, N_C, N_Q) can be written as

H(X|N, N_C, N_Q) = [C(N_C, X) C(N − N_C, N_Q − X)] / C(N, N_Q),   (3)

where C(n, k) denotes the binomial coefficient and N_Q is the total number of elements in the system with attribute Q. Further, one can associate a p value p(N_{C,Q}) with the N_{C,Q} nodes having attribute Q in community C through H(X|N, N_C, N_Q) by the following relation:

p(N_{C,Q}) = 1 − Σ_{X=0}^{N_{C,Q}−1} H(X|N, N_C, N_Q).   (4)

The attribute Q is overexpressed within community C if p(N_{C,Q}) is found to be lower than some threshold value p_c. As we use a multiple-hypothesis test, we need to choose p_c appropriately to exclude false positives. We assume that p_c = 0.01/N_A, as specified in [26], which includes a Bonferroni correction [28]. Here, N_A represents the total number of different attributes (in our study we have N_A = 9 regional attributes) for all the nodes of the system.

Results

Hierarchy of communities

By using the Infomap method [11,12], we find that the communities have a hierarchical structure, as summarized in Table 3, and determine the number of firms at each level.
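The overexpression test of Eqs (3) and (4) is a direct computation with binomial coefficients. A sketch using Python's `math.comb` (function names are illustrative):

```python
from math import comb

def hypergeom_pmf(X, N, N_C, N_Q):
    """H(X | N, N_C, N_Q) of Eq (3): probability that exactly X of the
    N_C community members carry attribute Q."""
    return comb(N_C, X) * comb(N - N_C, N_Q - X) / comb(N, N_Q)

def overexpression_pvalue(N_CQ, N, N_C, N_Q):
    """p(N_CQ) of Eq (4): probability of observing N_CQ or more
    attribute-Q nodes in the community by chance."""
    return 1.0 - sum(hypergeom_pmf(X, N, N_C, N_Q) for X in range(N_CQ))

def is_overexpressed(N_CQ, N, N_C, N_Q, N_A=9):
    """Bonferroni-corrected test with threshold p_c = 0.01 / N_A."""
    return overexpression_pvalue(N_CQ, N, N_C, N_Q) < 0.01 / N_A
```

For example, in a system of N = 100 nodes with N_Q = 10 attribute-Q nodes, a community of N_C = 10 members that contains 6 of them is far above the expected count of one, and the test flags the attribute as overexpressed.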
This hierarchical structure is illustrated in Fig 4, where 2nd level communities are lined up from left to right in descending order of community size (number of firms), and the width of the triangles reflects the number of subcommunities in each community. We find that most of the subcommunities are on the 2nd level and that most of the firms (94%) belong to 2nd level communities. Compared with the 1st and 2nd level communities, the 3rd to the 5th levels are of no significant importance. Therefore, we limit our discussion of the properties of the (sub)communities to those of the 2nd level. Past studies on the application of the hierarchical map equation to real-world networks [12,19] show that dense networks have large communities at the finest level with shallow hierarchies, and sparse networks tend to have deep hierarchies. It is also observed that the depth of the hierarchies increases with network size. In the case of the California road network, the hierarchy is deep because the road network has geographical constraints that decrease the number of shortcuts between the different parts of the network [12]. In our production network, we observe a relatively shallow hierarchy because it does not have such strict constraints. We visualize the hierarchical decomposition of the whole network into communities and their subcommunities in Fig 5. The configuration of the nodes in three-dimensional space is the same as that in Fig 2. We can see that the network is extremely complex, with multi-scale inhomogeneity. The results of an overexpression analysis indicate that the major communities of the 1st and 2nd levels are characterized as industrial sectors and regions, as noted in the subsequent subsections.
For the purpose of making the following discussion of communities transparent, let us adopt the following indexing convention: at the top modular level of the hierarchical tree structure, the communities are indexed by their rank in size (the number of firms in the community). Thus, the largest community at the top level is denoted as "C1". At the lower levels, the rank in size is added after ':'. For example, community "C1:5" is the fifth largest 2nd level community among all the 2nd level communities that belong to the largest top-level community C1.

Level-1 communities

The complementary cumulative distribution function D(s) indicates the fraction of communities at the top level having a size of at least s, as shown in Fig 6. The bimodal nature of the distribution manifests the resolution limit problem. A small number of communities predominates the whole system. Among the roughly 200 communities detected, for example, the largest communities contain 100,000-200,000 firms. However, such extremely large communities are decomposed into subcommunities by the hierarchical map equation in a unified way. This process is quite different from community detection based on modularity. One may address this problem by applying the modularity maximization method recursively; communities are regarded as separated subnetworks that can be further decomposed. However, this procedure lacks a sound basis because it uses different null models to decompose the subnetworks [21]. A more detailed comparison between these two methods is provided in S1 Appendix. The map equation is a method that can be used to divide a directed network into communities in which nodes are tightly connected in both directions. Due to the nature of the network, the flows across the communities thus detected should be biased in either direction.
Fig 7 confirms this expectation. To quantify the polarizability of the links between a pair of communities, we introduce the polarization ratio defined by

P_ij = (A_ij − A_ji) / (A_ij + A_ji),   (5)

where A_ij is the total number of links spanning from community i to community j, and A_ji that of the links in the opposite direction. If the linkage between communities i and j is completely polarized, then P_ij becomes ±1 depending on its direction; if the linkage is evenly balanced, then P_ij = 0. If we assume, as a null hypothesis, that the links have no preference with respect to their direction, then the null model predicts that the polarization ratio for the connections between communities i and j fluctuates around 0 with the standard deviation σ given by

σ = 1 / √(L_ij),   (6)

where L_ij = A_ij + A_ji is the total number of links between the two communities. If we focus on intercommunity linkages with L_ij ≥ 100, we see that the ones whose direction is polarized in a statistically meaningful way occupy 86.7% of the total. The corresponding share of intercommunity linkages is 70.1% for L_ij ≥ 10. Most of the connections between communities with more than 100 links are significantly polarized in reference to the random orientation model for intercommunity links. We find the overexpression of the attributes in 1st level communities to determine the factors that play a crucial role in the formation of such communities. Our study considers both the location and the sector attributes. The location attributes are divided into 9 regions, and the sector attributes are categorized into 20 divisions. The details about the six largest 1st level communities and the overexpressed attributes within them are tabulated in Table 4. We also use a finer classification, i.e., 47 prefectures and 99 major sectors, for which the results are provided in S1 Appendix. We observe a strong connection between overexpressed sectors and overexpressed regions. In the largest community, mainly manufacturing sectors and heavily urbanized regions (Kanto, Tokyo, Chubu, and Kansai) are overexpressed.
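The polarization test of Eqs (5) and (6) amounts to comparing |P_ij| with a 2σ band of the random-orientation null model. A minimal sketch (illustrative names):

```python
from math import sqrt

def polarization(A_ij, A_ji):
    """Polarization ratio P_ij of Eq (5)."""
    return (A_ij - A_ji) / (A_ij + A_ji)

def is_significantly_polarized(A_ij, A_ji, n_sigma=2.0):
    """Compare |P_ij| with n_sigma * (1 / sqrt(L_ij)), the fluctuation
    scale of the random-orientation null model, Eq (6)."""
    L_ij = A_ij + A_ji
    return abs(polarization(A_ij, A_ji)) > n_sigma / sqrt(L_ij)
```

With L_ij = 100 the 2σ threshold is 0.2, so a 90-versus-10 split (P_ij = 0.8) is clearly polarized, while a 55-versus-45 split (P_ij = 0.1) is compatible with random orientation.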
The 2nd largest community shows that mainly the agriculture and food industries (see SI) and rural regions (Hokkaido, Tohoku, Shikoku, and Kyusyu-Okinawa) are overexpressed. In terms of overexpression in the 3rd largest community, the construction sector dominates, and the corresponding overexpressed regions indicate that these firms are mainly based in Kanto and Tokyo. The transport and wholesale and retail trade industries are the dominant attributes of the 4th largest community, and Tohoku, Kanto, and Chubu are the overexpressed regions. The 5th largest community mainly includes Tokyo, and the primary overexpressed sectors are information and communications, scientific research, and professional and technical services. The 6th largest community primarily includes medicine and health care. To summarize, the following characterizes the six largest communities:

• The largest community: manufacturing sectors
• The second largest community: food sectors
• The third largest community: construction sectors
• The fourth largest community: wholesale and retail trade
• The fifth largest community: the IT sector and scientific research, primarily based in Tokyo
• The sixth largest community: medical and health care

Fig 8 is a coarse-grained diagram of the network shown in Fig 2, where the 50 largest communities at the top level are represented by nodes, and the direct links connecting them, in either direction, are bundled into arrows.

Fig 7 caption (continued): Here, 51 major communities containing more than 1,000 firms are selected. The top figure plots the polarization ratio |P_ij| of the linkage between communities i and j versus the total number L_ij of its constituent links. The dashed curve shows the significance level corresponding to 2σ for the polarizability of an intercommunity linkage with the given total number of constituent links, where the random orientation of the individual links is adopted as a null model; see Eq (6) for the standard deviation σ. The bottom figure is a histogram of the frequency of intercommunity linkages in each bin of L_ij. The grey (black) bars depict the number of intercommunity linkages with a |P_ij| that is higher (lower) than the threshold for the test of statistical significance. https://doi.org/10.1371/journal.pone.0202739.g007

Hierarchical communities in the walnut structure of the Japanese production network
We used the following steps to prepare the diagram. We first calculated the center of mass for each of the IN, GSCC, and OUT components in three-dimensional space. The three centers thus obtained determine the two-dimensional plane for the drawing. Second, we fixed the horizontal axis to optimally represent the direction of flow from the IN components (left-hand side) to the OUT components (right-hand side) through the GSCC; in fact, the three centers are almost aligned horizontally. Then, we calculated the centers of mass of the major communities and projected them onto the two-dimensional plane to lay out the major communities on it. Finally, we connected these communities by arrows using information on the links between them. The positions of the communities on the horizontal line clearly reflect their characteristics in terms of the walnut structure, as shown in Table 4. Among the 6 largest communities, the 3rd community, which contains twice the average concentration of IN components, is on the leftmost side. On the other hand, the 6th community, with the largest OUT concentration, is on the rightmost side. The 2nd and 4th communities, which are dominated by OUT components, are also on the right-hand side. The 1st community, with excess GSCC components, is between the 3rd community and the OUT-excess communities. The 5th community, whose composition is very close to the average one, is rather in the middle of the walnut structure. Most of the remaining relatively small communities are localized on the left-hand side. This configuration is understandable, because the IN and GSCC components tend to form integrated communities, as will be shown later.

Level-2 communities

At the 2nd level, some of the top level communities are decomposed into several subcommunities, as shown in Tables D and E in S1 Appendix. The cumulative distribution of the community size at this level is plotted in Fig 9.
We use maximum likelihood estimation (MLE) [29] to fit a statistically significant power-law decay to the tail of the CCDF, which has the functional form D(s) ~ s^(−γ+1) with γ = 2.50 ± 0.02. The results indicate that the size of the communities is highly heterogeneous and spans several orders of magnitude. We also analyzed the overexpressions of selected subcommunities. In terms of subcommunities, we observe that wholesale and retail trade is the dominant overexpressed attribute of the five largest subcommunities of the largest community. The Kansai region is the only overexpressed region in the 2nd largest subcommunity of the largest community. In C2:1, transport and postal activities, accommodations, eating and drinking services, living-related and personal services, and amusement services dominate the overexpressed sectors, which are mainly based in urban regions (Tokyo and Chubu). The manufacturing and wholesale and retail trades in Tokyo and the Kansai region are overexpressed in C2:2. Wholesale and retail trade dominates the overexpressed attributes in C2:3, C2:4, and C2:5. A detailed account of the results is provided in S1 Appendix. The network diagram in Fig 10 shows the overlapping nature of the industrial sectors in the communities. We construct a weighted undirected network of 97 major sectors from the sector overexpression data for the 2nd modular level. Here, a weighted link of value 1 is formed between a pair of sectors if they are overexpressed in the same community. The link weights of the network are found to be highly heterogeneous, with a broad distribution as shown in Fig 11. The top five heaviest weighted links between the sectors are listed in Table 5. Fig 12 is the same plot as Fig 7, but this new plot includes communities at the 2nd modular level. We can confirm that the links between the subcommunities are well polarized.
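The tail exponent reported for Fig 9 can be estimated with the standard continuous power-law MLE (a Hill-type estimator in the style of Clauset et al. [29]). A sketch, with illustrative names and synthetic data in the test:

```python
import math

def powerlaw_mle(sizes, s_min):
    """Continuous power-law MLE for the tail s >= s_min:
    gamma_hat = 1 + n / sum(ln(s_i / s_min)),
    with standard error (gamma_hat - 1) / sqrt(n)."""
    tail = [s for s in sizes if s >= s_min]
    n = len(tail)
    gamma_hat = 1.0 + n / sum(math.log(s / s_min) for s in tail)
    return gamma_hat, (gamma_hat - 1.0) / math.sqrt(n)
```

In the full procedure of [29], s_min itself is chosen by minimizing the Kolmogorov-Smirnov distance between the data and the fitted model; the sketch above takes s_min as given.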
Once again, this result is consistent with the nature of the map equation, which extracts communities of nodes tightly connected in both directions in a directed network. Fig 13 shows, in a triangular diagram representation, how mixed the IN, OUT, and GSCC components of the walnut structure are in each of the large communities with more than 50 firms at the 2nd level. We exclude firms belonging to TE; however, these are minor components of the walnut structure. Here, 3,011 communities containing more than 50 firms are selected, for a total of 421,779 firms. Suppose that a community contains firms belonging to the IN, OUT, and GSCC components, for which the percentages are given by x1, x2, and x3, respectively. The walnut composition of the community is described by the point (x1, x2, x3) on the plane x1 + x2 + x3 = 1 in three-dimensional space. One can thereby establish a one-to-one correspondence between a point inside an equilateral triangle and a composition of the three walnut components. The triangular region is decomposed into six domains in reference to the average composition (x̄1, x̄2, x̄3): the communities in domain G (x1 < x̄1, x2 < x̄2, x3 > x̄3) are GSCC-dominant; those in IG (x1 > x̄1, x2 < x̄2, x3 > x̄3) are GSCC-IN hybrids; those in I (x1 > x̄1, x2 < x̄2, x3 < x̄3) are IN-dominant; those in IO (x1 > x̄1, x2 > x̄2, x3 < x̄3) are IN-OUT hybrids; those in O (x1 < x̄1, x2 > x̄2, x3 < x̄3) are OUT-dominant; and those in GO (x1 < x̄1, x2 > x̄2, x3 > x̄3) are GSCC-OUT hybrids. The total numbers of communities and firms in each domain are listed in Table 6. We observe that there are relatively few communities in the I domain and more communities in the IG domain. The IN components thus tend to combine with the GSCC components to form a single community. On the other hand, there are an appreciable number of communities dominated by the OUT components, leading to relatively few communities of IN-OUT and GSCC-OUT hybrids.
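The six-domain classification above is a simple comparison of each share with its network-wide average. A sketch (the function name is illustrative; the default averages are the values x̄1 = 0.174, x̄2 = 0.333, x̄3 = 0.493 reported in the Fig 13 discussion):

```python
def classify_walnut_domain(x1, x2, x3, xbar=(0.174, 0.333, 0.493)):
    """Assign a community with IN/OUT/GSCC shares (x1, x2, x3) to one of
    the six domains of Fig 13(b), by comparing each share with the
    network-wide averages (xbar1, xbar2, xbar3)."""
    above = (x1 > xbar[0], x2 > xbar[1], x3 > xbar[2])
    return {
        (False, False, True):  "G",   # GSCC-dominant
        (True,  False, True):  "IG",  # GSCC-IN hybrid
        (True,  False, False): "I",   # IN-dominant
        (True,  True,  False): "IO",  # IN-OUT hybrid
        (False, True,  False): "O",   # OUT-dominant
        (False, True,  True):  "GO",  # GSCC-OUT hybrid
    }.get(above)
```

Note that since both (x1, x2, x3) and the averages sum to 1, a community cannot lie strictly above (or strictly below) the average in all three shares at once, so the six cases cover all non-degenerate points.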
This tendency in the characteristics of the communities may reflect the industrial structure of Japan, which imports raw materials and produces a wide variety of goods out of them for both export and domestic consumption. We are also interested in what occurs in other countries. Once data on the production networks of other countries are available, we hope to compare their community characteristics with those of Japan. Although the IN components tend to merge with the GSCC, we can see a large circle at the IN vertex of Fig 13. On the other hand, Table 2 shows that most nodes in the IN component have a distance of 1 from the GSCC. Therefore, one may think that there is a large community almost purely composed of nodes in the IN component of the walnut shape (Fig 3). Indeed, this configuration indicates an interesting structure in which the nodes are mutually connected and simultaneously connected to nodes in the GSCC. It can quite precisely be said that such a community forms the shell of the walnut.

Table 6 caption (continued): "#com" and "#firms" refer to the total number of communities and firms, respectively, in each of the six domains defined in Fig 13(b). https://doi.org/10.1371/journal.pone.0202739.t006

Comparison of industrial sectors

As mentioned in the Introduction, detecting communities in the supply-chain network is crucial for understanding the agglomerative behavior of firms. This type of research is important because the detected communities are densely connected, and it is plausible that these firms affect each other through the links. On the other hand, industrial sectors are commonly used to label firms, and these labels are widely used in the economics literature. If there were no difference between the detected communities and the industrial sectors, then there would be no reason to make an effort to detect these communities.
Therefore, in this section, we show how the detected communities differ from industrial sectors in terms of the interconnections between the groups. Although different classifications are used for industrial sectors, we discuss the one used in the input-output table [30]. We use this classification because the input-output table is a major research domain in economics and, more importantly, because the purpose of the input-output table is to discuss money flows, which corresponds to the purpose of this paper. As previously mentioned, there are 209 communities at the 1st level and 66,133 communities at the 2nd level. On the other hand, the input-output tables have 13, 37, 108, 190, and 397 sectoral classifications, which are nested. We choose to compare the 209 communities with the 190 industrial sectors because these numbers are comparable. First, we counted the number of links between the communities and between the industrial sectors. Fig 14 shows the difference between these two groupings. The figures correspond to matrices that show the number of links between row groups and column groups, with each element divided by the sum of its row. If the intra-links within the groups are dominant, then the diagonal elements of these matrices should have high density. As shown in Fig 14, the diagonal elements stand out for the communities because they are denser than the other elements. However, the diagonal elements for the sectors are not densely linked; we see a vertical line in the matrix instead. The suppliers in this line include 5111: Wholesale and 5112: Retailing, a natural result because wholesale and retail firms sell products to all industrial sectors. The overall ratio of intra-links, i.e., (the number of intra-group links)/(the number of all links), is 20.9% for industrial sectors and 63.3% for communities.
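The intra-link ratio used for this comparison is a one-line count over the link list, given any labeling of nodes (communities or sectors). A sketch with illustrative names:

```python
def intra_link_ratio(edges, group):
    """(number of intra-group links) / (number of all links),
    where `group` maps each node to its community or sector label."""
    intra = sum(1 for u, v in edges if group[u] == group[v])
    return intra / len(edges)
```

Running this once with the community labels and once with the sector labels on the same link list yields the 63.3% versus 20.9% comparison reported above.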
We can conclude that the detected communities explicitly illustrate the agglomeration of firms based on the supply-chain network, rather than the industrial sectors that are more commonly used to categorize firms. This result also tells us that communities of densely connected firms consist of various industrial sectors and have their own economies, i.e., small universes. In this paper, we do not weight the links of the network. However, each transaction obviously has a value, and transactions are diverse. We can estimate the weights by using the sales of the firms. If the weighted analysis gave results totally different from those obtained here, a further analysis would be necessary. However, the additional analyses based on weighting the links of the network do not show any significant difference. The details of these results are shown in S1 Appendix: Intra-link density of the weighted links.

Conclusion and discussion

We analyze the overall structure and hierarchical communities embedded in the production network of one million firms and five million links that represent trade relationships in Japan in 2016, with the aim of simulating the macro/micro-level dynamics of the economy. For the former, we find that the IN and OUT components (20% and 26% of the firms) form tight shells (semi-spheres) around the GSCC component; we call this a "walnut" structure rather than a "bow-tie" structure, which is well known for representing web networks and other types of networks that have loose wings made of IN and OUT components. For the latter, we use the Infomap method to detect a hierarchy that includes 5 levels of communities, of which most of the irreducible ones (those that do not have any lower-level subcommunities) belong to the 2nd level. Furthermore, the size distribution of the 2nd level communities shows clear power-law behavior at the large end.
In addition to the large number of irreducible communities made primarily of GSCC components and those that exist in the IN or OUT shells, there is a fair number of communities made of IN and GSCC components, GSCC and OUT components, and even IN and OUT components. These communities are expected from the walnut shape of the overall structure: the IN and OUT components are not far from each other as they are in the bow-tie structure, but form tight shells whose ends are closely woven with each other. Furthermore, we examine the overexpression of the major communities in terms of industrial sectors and prefectures and find that they are not formed within a single sector but span several sectors and prefectures. These communities take various shapes: in some cases, they are formed around goods and services related to a particular item, such as food. Sometimes they are made of small firms connected to a major hub, such as a large construction company in a particular prefecture or a medical insurance agency. These findings have major implications for the study of the macro economy. Consider an economic crisis: once it starts, whether due to a natural disaster in a particular region of a country or a major failure of a large company, it is expected initially to affect the community in which that region or company is located. Then the effects of the crisis will spread to other neighboring communities. This analysis is very different from input-output analysis and is expected to be useful, because input-output analysis is based on the assumption that firms in the same sectors are well connected with each other. In contrast, what we find is that the effects of a crisis will spread through communities rather than industries.
The hierarchical community structure studied in this paper can be immediately applied to the analysis of large-scale modelling and simulation: the macro economy of a country or countries is an aggregation of products that economically affect the trade network as well as a multitude of networks of networks. Constructing models that span all the networks would be an interesting but exhausting elaboration of this work. Instead, we may study one community at a time and then connect the results to obtain an overall picture. Research in this direction has already begun and will appear in the near future [14,31,32].

Supporting information

S1 Appendix. Appendix to the manuscript. (PDF)

GWCC: the giant weakly connected component, i.e., the largest connected component when the network is viewed as an undirected graph. An undirected path exists between each arbitrary pair of firms in the component.

GSCC: the giant strongly connected component, i.e., the largest connected component when the network is viewed as a directed graph. A directed path exists between each arbitrary pair of firms in the component.

IN: the firms through which the GSCC is reached via a directed path.

Fig 1. Representative network of the automobile industry in Japan. Major firms are selected under the following conditions: i) they are connected to Toyota Motor within three degrees of separation, ii) they belong to either the manufacturing or wholesale sectors, iii) they are listed in the first section of the Tokyo Stock Exchange, and iv) they are in the top 40 in terms of sales. The firms thus selected are displayed as nodes, and the transactions between them are displayed as arrows. All of the displayed nodes belong to the GSCC component. The size of the nodes is scaled to the sales of the corresponding firm. The color of the nodes distinguishes their industry type; blue and green designate manufacturing and wholesale, respectively.
Fig 2. Visualization of the network in three-dimensional space. A surface view of the network is shown in panel (a), and a cross-sectional view cut through its center is shown in panel (b). The red, green, and blue dots represent firms in the IN, GSCC, and OUT components, respectively. https://doi.org/10.1371/journal.pone.0202739.g002

Fig 3. The walnut structure. The production network as a walnut structure. The area of each component is approximately proportional to its size. https://doi.org/10.1371/journal.pone.0202739.g003

Fig 4. Hierarchical structure of the communities. Five levels of hierarchical community decomposition are illustrated. The width of the triangle originating in each community at the n-th level is proportional to the number of its subcommunities at the (n + 1)-th level. https://doi.org/10.1371/journal.pone.0202739.g004

PLOS ONE | https://doi.org/10.1371/journal.pone.0202739 August 29, 2018

Fig 5. Hierarchical decomposition of the whole network into communities and subcommunities. Panel (a) highlights the 6 largest communities at the top modular level with different colors. Each of these communities is further decomposed into subcommunities, as demonstrated in panels (b) through (g), where the 6th largest subcommunities of the 1st through the 6th largest communities are highlighted. https://doi.org/10.1371/journal.pone.0202739.g005

Fig 6. The complementary cumulative distribution function D(s) of the community size s at the top modular level. https://doi.org/10.1371/journal.pone.0202739.g006
Fig 7. Polarizability of the direction of links interconnecting communities at the top level.

Table 4 caption (continued): "#subcom" is the total number of subcommunities included in each of the 1st level communities. The overexpression in terms of the regions and sector divisions of the six largest communities at the 1st level. The percentage of nodes having a particular attribute is indicated in parentheses. Those with less than 0.01 are not listed. In addition, the percentages of the IN, GSCC, and OUT components are listed for each community. https://doi.org/10.1371/journal.pone.0202739.t004

Fig 8. Network of the 50 largest communities at the top level. The major communities are depicted as nodes, and their size is scaled to the size of their corresponding communities. A bundle of directed links connecting a pair of nodes in either direction is represented by an arrow, the width of which is proportional to the total number of their links.

Fig 9. (color online) The complementary cumulative distribution function D(s) of a community with size s at the second modular level. A power-law fit to the data (red line) using the maximum likelihood estimation technique yields D(s) ~ s^(−γ+1) with γ = 2.50 ± 0.02, s_min = 28.2 ± 7.6, and p value = 0.976. https://doi.org/10.1371/journal.pone.0202739.g009

Fig 10. Overexpression network of sectors. The node size represents the percentage of firms belonging to that particular sector. https://doi.org/10.1371/journal.pone.0202739.g010

Fig 11. The complementary cumulative distribution of link weight in the overexpression network.
https://doi.org/10.1371/journal.pone.0202739.g011 walnut components. The averaged composition of all the firms in the selected communities (i.e., the total number of firms in the IN/OUT/GSCC components divided by the total number of firms in the selected communities) is given by "x 1 ¼ 0:174, " x 2 ¼ 0:333, and " x 3 ¼ 0:493. The triangular region in Fig 13 is then decomposed into six domains in reference to " Fig 12 . 12Polarizability of the direction of the links interconnecting communities at the second level. Here, 1086 communities containing over 100 firms are selected. The dashed curve represents the same significance level as in Fig 7. https://doi.org/10.1371/journal.pone.0202739.g012Fig 13. Triangular diagram classifying communities at the second level by their relationship with the walnut structure. Each community is depicted by a circle located at point (x, y) inside the equilateral triangle, which corresponds to the composition (x 1 , x 2 , and x 3 ) of firms belonging to the IN, OUT, and GSCC components that are represented in three-dimensional space; the one-to-one correspondence between (x, y) and (x 1 , x 2 , x 3 ) is illustrated in the associated figure (a). The size of the communities is reflected by the area of their associated circles. The triangular region is decomposed into six domains with the average composition (" x 1 , " x 2 , " x 3 ) of the IN, OUT, and GSCC components for all firms, as designated in the associated figure (b); see the text for more detailed information on the domain decomposition. https://doi.org/10.1371/journal.pone.0202739.g013 Fig 14 . 14Density of links over intergroups. These figures show how many links the intergroups have. The top figure (a) shows the 3D plots of the industrial sectors. The bottom figure (b) shows the 3D plots of the communities. 
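The power-law fit quoted in the Fig 9 caption is obtained by maximum likelihood estimation. For a continuous power law p(s) ∝ s^(−γ) on s ≥ s_min, the standard Clauset-Shalizi-Newman estimator is γ̂ = 1 + n [Σᵢ ln(sᵢ/s_min)]⁻¹. A minimal sketch of that estimator; the synthetic sample and the fixed s_min are illustrative assumptions, not the paper's firm data:

```python
import math, random

def powerlaw_mle(samples, s_min):
    """Continuous power-law MLE: gamma_hat = 1 + n / sum(log(s/s_min))."""
    tail = [s for s in samples if s >= s_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(s / s_min) for s in tail)

def ccdf(samples):
    """Empirical complementary cumulative distribution D(s) = P(S >= s)."""
    xs = sorted(samples)
    n = len(xs)
    return [(s, (n - i) / n) for i, s in enumerate(xs)]

# Synthetic sizes drawn from p(s) ~ s^(-2.5), s >= s_min, by inverse transform:
# P(S >= s) = (s / s_min)^(-(gamma - 1))  =>  s = s_min * u^(-1/(gamma - 1)).
rng = random.Random(42)
s_min, gamma_true = 28.2, 2.5
data = [s_min * (1 - rng.random()) ** (-1 / (gamma_true - 1)) for _ in range(20000)]

gamma_hat = powerlaw_mle(data, s_min)
print(round(gamma_hat, 2))  # close to 2.5
```

For the discrete community-size data of the original article, a discrete MLE and a goodness-of-fit p value (as quoted in the caption) would be used on top of this; the continuous estimator above only illustrates the principle.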
Table 1. Walnut structure: The sizes of the different components. "Ratio" refers to the ratio of the number of firms to the total number of the firms in the GWCC.
Component    #firms      Ratio (%)
GSCC         530,174     49.7
IN           219,927     20.6
OUT          278,880     26.2
TE            37,056      3.5
Total      1,066,037    100

Table 2. Walnut structure: The shortest distance from GSCC to IN/OUT. The left half shows the number of firms in the IN component that connects to the GSCC firms with the shortest distance 1-4; the other half shows the OUT component.

Table 3. Modular level statistics. Results of community detection using the multi-coding Infomap method. "#com" is the number of all the communities, "#irr.com" is the number of irreducible communities, which are communities that do not have any subcommunities, and "#firms" refers to the number of firms in irreducible communities.
Level    #com      #irr.com    #firms       Ratio (%)
1           209         106         830      0.078
2        65,303      60,603     998,267     93.643
3        18,271      17,834      61,748      5.792
4         1,544       1,539       5,168      0.485
5            10          10          24      0.002
Total                80,092   1,066,037    100.00

Table 4. Overexpressions of the 1st level communities. Columns: Index, Size, #subcom, Region, Sector.

Table 5. Top five heaviest weighted links between sectors. Columns: Rank, Node 1, Node 2.

Table 6. Classification of communities at the second level based on the walnut structure.
Domain    #com     #firms
G         1,010    114,399
IG          841     92,163
I           294     44,563
IO           80     14,362
O           640    139,986
GO          146     16,306
Total     3,011    421,779

Acknowledgments

We are grateful to Y. Ikeda, W. Souma and H. Yoshikawa for their insightful comments and encouragement. We are also grateful to Tokyo Shoko Research Ltd. and RIETI for making this research possible by providing us with the production network data.

Author Contributions
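The IN/GSCC/OUT classification used throughout Tables 1, 2 and 6 is the standard bow-tie decomposition of a directed graph: the GSCC is the largest strongly connected component, IN contains the nodes from which the GSCC can be reached, OUT the nodes reachable from it, and the remainder (TE) collects tendrils and disconnected pieces. A minimal sketch on a toy digraph; the graph itself is an illustrative assumption, not the supplier-customer data:

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Kosaraju's algorithm: strongly connected components of a digraph."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    seen, order = set(), []
    for s in nodes:                       # first pass: record finishing order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(fwd[s]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)
                stack.pop()
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(fwd[nxt])))
    comps, assigned = [], set()
    for s in reversed(order):             # second pass on the reversed graph
        if s in assigned:
            continue
        comp, stack = [], [s]
        assigned.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in rev[u]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        comps.append(set(comp))
    return comps

def reachable(srcs, adj):
    seen, stack = set(srcs), list(srcs)
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def bow_tie(nodes, edges):
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    gscc = max(sccs(nodes, edges), key=len)
    out = reachable(gscc, fwd) - gscc
    inn = reachable(gscc, rev) - gscc
    te = set(nodes) - gscc - inn - out    # tendrils and everything else
    return inn, gscc, out, te

# Toy graph: 1 and 2 feed the cycle 3-4-5, which feeds 6; 7 is isolated.
nodes = [1, 2, 3, 4, 5, 6, 7]
edges = [(1, 3), (2, 3), (3, 4), (4, 5), (5, 3), (5, 6)]
inn, gscc, out, te = bow_tie(nodes, edges)
print(inn, gscc, out, te)  # {1, 2} {3, 4, 5} {6} {7}
```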
[]
[]
[ "S Datta ", "A Jakovác \nInstitute of Physics\nBME Budapest\nH-1111BudapestHungary\n", "F Karsch ", "P Petreczky ", "\nBrookhaven National Laboratory\nUpton11973NYUSA\n", "\nBielefeld University\nD-33615BielefeldGermany\n" ]
[ "Institute of Physics\nBME Budapest\nH-1111BudapestHungary", "Brookhaven National Laboratory\nUpton11973NYUSA", "Bielefeld University\nD-33615BielefeldGermany" ]
[]
We discuss lattice results on the properties of finite momentum charmonium states in a gluonic plasma. We also present preliminary results for bottomonium correlators and spectral functions in the plasma. Significant modifications of χ b 0,1 states are seen at temperatures of 1.5 T c .
10.1063/1.2220179
[ "https://arxiv.org/pdf/hep-lat/0603002v1.pdf" ]
59,632
hep-lat/0603002
6ff83da40cb843bba94972123722545dc51b457c
arXiv:hep-lat/0603002v1 1 Mar 2006

Quarkonia in a deconfined gluonic plasma

S. Datta, A. Jakovác, F. Karsch, P. Petreczky
Institute of Physics, BME Budapest, H-1111 Budapest, Hungary; Brookhaven National Laboratory, Upton, NY 11973, USA; Bielefeld University, D-33615 Bielefeld, Germany

Keywords: quark-gluon plasma, quarkonia suppression. PACS: 11.15.Ha, 12.38.Gc, 12.38.Mh, 25.75.-q

Abstract. We discuss lattice results on the properties of finite momentum charmonium states in a gluonic plasma. We also present preliminary results for bottomonium correlators and spectral functions in the plasma. Significant modifications of χ_b0,1 states are seen at temperatures of 1.5 T_c.

Following the suggestion of Matsui and Satz [1] that J/ψ can act as a probe of deconfinement, heavy quarkonia in the context of relativistic heavy ion collisions have been extensively studied both theoretically and experimentally. Early, potential model based studies indicated that all charmonium bound states dissolve by temperatures ∼ 1.1 T_c [2]. But direct lattice studies over the last few years have concluded that while the excited χ_c states dissolve quite early in the plasma [3], the 1S states J/ψ and η_c survive till quite high temperatures [3,4], at least in a purely gluonic plasma (footnote 1). It has recently been claimed [6] that the observed J/ψ suppression at SPS and at RHIC is consistent with suppression of only the secondary J/ψ from excited state decays, in accordance with the lattice results. For understanding the mechanism of charmonia dissolution, as well as for phenomenological purposes, it is important also to know the effect of the plasma on a bound state in motion with respect to the plasma rest frame. This problem can be studied directly on the lattice, in ways similar to that of the bound states at rest [7]. We look at the momentum-projected Matsubara correlators

G(τ, p, T) = Σ_x e^{i p·x} ⟨J_H(τ, x) J_H†(0, 0)⟩_T    (1)

where J_H is a suitable mesonic operator, p the spatial momentum, T the temperature of the gluonic plasma, and the Euclidean time τ ∈ [0, 1/T). Through analytic continuation, the Matsubara correlator can be related to the hadronic spectral function by an integral equation:

G(τ, p, T) = ∫₀^∞ dω σ(ω, p, T) cosh(ω(τ − 1/(2T))) / sinh(ω/(2T)).    (2)

The 1S charmonia η_c and J/ψ at rest undergo little significant modification till temperatures of 1.5 T_c. However, the finite momentum correlators G(τ, p, T) show significant temperature dependence even earlier, and the medium modifications become stronger with increasing momentum. For p ∼ 1 GeV, significant modifications of the correlator are already seen at 1.1 T_c. The spectral function σ(ω, p, T), extracted from G(τ, p, T), shows a clear peak also at high momenta, but it is significantly modified from the zero temperature peak. Physically this can be understood as follows. A charmonium state moving in the plasma frame "sees" more energetic gluons, leading to an increase in its collisional width. An in-medium change of the energy-momentum dispersion relation is also possible. Due to paucity of space, we refer to Ref. [7] for further discussion of this, and turn here to bottomonia instead. The ϒ peak in the dilepton channel will be accessible to both RHIC and LHC, and may produce cleaner setups for plasma-related modifications, since normal nuclear modifications for bottomonia are expected to be small. On the other hand, the 1S states ϒ and η_b are very tightly bound, and even the potential model calculations estimated a very high dissolution temperature for them [2]. The behavior of 1P bottomonia is less clear: while the potential models predict a dissolution temperature close to T_c [2], they also suggest a size similar to the 1S charmonia for these states, and therefore one may expect a similar dissolution temperature [2].
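Equation (2) maps a spectral function to the Euclidean correlator through the thermal kernel K(τ, ω) = cosh(ω(τ − 1/(2T)))/sinh(ω/(2T)). A minimal numerical sketch of this forward map; the single-peak model spectral function and all parameter values are illustrative assumptions, not lattice results:

```python
import math

def kernel(tau, omega, T):
    """Thermal kernel relating sigma(omega) to G(tau) in Eq. (2)."""
    return math.cosh(omega * (tau - 1.0 / (2 * T))) / math.sinh(omega / (2 * T))

def correlator(sigma, tau, T, omega_max=20.0, n=4000):
    """G(tau) = integral_0^inf d(omega) sigma(omega) K(tau, omega), midpoint rule."""
    d = omega_max / n
    return sum(sigma(w) * kernel(tau, w, T) * d
               for w in (d * (i + 0.5) for i in range(n)))

# Model spectral function: one Lorentzian-broadened peak at "mass" M.
M, width, T = 3.0, 0.1, 0.75   # illustrative values in some common energy unit

def sigma(w):
    return (width / math.pi) / ((w - M) ** 2 + width ** 2)

G = [correlator(sigma, tau, T) for tau in (0.2 / T, 0.5 / T, 0.8 / T)]
# The kernel is symmetric about tau = 1/(2T), so G(0.2/T) matches G(0.8/T).
print(G)
```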
A recent study [9], on the other hand, has found modifications of χ_b close to T_c, unlike η_b. More than 40% of the total ϒ yield seen in hadronic collisions comes from decays of excited bottomonia, and an early dissolution or strong modification of χ_b will modify this contribution significantly. We studied bottomonia in the gluonic medium following the same methods used for the charmonia study in Ref. [3], and on the finest set of lattices used there. These lattices have a cutoff a⁻¹ = 9.72 GeV, which is somewhat coarse for bottomonia (footnote 2). This, therefore, should only be taken as a pilot study. We studied zero-momentum projected b̄Γb (point-point) correlators, where Γ = γ₅, γ_i, 1 and γ_iγ₅ for η_b, ϒ, χ_b0 and χ_b1, respectively. We extract σ(ω, T) from G(τ, T) using the "Maximum Entropy Method" [8], where the inversion of Eq. (2) is turned into a well-defined problem of finding the most probable spectral function given data and prior information for σ(ω, T). Very useful and robust conclusions about a possible change of state with deconfinement can also be obtained by comparing the correlators measured above T_c with G_recon,T*(τ, T), correlators reconstructed from the spectral function obtained at the smallest temperature below T_c (see Ref. [3] for details of our analysis method). If the spectral function is not modified with temperature, G(τ, T)/G_recon,T*(τ, T) = 1. A comparison of the measured correlators in the pseudoscalar and scalar channels with the reconstructed correlators is shown in Fig. 1(a). G(τ, T) for the pseudoscalar shows no significant modification for temperatures up to 2.25 T_c, indicating that η_b is essentially unmodified at these temperatures. The scalar correlator, on the other hand, shows large changes at long distances already at 1.5 T_c. The modification pattern in Fig. 1(a) is somewhat different from that seen in scalar charmonia, where the medium effect was seen to have set in at smaller distances and less abruptly.
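The G(τ, T)/G_recon,T*(τ, T) diagnostic can be mimicked numerically: build two model spectral functions, push both through the kernel of Eq. (2) at the same temperature, and take the ratio, which equals 1 exactly when the spectral function is unchanged and drifts away from 1 when the peak is shifted or broadened. All shapes and numbers below are illustrative assumptions, not lattice data:

```python
import math

def kernel(tau, omega, T):
    return math.cosh(omega * (tau - 1.0 / (2 * T))) / math.sinh(omega / (2 * T))

def correlator(sigma, tau, T, omega_max=20.0, n=4000):
    d = omega_max / n
    return sum(sigma(w) * kernel(tau, w, T) * d
               for w in (d * (i + 0.5) for i in range(n)))

def gaussian_peak(mass, width):
    """Normalized Gaussian peak used as a stand-in spectral function."""
    return lambda w: math.exp(-((w - mass) / width) ** 2 / 2) / (width * math.sqrt(2 * math.pi))

T = 0.8
sigma_low = gaussian_peak(3.0, 0.1)       # stand-in for the below-T_c spectral function
sigma_shifted = gaussian_peak(3.2, 0.3)   # shifted and broadened peak above T_c

taus = [i / (20 * T) for i in range(2, 10)]   # tau values in (0, 1/(2T))
ratio_same = [correlator(sigma_low, t, T) / correlator(sigma_low, t, T) for t in taus]
ratio_mod = [correlator(sigma_shifted, t, T) / correlator(sigma_low, t, T) for t in taus]
print(ratio_same[0], ratio_mod[-1])  # 1.0, and a value visibly departing from 1
```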
Figure 1(a) shows a qualitatively similar trend as in Ref. [9], but the deviation of the scalar correlator seen by us is much smaller than that seen in Ref. [9]. This could be due to the use of an anisotropic lattice in [9]. Figure 1(b) shows a comparison of the ground state peaks at 0.9 T_c and 1.5 T_c for the pseudoscalar (left) and scalar (right) channels. Plotted here is the dimensionless quantity ρ(ω, T) = σ(ω, T)/ω². As expected, the η_b peak shows no significant modification at 1.5 T_c. The χ_b0 peak, on the other hand, shows significant deviation, with a possible shift and broadening. The ϒ and χ_b1 show similar trends to η_b and χ_b0, respectively. It will be interesting to further study the modification of the χ_b peak, both in terms of the nature of the modification and its behavior at temperatures closer to T_c.

This manuscript has been authored under contract number DE-AC02-98CH1-886 with the US Department of Energy. Computations were done at NERSC, Berkeley and at NIC, Juelich. A.J. is supported by the Hungarian Science Fund OTKA (F043465).

FIGURE 1. (a) G(τ, T)/G_recon(τ, T) for b̄γ₅b and b̄b at p = 0. (b) Spectral function constructed from G(τ, T) using the maximum entropy method.

Footnotes. (1) Naively, one would not expect a much earlier dissolution due to dynamical quarks. A recent study in 2-flavor QCD [5] supports this expectation. (2) Ref. [9] uses an anisotropic lattice which is slightly finer in time, a_t⁻¹ = 10.89 GeV, but considerably coarser in space, a_s⁻¹ = 2.72 GeV. It also uses a different action, so the cutoff effects should be different.

References:
T. Matsui and H. Satz, Phys. Lett. B 178, 416 (1986).
F. Karsch and H. Satz, Z. Phys. C 51, 209 (1990).
S. Digal, P. Petreczky and H. Satz, Phys. Rev. D 64, 094015 (2001).
S. Datta et al., Nucl. Phys. Proc. Suppl. 119, 487 (2003); Phys. Rev. D 69, 094507 (2004).
M. Asakawa and T. Hatsuda, Phys. Rev. Lett. 92, 012001 (2004).
T. Umeda, K. Nomura and H. Matsufuru, Eur. Phys. J. C39S1, 9 (2005).
H. Iida et al., PoS LAT2005, 184 (2005) (hep-lat/0509129).
R. Morrin et al., PoS LAT2005, 176 (2005) (hep-lat/0509115).
F. Karsch, D. Kharzeev and H. Satz, hep-ph/0512239.
S. Datta et al., in: Proc. SEWM 2004 (hep-lat/0409147).
M. Asakawa, T. Hatsuda and Y. Nakahara, Prog. Part. Nucl. Phys. 46, 459 (2001).
K. Petrov et al., PoS LAT2005, 153 (2005) (hep-lat/0509138).
[]
[ "Leggett-Garg Inequality for a Two-Level System under Decoherence: A Broader Range of Violation", "Leggett-Garg Inequality for a Two-Level System under Decoherence: A Broader Range of Violation" ]
[ "Nasim Shahmansoori \nDepartment of Chemistry\nResearch Group on Foundations of Quantum Theory and Information\nSharif University of Technology\nP.O.Box 11365-9516TehranIran\n", "Afshin Shafiee \nDepartment of Chemistry\nResearch Group on Foundations of Quantum Theory and Information\nSharif University of Technology\nP.O.Box 11365-9516TehranIran\n\nSchool of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nP.O.Box19395-5531TehranIran\n" ]
[ "Department of Chemistry\nResearch Group on Foundations of Quantum Theory and Information\nSharif University of Technology\nP.O.Box 11365-9516TehranIran", "Department of Chemistry\nResearch Group on Foundations of Quantum Theory and Information\nSharif University of Technology\nP.O.Box 11365-9516TehranIran", "School of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nP.O.Box19395-5531TehranIran" ]
[]
We consider a macroscopic quantum system in a tilted double-well potential. By solving Hamiltonian equation, we obtain tunneling probabilities which contain oscillation effects. To show how one can decide between quantum mechanics and the implications of macrorealism assumption, a given form of Leggett-Garg inequality is used. The violation of this inequality occurs for a broader range of decoherence effects, compared to previous results obtained for two-level systems.
10.1142/s1230161218500038
[ "https://arxiv.org/pdf/1605.00394v3.pdf" ]
46,892,388
1605.00394
e972564b6b1db46e87d4d895fd596b65282a2307
Leggett-Garg Inequality for a Two-Level System under Decoherence: A Broader Range of Violation

Nasim Shahmansoori (Department of Chemistry, Research Group on Foundations of Quantum Theory and Information, Sharif University of Technology, P.O. Box 11365-9516, Tehran, Iran) and Afshin Shafiee (Department of Chemistry, Research Group on Foundations of Quantum Theory and Information, Sharif University of Technology, P.O. Box 11365-9516, Tehran, Iran; School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran)

PACS numbers: 03.65.Xp, 03.65.Ta, 03.65.Yz

Abstract. We consider a macroscopic quantum system in a tilted double-well potential. By solving the Hamiltonian equation, we obtain tunneling probabilities which contain oscillation effects. To show how one can decide between quantum mechanics and the implications of the macrorealism assumption, a given form of the Leggett-Garg inequality is used. The violation of this inequality occurs for a broader range of decoherence effects, compared to previous results obtained for two-level systems.

INTRODUCTION

Extrapolating the laws of quantum mechanics (QM) up to the scale of everyday objects means that objects composed of many atoms exist in quantum superpositions of macroscopically distinct states. In 1935, Schrodinger attempted to demonstrate the limitations of QM using a thought experiment in which a cat is put in a quantum superposition of alive and dead states [1]. The idea remained theoretical until the 1980s, when much progress was made in demonstrating the macroscopic quantum behavior of various systems such as superconductors [2-5], nanoscale magnets [6,7], laser-cooled trapped ions [8], photons in a microwave cavity [9] and C60 molecules [10].
A typical double-well potential system provides a unique opportunity to study the fundamental behavior of a macroscopic quantum system (MQS), such as macroscopic quantum tunneling and quantum coherence. In the context of a double-well potential, Schrodinger's cat describes a state in which one system simultaneously occupies both wells. There are also studies focused on decoherence effects in double-well potentials. Huang et al. [11] showed that decoherence due to the interactions of atoms with the electromagnetic vacuum can cause the collapse of Schrodinger cat-like states. Thermal effects [12] and dissipation [13] constitute other sources of decoherence and can suppress tunneling between the wells [14,15]. In addition, the double-well potential is used to describe special phenomena like the flipping of the ammonia molecule. The resulting quantum tunneling has been extensively applied in many branches of physics; for example, it appears in the dynamics of Bose-Einstein condensates, recent developments of ion-trap technology, and the theory of ultracold trapped atoms and its applications [16-20]. Such a situation brings to mind the question of how the everyday macroscopic world works. The Leggett-Garg inequality (LGI) provides a method to investigate the existence of macroscopic coherence and to test the applicability of QM as we scale from the micro- to the macro-world [21,22]. In this fashion, we can test the correlations of a single system measured at different times. Violation of an LGI implies either the absence of a realistic description of the system or the impossibility of measuring the system without disturbing it. QM violates different forms of LGIs on both accounts. A number of experimental tests and violations of these inequalities have been demonstrated in recent years [23,24]. Leggett and Garg initially proposed an rf-SQUID flux qubit as a promising system to test their inequalities [22], a proposal later improved by Tesche [25].
The first measured violation of a type of LGI was reported by Palacios-Laloy and coworkers [26]. Palacios-Laloy et al. found that an LGI was violated by their qubit, albeit with a single data point, with the conclusion that their system could not admit a realistic, non-invasively-measurable description. Recently, several experimental tests of LGIs were implemented, all of which confirm the predicted violations in accordance with the fundamental laws of QM [26-33]. Most of these experiments used weak measurements, where the effects of the measurement back-action in a sequential setup are minimized [34]. In this article we calculate the so-called time correlations in a tilted double-well potential. To do this, we consider the effect of the environment as a perturbation on the system. Then, we use the time correlations to test a given type of LGI. For a symmetric double-well potential considered as a two-level quantum system undergoing coherent oscillations between the two states, it has been shown that QM violates different forms of LGI [35]. Moreover, no violation occurs when strong decoherence is at work. According to our calculations for a tilted double-well potential, however, it is possible to see the violation even for significant effects of decoherence. The structure of our paper is as follows. In section 2, we focus on a tilted double-well model, introduce its Hamiltonian, and consider the effects of the environment on it. Then we calculate the tunneling probabilities to obtain time correlations. In section 3, we show the violation of a given LGI under decoherence. Finally, in section 4, we conclude the results.

TILTED DOUBLE-WELL POTENTIAL

We consider a typical tilted double-well, whose asymmetric form is measured by the parameter ε. Here |−⟩ (|+⟩) denotes the state in which the macrosystem is localized in the left (right) well; these states also describe the ground states in the left and right wells, respectively.
The macroscopic feature of the system is identified by dimensionless equations. We define dimensionless variables for the momentum p, the position q, the Hamiltonian Ĥ_S, the potential V(q) and the time t through the following relations ([35], p. 18):

q := R/R₀ ;  t := t_conv/τ₀ ;  Ĥ_S := Ĥ_S,conv/U₀ ;  V(q) := U(R)/U₀    (1)

where

τ₀ := R₀ (M/U₀)^(1/2) ;  U₀ = M (R₀/τ₀)²    (2)

Here R₀ and U₀ are the characteristic length and the characteristic energy of the system, respectively. Also, τ₀ is the characteristic time, estimated by the time needed for a particle of mass M to pass the distance R₀ with kinetic energy of the order of U₀. Regarding the relations (1) and (2), one can define a new dimensionless parameter h̃ by measuring Planck's constant in units of the action U₀τ₀:

h̃ = ħ/(P₀R₀)    (3)

The new constant h̃ quantitatively shows the macroscopic behavior of the system: for smaller values of h̃, the situation is more quasi-classical. Yet, to detect the quantum tunneling effect, h̃ shouldn't be too small. Considering a tilted double-well potential, we assume that the value of h̃ is about 0.1 to support the macroscopic quantum trait of the system in a quasi-classical situation. When the macrosystem is isolated from its environment, it can be described effectively by the following Hamiltonian:

H = (h̃/2) ( δ   ∆
            ∆  −δ )    (4)

where δ = (E₋ − E₊)/h̃ is a measure of the tilt ε, and ∆ is a measure of the strength of the tunneling between the two wells. The eigenvalues of the Hamiltonian (4) are ±(h̃/2)(δ² + ∆²)^(1/2), and its eigenstates are

|0⟩ = (1/(∆² + B²))^(1/2) (∆|+⟩ + B|−⟩) ,  B = (∆² + δ²)^(1/2) + δ    (5a)
|1⟩ = (1/(∆² + A²))^(1/2) (∆|+⟩ − A|−⟩) ,  A = (∆² + δ²)^(1/2) − δ    (5b)

For our next purposes, we rewrite the eigenstates (5a) and (5b) as

|0⟩ = −cos θ |+⟩ + sin θ |−⟩    (6a)
|1⟩ = sin θ |+⟩ + cos θ |−⟩    (6b)

where sin θ = (B/(A + B))^(1/2) and cos θ = (A/(A + B))^(1/2).
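The algebra behind Eqs. (4)-(6) can be checked numerically: the identity AB = ∆² holds, the states (5a)-(5b) are orthonormal, and the eigenvalues of the matrix in (4) are ±(1/2)(δ² + ∆²)^(1/2) in units of h̃. A minimal sketch with illustrative values of δ and ∆; the reading sin θ = (B/(A+B))^(1/2), cos θ = (A/(A+B))^(1/2) is what normalization requires:

```python
import math

delta, Delta = 0.6, 1.3   # illustrative tilt and tunneling strengths, h-tilde = 1

root = math.sqrt(Delta**2 + delta**2)
A, B = root - delta, root + delta
assert abs(A * B - Delta**2) < 1e-12                    # A*B = Delta^2

# States (5a) and (5b) written as coefficient pairs in the basis {|+>, |->}.
ket0 = (Delta / math.sqrt(Delta**2 + B**2), B / math.sqrt(Delta**2 + B**2))
ket1 = (Delta / math.sqrt(Delta**2 + A**2), -A / math.sqrt(Delta**2 + A**2))
for v in (ket0, ket1):
    assert abs(v[0]**2 + v[1]**2 - 1) < 1e-12           # normalized
assert abs(ket0[0]*ket1[0] + ket0[1]*ket1[1]) < 1e-12   # orthogonal

# Eigenvalues of (1/2)[[delta, Delta], [Delta, -delta]] from trace and determinant:
# lambda = +/- sqrt(tr^2/4 - det) with tr = 0, det = -(delta^2 + Delta^2)/4.
tr, det = 0.0, -0.25 * (delta**2 + Delta**2)
lam = math.sqrt(tr**2 / 4 - det)
assert abs(lam - root / 2) < 1e-12

# Mixing angle of Eq. (6) is consistent: sin^2 + cos^2 = 1.
sin_t, cos_t = math.sqrt(B / (A + B)), math.sqrt(A / (A + B))
assert abs(sin_t**2 + cos_t**2 - 1) < 1e-12
print("checks passed")
```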
Also we define

|+⟩ = a|0⟩ + b|1⟩,   (7a)
|−⟩ = a′|0⟩ + b′|1⟩,   (7b)

where a = −cos θ, b = sin θ, a′ = sin θ and b′ = cos θ. One can show that the probability of tunneling from the left to the right well is

P_+ = (∆²/(∆² + δ²)) sin²[(∆² + δ²)^{1/2} t/2],   (8)

which is independent of h̃ and contains oscillation effects. Nevertheless, to deal with real systems, the inevitable effects of the environment should be considered. So, in order to retain the oscillation effects and therefore the macroscopic quantum coherence, we treat the effects of the environment as a kind of perturbation on the system. For the Hamiltonian of the system, Ĥ_s, we have

Ĥ_s|n⟩ = E_n|n⟩;  E_0 < E_1 < E_2 < ...,   (9)

where {|n⟩} denotes the complete set of eigenstates of Ĥ_s with n = 0, 1, 2, ... and E_n is the energy of the system. We also define |*⟩ and ε_* as the energy eigenstates and the energy eigenvalues of the environment, respectively. Here,

Ĥ_ε|*⟩ = ε_*|*⟩,  Ĥ_ε|vac⟩ = 0,   (10)

and |vac⟩ is the vacuum state. The state of the entire system can be written as |n, *⟩ ≡ |n⟩|*⟩. The environment is assumed to be a bosonic field: the ground state of Ĥ_ε is |vac⟩ and |α⟩ = b†_α|vac⟩ is the state with a single boson α. The state |n, vac⟩ is an eigenstate of Ĥ_0 = Ĥ_s + Ĥ_ε with energy E_n. We define δE_n as the corresponding shift due to the perturbation by the interaction Hamiltonian Ĥ_sε between the system and its environment. Then, stationary perturbation theory gives

δE_n ≈ δE_n^(1) + Σ_{m,*} |⟨m, *|Ĥ_sε|n, vac⟩|² / (E_n − (E_m + ε_*)),   (11)

where δE_n^(1) = ⟨n, vac|Ĥ_sε|n, vac⟩ is the first-order contribution. In terms of the frequency distribution of the environmental oscillators J(ω), one can rewrite (11) as ([35], Ch. 6)

δE_n = (1/π) Σ_m |f_mn|² Ω_mn P ∫_0^∞ (dω/ω) J(ω)/(ω + Ω_mn).   (12)

Similarly, we have

Γ_n = (2/ħ) Σ_m |f_mn|² J(Ω_nm) θ(Ω_nm),   (13)

where Γ_n^{−1} is the lifetime of the shifted energy E_n + δE_n.
The symbol P in relation (12) denotes the principal value and θ(Ω_nm) is a step function, which indicates that the macrosystem, initially in an excited state, is allowed to make transitions to the lower states only. We also define f_mn = ⟨m|f(q)|n⟩, where f(q) is an arbitrary function of q depending on how the macrosystem exerts force on the environmental oscillators, and Ω_nm = (E_n − E_m)/ħ. Here, Ω_10 = ∆. Supposing that the macrosystem is initially in the state |−, vac⟩, we investigate the time evolution of the entire system with perturbation theory, which indicates the preservation of the macroscopic quantum coherence. To do so, we calculate the probability of finding the macrosystem in each well. This can be defined as

P_+ = |⟨+|Ψ(t)⟩|²,   (14)

where |Ψ(t)⟩ is the quantum state of the entire system at time t:

|Ψ(t)⟩ = Σ_n |n⟩⟨n| e^{−iĤ_0 t/ħ} Û_I |Ψ(0)⟩.   (15)

Here, Û_I is the time-evolution operator in the interaction picture, given by

Û_I(t) = exp(iĤ_0 t/ħ) exp(−iĤt/ħ).   (16)

The relation (15) can be written in the following form:

|Ψ(t)⟩ = Σ_n e^{−iE_n t/ħ} |n⟩|χ_n(t)⟩,   (17)

where n = ± and |χ_n(t)⟩ = ⟨n|e^{−iĤ_ε t/ħ} Û_I |Ψ(0)⟩. Hence, we have

P_+ = ⟨χ_+(t)|χ_+(t)⟩.   (18)

The time-evolution operator Û_I can be expanded up to second order in the interaction Hamiltonian Ĥ_sε as

Û_I(t) ≈ 1 − (i/ħ) ∫_0^t dt_1 Ĥ_sε(t_1) − (1/ħ²) ∫_0^t dt_2 ∫_0^{t_2} dt_1 Ĥ_sε(t_2) Ĥ_sε(t_1),   (19)

where Ĥ_sε(t) = e^{iĤ_0 t/ħ} Ĥ_sε e^{−iĤ_0 t/ħ}. In (19), Û_I(t) contains the following terms:

Û_vac(t) = −(i/ħ) ∫_0^t δV(t_1) dt_1 − (1/2ħ) Σ_α ∫_0^t dt_2 ∫_0^{t_2} dt_1 f̃_α(t_2) e^{−i(t_2−t_1)ω_α} f̃_α(t_1),   (20)

Û_α(t) = (i/√(2ħ)) ∫_0^t dt_1 e^{iω_α t_1} f̃_α(t_1),   (21)

Û_αβ(t) = −(1/2ħ) ∫_0^t dt_2 ∫_0^{t_2} dt_1 f̃_β(t_2) e^{−iω_β t_2 + iω_α t_1} f̃_α(t_1),   (22)

where δV(t) = (1/2) Σ_α ω_α² (f_α(q(t)))² and ω_α is the frequency of the oscillator α in the environment. Assuming that the environmental oscillator α is displaced by f_α(q), we use the separable model in which f_α(q)/γ_α is independent of α, i.e., f_α(q) = γ_α f(q), where γ_α is a positive constant.
All the time-dependent operators δV(t), f̃_α(t), ... in (20) to (22) are also defined in the interaction picture. Using the relations (20)–(22) one can show that

|χ_+(t)⟩ = ⟨+|e^{−iĤ_ε t/ħ} Û_I(t)|Ψ(0)⟩
        = |vac⟩⟨+|Û_vac|ψ_s(0)⟩ + Σ_α e^{−iω_α t}|α⟩⟨+|Û_α|ψ_s(0)⟩ + Σ_{αβ} e^{−i(ω_α+ω_β)t}|αβ⟩⟨+|Û_αβ(t)|ψ_s(0)⟩.   (23)

If |ψ_s(0)⟩ = |−⟩, we get

P_+ = ⟨χ_+|χ_+⟩ = a²a′²|⟨0|Û_vac|0⟩|² + b²b′²|⟨1|Û_vac|1⟩|² + 2aa′bb′ ℜ[⟨0|Û_vac|0⟩* ⟨1|Û_vac|1⟩] + a′²b² Σ_α |⟨1|Û_α|0⟩|²,   (24)

where ℜ denotes the real part. We have also used the relations (7a) and (7b) for the states |±⟩. For the symmetric double-well the following two assumptions can be made:

A1: The potential term V(q), and hence δV(q), is even. Then all elements ⟨m|Û_vac|n⟩ = 0 when m − n is odd.

A2: The function f(q) is odd, so all elements ⟨m|Û_α|n⟩ = 0 when m − n is even.

For the tilted double-well, our calculations show that the elements ⟨0|Û_vac|1⟩, ⟨0|Û_α|0⟩ and ⟨1|Û_α|1⟩ are again zero, similar to what follows from A1 and A2 in the symmetric model. The detailed results are given in Appendix A. There are also some other assumptions appropriate in our case:

A3: The higher orders of f_α² can be neglected, so Û_αβ ≈ 0.

A4: The frequency distribution of the environment can be taken as ohmic. This means that J(∆) = η∆, where η is a measure of the strength of the interaction between the macrosystem and the environment (η ≪ ħ). With Ω_10 = ∆, relation (13) then gives Γ_1 = (2/ħ)|f_01|² J(∆) ≈ (2η/ħ)|f_01|² ∆ ≪ ∆. Also Γ_0 = 0. Here Ω̃_10 = Ω_10 + (δE_1 − δE_0)/ħ is the shifted oscillation frequency. This result shows that there is a decay factor e^{−Γ_1 t/2} in the term containing cos(Ω̃_10 t), which reduces the strength of the oscillation due to the decoherence (dissipation) effects. In order to diminish the effect of e^{−Γ_1 t/2}, we consider the principal time domain, which requires that Γ_1 t ≪ 1 [35]. With the same approach, we can calculate the other probabilities.
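As an aside, the environment-free oscillation formula (8) can be checked numerically against direct evolution under the two-level Hamiltonian (4). This is an illustrative sanity check, not part of the paper; the values of δ, ∆ and the time points are arbitrary, and ħ is set to 1.

```python
import numpy as np

def tunneling_probability(delta, Delta, t):
    """Closed-form left-to-right tunneling probability of Eq. (8)."""
    omega2 = delta**2 + Delta**2
    return (Delta**2 / omega2) * np.sin(np.sqrt(omega2) * t / 2.0)**2

def tunneling_probability_exact(delta, Delta, t):
    """|<+| exp(-iHt) |->|^2 with H = (1/2)[[delta, Delta], [Delta, -delta]]
    in the {|+>, |->} basis (hbar = 1), via eigendecomposition."""
    H = 0.5 * np.array([[delta, Delta], [Delta, -delta]])
    w, V = np.linalg.eigh(H)                       # H is real symmetric
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.T     # exp(-iHt)
    plus = np.array([1.0, 0.0])
    minus = np.array([0.0, 1.0])
    return abs(plus @ U @ minus)**2
```

Both routes agree; for δ = 0 the oscillation amplitude reaches 1, and a finite tilt δ suppresses it to ∆²/(∆² + δ²), as (8) states.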
For example, when the macrosystem is initially in the state |+⟩, the probability that it will be found in the state |−⟩ at time t is denoted by P_{+→−}. Taking into account the other probabilities P_{+→+} and P_{−→−}, one can show that

P_{+→−} = cos²θ − cos²θ cos 2θ e^{−Γ_1 t} − 2 sin²θ cos²θ cos(Ω̃_10 t) e^{−Γ_1 t/2},   (26)
P_{+→+} = cos²θ − sin²θ cos 2θ e^{−Γ_1 t} + 2 sin²θ cos²θ cos(Ω̃_10 t) e^{−Γ_1 t/2},   (27)
P_{−→−} = sin²θ + cos²θ cos 2θ e^{−Γ_1 t} + 2 sin²θ cos²θ cos(Ω̃_10 t) e^{−Γ_1 t/2}.   (28)

VIOLATION OF LEGGETT-GARG INEQUALITY UNDER DECOHERENCE

In order to show the detection of a cat state, the experimental results in question should be compatible with QM but incompatible with macrorealism (MR). The assumption of MR demands, first, that one can assign definite states to a macrosystem, so that it is actually in one of these states independent of any observation. Second, it requires the non-invasive measurability of such macrostates, which should not be affected when they are measured. LGI serves to examine quantitatively whether theories satisfying MR are compatible with QM or not. For this, we use the following LGI:

K_1 ≡ |C_32 − C_31| + C_21 ≤ 1,   (29)

where the time-correlation function of the two-valued variables r and q (r, q = ±1) at three moments of time t_3 > t_2 > t_1 is defined, for the time sequences (i, j) = (3, 2), (3, 1), (2, 1), as

C_ij = Σ_{r,q=±1} r q P_{r t_i, q t_j}.   (30)

For the symmetric double-well potential, and for any other two-level system studied, these calculations show a maximum violation of K_1 = 3/2 when the effect of decoherence is negligible [22]. Now, let us assume that

t_3 − t_2 = t_2 − t_1 = τ/Ω̃_10,  γ = Γ_1/Ω̃_10,  z = e^{−γτ}.   (31)

Then the estimate of the maximum value of γ for which LGI is still violated gives γ = 0.31 [35, Ch. 9]. We also choose τ = π/3, so that cos[Ω̃_10(t_2 − t_1)] = 1/2.
Then we have

K_1 = |P_{+t_3|+t_2}P_{+t_2} + P_{−t_3|−t_2}P_{−t_2} − P_{+t_3|−t_2}P_{−t_2} − P_{−t_3|+t_2}P_{+t_2} − (P_{+t_3|+t_1}P_{+t_1} + P_{−t_3|−t_1}P_{−t_1} − P_{+t_3|−t_1}P_{−t_1} − P_{−t_3|+t_1}P_{+t_1})| + P_{+t_2|+t_1}P_{+t_1} + P_{−t_2|−t_1}P_{−t_1} − P_{+t_2|−t_1}P_{−t_1} − P_{−t_2|+t_1}P_{+t_1},   (32)

where, e.g., P_{+t_3|+t_2} = P_{+t_2→+t_3} is the conditional probability that, given the macrosystem is in the state |+⟩ at t_2, it is found in |+⟩ at t_3 (t_3 > t_2). Generally, we have P_{rt_i, qt_j} = P_{qt_j|rt_i} P_{rt_i} by Bayes' rule, and P_{rt_i} is the single-variable probability of the system being in the state |r⟩ at t_i (i = 1, 2, 3). The conditional probabilities are given in relations (25) to (28), albeit without time labels. Let us suppose that the macrosystem is initially in the state |−⟩, so that P_{+t_1} = 0. Accordingly, P_{+t_2} is obtained from

P_{+t_2} = P_{+t_2|+t_1} P_{+t_1} + P_{+t_2|−t_1} P_{−t_1} = P_{+t_2|−t_1}.   (33)

Taking the above considerations into account and using the relations (25) to (28), one finds that

K_1 = |(sin²θ − cos²θ + 2 cos²θ cos 2θ z + 2 sin²θ cos²θ z^{1/2})(sin²θ + cos²θ cos 2θ z + sin²θ cos²θ z^{1/2})
 + (cos²θ − sin²θ − 2 sin²θ cos 2θ z + 2 sin²θ cos²θ z^{1/2})(cos²θ − cos²θ cos 2θ z − sin²θ cos²θ z^{1/2})
 − (sin²θ − cos²θ + 2 cos²θ cos 2θ z² − 2 sin²θ cos²θ z)|
 + sin²θ − cos²θ + 2 cos²θ cos 2θ z + 2 sin²θ cos²θ z^{1/2}.   (34)

If we take sin²θ = 0.2 and cos²θ = 0.8, the inequality is maximally violated at z = 1. This situation is analogous to negligible decoherence. More generally, the important result is that for 0.5 < z < 1 the inequality is violated. This yields 0 < γ < 0.66, a broader range of violation compared to γ = 0.31 for the symmetric double-well potential and/or other proposed two-level systems [35][36][37]. In Fig. 1, K_1 of (34) is plotted against z for θ = 26.6°. It is evident that K_1 increases as z increases from 0 to 1.
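As a quick numerical check (illustrative, not part of the paper), the right-hand side of (34) can be evaluated for sin²θ = 0.2 and a scan over the decay factor z; the scan is consistent with the claims that K_1 exceeds 1 at z = 1 and that the violation persists down to z ≈ 0.5.

```python
import numpy as np

def K1(z, s2=0.2):
    """Evaluate Eq. (34) at decay factor z, with s2 = sin^2(theta)."""
    c2 = 1.0 - s2          # cos^2(theta)
    c2t = c2 - s2          # cos(2 theta)
    sc = s2 * c2           # sin^2(theta) cos^2(theta)
    zr = np.sqrt(z)
    A = s2 - c2 + 2*c2*c2t*z + 2*sc*zr          # appears twice in (34)
    B = s2 + c2*c2t*z + sc*zr
    C = c2 - s2 - 2*s2*c2t*z + 2*sc*zr
    D = c2 - c2*c2t*z - sc*zr
    E = s2 - c2 + 2*c2*c2t*z**2 - 2*sc*z        # the C_31 contribution
    return abs(A*B + C*D - E) + A

# K1(1.0) = 1.32 > 1 (negligible decoherence), while K1 drops below 1
# once z falls well under 0.5 (strong decoherence).
```

The factoring into A, ..., E simply names the five parenthesized groups of (34) to keep the expression readable.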
In Fig. 2, K_1 is plotted against sin²θ for z = 0.6 (upper curve), z = 0.5 (middle curve) and z = 0.4 (lower curve).

CONCLUSION

Considering a macrosystem prepared in a quasi-classical situation and described by a tilted double-well potential, we studied the effect of the environment as a perturbation source. In this regime, the decoherence (dissipative) effects are reduced according to the so-called principal time domain, in which t ≪ Γ_1^{−1} in (13) for n = 1. Calculations of the tunneling probabilities show that the coherence effects are present in spite of the interaction with the environment. To decide between the predictions of QM and the requirements of MR, a type of LGI (K_1 ≤ 1) is examined in (29), when decoherence is assumed to be present but not dominant. The violation of this inequality shows that the quantum behavior of a macrosystem can persist in more realistic situations. Thus, the key parameter γ (characterizing the effect of dissipation) in (31) is improved from γ = 0.31 in previous works to γ = 0.66. This improvement is crucial for showing the violation of LGIs in proposed future experiments. When the classical trait of the system is increased, which corresponds to larger values of γ in (31), the assumption of non-invasive measurement becomes more determinate. This means that, at the macro-level, the time correlations could be assumed to be obtained from higher time-ordered probabilities [35]. According to the quantum calculations, however, this must be denied, since no three-variable joint probability can be defined for our model in the quantum formalism from which one could obtain the two-variable time correlations. So, for broader ranges of violation due to increased values of γ (γ ∼ Γ_1 ∼ h̃^{−1}), which indicate more classicality of the system, the violation of the Leggett-Garg inequality signals the violation of non-invasive measurability of the system in a more concrete fashion. Yet, it remains an open question whether the same violation can be confirmed in practice.
If so, the consistency of MR with an invasive-measurement account should be considered more seriously.

Appendix A

We calculate ⟨0|Û_vac|1⟩ and ⟨0|Û_α|0⟩ here to show that they are approximately equal to zero, even for non-symmetric double-well potentials. First, for ⟨0|Û_vac|1⟩ we have

⟨0|Û_vac|1⟩ = −(i/ħ)⟨0|δV(t)|1⟩ − (1/2ħ)⟨0|g|1⟩,   (A-1)

where g is defined as

g = Σ_α ∫_0^t dt_2 ∫_0^{t_2} dt_1 f̃_α(t_2) e^{−i(t_2−t_1)ω_α} f̃_α(t_1).   (A-2)

For the first term, one can show that

(1/2) Σ_α ω_α² ⟨0|f_α²|1⟩ = (1/2) Σ_{α, m=0,1} ω_α² ⟨0|f_α|m⟩⟨m|f_α|1⟩
 = (1/2) Σ_α ω_α² (⟨0|f_α|0⟩⟨0|f_α|1⟩ + ⟨0|f_α|1⟩⟨1|f_α|1⟩)
 = (t/π) ((f_00 + f_11)/(f_01 Ω_10)) Γ_1,   (A-3)

which is negligible because Γ_1 t/Ω_10 ≪ 1. The second term is zero, because carrying out the t_1 and t_2 integrations produces denominators of the form ω_α and Ω_10 + ω_α, and the integrals take meaningful values only when these denominators vanish (i.e., Ω_10 + ω_α = 0), which is impossible since ω_α > 0 and Ω_10 > 0; hence the entire term vanishes:

−(1/2ħ) Σ_{α,m} ∫_0^t dt_2 ∫_0^{t_2} dt_1 ⟨0|f_α(t_2)|m⟩⟨m|f_α(t_1)|1⟩ e^{−iω_α(t_2−t_1)} ≈ 0.   (A-4)

For ⟨0|Û_α|0⟩, one can show that

⟨0|Û_α|0⟩ = (i/√(2ħ)) ∫_0^t dt_1 ⟨0|f_α(t_1)|0⟩ e^{iω_α t_1} = (2πi/√(2ħ)) γ_α f_00 D_1(ω_α; t) e^{iω_α t/2},   (A-5)

where

D_2(ω; t) = (1/2πt) [sin(ωt/2)/(ω/2)]².   (A-8)

We work in the principal time domain, for which D_2(ω; t) ∼ δ(ω). So the relation (A-7) is equal to zero, since J(0) = 0, and the term ⟨0|Û_α|0⟩ can be neglected.
The same situation holds true for the element ⟨1|Û_α|1⟩, with relations similar to those for ⟨0|Û_α|0⟩. Here

D_1(ω; t) = (1/2π) sin(ωt/2)/(ω/2).

For Σ_α |⟨1|Û_α|0⟩|², we have

Σ_α |⟨1|Û_α|0⟩|² = (tπ/ħ) f_10² ∫_0^∞ dω J(ω) D_2(ω − Ω_10) = (t/ħ) f_10² J(Ω_10) = Γ_1 t ≈ 1 − e^{−Γ_1 t},   (B-4)

where D_2(ω − Ω_10) ∼ δ(ω − Ω_10). All the terms produced by multiplying the terms containing Û_α are zero, because there is a factor Γ_1/Ω_10 in all of them. There is also one non-zero cross term:

ℜ[⟨0|Û_vac|0⟩* ⟨1|Û_vac|1⟩] = e^{−Γ_1 t/2} cos(Ω̃_10 t).   (B-5)

Finally, we obtain the tunneling probability quoted in the main text. All the terms with a factor Γ_1/∆ are negligible in our calculations.

A5: The distribution J(Ω_mn) is always positive. Thus J(0) = 0 and J(−∆) = 0, where Ω_01 = −∆.

With all these assumptions in mind, the tunneling probability is obtained as (see Appendix B)

P_{−→+} = sin²θ + sin²θ cos 2θ e^{−Γ_1 t} − 2 sin²θ cos²θ cos(Ω̃_10 t) e^{−Γ_1 t/2}.   (25)

FIG. 1: The amount of K_1 vs. z for sin²θ = 0.2.

FIG. 2: The amount of K_1 vs. sin²θ for three different values of z = 0.6 (upper curve), z = 0.5 (middle curve) and z = 0.4 (lower curve).

Appendix B

Here, we calculate the term P_{−→+} as an instance; the other probabilities can be obtained in the same way. We first need to calculate some terms and then insert them into the main formula. The terms ⟨0|Û_vac|0⟩, ⟨1|Û_vac|1⟩, ⟨1|Û_α|0⟩ and ⟨0|Û_α|1⟩ are calculated in [35]. So we have

⟨m|Û_α|n⟩ = (i/√(2ħ)) γ_α f_mn D_1(ω_α + Ω_mn; t) e^{i(Ω_mn+ω_α)t/2}.   (B-3)

* [email protected]
† Corresponding Author: [email protected]

[1] E. Schrödinger, Naturwissenschaften 23, 807 (1935).
[2] R. Rouse, S. Han and J.E. Lukens, Phys. Rev. Lett. 75, 1614 (1995).
[3] J. Clarke, A.N. Cleland, M.H. Devoret, D. Esteve and J.M. Martinis, Science 239, 992 (1988).
[4] P. Silvestrini, V.G. Palmieri, B. Ruggiero and M. Russo, Phys. Rev. Lett. 79, 3046 (1997).
[5] Y. Nakamura, Y.A. Pashkin and J.S. Tsai, Nature 398, 786 (1999).
[6] J.R. Friedman, M.P. Sarachik, J. Tejada and R. Ziolo, Phys. Rev. Lett. 76, 3830 (1996).
[7] E. del Barco, J.M. Hernandez, J. Tejada, N. Biskup, R. Achey, I. Rutel, N. Dalal and J. Brooks, Europhys. Lett. 47, 722 (1999).
[8] C. Monroe, D.M. Meekhof, B.E. King and D.J. Wineland, Science 272, 1131 (1996).
[9] M. Brune, E. Hagley, J. Dreyer, X. Maitre, A. Maali, C. Wunderlich, J.M. Raimond and S. Haroche, Phys. Rev. Lett. 77, 4887 (1996).
[10] M. Arndt, O. Nairz, J.V. Andreae, C. Keler, G.V.D. Zouw and A. Zeilinger, Nature 401, 680 (1999).
[11] Y.P. Huang and M.G. Moore, Phys. Rev. A 73, 023606 (2006).
[12] L. Pitaevski and S. Stringari, Phys. Rev. Lett. 87, 180402 (2001).
[13] P.J.Y. Louis, P.M.R. Brydon and C.M. Savage, Phys. Rev. A 64, 053613 (2001).
[14] I. Zapata, F. Sols and A.J. Leggett, Phys. Rev. A 67, 021603 (2003).
[15] J.P. Paz, S. Habib and W.H. Zurek, Phys. Rev. D 47, 488 (1993).
[16] P. Kumar, M. Ruiz-Altaba and B. Thomas, Phys. Rev. Lett. 24, 2749 (1986).
[17] G. Theocharis, P.G. Kevrekidis, D.J. Frantzeskakis and P. Schmelcher, Phys. Rev. E 74, 056608 (2006).
[18] F. Grossmann, T. Dittrich, P. Jung and P. Hanggi, Phys. Rev. Lett. 67, 516 (1991).
[19] G. Della Valle, M. Omigotti, C. Cianci, V. Foglietti, P. Laporta and S. Longhi, Phys. Rev. Lett. 98, 263601 (2007).
[20] H. Lignier, C. Sias, D. Ciampini, Y. Singh, A. Zenesini, O. Morsch and E. Arimondo, Phys. Rev. Lett. 99, 220403 (2007).
[21] A.J. Leggett, J. Phys.: Condens. Matter 14, R415 (2002).
[22] A.J. Leggett and A. Garg, Phys. Rev. Lett. 54, 857 (1985).
[23] M.E. Goggin, M.P. Almeida, M. Barbieri, B.P. Lanyon, J.L. O'Brien, A.G. White and G.J. Pryde, Proc. Natl. Acad. Sci. USA 108, 1256 (2011).
[24] V. Athalye, S.S. Roy and T.S. Mahesh, Phys. Rev. Lett. 107, 130402 (2011).
[25] C.D. Tesche, Phys. Rev. Lett. 64, 2358 (1990).
[26] A. Palacios-Laloy, F. Mallet, F. Nguyen, P. Bertet, D. Vion, D. Esteve and A.N. Korotkov, Nat. Phys. 6, 442 (2010).
[27] G.C. Knee, S. Simmons, E.M. Gauger, J.J.L. Morton, H. Riemann, N.V. Abrosimov, P. Becker, H.J. Pohl, K.M. Itoh, M.L.W. Thewalt, G. Andrew D. Briggs and S.C. Benjamin, Nature Commun. 3, 606 (2012).
[28] A.N. Jordan, A.N. Korotkov and M. Buttiker, Phys. Rev. Lett. 97, 026805 (2006).
[29] N.S. Williams and A.N. Jordan, Phys. Rev. Lett. 100, 026804 (2008).
[30] M.E. Goggin, M.P. Almeida, M. Barbieri, B.P. Lanyon, J.L. O'Brien, A.G. White and G.L. Pryde, Proc. Natl. Acad. Sci. USA 108, 1256 (2011).
[31] A. Fedrizzi, M.P. Almeida, M.A. Broome, A.G. White and M. Barbieri, Phys. Rev. Lett. 106, 200402 (2011).
[32] J. Dressel, C.J. Broadbent, J.C. Howell and A.N. Jordan, Phys. Rev. Lett. 106, 040402 (2011).
[33] J.S. Xu, C.F. Li, X.B. Zou and G.C. Guo, New Journal of Physics 14, 103022 (2012).
[34] Y. Aharonov, D.Z. Albert and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988).
[35] S. Takagi, Macroscopic Quantum Tunneling, Cambridge University Press, New York (2005).
[36] C. Emary, N. Lambert and F. Nori, Rep. Prog. Phys. 77, 016001 (2014).
[37] C. Emary, Phys. Rev. A 87, 032106 (2013).
arXiv:1702.04054
Localization in the Internet of Things Network: A Low-Rank Matrix Completion Approach

Luong Trung Nguyen ([email protected]), Sangtae Kim, and Byonghyo Shim ([email protected])
Information System Laboratory, Department of Electrical and Computer Engineering, Seoul National University

3 Aug 2017

Notice: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. A part of this paper was presented at the Information Theory and Applications (ITA) workshop, 2016 [1].

Abstract—Location awareness, providing the ability to identify the location of sensors, machines, vehicles, and wearable devices, is a rapidly growing trend of the hyper-connected society and one of the key ingredients for the internet of things (IoT). In order to make a proper reaction to the information collected from devices, the location information of things should be available at the data center. One challenge for massive IoT networks is to identify the location map of all sensor nodes from partially observed distance information. This is especially important for massive sensor networks, relay-based and hierarchical networks, and vehicle-to-everything (V2X) networks. The primary goal of this paper is to propose an algorithm to reconstruct the Euclidean distance matrix (and eventually the location map) from partially observed distance information.
By casting the low-rank matrix completion problem into an unconstrained minimization problem on a Riemannian manifold, in which a notion of differentiability can be defined, we are able to solve the low-rank matrix completion problem efficiently using a modified conjugate gradient algorithm. From the analysis and numerical experiments, we show that the proposed method, termed localization in Riemannian manifold using conjugate gradient (LRM-CG), is effective in recovering the Euclidean distance matrix in both noiseless and noisy environments.

I. INTRODUCTION

Recently, the Internet of Things (IoT) has received much attention for its plethora of applications, such as healthcare, surveillance, automatic metering, and environmental monitoring. In sensing environmental data (e.g., temperature, humidity, pollution density, and object movements), wireless sensor networks consisting of hundreds to thousands of sensor nodes are popularly used [2], [3], [4]. In order to make a proper reaction to the collected environmental data, the location information of the sensor nodes should be available at the data center (basestation) [5], [6]. Since actions in IoT networks, such as fire alarms, energy transfer, and emergency requests, are made primarily at the data center, approaches to identify the location information of all nodes at the data center have received much attention. In this approach, henceforth referred to as centralized localization, each sensor node measures the distance information of adjacent nodes and then sends it to the data center. The data center then constructs a map of the sensor nodes using the collected distance information [7]. In measuring the distance, various modalities, such as received signal strength indication (RSSI) [8], time difference of arrival (TDoA) [9], and angle of arrival (AoA) [4], have been popularly used. These approaches are simple and effective in measuring short-range distances and can also be used in indoor environments.
When we consider centralized localization in IoT, there are two major issues to be addressed. First, the location information of a sensor node obtained by this approach is local, meaning that the location information is true only in a relative sense. Thus, a proper adjustment of the location information is needed to identify the absolute (true) locations. In fact, since the local location of a sensor might differ from the absolute location by some combination of translations, rotations, and reflections, the absolute locations of a few sensor nodes (anchor nodes) are needed to transform the local locations into absolute locations [5], [6]. It has been shown that when the number of anchor nodes is large enough (e.g., four or more anchor nodes in R²), one can identify the absolute locations of all sensor nodes [10]. For the sake of completeness, we provide a brief summary of the standard procedure to obtain the absolute locations from the pairwise distance information. Let x_i ∈ R^{k×1} (i = 1, ..., n) be the absolute locations of n sensor nodes randomly distributed in k-dimensional Euclidean space (typically k = 2 or 3) and d_ij = ||x_i − x_j||_2 be the pairwise distance between the sensor nodes i and j. The Euclidean distance matrix D ∈ R^{n×n} collects the squared pairwise distances:

D = [ 0      d²_12  d²_13  ···  d²_1n ;
      d²_21  0      d²_23  ···  d²_2n ;
      ⋮      ⋮      ⋮      ⋱    ⋮    ;
      d²_n1  d²_n2  d²_n3  ···  0    ].

Without loss of generality, we set the first sensor node as a reference in the local coordinate system. Then the local location x̃_i of sensor node i is x̃_i = x_i − x_1 and the corresponding local location matrix is

X̃ = [x̃_1 x̃_2 x̃_3 ··· x̃_n]^T = X − 1 x_1^T,   (1)

where 1 = [1 1 ··· 1]^T ∈ R^{n×1} and X = [x_1 x_2 ··· x_n]^T ∈ R^{n×k} is the absolute location matrix of the sensor nodes. By forming a Gramian matrix of X̃, one can easily find a relationship between the entries of D and X̃. To be specific, denoting Y = X̃X̃^T, we have

y_ij = x̃_i^T x̃_j = (x_i − x_1)^T (x_j − x_1)
     = (1/2)(||x_j − x_1||²_2 + ||x_i − x_1||²_2 − ||x_i − x_j||²_2)
     = (1/2)(d²_i1 + d²_1j − d²_ij),

where y_ij is the (i, j)-th entry of Y.
One can easily show that Y = (1/2)(D e_1 1^T + 1 e_1^T D − D), where e_1 = [1 0 ··· 0]^T. Since z^T Y z = Σ_{i,j} z_i y_ij z_j = ||Σ_i z_i x̃_i||²_2 ≥ 0 for every vector z, Y is a positive semi-definite (PSD) matrix. If we express its eigendecomposition as Y = QΛQ^T, then the matrix square root X̃ = QΛ^{1/2} becomes a local location matrix of the sensor nodes. Finally, when the absolute locations of a few anchor nodes are provided, the absolute locations of all sensor nodes can be recovered from X̃. Readers are referred to [5], [6] for more details.

The second, and perhaps more serious, issue is that the data center does not have enough distance information to identify the locations of the sensor nodes. For various reasons, such as the power outage of a sensor node or the limitation of the radio communication range, only partial distance information is available at the data center. This situation can also happen in hierarchical or relay-based IoT networks, where an intermediate node sends partial distance information to the data center. Also, in vehicular networks it might not be possible to measure the distances of all adjacent vehicles when a vehicle is located in a dead zone. To illustrate this scenario, we consider a sensor network consisting of five sensor nodes in Fig. 1, located at x_1 = [7 9]^T, x_2 = [2 7]^T, x_3 = [11 7]^T, x_4 = [12 4]^T, and x_5 = [15 6]^T. We see that only a small number of pairwise distances is measured, and hence there are many unknown entries (marked by the question mark ?) in the observed matrix D_obs.
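The reconstruction procedure summarized above (reference node, Gramian matrix, eigendecomposition) can be sketched numerically. This is an illustrative sketch, not code from the paper; the network size and the random coordinates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
X = rng.uniform(0.0, 10.0, size=(n, k))                 # true (unknown) locations

# Squared-distance matrix: D_ij = ||x_i - x_j||^2.
G = X @ X.T
sq = np.diag(G)
D = sq[:, None] + sq[None, :] - 2.0 * G

# Y = (1/2)(D e1 1^T + 1 e1^T D - D), with node 1 as the reference.
e1 = np.zeros(n); e1[0] = 1.0
one = np.ones(n)
Y = 0.5 * (np.outer(D @ e1, one) + np.outer(one, e1 @ D) - D)

# Eigendecomposition Y = Q L Q^T; local locations X~ = Q L^{1/2} (top-k part).
w, Q = np.linalg.eigh(Y)                                # ascending eigenvalues
w = np.clip(w, 0.0, None)                               # Y is PSD up to round-off
Xloc = Q[:, -k:] @ np.diag(np.sqrt(w[-k:]))

# The local locations reproduce every pairwise distance
# (they differ from X only by a translation/rotation/reflection).
G2 = Xloc @ Xloc.T
D2 = np.diag(G2)[:, None] + np.diag(G2)[None, :] - 2.0 * G2
print(np.allclose(D, D2))
```

Since rank(Y) ≤ k, keeping only the k largest eigenvalues loses nothing here; with anchor nodes, the remaining rigid transform could then be resolved as described above.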
As will be discussed in the next section, rank of D is at most k + 2 and k is 2 or 3 for most cases. Thus, the Euclidean distance matrix D can be readily considered as a low-rank matrix. The problem to recover a low-rank matrix D from the small number of known entries is expressed as min D ∈ R n×n rank( D), s.t. P E ( D) = P E (D obs ). ( where P E is the sampling operator given by [P E (A)] ij =    A ij if (i, j) ∈ E 0 otherwise. Let r be the radio communication range, then E = {(i, j) : x i − x j 2 ≤ r} would be the set of observed indices. Since solving (2) is numerically infeasible due to the non-convexity of the rank function, many efforts have been made over the years to relax this problem into a tractable form to solve. Candes and Recht showed that a low-rank matrix can be recovered from partial measurements by using the nuclear norm minimization (NNM) [12]. The nice feature of this approach is that the NNM can be cast as a semidefinite programming (SDP) and thus can be solved via a polynomial time complexity algorithm [12], [13]. However, computational overhead is still burdensome and hardly scale to the problem size. An alternative approach is to use the Frobenius-norm minimization given by min D ∈ R n×n 1 2 P E ( D) − P E (D obs ) 2 F , s.t. rank( D) ≤ η,(3) where η is the rank of the original matrix. Since the equality constraint in (2) is replaced by the Frobenius norm-based cost function, this approach is suited for the noisy scenario. In fact, since an addition of noise is unavoidable in the distance measurement process of the IoT networks, an approach robust to the measurement noise is desired. Also, this approach is good fit for the situation where the rank constraint is known a priori. In recent years, various approaches to find a solution of (3) have been suggested. In [14], [15], approaches inspired by the greedy recovery strategy of the compressed sensing (CS) have been proposed. 
In these approaches, the set of rank-one matrices that best represents the original matrix is found by an iterative process. In [16], [17], the alternating least squares (ALS) technique has been suggested. In this technique, a low-rank matrix is factorized into the product of lower-dimensional matrices and the problem is then solved via alternating minimization. In [18], the matrix completion problem is modeled as an unconstrained optimization problem over a smooth Riemannian manifold, and a nonlinear conjugate gradient method is used to solve it.

The main goal of this paper is to propose a Euclidean distance matrix completion technique optimized for IoT localization. Instead of solving the Frobenius-norm minimization problem (3) as it is, we express the Euclidean distance matrix D as a function of a fixed-rank positive semidefinite matrix. Since the set of such matrices forms a Riemannian manifold on which the notion of differentiability can be defined, we can recycle, after proper modifications, an optimization algorithm designed for Euclidean space. In fact, in order to solve the Frobenius-norm minimization problem, we propose a modified conjugate gradient algorithm, referred to as localization in Riemannian manifold using conjugate gradient (LRM-CG), whose iterates lie on the Riemannian manifold. We show from the recovery condition analysis that the sequence generated by LRM-CG converges to the original Euclidean distance matrix under suitable conditions. We also show from numerical experiments that LRM-CG is effective in recovering the Euclidean distance matrix from partial measurements and is also scalable in the dimension of the matrix.

We briefly summarize the notation used in this paper. P^{n×n} denotes the set of n × n symmetric positive semidefinite (PSD) matrices. <β_1, β_2> is the inner product between two vectors (or matrices) β_1 and β_2, i.e., <β_1, β_2> = tr(β_1^T β_2). diag(A) is the vector formed by the main diagonal of a matrix A.
Sym(A) and Skew(A) are the matrices Sym(A) = (1/2)(A + A^T) and Skew(A) = (1/2)(A − A^T) for any square matrix A; note that A = Sym(A) + Skew(A). eye(a) is the diagonal matrix whose diagonal entries are the elements of a. For an orthogonal matrix Q ∈ R^{n×k} with n > k, we define its orthogonal complement Q_⊥ ∈ R^{n×(n−k)} such that [Q Q_⊥] forms an orthonormal matrix. Given a function f : Y ∈ R^{n×n} → f(Y) ∈ R, ∇_Y f(Y) is the Euclidean gradient of f(Y) with respect to Y, i.e., [∇_Y f(Y)]_ij = ∂f(Y)/∂y_ij. For a given matrix A = [a_1 a_2 · · · a_n]^T ∈ R^{n×n}, the vectorization of A, denoted vec(A), is defined as vec(A) = [a_1^T a_2^T · · · a_n^T]^T. Let f : A → f(A) and g : B → g(B) be mappings; then the composite g ∘ f of f and g is defined by g ∘ f : A → (g ∘ f)(A) = g(f(A)). E^(i,j) is the standard basis element of R^{n×n} whose (i, j)-th entry is one and whose remaining entries are zeros. For example, in R^{2×2},

E^(1,2) = [0 1; 0 0].

II. THE LRM-CG ALGORITHM

In this section, we present the proposed LRM-CG algorithm. By exploiting the smooth Riemannian manifold structure of the set of low-rank symmetric PSD matrices, we formulate the matrix completion problem (3) as an unconstrained optimization problem on a smooth Riemannian manifold. Roughly speaking, a smooth manifold is a generalization of Euclidean space on which a notion of differentiability exists; for a more rigorous definition, see, e.g., [19], [20]. A smooth manifold together with an inner product, often called a Riemannian metric, forms a smooth Riemannian manifold. Since a smooth Riemannian manifold is a differentiable structure equipped with an inner product, we can use all the ingredients needed for solving optimization problems with a quadratic cost function, such as the Riemannian gradient, Hessian matrix, exponential map, and parallel translation [19].
Therefore, optimization techniques in Euclidean vector spaces (e.g., steepest descent, Newton's method, the conjugate gradient method) can readily be applied to solve a problem on a smooth Riemannian manifold.

A. Problem Model

From the definition of the pairwise distance d_ij^2 = ||x_i − x_j||_2^2 = x_i^T x_i + x_j^T x_j − 2 x_i^T x_j, we have

D = g(XX^T), (4)

where g(XX^T) = 2 Sym(diag(XX^T) 1^T − XX^T). In the example illustrated in Fig. 1, X is

X = [x_1 x_2 x_3 x_4 x_5]^T = [7 2 11 12 15; 9 7 7 4 6]^T, (5)

and D is

D = g(XX^T) = [0 29 20 50 73; 29 0 81 109 170; 20 81 0 10 17; 50 109 10 0 13; 73 170 17 13 0]. (6)

The next lemma follows immediately from (4).

Lemma 2.1: If n sensor nodes are distributed in k-dimensional Euclidean space and n ≥ k, then rank(D) ≤ k + 2.

Proof: From (4), we have rank(D) = rank(g(XX^T)), which gives

rank(D) ≤ rank(1 diag(XX^T)^T + diag(XX^T) 1^T) + rank(2 XX^T) ≤ 2 rank(1 diag(XX^T)^T) + rank(XX^T) ≤ 2 + k,

where the last inequality holds because rank(XX^T) = rank(X) ≤ k and rank(a b^T) ≤ 1 for any two vectors a and b.

From this lemma, (3) can be rewritten as

min_{D̃ ∈ R^{n×n}} (1/2) ||P_E(D̃) − P_E(D_obs)||_F^2, s.t. rank(D̃) ≤ k + 2. (7)

Since rank(D̃) = rank(g(X̃X̃^T)) ≤ k + 2 for any X̃, we can further simplify the problem as

min_{X̃ ∈ R^{n×k}} (1/2) ||P_E(g(X̃X̃^T)) − P_E(D_obs)||_F^2. (8)

Letting Y = X̃X̃^T, we have

min_{Y ∈ Y} (1/2) ||P_E(g(Y)) − P_E(D_obs)||_F^2, (9)

where Y = {X̃X̃^T : X̃ ∈ R^{n×k}}. When the sensor nodes are randomly distributed in k-dimensional Euclidean space, the rank of the location matrix X is k almost surely. Thus, we can strengthen the constraint set to its fixed-rank subset (still denoted Y in the sequel), Y = {X̃X̃^T : X̃ ∈ R^{n×k}, rank(X̃) = k}, and as a result we have

min_{Y ∈ Y} (1/2) ||P_E(g(Y)) − P_E(D_obs)||_F^2. (10)

In the sequel, we denote f(Y) = (1/2) ||P_E(g(Y)) − D_obs||_F^2 for notational simplicity.
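The map g and the rank bound of Lemma 2.1 can be checked numerically. A minimal sketch (the locations are the five-node example from the text; names are ours):

```python
import numpy as np

def g(Y):
    # g(Y) = 2*Sym(diag(Y) 1^T - Y): maps a Gram matrix to squared distances.
    d = np.outer(np.diag(Y), np.ones(Y.shape[0]))
    return d + d.T - 2 * Y

X = np.array([[7., 9.], [2., 7.], [11., 7.], [12., 4.], [15., 6.]])
D = g(X @ X.T)

assert D[0, 1] == 29.0 and D[0, 2] == 20.0   # matches the worked example (6)
assert np.linalg.matrix_rank(D) <= 2 + 2     # Lemma 2.1 with k = 2
```

With five points in the plane, rank(D) is generically exactly k + 2 = 4, which is why the rank constraint in (7) is tight in practice.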
B. Optimization over a Riemannian Manifold

Let S = {Q ∈ R^{n×k} : Q^T Q = I_k} and L = {eye([λ_1 · · · λ_k]^T) : λ_1 ≥ λ_2 ≥ · · · ≥ λ_k > 0}. Then, for a given Y ∈ Y, one can express Y = QΛQ^T, and thus an alternative representation of Y is

Y = {QΛQ^T : Q ∈ S, Λ ∈ L}. (11)

It has been shown that Y is a smooth Riemannian manifold [21, Ch. 5], [22]. Our approach of solving the problem on a smooth Riemannian manifold is beneficial in two major respects. First, one can easily compute the gradient of the cost function in (10) using matrix calculus. Second, we can use an algorithm designed for Euclidean space to solve the problem (10). Since our work relies to a large extent on properties and operators of differential geometry, we briefly introduce the tools and ingredients needed to describe the proposed algorithm.

Since Y is an embedded manifold in the Euclidean space R^{n×n}, its tangent spaces are determined by the derivatives of its curves, where a curve γ of Y is a mapping from R to Y. Put formally, for a given point Y ∈ Y, the tangent space of Y at Y, denoted T_Y Y, is defined as T_Y Y = {γ′(0) : γ is a curve in Y, γ(0) = Y} (see Fig. 2). In the following lemma, we characterize the tangent space of Y.

Lemma 2.2: For the manifold Y defined by (11), the tangent space T_Y Y at Y is

T_Y Y = { [Q Q_⊥] [B C^T; C 0] [Q Q_⊥]^T : B^T = B ∈ R^{k×k}, C ∈ R^{(n−k)×k} }. (12)

Proof: See Appendix A.

A metric on the tangent space T_Y Y is defined as the matrix inner product <B_1, B_2> = tr(B_1^T B_2) between two tangent vectors B_1, B_2 ∈ T_Y Y. We next define the orthogonal projection of a matrix A onto the tangent space T_Y Y, which will be used to find the closed-form expression of the Riemannian gradient in Subsection II-C.

Definition 2.3: The orthogonal projection onto T_Y Y is the mapping P_{T_Y Y} : R^{n×n} → T_Y Y such that, for a given matrix A ∈ R^{n×n}, <A − P_{T_Y Y}(A), B> = 0 for all B ∈ T_Y Y.
The following proposition provides a closed-form expression of the orthogonal projection operator.

Proposition 2.4: For a given matrix A, the orthogonal projection P_{T_Y Y}(A) of A onto the tangent space T_Y Y is

P_{T_Y Y}(A) = P_Q Sym(A) + Sym(A) P_Q − P_Q Sym(A) P_Q, (13)

where P_Q = QQ^T.

Proof: See Appendix B.

In order to express the concept of moving in the direction of a tangent vector while staying on the manifold, an operation called retraction is used (see Fig. 2).

Definition 2.5: The retraction R_Y(B) of a vector B ∈ T_Y Y onto Y is defined as

R_Y(B) = arg min_{Z ∈ Y} ||Y + B − Z||_F. (14)

In obtaining the closed-form expression of R_Y(B), an operator W_k keeping the k largest positive eigenvalues of a matrix, referred to as the eigenvalue selection operator, is needed. Since the projection R_Y(B) is an element of Y, R_Y(B) should be a symmetric PSD matrix with rank k. Thus, for a given square matrix A, we are only interested in the symmetric part Sym(A). If we denote the eigenvalue decomposition (EVD) of this as Sym(A) = PΣP^T and its k largest eigenvalues as σ_1 ≥ σ_2 ≥ · · · ≥ σ_k > 0, then W_k(A) is defined as

W_k(A) = PΣ_k P^T, (15)

where Σ_k = eye([σ_1 · · · σ_k 0 · · · 0]^T). R_Y(B) is concisely expressed using the eigenvalue selection operator W_k.

Lemma 2.6:

R_Y(B) = W_k(Y + B). (16)

Proof: See Appendix C.

Finally, to develop the conjugate gradient algorithm over the Riemannian manifold Y, we need the Euclidean gradient of the cost function f(Y).

Lemma 2.7: The Euclidean gradient ∇_Y f(Y) of f(Y) with respect to Y is

∇_Y f(Y) = 2 eye(Sym(R)1) − 2 Sym(R), (17)

where R = P_E(g(Y)) − P_E(D_obs).

Proof: See Appendix D.

C. Localization in Riemannian Manifold Using Conjugate Gradient (LRM-CG)

In order to solve the problem in (10), we use the conjugate gradient (CG) method. The CG method is widely used to solve sparse symmetric positive definite linear systems [24].
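The projection formula (13) and the retraction of Lemma 2.6 are both easy to verify numerically. Below is a hedged NumPy sketch (names and dimensions are ours): the projection is checked to be idempotent and orthogonal, and retracting the zero tangent vector returns the base point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

# A rank-k PSD point Y = Q Lambda Q^T on the manifold.
M = rng.standard_normal((n, k))
Y = M @ M.T
_, V = np.linalg.eigh(Y)
Q = V[:, -k:]                        # top-k eigenvectors of Y

def proj_tangent(A, Q):
    # P_{T_Y}(A) = P_Q Sym(A) + Sym(A) P_Q - P_Q Sym(A) P_Q   (Prop. 2.4)
    S = 0.5 * (A + A.T)
    P = Q @ Q.T
    return P @ S + S @ P - P @ S @ P

def retract(Z, k):
    # W_k: keep the k largest positive eigenvalues of Sym(Z)   (Lemma 2.6)
    S = 0.5 * (Z + Z.T)
    w, V = np.linalg.eigh(S)
    w = np.clip(w[-k:], 0.0, None)
    return V[:, -k:] * w @ V[:, -k:].T

A = rng.standard_normal((n, n))
B = proj_tangent(A, Q)
assert np.allclose(proj_tangent(B, Q), B)        # idempotent
assert abs(np.trace((A - B).T @ B)) < 1e-8       # <A - P(A), P(A)> = 0
assert np.allclose(retract(Y, k), Y)             # R_Y(0) = Y
```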
First, noting that P_E and g are linear mappings, one can easily show that

f(Y) = (1/2) ||P_E(g(Y)) − D_obs||_F^2
 = (1/2) ||P_E(g(Σ_{i,j} y_ij E^(i,j))) − D_obs||_F^2
 = (1/2) ||Σ_{i,j} y_ij P_E(g(E^(i,j))) − D_obs||_F^2
 (a)= (1/2) ||Σ_{i,j} y_ij vec(P_E(g(E^(i,j)))) − vec(D_obs)||_2^2
 (b)= (1/2) ||A vec(Y) − b||_2^2, (18)

where (a) holds because ||M||_F = ||vec(M)||_2 and (b) follows from vec(Y) = [y_11 y_21 · · · y_nn]^T, b = vec(D_obs), and A = [vec(P_E(g(E^(1,1)))) · · · vec(P_E(g(E^(n,n))))]. From (18) we see that the cost function f(Y) has the quadratic form of a sparse symmetric positive definite system, and thus the CG algorithm is an efficient means to solve the problem. The update equation of the conventional CG algorithm in Euclidean space is

Y_{i+1} = Y_i + α_i P_i, (19)

where α_i is the stepsize and P_i is the conjugate direction. Note that the stepsize α_i is chosen by a line minimization technique (e.g., Armijo's rule [25], [26]) and the search direction P_i of the CG algorithm is chosen as a linear combination of the gradient and the previous search direction, generating a direction conjugate to the previous ones. In doing so, one can avoid unnecessary searching over directions that have already been explored and speed up the algorithm [27], [24]. Since we consider the optimization problem over the Riemannian manifold Y, the conjugate direction P_i lies on the tangent space. Thus, to make sure that the updated point Y_{i+1} lies on the manifold, we need a retraction operator. The update equation after applying the retraction operation is

Y_{i+1} = R_{Y_i}(α_i P_i) = W_k(Y_i + α_i P_i). (20)

As studied in Lemma 2.6, the eigenvalue selection operator W_k guarantees that the updated point Y_{i+1} lies on the manifold. We next consider the conjugate direction P_i of LRM-CG. In the conventional nonlinear CG algorithm, the conjugate direction P_i is updated as

P_i = −∇_Y f(Y_i) + β_i P_{i−1}, (21)

where β_i is the conjugate update parameter.
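The identity (18) can be verified directly on a small instance by building the matrix A column by column from vec(P_E(g(E^(i,j)))). A hedged sketch (names are ours; g is extended linearly to non-symmetric arguments and agrees with the text's g on symmetric matrices; the flatten order is used consistently on both sides, so the identity is unaffected):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
mask = np.triu(rng.random((n, n)) < 0.7, 1)
mask = mask | mask.T                       # symmetric off-diagonal sample set E

def g(Y):
    d = np.outer(np.diag(Y), np.ones(n))
    return d + d.T - 2 * Y

P_E = lambda A: np.where(mask, A, 0.0)

Xloc = rng.random((n, 2))
D_obs = P_E(g(Xloc @ Xloc.T))
b = D_obs.flatten()

# Columns of A are vec(P_E(g(E^(i,j)))).
A = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1.0
        A[:, i * n + j] = P_E(g(E)).flatten()

M = rng.standard_normal((n, n)); Y = M @ M.T
f1 = 0.5 * np.linalg.norm(P_E(g(Y)) - D_obs) ** 2   # cost as in (10)
f2 = 0.5 * np.linalg.norm(A @ Y.flatten() - b) ** 2  # quadratic form (18)
assert np.isclose(f1, f2)
```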
Since we optimize over the Riemannian manifold Y, (21) needs to be modified. First, we need to use the Riemannian gradient of f(Y) instead of the Euclidean gradient ∇_Y f(Y), since we find the search direction on the tangent space of Y. The Riemannian gradient, denoted gradf(Y), is distinct from ∇_Y f(Y) in the sense that it is defined on the tangent space T_Y Y (see Fig. 3).

Definition 2.8: Let f be a function differentiable everywhere on the Riemannian manifold Y. The Riemannian gradient gradf(Y) of f at Y is defined as the unique element in T_Y Y satisfying

<B, gradf(Y)> = <B, ∇_Y f(Y)> (22)

for every B ∈ T_Y Y.

As shown in Fig. 3, gradf(Y) ∈ T_Y Y is the component of the Euclidean gradient ∇_Y f(Y) in T_Y Y. In other words, gradf(Y) is the projection of ∇_Y f(Y) onto T_Y Y. Indeed, from Definition 2.3, <B, ∇_Y f(Y) − P_{T_Y Y}(∇_Y f(Y))> = 0 for any matrix B ∈ T_Y Y. Hence,

<B, P_{T_Y Y}(∇_Y f(Y))> = <B, ∇_Y f(Y)>. (23)

From (22) and (23), it is clear that

gradf(Y) = P_{T_Y Y}(∇_Y f(Y)). (24)

Second, since the Riemannian gradient gradf(Y_i) and the previous conjugate direction P_{i−1} lie on two different vector spaces T_{Y_i} Y and T_{Y_{i−1}} Y, we need to project P_{i−1} onto the tangent space T_{Y_i} Y before performing a linear combination of the two. In view of this, the conjugate direction update equation of LRM-CG is

P_i = −gradf(Y_i) + β_i P_{T_{Y_i} Y}(P_{i−1}). (25)

Finally, in choosing the stepsize α_i in (20), we use Armijo's rule, a widely used line search strategy. Note that Armijo's rule is a simple yet effective way to find a stepsize α_i approximately minimizing the cost function f, that is, α_i ≈ arg min_{α>0} f(W_k(Y_i + α P_i)) [25], [26]. The proposed LRM-CG algorithm is summarized in Algorithm 1.

D. Numerical Experiments

In this subsection, we investigate the numerical performance of the proposed LRM-CG for both noiseless and noisy scenarios.
Note that, in transforming a vector from one tangent space to another, an operator called vector transport is used (see Definition 8.1.1 in [19]); for an embedded manifold of R^{n×n}, the vector transport is the orthogonal projection operator [19].

Algorithm 1 (LRM-CG):
2: Initialize: i = 1, Y_1 ∈ Y (initial point), P_1 (initial conjugate direction)
3: While i ≤ T do
4:  R_i = P_E(g(Y_i)) − P_E(D_obs)  // generate residual matrix
5:  ∇_Y f(Y_i) = 2 eye(Sym(R_i)1) − 2 Sym(R_i)  // compute Euclidean gradient
6:  gradf(Y_i) = P_{T_{Y_i} Y}(∇_Y f(Y_i))  // compute Riemannian gradient
7:  H_i = gradf(Y_i) − P_{T_{Y_i} Y}(gradf(Y_{i−1}))
8:  h = <P_i, H_i>
9:  β_i = (1/h^2) <h H_i − 2 P_i ||H_i||_F^2, gradf(Y_i)>  // compute CG coefficient
10:  P_i = −gradf(Y_i) + β_i P_{T_{Y_i} Y}(P_{i−1})  // compute conjugate direction
11:  Find a stepsize α_i > 0 such that f(Y_i) − f(R_{Y_i}(α_i P_i)) ≥ −µ α_i <gradf(Y_i), P_i>  // perform Armijo's line search
12:  Y_{i+1} = R_{Y_i}(α_i P_i)  // perform retraction
13:  D_{i+1} = g(Y_{i+1})  // compute updated Euclidean distance matrix
14:  If ||P_E(D_{i+1}) − P_E(D_obs)||_F < ǫ, stop

In our simulations, we compare LRM-CG with the following matrix completion algorithms:

• ADMiRA [14]: uses a greedy projection to identify a set of rank-one matrices that best represents the original matrix.
• LMaFit [16]: a nonlinear successive over-relaxation matrix completion algorithm based on the nonlinear Gauss–Seidel method.
• LRGeomCG [18]: essentially the CG method defined over the Riemannian manifold of low-rank matrices (but not necessarily positive definite ones).
• SDP [12]: solves the NNM problem via semidefinite programming.
• TNN-ADMM [31]: an improved version of the NNM approach that solves the truncated NNM problem via the alternating direction method of multipliers.
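To make the overall iteration concrete, the sketch below strips Algorithm 1 down to a plain projected gradient descent over the manifold: Euclidean gradient (17), Armijo backtracking, and the W_k retraction (20). This is an illustrative simplification of ours, not the paper's LRM-CG (no conjugate directions, no tangent-space projection); it only demonstrates the cost/gradient/retract loop:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 2
Xloc = rng.random((n, k))                        # ground-truth locations

def g(Y):
    d = np.outer(np.diag(Y), np.ones(len(Y)))
    return d + d.T - 2 * Y

D = g(Xloc @ Xloc.T)
mask = (np.sqrt(D) <= 0.5) & ~np.eye(n, dtype=bool)   # radio range r = 0.5
P_E = lambda A: np.where(mask, A, 0.0)
D_obs = P_E(D)

def cost(Y):
    return 0.5 * np.linalg.norm(P_E(g(Y)) - D_obs) ** 2

def egrad(Y):
    R = P_E(g(Y)) - D_obs                        # symmetric residual
    return 2 * np.diag(R @ np.ones(n)) - 2 * R   # Lemma 2.7

def retract(Z, k):
    w, V = np.linalg.eigh(0.5 * (Z + Z.T))       # W_k of Lemma 2.6
    w = np.clip(w[-k:], 0.0, None)
    return V[:, -k:] * w @ V[:, -k:].T

Y = retract(rng.standard_normal((n, n)), k)
c0 = cost(Y)
for _ in range(200):
    G = egrad(Y)
    alpha, Ynew = 1.0, None
    while alpha > 1e-12:                         # Armijo backtracking
        cand = retract(Y - alpha * G, k)
        if cost(cand) <= cost(Y) - 1e-4 * alpha * np.sum(G * G):
            Ynew = cand
            break
        alpha *= 0.5
    if Ynew is None:
        break                                    # no acceptable step found
    Y = Ynew
assert cost(Y) <= c0                             # monotone decrease
```

In our runs this simplified loop drives the residual on the observed entries toward zero; the conjugate-direction machinery of Algorithm 1 accelerates exactly this iteration.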
In the noiseless scenario, we generate an n × k location matrix X whose entries are sampled independently and identically from the uniform distribution on the unit interval, and then map X into the Euclidean distance matrix D = g(XX^T). As mentioned, the entries d_ij^2 of D are known (observed) if d_ij ≤ r, where r is the radio communication range. In the noisy scenario, a noise matrix N ∈ R^{n×n} is added to the Euclidean distance matrix D; the elements n_ij of N are sampled independently and identically from the Gaussian distribution with zero mean and variance σ^2. As performance measures, we use the mean square error on the sampled entries and on all entries:

MSE_s = (1/|E|) ||P_E(D̃) − P_E(D)||_F^2,  MSE_a = (1/n^2) ||D̃ − D||_F^2,

where |E| is the cardinality of the sampling set E (see (2)).

1) Running Time: In Table I, we summarize the running time of LRM-CG for three distinct matrix dimensions (500×500, 1000×1000, and 5000×5000). The running time is measured using a MATLAB program on a personal computer (Intel Core i5 CPU at 3.4 GHz). As indicated in Table I, LRM-CG recovers a 500 × 500 Euclidean distance matrix in a few seconds. Even for a 1000 × 1000 matrix, it takes less than one minute to recover the matrix accurately.

2) Recovery Performance: We next investigate the recovery performance of LRM-CG for both noiseless and noisy scenarios. In Fig. 5, we plot the recovery performance in the noiseless scenario for k = 2 and 3 as a function of the sampling ratio, defined as the ratio between the number of observed pairwise distances and the total number of pairwise distances. We observe from the figure that LRM-CG performs well in all cases, achieving the target performance with a smaller number of measurements than conventional techniques require. In Fig. 6, we plot the performance of LRM-CG as a function of the signal-to-noise ratio (SNR) for the noisy scenario. We observe that LRM-CG outperforms conventional approaches by a large margin and that the gain increases sharply with SNR.
E. Computational Complexity

In this subsection, we examine the computational complexity of LRM-CG in terms of the number of floating point operations (flops). As discussed in Section II-B, LRM-CG computes the Euclidean gradient, the Riemannian gradient, and the retraction in each iteration.

In order to compute the Euclidean gradient (17), we first need to consider the computation of Y_i from the (i − 1)-th iteration. Since Y_i = QΛQ^T (Q is an n × k matrix and Λ is a k × k diagonal matrix), it requires 2k multiplications and (k − 1) additions to compute each entry y_ij = Σ_{t=1}^k λ_t q_it q_jt, so that the associated computational complexity is (3k − 1) flops per entry. Further, from (4), we need to compute [g(Y)]_ij = y_ii + y_jj − 2y_ij, which requires (9k − 1) flops. The residual matrix R_i = P_E(g(Y_i)) − P_E(D_obs) is supported on E, and since Sym(R_i) = (1/2)(R_i + R_i^T), it requires at most (9k + 4)|E| + n − 1 flops to compute 2 eye(Sym(R_i)1). Since the cardinality of the support of Sym(R_i) is |E|, the computational complexity of ∇_Y f(Y_i) = 2 eye(Sym(R_i)1) − 2 Sym(R_i) in (17) is at most (9k + 5)|E| + n − 1 flops.

Second, recalling that the Riemannian gradient gradf(Y_i) is the orthogonal projection of ∇_Y f(Y_i) onto the tangent space T_{Y_i} Y, we need to estimate the computational complexity of the orthogonal projection operator P_{T_{Y_i} Y}. In computing P_{T_{Y_i} Y}(A) for an n × n matrix A, we need Sym(A), B = Sym(A)Q, and C = Q^T Sym(A)Q, which require (2k − 1)n^2, 2n^2 + (2n − 1)kn, and (2n − 1)kn + (2n − 1)k^2 flops, respectively. Then, from (13), we have

P_{T_{Y_i} Y}(A) = QQ^T Sym(A) + Sym(A)QQ^T − QQ^T Sym(A)QQ^T = QB^T + BQ^T − QCQ^T,

which requires O(kn^2 + k^2 n + k^3) flops.

Finally, in applying Armijo's rule to find the stepsize α_i, we need to compute the retraction operation R_{Y_i}(α_i P_i). From (16), the retraction is obtained via the eigenvalue selection operator W_k, and this requires the EVD of (Y_i + α_i P_i). In general, the computational complexity of the EVD of an n × n matrix is O(n^3).
However, using Y_i = QΛQ^T and P_i ∈ T_{Y_i} Y, one can simplify the EVD operation. First, since P_i ∈ T_{Y_i} Y, we have

P_i = [Q Q_⊥] [B_i C_i^T; C_i 0] [Q Q_⊥]^T. (26)

Thus,

Y_i + P_i = [Q Q_⊥] [B_i + Λ C_i^T; C_i 0] [Q Q_⊥]^T
 = [Q (Q_⊥ C_i)] [B_i + Λ I; I 0] [Q (Q_⊥ C_i)]^T
 = [Q (Q_⊥ C_i)] K Λ′ K^T [Q (Q_⊥ C_i)]^T
 = Q′ Λ′ Q′^T. (27)

From (27), we see that the EVD of (Y_i + P_i) reduces to the EVD of the 2k × 2k matrix [B_i + Λ I; I 0], which requires only 8k^3 flops. Also, computing Q_⊥ C_i and Q′ = [Q (Q_⊥ C_i)]K needs (2n − 2k − 1)kn and (4k − 1)kn flops, respectively. Thus, the computational complexity of the retraction operation is 8k^3 + (2n − 2k − 1)kn + (4k − 1)kn, which is O(kn^2) for k ≪ n.

In summary, the computational complexity of the proposed algorithm per iteration is O(k|E| + kn^2 + k^2 n + k^3) = O(kn^2). Since k = 2 or 3 in our problem, the complexity per iteration can be expressed as O(n^2) flops.

III. RECOVERY CONDITION ANALYSIS

In this section, we analyze a recovery condition under which the LRM-CG algorithm recovers the Euclidean distance matrix accurately. Overall, our analysis is divided into two parts. In the first part, we analyze a condition ensuring the successful recovery of the sampled (observed) entries, i.e., ||P_E(D̃) − P_E(D)||_F = 0. In the second part, we investigate a condition guaranteeing the exact recovery of the Euclidean distance matrix, i.e., ||D̃ − D||_F = 0. By exact recovery, we mean that the output D_i of LRM-CG converges to the original Euclidean distance matrix D.

Definition 3.1: For a sequence of matrices {D_i}_{i=1}^∞, if lim_{i→∞} ||D_i − D||_F = 0, we say {D_i}_{i=1}^∞ converges to D. Further, we say {D_i}_{i=1}^∞ converges linearly to D with convergence rate λ if there exists λ (1 > λ ≥ 0) satisfying

lim_{i→∞} ||D_{i+1} − D||_F / ||D_i − D||_F = λ.
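The factorization behind (27) can be checked numerically. A hedged sketch (names are ours): we verify Y_i + P_i = [Q, Q_⊥C] M [Q, Q_⊥C]^T with M = [B+Λ, I; I, 0], and then recover the full EVD from a small 2k × 2k problem after orthonormalizing the combined factor with a thin QR, which is the standard way to realize this trick in floating point:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 8, 2
Qfull, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, Qp = Qfull[:, :k], Qfull[:, k:]
lam = np.array([3.0, 1.0])
Y = Q * lam @ Q.T                               # Y = Q Lambda Q^T

# A tangent vector P = [Q Qp] [[B, C^T], [C, 0]] [Q Qp]^T   (Lemma 2.2)
B = rng.standard_normal((k, k)); B = 0.5 * (B + B.T)
C = rng.standard_normal((n - k, k))
P = Q @ B @ Q.T + Qp @ C @ Q.T + Q @ C.T @ Qp.T

# Factorization (27): Y + P = [Q, Qp C] [[B+Lam, I], [I, 0]] [Q, Qp C]^T
F = np.hstack([Q, Qp @ C])
M = np.block([[B + np.diag(lam), np.eye(k)], [np.eye(k), np.zeros((k, k))]])
assert np.allclose(F @ M @ F.T, Y + P)

# Thin QR of F, then EVD of the small (2k)x(2k) matrix R M R^T.
Qf, R = np.linalg.qr(F)
w, K = np.linalg.eigh(R @ M @ R.T)
assert np.allclose(Qf @ K * w @ (Qf @ K).T, Y + P)
```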
A. Convergence of LRM-CG at Sampled Entries

In this subsection, we show that the sequence {P_E(D_i)} generated by LRM-CG converges to P_E(D) at the sampled entries. For example, in a small instance where only d_12 and d_13 are observed,

lim_{i→∞} P_E(D_i) = [0 d_12^2(∞) d_13^2(∞); d_21^2(∞) 0 0; d_31^2(∞) 0 0] = [0 29 20; 29 0 0; 20 0 0] = P_E(D).

The minimal set of assumptions used for analytical tractability is as follows:

A1: f(Y_i) − f(R_{Y_i}(α_i P_i)) ≥ −τ α_i <gradf(Y_i), P_i> for τ satisfying 0 < τ < 1/2,
A2: |<gradf(R_{Y_i}(α_i P_i)), P_i>| ≤ −µ <gradf(Y_i), P_i> for µ satisfying τ < µ < 1/2,
A3: c ||gradf(Y_i)||_F ≥ ||∇_Y f(Y_i)||_F for c satisfying c > 1.

In essence, A1 and A2 can be considered extensions of the strong Wolfe conditions [32]. The assumption A1 says that the cost function f(Y_i) decreases monotonically as long as P_i is chosen in a direction opposite to gradf(Y_i) on the tangent space T_{Y_i} Y (i.e., <gradf(Y_i), P_i> ≤ 0) (see Lemma 3.7). Note that A1 is a reasonable assumption since there always exists a stepsize satisfying it.

Lemma 3.2: There exists α_i > 0 satisfying A1.

Proof: See Appendix E.

Note that if the stepsize α_i is chosen to be very small, then Y_{i+1} = R_{Y_i}(α_i P_i) ≈ R_{Y_i}(0) = Y_i, and thus f(Y_i) − f(R_{Y_i}(α_i P_i)) ≈ 0 and −τ α_i <gradf(Y_i), P_i> ≈ 0. In this case, A1 holds approximately, but there would be almost no update of Y_i, so the algorithm would converge extremely slowly. To circumvent this pathological scenario, we use A2, which is in essence an extension of the strong Wolfe condition to the Riemannian manifold. Under this assumption, α_i cannot be chosen to be very small, since otherwise we would have R_{Y_i}(α_i P_i) ≈ Y_i, and thus |<gradf(R_{Y_i}(α_i P_i)), P_i>| ≈ |<gradf(Y_i), P_i>| ≥ −µ <gradf(Y_i), P_i>, which contradicts A2. The assumption A3 is needed to guarantee the global convergence of LRM-CG; we discuss this further in Remark 3.6.

As a point of reference, consider an unconstrained minimization in R^n with a differentiable cost function f(x) (i.e., min_{x∈R^n} f(x)).
The update equation is given by x_{i+1} = x_i + α_i p_i for a stepsize α_i and a descent direction p_i. The well-known strong Wolfe conditions are

f(x_i) − f(x_{i+1}) ≥ −τ α_i <∇_x f(x_i), p_i>,
|<∇_x f(x_{i+1}), p_i>| ≤ −µ <∇_x f(x_i), p_i>,

for some constants 0 < τ < µ < 1.

Our first main result, stating the successful recovery condition at the sampled entries, is formally described in the following theorem.

Theorem 3.3 (strong convergence of LRM-CG): Let {D_i = g(Y_i)}_{i=1}^∞ be the sequence of matrices generated by LRM-CG and D be the original Euclidean distance matrix. Under A1, A2, and A3, {P_E(D_i)}_{i=0}^∞ converges linearly to P_E(D).

Remark 3.4 (strongly convergent condition in R^n): Note that lim_{i→∞} ||P_E(D_i) − P_E(D)||_F = 0 is equivalent to

lim_{i→∞} ||∇_Y f(Y_i)||_F = 0. (28)

This condition is often referred to as the strongly convergent condition of nonlinear CG algorithms in vector space. The equivalence can be established by the following sandwich lemma.

Lemma 3.5:

2 ||P_E(D_i) − P_E(D)||_F ≤ ||∇_Y f(Y_i)||_F ≤ (2√n + 2) ||P_E(D_i) − P_E(D)||_F.

Remark 3.6 (weakly convergent condition): Conventional convergence results for CG over a Riemannian manifold guarantee only the weaker condition

lim inf_{i→∞} ||gradf(Y_i)||_F = 0. (29)

One can observe that the Euclidean gradient ∇_Y f(Y_i) is replaced by the Riemannian gradient gradf(Y_i). Unfortunately, the convergence of the Riemannian gradient in (29) does not imply the convergence of the Euclidean gradient in (28), because

||∇_Y f(Y_i)||_F^2 = ||P_{T_Y Y}(∇_Y f(Y_i))||_F^2 + ||P^⊥_{T_Y Y}(∇_Y f(Y_i))||_F^2 = ||gradf(Y_i)||_F^2 + ||P^⊥_{T_Y Y}(∇_Y f(Y_i))||_F^2, (30)

where gradf(Y_i) = P_{T_Y Y}(∇_Y f(Y_i)) (see (24)). One can observe from this that the condition in (29) is not sufficient to guarantee (28); that is, one cannot guarantee lim_{i→∞} ||P_E(D_i) − P_E(D)||_F = 0 from (29) alone. However, with the introduction of A3, the equivalence between (28) and (29) can be established. We will show that the assumption A3 holds true with overwhelming probability in Section III-C.

We are now ready to prove Theorem 3.3.
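The sandwich inequality of Lemma 3.5 is easy to probe numerically, since by (17) the Euclidean gradient is 2 eye(Sym(R)1) − 2 Sym(R) with R = P_E(D_i) − P_E(D). A hedged sketch of ours (random sampling set, random rank-k iterate):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 12, 2

def g(Y):
    d = np.outer(np.diag(Y), np.ones(len(Y)))
    return d + d.T - 2 * Y

mask = np.triu(rng.random((n, n)) < 0.5, 1)
mask = mask | mask.T                       # symmetric, zero diagonal
P_E = lambda A: np.where(mask, A, 0.0)

X = rng.random((n, k))
D = g(X @ X.T)
M = rng.standard_normal((n, k))
Y = M @ M.T                                # an arbitrary rank-k PSD iterate

R = P_E(g(Y)) - P_E(D)                     # symmetric residual
grad = 2 * np.diag(R @ np.ones(n)) - 2 * R # Euclidean gradient (17)
gn = np.linalg.norm(grad)                  # Frobenius norms
rn = np.linalg.norm(R)
assert 2 * rn <= gn + 1e-9                 # lower bound of Lemma 3.5
assert gn <= (2 * np.sqrt(n) + 2) * rn + 1e-9   # upper bound of Lemma 3.5
```

The lower bound uses the fact that R has zero diagonal, so the diagonal term 2 eye(R1) is orthogonal to −2R in the Frobenius inner product; the upper bound follows from ||R1||_2 ≤ √n ||R||_F.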
Proof of Theorem 3.3: First, we show that under A1 and A2, ||P_E(D_i) − P_E(D)||_F is non-increasing. That is, if χ is defined by

χ = sup_{Y ∈ {Y_i}} ||P^⊥_{T_Y Y}(∇_Y f(Y))||_F / ||∇_Y f(Y)||_F if ||∇_Y f(Y)||_F ≠ 0, and χ = 1 otherwise, (31)

then there exists γ > 0 such that γ(1 − χ^2) ≤ 1 and

||P_E(D_{i+1}) − P_E(D)||_F^2 ≤ (1 − γ(1 − χ^2)) ||P_E(D_i) − P_E(D)||_F^2. (32)

We need the following lemmas to prove this.

Lemma 3.7: If β_i is chosen by the Fletcher–Reeves rule, that is,

β_i = <gradf(Y_i), gradf(Y_i)> / <gradf(Y_{i−1}), gradf(Y_{i−1})>, (34)

then

<gradf(Y_{i+1}), P_{i+1}> / ||gradf(Y_{i+1})||_F^2 ≤ −(1 − 2µ)/(1 − µ) − µ^{i+1}/(1 − µ).

Proof: See Appendix G.

In our simulations, we instead employ the Hager–Zhang rule in the choice of β_i to improve the empirical performance of the CG method [30]:

β_i = (1/h^2) <h H_i − 2 P_i ||H_i||_F^2, gradf(Y_i)>, (33)

where H_i = gradf(Y_i) − P_{T_{Y_i} Y}(gradf(Y_{i−1})) and h = <P_i, H_i>. In the analysis, however, we use the Fletcher–Reeves rule for mathematical tractability.

Lemma 3.8:

||gradf(Y_i)||_F^2 ≥ 8(1 − χ^2) f(Y_i).

Proof: See Appendix H.

We are now ready to prove (32). First, from A1, we have

f(Y_{i+1}) ≤ f(Y_i) + τ α_i <gradf(Y_i), P_i>
 (a)≤ f(Y_i) − τ α_i ((1 − 2µ)/(1 − µ) + µ^i/(1 − µ)) ||gradf(Y_i)||_F^2
 ≤ f(Y_i) − τ α_i (1 − 2µ)/(1 − µ) ||gradf(Y_i)||_F^2
 (b)≤ f(Y_i) − 8 τ α_i (1 − 2µ)/(1 − µ) (1 − χ^2) f(Y_i),

where (a) and (b) follow from Lemma 3.7 and Lemma 3.8, respectively. Let γ_i = 8 τ α_i (1 − 2µ)/(1 − µ); then γ_i > 0 (since α_i > 0) and hence f(Y_{i+1}) ≤ (1 − γ_i(1 − χ^2)) f(Y_i). Recalling that f(Y_i) = (1/2) ||P_E(D_i) − P_E(D)||_F^2, we have

||P_E(D_{i+1}) − P_E(D)||_F^2 ≤ (1 − γ_i(1 − χ^2)) ||P_E(D_i) − P_E(D)||_F^2.

We now distinguish two cases. 1) χ < 1 case: letting h = inf_i γ_i (> 0), we have

lim_{i→∞} ||P_E(D_{i+1}) − P_E(D)||_F / ||P_E(D_i) − P_E(D)||_F = (1 − h(1 − χ^2))^{1/2} < 1,

and hence lim_{i→∞} ||P_E(D_i) − P_E(D)||_F = 0. Thus, the sequence {P_E(D_i)}_{i=1}^∞ converges linearly to P_E(D). 2) χ = 1 case: in this case, we show that there exists j satisfying ||∇_Y f(Y_j)||_F = 0.
As discussed in Remark 3.4 and Lemma 3.5, this is a sufficient condition to guarantee the strong convergence of LRM-CG. In this case, no further update can be made after the j-th iteration (and thus linear convergence is naturally guaranteed). To show this, we use a contradiction argument. Suppose that ||∇_Y f(Y_i)||_F ≠ 0 for all i. Then, from (31), we should have

sup_{Y ∈ {Y_i}} ||P^⊥_{T_Y Y}(∇_Y f(Y))||_F / ||∇_Y f(Y)||_F = χ = 1.

Further, from (30), we have

||P^⊥_{T_{Y_i} Y}(∇_Y f(Y_i))||_F^2 = ||∇_Y f(Y_i)||_F^2 − ||gradf(Y_i)||_F^2 ≤ ||∇_Y f(Y_i)||_F^2 − (1/c^2) ||∇_Y f(Y_i)||_F^2,

where the inequality follows from A3 (c > 1). Thus,

1 = χ^2 = sup_{Y ∈ {Y_i}} ||P^⊥_{T_Y Y}(∇_Y f(Y))||_F^2 / ||∇_Y f(Y)||_F^2 ≤ sup_{Y ∈ {Y_i}} (||∇_Y f(Y)||_F^2 − (1/c^2) ||∇_Y f(Y)||_F^2) / ||∇_Y f(Y)||_F^2 = 1 − 1/c^2,

which is a contradiction. Thus, ||∇_Y f(Y_j)||_F = 0 for some j.

B. Exact Recovery of Euclidean Distance Matrices

So far, we have shown that the output of LRM-CG converges to the original Euclidean distance matrix D at the sampled entries (i.e., P_E(D_∞) = P_E(D)). In this subsection, we show that all entries of D_i converge to those of the original Euclidean distance matrix D with overwhelming probability. Before we proceed, we briefly discuss the probability model of the sampling operator P_E. Let δ_ij be a Bernoulli random variable that takes the value 1 if d_ij ≤ r (recall that r is the radio communication range) and 0 otherwise. Since the distance is symmetric (i.e., d_ij = d_ji), we have δ_ij = δ_ji. Also, since the diagonal entries of D are all zeros, we define δ_ii = 0 for all i. Then, for a matrix A, P_E(A) can be expressed as

P_E(A) = Σ_{i≠j} δ_ij <A, e_i e_j^T> e_i e_j^T = Σ_{i≠j} δ_ij a_ij e_i e_j^T. (35)

We now characterize the random variables δ_ij using P(d_ij ≤ r). Since d_ij = ||x_i − x_j||_2, it follows that P(d_ij ≤ r) = P(||x_i − x_j||_2 ≤ r).
In this work, we assume that the elements of x_i (the locations of the sensor nodes) are i.i.d. random and uniformly distributed over the unit interval. Denoting p = P(d_ij ≤ r), the probability mass function (PMF) of δ_ij can be expressed as

f(δ_ij; p) = p^{δ_ij} (1 − p)^{1−δ_ij}. (36)

The following lemma provides an explicit expression of p in terms of the radio communication range r.

Lemma 3.9:
a) If k = 2 (2-dimensional Euclidean space),

p = πr^2 − (8/3)r^3 + (1/2)r^4 if 0 ≤ r ≤ 1; p = p_1(r) if 1 ≤ r ≤ √2; p = 1 otherwise.

b) If k = 3 (3-dimensional Euclidean space),

p = (4/3)πr^3 − (3π/2)r^4 + (8/5)r^5 − (1/6)r^6 if 0 ≤ r ≤ 1; p = p_2(r) if 1 ≤ r ≤ √2; p = p_3(r) if √2 ≤ r ≤ √3; p = 1 otherwise,

where

p_1(r) = −2/3 − 2r^2 − (1/2)r^4 + (1/3)(8r^2 + 1)√(r^2 − 1) + 2r^2 sin^{−1}(2/r^2 − 1) + 2/(1 + tan((1/2) sin^{−1}(2/r^2 − 1))), (37)

p_2(r) = −(15π + 37)/30 + ((6π + 1)/2) r^2 − (8π/3) r^3 + ((3π + 3)/2) r^4 + (1/3) r^6 + 2r^4 sin^{−1}(√(1 − 1/r^2)) + (2/15 − (44/15) r^2 − (16/5) r^4) √(r^2 − 1) − 2r^4 sin^{−1}(1/r) − r^4 sin^{−1}(2/r^2 − 1) − (16/3)(1/(1 + tan((1/2) sin^{−1}(2/r^2 − 1))))^3 + 4(1/(1 + tan((1/2) sin^{−1}(2/r^2 − 1))))^2, (38)

p_3(r) = −π/2 + 97/30 − (7/2) r^2 − (3/2) r^4 − (1/6) r^6 + (26/15 + (44/15) r^2 + (8/5) r^4) √(r^2 − 2) + (16/3) r^3 tan^{−1}(√(1 − 2/r^2)) + (2 − 8r^2) tan^{−1}(√(r^2 − 2)) − (2r^4 − 4r^2) sin^{−1}((r^2 − 2)/(r^2 − 1)) + (r^4 + 2r^2) sin^{−1}((3 − r^2)/(r^2 − 1)) − (16/3) r^3 tan^{−1}(1/√(r^4 − 2r^2)) + 8r^2 tan^{−1}(1/√(r^2 − 2)) + (2r^4 − 4r^2) sin^{−1}(1/(r^2 − 1)) + (16/3)(1/(1 + tan((1/2) sin^{−1}((3 − r^2)/(r^2 − 1)))))^3 − 4(1/(1 + tan((1/2) sin^{−1}((3 − r^2)/(r^2 − 1)))))^2. (39)

Proof: See Appendix I.

It is worth noting that p_1(r), p_2(r), and p_3(r) increase monotonically with r (see Fig. 7). We now state our main result.

Theorem 3.10 (exact recovery): Under A1, A2, and A3, the sequence {D_i} generated by LRM-CG converges to the original Euclidean distance matrix D with probability at least

1 − exp(−((1 − c) log((1 − c)/(1 − p)) + c log(c/p))) (40)

for some constant c satisfying 0 < c < 1 and c < p.

Remark 3.11: From Lemma 3.9, we see that p gets close to 1 as the radio communication range r increases. Thus, as shown in Fig. 8, the chance of recovering D increases with r.
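The k = 2, 0 ≤ r ≤ 1 case of Lemma 3.9 can be sanity-checked by Monte Carlo simulation: draw many pairs of points uniformly in the unit square and count how often their distance is at most r. A small sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(5)
r, N = 0.5, 200_000
x = rng.random((N, 2))
y = rng.random((N, 2))
p_mc = np.mean(np.linalg.norm(x - y, axis=1) <= r)

# Closed form for k = 2 and 0 <= r <= 1 (Lemma 3.9a).
p_cf = np.pi * r**2 - (8.0 / 3.0) * r**3 + 0.5 * r**4
assert abs(p_mc - p_cf) < 0.01
```

With N = 200,000 samples the Monte Carlo standard error is roughly 0.001, so agreement within 0.01 is a comfortable margin.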
Remark 3.12: A theoretical guarantee on the recovery of a matrix has been provided by Candes and Recht in [12], and later improved in [34], [35]. In short, if the entries of a low-rank matrix are sampled uniformly at random and the number of samples is sufficiently large, the matrix can be recovered exactly with overwhelming probability. In our setting, the sampled entries of D are determined by the radio communication range r rather than by uniform random sampling; we nevertheless show that D can be recovered exactly with overwhelming probability when r is large.

The following lemma is useful in proving Theorem 3.10.

Lemma 3.13: If ||A||_F < ∞, then there exists a constant t (0 < t < 1) satisfying

t ||A||_F^2 ≤ ||P_E(A)||_F^2, (41)

and hence

||D_i − D||_F ≤ (1/√t) ||P_E(D_i) − P_E(D)||_F. (42)

(Fig. 9 illustrates that the variables δ_ij are not independent: suppose that sensor node 4 is inside the triangle formed by sensor nodes 1, 2, and 3. Then, for a given r, it can be shown that d_14 ≤ max(d_12, d_13), and thus P(d_14 ≤ r | d_12 ≤ r, d_13 ≤ r) = 1, which is not necessarily equal to P(d_14 ≤ r).)

C. Discussion on A3

In this subsection, we show that the assumption A3 (c ||gradf(Y_i)||_F^2 + ǫ ≥ ||∇_Y f(Y_i)||_F^2 for some c > 1 and ǫ > 0) holds true with overwhelming probability when r is large. (When ǫ is sufficiently small, one can simply put c ||gradf(Y_i)||_F^2 ≥ ||∇_Y f(Y_i)||_F^2, which is the strict form of A3.) In order to show this, we first need to define the coherence, a measure of concentration in a matrix [12].

Definition 3.14 (Coherence [12]): Let Q be a subspace of R^n of dimension k and P_Q be the orthogonal projection onto Q. Then the coherence of Q is defined by

µ(Q) = (n/k) max_{1≤i≤n} ||P_Q e_i||_2^2.

Consider a matrix A of rank k whose singular value decomposition is given by

A = UΣV^T = Σ_{i=1}^k σ_i u_i v_i^T, (43)

where U = [u_1 · · · u_k] and V = [v_1 · · · v_k] are the matrices constructed from the left and right singular vectors, respectively, and Σ is the diagonal matrix whose diagonal entries are the σ_i. From (43), we see that the concentration in the vertical direction (concentration in the rows) is
For example, if one of the standard basis vector e i , say e 1 = 1 0 · · · 0 T , lies on the space spanned by u 1 , · · · , u k while others (e 2 , e 3 , · · · ) are orthogonal to this space, then it is clear that nonzero entries of the matrix are only on the first row. Since we need to check the concentration on both vertical and horizontal directions, we need to investigate both µ(U) and µ(V). In this regard, the coherence of a matrix A is defined by [12] µ(A) = max(µ(U), µ(V )). (44) In particular, if A is a positive semidefinite matrix, then it is clear that U = V and thus µ(A) = µ(U). Theorem 3.15: Suppose µ(Y i ) ≤ µ 0 for a given matrix Y i ∈ Y. Then, for any c>1 and ǫ>0, c 2 gradf (Y i ) 2 F + ǫ> ∇ Y f (Y i ) 2 F ,(45) with probability at least 1 − exp − mǫ log mǫ 1 − p + (1 − mǫ) log 1 − mǫ p (46) for some constant m satisfying m>0 and 0<m< 1−p ǫ , provided that n ≥ 2cµ 0 k. Remark 3.16: From Lemma 3.9, we see that p gets close to 1 as r increases. Thus, as shown in Fig. 10, when r is large, (45) holds true with overwhelming probability. Following lemmas are needed to prove the theorem. Lemma 3.17 : ∇ Y f (Y) 2 F − c 2 gradf (Y) 2 F ≤ i =j u =v δ ij | < B, e i e T j >< B, e u e T v >< (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) > |, (47) where B = g(Y) − D and l(A) = 2eye(Sym(A)1) − 2Sym(A). Proof: See Appendix K. Lemma 3.18: If n 2 ≥ 4cµ(Y) 2 k 2 and i = j, then | < (I − cP T Y Y )l(e i e T j ), l(e i e T j ) > | ≥ 4 1 − 4cµ(Y) 2 k 2 n 2 ,(48) where c>0 and l(A) = 2eye(Sym(A)1) − 2Sym(A). Proof: See Appendix L. Lemma 3.19: Let δ 1 , δ 2 , · · · , δ N be identically (not necessarily independently) distributed Bernoulli random variables with P (δ i = 1) = p and P (δ i = 0) = 1 −p. Also, let a 1 , a 2 , · · · , a N be positive values. Let q be the largest integer obeying 2 q ≤ N. 
Then, for any ǫ>0, P ( N i=1 δ i a i ≥ ǫ) ≤ exp − mǫ log mǫ 1 − p + (1 − mǫ) log 1 − mǫ p ,(49)where mǫ = N i=1 a i −ǫ 2 q a min with a min = min i a i , provided that 0<mǫ<1 − p. Proof: See Appendix M. Now, we are ready to prove Theorem 3.15. Proof of Theorem 3.15: Let I = ∇ Y f (Y i ) 2 F − c 2 gradf (Y i ) 2 F , s ij = < g(Y i ) − D, e i e T j >, and g ij = u =v |s ij s uv < (I − c 2 P T Y i Y )l(e i e T j ), l(e u e T v ) > |. In this proof, we will show that P (I ≤ ǫ) is lower bounded by the quantity in (46). First, since I ≤ i =j δ ij g ij from Lemma 3.17 and hence P (I ≤ ǫ | i =j δ ij g ij ≤ ǫ) = 1, we have P ( i =j δ ij g ij ≤ ǫ) = P ( i =j δ ij g ij ≤ ǫ)P (I ≤ ǫ | i =j δ ij g ij ≤ ǫ) = P (I ≤ ǫ, i =j δ ij g ij ≤ ǫ) = P ( i =j δ ij g ij ≤ ǫ | I ≤ ǫ)P (I ≤ ǫ) ≤ P (I ≤ ǫ).(50) What remains is to find out a lower bound of P ( i =j δ ij g ij ≤ ǫ). Equivalently, we find out an upper bound of P ( i =j δ ij g ij ≥ ǫ). First, in order to use Lemma 3.19, we need to find a lower bound of g ij for (i, j) ∈ Ω (Ω = {(i, j) : s ij = 0}). Let s = min (i,j)∈Ω |s ij |, then g ij = u =v |s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) > | ≥ |s 2 ij < (I − c 2 P T Y Y )l(e i e T j ), l(e i e T j ) > | (a) ≥ 4s 2 1 − 4c 2 µ(Y i ) 2 k 2 /n 2 ≥ 4s 2 1 − 4c 2 µ 2 0 k 2 /n 2 , where (a) follows from Lemma 3.18. Now using Lemma 3.19, we have P ( i =j δ ij g ij ≥ ǫ) ≤ exp − mǫ log mǫ 1 − p + (1 − mǫ) log 1 − mǫ p ,(51) where m = ( (i,j)∈Ω g ij − ǫ)/(2 q ǫc 1 ) (c 1 = 4s 2 (1 − 4c 2 µ 2 0 k 2 /n 2 )), provided that 0<m< 1−p ǫ and n ≥ 2cµ 0 k. Here, q is the largest integer obeying 2 q ≤ |Ω| (|Ω| being the cardinality of Ω). From (50) and (51), and noting that P ( i =j δ ij g ij ≤ ǫ) = 1 − P ( i =j δ ij g ij ≥ ǫ), we get the desired result. IV. CONCLUSION In this paper, we have proposed an algorithm to recover the Euclidean distance matrix (and therefore the location map) from partially observed distance information. 
In solving the Frobenius norm minimization problem with a rank constraint, we expressed the Euclidean distance matrix as a function of the fixed rank positive semidefinite matrix. By capitalizing on the Riemannian manifold structure for this set of matrices, we could solve the low-rank matrix completion problem using the modified conjugate gradient algorithm. We have shown from the recovery condition analysis that the proposed LRM-CG algorithm recovers the original Euclidean distance matrix with overwhelming probability under some suitable conditions. We have also demonstrated from numerical experiments that the LRM-CG algorithm is effective in recovering the original Euclidean distance matrix while exhibiting computational cost scalable to the matrix dimension. Given the importance of the location-aware applications and services in the IoT era, we believe that the proposed LRM-CG algorithm will be a useful tool for various localization problems. While our work focused primarily on the centralized localization scenario, extension to the distributed and/or cooperative scenarios would also be interesting direction worth pursuing. We also would like to point out that our recovery guarantee analysis is not so quantitative due to the probabilistic distance measurement model (Section III.B). By introducing the non-probabilistic model (e.g., fixed ratio communication range model), we might come up with more quantitative and insightful analytic results. We leave these interesting explorations for our future work. APPENDIX A PROOF OF LEMMA 2.2 Proof: LetẎ be a tangent vector at Y ∈ Y, i.e.,Ẏ ∈ T Y Y. Also, let S =    Q Q ⊥   B C T C 0     Q T Q T ⊥   : B T = B ∈ R k×k , C ∈ R (n−k)×k    .(52) Then, what we need to show is thatẎ is an element in S. By the definition of T Y Y, there exists a curve γ(t) in Y such that Y = γ(0) andẎ = d dt γ(t) t=0 . For convenience, we denote γ(t) = Z(t). 
Using the eigenvalue decomposition Z(t) = Q(t)Λ(t)Q(t) T , we havė Y = d dt Z(t) t=0 =QΛQ T + QΛQ T + QΛQ T(53)Q = QΩ + Q ⊥ K,(54) where Ω is the k × k skew-symmetric matrix (i.e., Ω = −Ω T ) and K is the (n − k) × (n − k) matrix. From (53) and (54), we havė Y = (QΩ + Q ⊥ K)ΛQ T + QΛQ T +QΛ(QΩ + Q ⊥ K) = Q Q ⊥   ΩΛ +Λ + ΛΩ T ΛK T KΛ 0     Q T Q T ⊥   . If we denote B = ΩΛ +Λ + ΛΩ T and C = KΛ, then one can easily see thatẎ ∈ T Y Y ⊆ S. To complete the proof, we need to show that S = T Y Y. This implies that two vector spaces S and T Y Y have the same dimension. Indeed, from (52), we can easily check that the dimension 10 of S is 1 2 k(2n − k + 1), which is the dimension of Y [21, Proposition 1.1]. 10 The dimension of S is obtained by counting the number of independent entries of an element in S. Since B is a k × k symmetric matrix, the number of independent entries of B is k(k+1) 2 . In addition, since C is an arbitrary (n − k) × k matrix, the number of independent entries of C is (n − k)k. Thus, the dimension of S is k(k+1) 2 + (n − k)k = 1 2 k(2n − k + 1). APPENDIX B PROOF OF PROPOSITION 2.4 Proof: First, we partition a matrix A into two parts: A = A 1 + A 2 where A 1 ∈ T Y Y and A 2 ∈ (T Y Y) ⊥ . Then, it is clear that P T Y Y (A) = A 1 and thus the goal is to find out the closed form expression of A 1 . 
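Footnote 10's count of independent entries can be checked mechanically. A tiny sketch (not from the paper):

```python
# Footnote 10: dim S = k(k+1)/2 + (n-k)k, which should equal k(2n-k+1)/2
# (B symmetric k x k contributes k(k+1)/2 entries, C arbitrary (n-k) x k contributes (n-k)k)
for n in range(1, 40):
    for k in range(1, n + 1):
        assert k * (k + 1) // 2 + (n - k) * k == k * (2 * n - k + 1) // 2
print("dimension identity verified")
```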
From Lemma 2.2, there exist a symmetric matrix B ∈ R k×k and a matrix C ∈ R (n−k)×k such that A 1 = Q Q ⊥   B C T C 0     Q T Q T ⊥   .(55) Since < A 1 , A 2 >= 0, we have 0 = < A 1 , A 2 > = < A 1 , A − A 1 > = < Q Q ⊥   B C T C 0     Q T Q T ⊥   , A − Q Q ⊥   B C T C 0     Q T Q T ⊥   > = <   B C T C 0   ,   Q T Q T ⊥     A − Q Q ⊥   B C T C 0     Q T Q T ⊥     Q Q ⊥ > = <   B C T C 0   ,   Q T Q T ⊥   A Q Q ⊥ −   B C T C 0   > .(56)Let   A 11 A 12 A 21 A 22   =   Q T Q T ⊥   A Q Q ⊥ =   Q T AQ Q T AQ ⊥ Q T ⊥ AQ Q T ⊥ AQ ⊥   ,(57) then we have 0 = <   B C T C 0   ,   A 11 A 12 A 21 A 22   −   B C T C 0   > = < B, A 11 − B > + < C, 1 2 (A 21 + A T 12 ) − C >, where the equality is because <   α 1 α 2 α 3 α 4   ,   β 1 β 2 β 3 β 4   >=< α 1 , β 1 > + < α 2 , β 2 > + < α 3 , β 3 > + < α 4 , β 4 > . Since B and C are chosen arbitrarily, we should have < B, A 11 − B >= 0,(58)< C, 1 2 (A 21 + A T 12 ) − C >= 0.(59) First, it is clear from (59) that C = 1 2 (A 21 + A T 12 ).(60) Next, noting that A 11 = Sym(A 11 ) + Skew(A 11 ), (58) becomes 0 = < B, A 11 − B > = < B, Sym(A 11 ) − B > + < B, Skew(A 11 ) > (a) = < B, Sym(A 11 ) − B > + < Sym(B), Skew(A 11 ) > (b) = < B, Sym(A 11 ) − B >, where (a) is because B is the symmetric matrix (i.e., B = Sym(B)), and (b) is because < Sym(C), Skew(D) >= 0 for any matrices C and D. Since B is any symmetric matrix, we should have B = Sym(A 11 ) = 1 2 (A 11 + A T 11 ).(61) Substituting (60) and (61) into (55), we have A 1 = Q Q ⊥   1 2 (A 11 + A T 11 ) 1 2 (A T 21 + A 12 ) 1 2 (A 21 + A T 12 ) 0     Q T Q T ⊥   ,(62) where A 11 , A 12 , and A 21 are the components of A (see (57)). Now, what remains is to find a closed form expression for A 1 in terms of A. 
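The closed form of Proposition 2.4, P_{T_Y Y}(A) = P_Q Sym(A) + Sym(A)P_Q − P_Q Sym(A)P_Q, can also be sanity-checked numerically. A sketch (not from the paper; assumes numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

X = rng.standard_normal((n, k))
Y = X @ X.T                      # a rank-k PSD point on the manifold
Q, _ = np.linalg.qr(X)           # orthonormal basis of col(Y)
PQ = Q @ Q.T                     # orthogonal projector onto col(Y)
I = np.eye(n)

def sym(A):
    return 0.5 * (A + A.T)

def proj_tangent(A):
    # Proposition 2.4: P_{T_Y Y}(A) = P_Q Sym(A) + Sym(A) P_Q - P_Q Sym(A) P_Q
    S = sym(A)
    return PQ @ S + S @ PQ - PQ @ S @ PQ

A = rng.standard_normal((n, n))
T = proj_tangent(A)

print(np.allclose(T, T.T))                      # tangent vectors are symmetric (Lemma 2.2)
print(np.allclose(proj_tangent(T), T))          # the map is idempotent, i.e. a projector
print(np.allclose((I - PQ) @ T @ (I - PQ), 0))  # the (2,2) block of Lemma 2.2 vanishes
```

All three checks print True: the output is symmetric, the operator is idempotent, and Q_⊥^T T Q_⊥ = 0, exactly the block structure of Lemma 2.2.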
First, we can rewrite (62) as A 1 = Q Q ⊥   1 2 (A 11 + A T 11 ) 1 2 (A T 21 + A 12 ) 1 2 (A 21 + A T 12 ) 1 2 (A 22 + A T 22 )     Q T Q T ⊥   − Q Q ⊥   0 0 0 1 2 (A 22 + A T 22 )     Q T Q T ⊥   = 1 2 Q Q ⊥   A 11 A 12 A 21 A 22     Q T Q T ⊥   + 1 2 Q Q ⊥   A 11 A 12 A 21 A 22   T   Q T Q T ⊥   − 1 2 Q ⊥ (A 22 + A T 22 )Q T ⊥ .(63) Substituting (57) into (63), we have A 1 = 1 2 Q Q ⊥   Q T Q T ⊥   A Q Q ⊥   Q T Q T ⊥   + 1 2 Q Q ⊥   Q T Q T ⊥   A T Q Q ⊥   Q T Q T ⊥   − 1 2 Q ⊥ (Q T ⊥ AQ ⊥ + Q T ⊥ A T Q ⊥ )Q T ⊥ = 1 2 (A + A T ) − 1 2 Q ⊥ Q T ⊥ (A + A T )Q ⊥ Q T ⊥ = Sym(A) − Q ⊥ Q T ⊥ Sym(A)Q ⊥ Q T ⊥ . Since QQ T + Q ⊥ Q T ⊥ = I and P Q = QQ T , we have A 1 = Sym(A) − (I − QQ T )Sym(A)(I − QQ T ) = P Q Sym(A) + Sym(A)P Q − P Q Sym(A)P Q , which is the desired result. APPENDIX C PROOF OF LEMMA 2.6 Proof: Our goal is to find a simple expression of the retraction operator R Y (B). First, since Z = Sym(Z) for Z ∈ Y, we have Y + B − Z 2 F = Y + B − Sym(Z) 2 F = Skew(Y + B) + Sym(Y + B) − Sym(Z) 2 F (64) = Skew(Y + B) 2 F + Sym(Y + B) − Sym(Z) 2 F (65) = Skew(Y + B) 2 F + Sym(Y + B) − Z 2 F ,(66) where (64) for any C and D. Since the first term in (66) is unrelated to Z, it is clear that R Y (B) = arg min Z∈ Y Sym(Y + B) − Z F . Using the eigenvalue decomposition Sym(Y + B) = KΣK T , we have R Y (B) = arg min Z∈ Y KΣK T − Z F = arg min Z∈ Y K Σ − K T ZK K T F (a) = arg min Z∈ Y Σ − K T ZK F , August 4, 2017 DRAFT where (a) is because KUK T 2 F = tr(KU T K T KUK T ) = U 2 F for any matrix U. Now let R Y (B) = Z * , Σ * = KZ * K T , and Q = K T ZK, then Σ * = KZ * K T = arg min Q Σ − Q F .(67) Since Σ is a diagonal matrix, Σ * should also be a diagonal matrix. Also, Σ * 0 and rank(Σ * ) = k. 11 Thus, Σ * is a diagonal matrix with only k positive entries and the rest being zero. That is, Σ * =                 σ 1 0 · · · 0 0 · · · 0 0 σ 2 · · · 0 0 · · · 0 . . . . . . . . . . . . . . . . . . 
0 0 0 ··· σ_k 0 ··· 0
0 0 0 ··· 0  0 ··· 0
 ⋮
0 0 0 ··· 0  0 ··· 0,   (68)

that is, Σ* = diag(σ_1, ..., σ_k, 0, ..., 0), where σ_1 ≥ σ_2 ≥ ··· ≥ σ_k > 0. Recalling that Sym(Y + B) = KΣK^T, we finally have

R_Y(B) = KΣ*K^T = W_k(Y + B),

where the last equality is from (15).

Since Df(Y)[H] = Σ_{i,j} h_ij ∂f(Y)/∂y_ij, it is convenient to compute ∇_Y f(Y) as the unique element of R^{n×n} that satisfies

<∇_Y f(Y), H> = Df(Y)[H]   (69)

for all H. We first compute Df(Y)[H] and then use (69) to obtain the expression of ∇_Y f(Y). Note that the cost function f(Y) = (1/2)‖P_E(g(Y) − D_obs)‖_F^2 can be expressed as f(Y) = h(k(Y)) = (h ∘ k)(Y), where

h(R) = (1/2)‖R‖_F^2,   (70)

k(Y) = P_E(g(Y)) − P_E(D) = (P_E ∘ g)(Y) − P_E(D).   (71)

Thus,

Df(Y)[H] = D(h ∘ k)(Y)[H] = Dh(k(Y))[Dk(Y)[H]].   (72)

For any two square matrices R and A, we have

Dh(R)[A] = Σ_{i,j} a_ij ∂h(R)/∂r_ij = Σ_{i,j} a_ij ∂((1/2)Σ_{p,q} r_pq^2)/∂r_ij = Σ_{i,j} a_ij r_ij = <R, A>.   (73)

By choosing R = k(Y) and A = Dk(Y)[H] in (73), we can rewrite (72) as

Df(Y)[H] = <k(Y), Dk(Y)[H]>
 (a) = <k(Y), D(P_E(g(Y)) − P_E(D))[H]>
     = <k(Y), D((P_E ∘ g)(Y) − P_E(D))[H]>
 (b) = <k(Y), D(P_E ∘ g)(Y)[H]>
 (c) = <k(Y), DP_E(g(Y))[Dg(Y)[H]]>,

where (a) follows from (71), (b) is because P_E(D) is not a function of Y and thus its Frechet differential is zero, and (c) is due to the chain rule. Before we proceed, we remark that if S is a linear operator (i.e., S(α_1 A_1 + α_2 A_2) = α_1 S(A_1) + α_2 S(A_2)), then

DS(A)[B] = S(B)   (74)

for all matrices A and B (see Example 4.4.2 [37]).
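The retraction R_Y(B) = W_k(Y + B) of Lemma 2.6 above amounts to a symmetrized, rank-k truncated eigendecomposition. A minimal numerical sketch (not the paper's code; assumes numpy, and clips negative eigenvalues as an extra safeguard beyond the positivity assumed in (68)):

```python
import numpy as np

def retract(Y, B, k):
    # Lemma 2.6: R_Y(B) keeps the k largest eigenvalues of Sym(Y + B)
    S = 0.5 * ((Y + B) + (Y + B).T)
    w, V = np.linalg.eigh(S)             # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]        # indices of the k largest eigenvalues
    w_top = np.clip(w[top], 0.0, None)   # (68) assumes these are positive; clip to be safe
    return (V[:, top] * w_top) @ V[:, top].T

rng = np.random.default_rng(1)
n, k = 6, 2
X = rng.standard_normal((n, k))
Y = X @ X.T                              # point on the rank-k PSD manifold
B = 0.01 * rng.standard_normal((n, n))   # small perturbation to retract
Z = retract(Y, B, k)

print(np.allclose(Z, Z.T))                       # symmetric
print(np.all(np.linalg.eigvalsh(Z) >= -1e-10))   # positive semidefinite
print(np.linalg.matrix_rank(Z) <= k)             # rank at most k
```

The retracted point stays on the manifold: symmetric, PSD, and of rank at most k.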
Since P E is a linear operator, DP E (g(Y))[Dg(Y)[H]] = P E ([Dg(Y)[H]) and hence Df (Y)[H] = < k(Y), P E (Dg(Y)[H]) > (a) = < P E (k(Y)), Dg(Y)[H] > (b) = < k(Y), Dg(Y)[H] > (c) = < k(Y), g(H) > (d) = < k(Y), 2Sym(1diag(H) T ) − 2Sym(H) > = 2 < k(Y), Sym(1diag(H) T ) > −2 < k(Y), Sym(H) >,(75) where (a) is because P E is a self-adjoint operator 12 Now, the first term in (75) is , (b) is because P E (k(Y)) = P E ((P E • g)(Y) − P E (Y)) = (P E • g)(Y) − P E (Y) = k(Y), (c) is because g is2 < k(Y), Sym(1diag(H) T ) > (a) = 2 < Sym(k(Y)), 1diag(H) T > (b) = 2 < Sym(k(Y)), diag(H)1 T > (c) = 2 < Sym(k(Y))1, diag(H) > (d) = 2 < eye(Sym(k(Y))1), H >,(76) where (a) is because Sym() is a self-adjoint operator, (b) is because < U, W >=< U T , W T >, (c) is because < A, b1 T >= tr(A T b1 T ) = tr((A1) T b) =< A1, b >, and (d) is because eye() is the adjoint operator of diag(). Next, the second term in (75) is − 2 < k(Y), Sym(H) >= −2 < Sym(k(Y)), H > .(77) From (75), (76), and (77), we have Df (Y)[H] = 2 < eye(Sym(k(Y))1), H > −2 < Sym(k(Y)), H > = < 2eye(Sym(k(Y))1) − 2Sym(k(Y)), H >(78) From (69) and (78), we have ∇ Y f (Y) = 2eye(Sym(k(Y))1) − 2Sym(k(Y)), which is the desired result. APPENDIX E PROOF OF LEMMA 3.2 Proof: If Y i is the optimal point (i.e, Y i = arg min Y f (Y)), then gradf (Y i ) = 0 and Y i+1 = Y i . For all α i ≥ 0, we have f (R Y i (α i P i )) = f (Y i+1 ) = f (Y i ) + τ α i < gradf (Y i ), P i >, satisfying A1. Next, we consider the case where Y i = arg min Y f (Y). First, we let g(α) = f (R Y i (αP i )), h(α) = f (Y i ) + τ α < gradf (Y i ), P i > .(79) Note that < gradf (Y i ), P i >≤ 0 (see Lemma 3.7) and g(α) ≥ 0. Since g(0) = f (R Y i (0)) = f (Y i ) = h(0), g(α) and h(α) intersect at α = 0. Also, when τ varies from 0 to 1, the slope of h(α) varies from 0 to | < gradf (Y i ), P i > |. Since dg(α) dα α=0 =< gradf (Y i ), P i >, h(α) is the tangential curve of g(α) at α = 0 when τ = 1. 
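The closed-form Euclidean gradient of Lemma 2.7, ∇_Y f(Y) = 2 eye(Sym(k(Y))1) − 2 Sym(k(Y)), can be validated against finite differences. A sketch (not from the paper; assumes numpy and the EDM map g of (4)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
mask = rng.random((n, n)) < 0.6
mask = np.triu(mask, 1)
mask = mask | mask.T                  # symmetric sampling set E with zero diagonal

def g(Y):
    # eq. (4): g(Y)_uv = y_uu + y_vv - 2*y_uv
    d = np.diag(Y)
    return d[:, None] + d[None, :] - 2 * Y

def f(Y, D):
    # cost (1/2)*||P_E(g(Y) - D)||_F^2
    return 0.5 * np.sum((mask * (g(Y) - D)) ** 2)

def grad(Y, D):
    # Lemma 2.7: grad = 2*eye(Sym(R)1) - 2*Sym(R), with R = k(Y) = P_E(g(Y) - D)
    R = mask * (g(Y) - D)
    S = 0.5 * (R + R.T)
    return 2.0 * np.diag(S @ np.ones(n)) - 2.0 * S

X = rng.standard_normal((n, 2))
Y = X @ X.T
D = g(Y) + mask * 0.3                 # perturbed target so the residual is nonzero

G = grad(Y, D)
eps = 1e-6
for (i, j) in [(1, 3), (2, 2)]:       # one off-diagonal and one diagonal coordinate
    E = np.zeros((n, n)); E[i, j] = 1.0
    fd = (f(Y + eps * E, D) - f(Y - eps * E, D)) / (2 * eps)
    print(abs(fd - G[i, j]) < 1e-4)
```

Both coordinate checks pass: the central difference matches the closed form, including the diagonal entries contributed by the eye(·) term.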
Thus, there exits 0<τ <1/2 such that h(α) intersects g(α) at some point α>0, which means that there exist α i >0 satisfying f (R Y i (α i P i )) = g(α i ) ≤ h(α i ) = f (Y i ) + τ α i < gradf (Y i ), P i >, which completes the proof. APPENDIX F PROOF OF LEMMA 3.5 Proof: First, a lower bound of ∇ Y f (Y i ) F is given by ∇ Y f (Y i ) 2 F (a) = 2eye(R i 1) − 2R i 2 F (b) = 2eye(R i 1) 2 F + 2R i 2 F ≥ 2R i 2 F = 4 P E (D i ) − P E (D) 2 F ,(80) where (a) is from (17) and (b) is from the fact that diagonal entries of R j are all zeros and eye(R j 1) is a diagonal matrix. That is, positions of nonzero elements in eye(R i 1) and R i are disjoint. An upper bound is obtained as follows. ∇ Y f (Y i ) F ≤ 2eye(R i 1) F + 2R i F (a) ≤ 2R i 1 2 + 2R i F (b) ≤ 2 R i F 1 2 + 2 R i F ≤ (2 √ n + 2) R i F ≤ (2 √ n + 2) P E (D i ) − P E (D) F ,(81) where (a) is because eye(b) F = b 2 for any vector b, and (b) is because Ab 2 ≤ A F b 2 for any matrix A and any vector b. By combining (80) and (81), we obtain the desired result. APPENDIX G PROOF OF LEMMA 3.7 Proof: Recall from (25) that we have P i+1 = −gradf (Y i+1 ) + β i+1 P T Y i+1 Y (P i ). Thus, < −gradf (Y i+1 ), P i+1 > = − gradf (Y i+1 ) 2 F + β i+1 < −gradf (Y i+1 ), P T Y i+1 Y (P i ) > = − gradf (Y i+1 ) 2 F + β i+1 < −P T Y i+1 Y (gradf (Y i+1 )), P i > (a) = − gradf (Y i+1 ) 2 F + β i+1 < −gradf (Y i+1 ), P i >, where (a) is because gradf (Y i+1 ) ∈ T Y i+1 Y. Then we have < gradf (Y i+1 ), P i+1 > + gradf (Y i+1 ) 2 F = β i+1 |< gradf (Y i+1 ), P i >| (a) ≤ β i+1 µ < −gradf (Y i ), P i >, where (a) is from the assumption A2. If we denote ζ i = − <gradf (Y i ),P i > gradf (Y i ) 2 F , then −ζ i+1 gradf (Y i+1 ) 2 F + gradf (Y i+1 ) 2 F ≤ β i+1 µζ i gradf (Y i ) 2 F , and also | − ζ i+1 + 1| ≤ µβ i+1 gradf (Y i ) 2 F gradf (Y i+1 ) 2 F ζ i .(82) From Fletcher-Reeves rule in (34), we have β i+1 gradf (Y i ) 2 F gradf (Y i+1 ) 2 F = 1 and thus | − ζ i+1 + 1| ≤ µζ i . 
In other words, ζ i+1 ≥ 1 − µζ i ,(83) and ζ i+1 ≤ 1 + µζ i ζ i ≤ 1 + µζ i−1 . . . ζ 2 ≤ 1 + µζ 1 , where we set ζ 1 = 1. Thus, ζ i ≤ i−1 j=0 µ j = 1 − µ i 1 − µ .(84) From (83) and (84), we finally have ζ i+1 ≥ 1 − µ 1 − µ i 1 − µ = 1 − 2µ + µ i+1 1 − µ , which is the desired result. APPENDIX H PROOF OF LEMMA 3.8 Proof: From (24), we have gradf (Y i ) = P T Y i Y (∇ Y f (Y i )), where ∇ Y f (Y i ) is the Euclidean gradient. Let P ⊥ T Y i Y be the orthogonal operator on the com- plement space of T Y i Y, then we obtain ∇ Y f (Y i ) 2 F = P T Y i Y (∇ Y f (Y i )) + P ⊥ T Y i Y (∇ Y f (Y i )) 2 F = P T Y i Y (∇ Y f (Y i )) 2 F + P ⊥ T Y i Y (∇ Y f (Y i )) 2 F ,(85) and hence gradf (Y i ) 2 F = P T Y i Y (∇ Y f (Y i )) 2 F = ∇ Y f (Y i ) 2 F − P ⊥ T Y i Y (∇ Y f (Y i )) 2 F .(86) Now, we define (85)). From (86) and (87), we have χ =      sup Y∈{Y i } ∞ i=1 P ⊥ T Y Y (∇ Y f (Y)) F ∇ Y f (Y) F if ∇ Y f (Y) F = 0 1 otherwise .(87)Note that 1 ≥ χ ≥ 0 because ∇ Y f (Y) F ≥ P ⊥ T Y Y (∇ Y f (Y)) F (seegradf (Y i ) 2 F =   1 − P ⊥ T Y i Y (∇ Y f (Y i )) 2 F ∇ Y f (Y i ) 2 F   ∇ Y f (Y i ) 2 F (88) ≥ (1 − χ 2 ) ∇ Y f (Y i ) 2 F .(89) Now, what remains is to show that ∇ Y f (Y i ) 2 F ≥ 8f (Y i ). Indeed, from Lemma 2.7, we have ∇ Y f (Y i ) 2 F = eye((R + R T )1) − 2R 2 F , where R = P E (g(Y i )) − P E (D). Noting that R is symmetric with zero diagonal entries r ii = 0, we have 1 4 ∇ Y f (Y i ) 2 F = eye(R1) 2 F + R 2 F − 2 < eye(R1), R > = R1 2 2 + R 2 F − 2 i j r ij r ii = R1 2 2 + R 2 F ≥ R 2 F = 2f (Y i ).(90) By substituting (90) into (89), we obtain the desired result. APPENDIX I PROOF OF LEMMA 3.9 Proof: By denoting x i = x i1 x i2 · · · x ik T , we have p = P (d ij ≤ r) = P ( x i − x j 2 2 ≤ r 2 ) = P ( k t=1 (x it − x jt ) 2 ≤ r 2 ). Before finding the general form of p, we compute the distribution of Y = (X 1 − X 2 ) 2 where X 1 and X 2 are i.i.d. uniformly distributed random variables at unit interval. 
Let Z = X 1 − X 2 , then the cdf of Z is given by F Z (z) = P (Z ≤ z) = P (X 1 − X 2 ≤ z) = 1 0 P (X 1 ≤ z + x 2 X 2 = x 2 )f X 2 (x 2 )dx 2 = 1 0 P (X 1 ≤ z + x 2 )f X 2 (x 2 )dx 2 = 1 0 F X 1 (z + x 2 )f X 2 (x 2 )dx 2 . Thus, f Z (z) = d dz F Z (z) =    1 − |z| if |z| ≤ 1 0 otherwise Now, we can easily obtain the cdf of Y = Z 2 F Y (y) =          1 if 1 ≤ y 2 √ y − y if 0 ≤ y ≤ 1 0 if y ≤ 0 . and also the pdf of Y f Y (y) =    1 √ y − 1 if 0 ≤ y ≤ 1 0 otherwise .(91) Using this, we can compute p as p = P r k t=1 y t ≤ r 2 = · · · α 1 +α 2 +...+α k ≤r 2 f Y 1 ,...,Y k (α 1 , ..., α k )dα 1 ...dα k = · · · α 1 +α 2 +...+α k ≤r 2 f Y 1 (α 1 )...f Y k (α k )dα 1 ...dα k ,(92) where y t = (x it − x jt ) 2 . When the sensor nodes are located in two dimensional Euclidean space (k = 2), we have p = α 1 +α 2 ≤r 2 f Y 1 (α 1 )f Y 2 (α 2 )dα 1 dα 2 . Let t = α 1 + α 2 , then we have p = α 1 +α 2 ≤r 2 f Y 1 (α 1 )f Y 2 (α 2 )dα 1 dα 2 = r 2 0 1 0 f Y 1 (α 1 )f Y 2 (t − α 1 )dα 1 dt = r 2 0 f Y 1 (t) * f Y 2 (t)dt.(93) where h 3 (u), h 4 (u), and h 5 (u) are given by h 3 (u) = 4π 3 u √ u + 8 5 u 2 √ u − 3π 2 u 2 − 1 6 u 3 , h 4 (u) = 3πu − 8π 3 u √ u + 3π + 4 2 u 2 + 1 6 u 3 + 1 6 (u − 1) 3 − π 3 − 5 2 − 6 √ u − 1 − 28 3 (u − 1) √ u − 1 − 16 5 (u − 1) 2 √ u − 1 − 2u 2 sin −1 1 u 2u 2 sin −1 1 − 1 u − u 2 sin −1 2 u − 1 − 16 3 1 + tan 1 2 sin −1 2 u − 1 3 + 4 1 + tan 1 2 sin −1 2 u − 1 2 , h 5 (u) = 2(u − 2) √ u − 2 − 2u 2 − 1 6 (u − 1) 3 − 3u + 29 2 + 8 5 (u − 2) 2 √ u − 2 + 22 3 (u − 2) √ u − 2 + 14 √ u − 2 + 16 √ 2 3 − 12 π + 2 tan −1 √ u − 2 −8u tan −1 √ u − 2 + 16 3 u √ u tan −1 u − 2 u − 2u(u − 2) sin −1 u − 2 u − 1 +u(u + 2) sin −1 3 − u u − 1 − 16 3 u √ u tan −1 1 u(u − 2) + 8u tan −1 1 u − 2 +2u(u − 2) sin −1 1 u − 1 + 16 3 1 1 + tan 1 2 sin −1 3−u u−1 3 −4 1 1 + tan 1 2 sin −1 3−u u−1 2 . By denoting p 2 (r) = h 3 (1) + h 4 (r 2 ) and p 3 (r) = h 3 (1) + h 4 (2) + h 5 (r 2 ), we get the desired result for k = 3. 
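The cdf F_Y(y) = 2√y − y derived above for Y = (X_1 − X_2)^2 is easy to spot-check by simulation. A sketch (not from the paper; standard library only):

```python
import math
import random

def F_Y(y):
    # cdf of Y = (X1 - X2)^2 with X1, X2 i.i.d. uniform on [0, 1]
    if y <= 0:
        return 0.0
    if y >= 1:
        return 1.0
    return 2.0 * math.sqrt(y) - y

rng = random.Random(3)
trials = 200_000
y0 = 0.25
hits = sum((rng.random() - rng.random()) ** 2 <= y0 for _ in range(trials))
print(abs(hits / trials - F_Y(y0)))  # close to zero
```

At y0 = 0.25 the closed form gives F_Y = 0.75, and the empirical frequency agrees to about three decimal places.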
APPENDIX J PROOF OF LEMMA 3.13 Proof: Since A 2 F = P E (A) 2 F + P ⊥ E (A) 2 F , we can rewrite t A 2 F ≤ P E (A) 2 F as t A 2 F ≤ A 2 F − P ⊥ E (A) 2 F , and also P ⊥ E (A) 2 F ≤ (1 − t) A 2 F .(97) To show that (97) holds true with overwhelming probability, we first have P ( P ⊥ E (A) 2 F ≥ (1 − t) A 2 F ) = P (exp(ǫ P ⊥ E (A) 2 F ) ≥ exp(ǫ(1 − t) A 2 F )) (a) ≤ exp(−ǫ(1 − t) A 2 F )E exp(ǫ P ⊥ E (A) 2 F ) (b) = exp(−ǫ(1 − t) A 2 F )E i =j exp(ǫ(1 − δ ij )a 2 ij ) ,(98) for any ǫ>0, where (a) follows from the Markov inequality and (b) is from (35) P ⊥ E (A) 2 F = i =j (1 − δ ij )a 2 ij (see where (a) is because E In summary, we have P ( P ⊥ E (A) 2 F ≥ (1 − t) A 2 F ) ≤ g(ǫ), where g(ǫ) = exp(mtNǫa 2 min ) (1 − p + p exp(−Nǫa 2 min )) (m = A 2 F /(a 2 min N)). If 0<mt<1, we obtain the minimum value of g(ǫ) at ǫ * = 1/(Na 2 min ) log((1 − mt)p/((1 − p)mt)). Thus, P ( P ⊥ E (A) 2 F ≥ (1 − t) A 2 F ) ≤ g(ǫ * ) = 1 − p 1 − mt 1−mt p mt mt = exp − (1 − mt) log 1 − mt 1 − p + mt log mt p , which is the desired result. where (a) is because eye(e i e T j 1) = eye(e i ). Let δ = e i − e j , then l(e i e T j ) = δδ T . Also, if n 2 ≥ 4cµ(Y) 2 k 2 , then | < (I − cP T Y Y )l(e i e T j ), l(e i e T j ) > | = | < (I − cP T Y Y )δδ T , δδ T > | = | < δδ T − cP T Y Y (δδ T ), δδ T > | = | < δδ T , δδ T > −c < P T Y Y (δδ T ), δδ T > | (a) = | < δδ T , δδ T > −c < P Q δδ T + δδ T P Q − P Q δδ T P Q , δδ T > | = | < δδ T , δδ T > −c(< P Q δδ T , δδ T > + < δδ T P Q , δδ T > − < P Q δδ T P Q , δδ T >)| (b) = |δ T δδ T δ − c(δ T P Q δδ T δ + δ T δδ T P Q δ − δ T P Q δδ T P Q δ)| (c) = | δ 4 2 − c(2 P Q δ 2 2 δ 2 2 − P Q δ 4 2 )| (d) = |4 − 4c P Q δ 2 2 + c P Q δ 4 2 |. 
(e) ≥ 4 − 4c 4µ(Y) 2 r 2 n 2 + c 16µ(Y) 4 r 4 n 4 ≥ 4 1 − 4cµ(Y) 2 r 2 n 2 , where (a) follows from Proposition 2.4, (b) is because < X, zz T >= tr(Xzz T ) = z T Xz, (c) is because P T Q P Q = P Q and thus δ T P Q δ = δ T P T Q P Q δ = P Q δ 2 2 , (d) is because δ 2 2 = 2 for i = j, and (e) is because P Q δ 2 = P Q e i − P Q e j 2 ≤ P Q e i 2 + P Q e j 2 ≤ 2µ(Y)r/n (see Definition 3.14). APPENDIX M PROOF OF LEMMA 3.19 Proof: For any t>0, we have P ( N i=1 δ i a i ≥ ǫ) = P (exp(t N i=1 δ i a i ) ≥ exp(tǫ)) (a) ≤ exp(−tǫ)E exp(t N i=1 δ i a i ) (b) ≤ exp(−tǫ + t N i=2 n +1 a i )E exp(t 2 n i=1 δ i a i ) = exp(−tǫ + t N i=2 n +1 a i )E 2 n i=1 exp(tδ i a i ) (c) ≤ exp(−tǫ + t N i=2 n +1 a i ) 2 n i=1 E (exp(tδ i a i )) 2 n 1 2 n = exp(−tǫ + t N i=2 n +1 a i ) 2 n i=1 E [exp(tδ i a i 2 n )] 1 2 n (d) = exp(−tǫ + t N i=2 n +1 a i ) 2 n i=1 (1 − p + p exp(t2 n a i )) Fig. 1 . 1Sensor nodes deployed to measure not only environment information but also their pairwise distances. The observed distances are represented by two-sided arrows. The shadow areas represents the radio communication range of the sensor nodes. Consider the case that sensor nodes are randomly distributed in 2D Euclidean space, then rank(X) = 1 if and only if all of nodes are collinear. This event happens if there exists a constant ρ such that xi1 = ρxi2 for any i-th row. The probability of this event n i=1 P (xi1 = ρxi2) = [P (x11 = ρx12)] n becomes negligible when the number of sensor nodes are sufficiently large. Fig. 2 . 2Illustration of (a) the tangent space TY Y and (b) the retraction operator RY at a point Y in the embedded manifold Y. Fig. 3 . 3Riemannian gradient gradf (Y) is defined as the projection of the Euclidean gradient ∇Yf (Y) onto the tangent space TY Y while the Euclidean gradient is a direction for which the cost function is reduced most in R n×n , Riemannian gradient is the direction for which the cost function is reduced most in the tangent space TY Y. 
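The identity l(e_i e_j^T) = (e_i − e_j)(e_i − e_j)^T used in the proof of Lemma 3.18 (Appendix L) can be confirmed numerically. A sketch (not from the paper; assumes numpy):

```python
import numpy as np

n = 6

def l(A):
    # l(A) = 2*eye(Sym(A)1) - 2*Sym(A), the operator of Lemmas 3.17-3.18
    S = 0.5 * (A + A.T)
    return 2.0 * np.diag(S @ np.ones(n)) - 2.0 * S

i, j = 1, 4
E = np.zeros((n, n)); E[i, j] = 1.0      # E = e_i e_j^T
delta = np.zeros(n); delta[i] = 1.0; delta[j] = -1.0   # delta = e_i - e_j
print(np.allclose(l(E), np.outer(delta, delta)))       # True
```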
Hence, the vector transport of Pi−1 is the orthogonal projection of Pi−1 onto TY i Y Algorithm 1: LRM-CG algorithm 1 Input: D obs : the observed matrix, P E : the sampling operator, ǫ: tolerance, µ ∈ (0 1): given constant, T : number of iterations. 10 − SNR 10 . In obtaining the performance for each algorithm, we perform at least 1000 independent trials. 1) Convergence Efficiency: As a performance measures, we use two types of the mean square errors (MSE): MSE at sampled entries (MSE s ) and MSE at all entries (MSE a ), which are defined respectively as . 4 ,Fig. 4 . 44we plot the log-scale MSE as a function of the number of iterations for the 2-dimensional location vectors. Note that the performance results are obtained for the scenario where 200 sensor nodes randomly distributed. We observe that the log-scale MSE decreases linearly with the number of iteration, which in turn implies that the MSE decreases exponentially with the number of iterations. For example, if r = 0.5, it takes less than 25 iterations to achieve 10 −2 and 53 iterations to achieve 10 −4 . Also, as expected, required number of iterations to achieve a given performance level gets smaller as r increases. To further examine the convergence speed of LRM-CG, we measured the running The MSE performance of LRM-CG: (a) M SEs and (b) M SEa for k = 2 (2-dimensional location vectors). Fig. 5 . 5The MSE performance of the matrix completion algorithms for noiseless scenario for (a) 2-dimensional and (b) 3-dimensional location vectors. Fig. 6 . 6The MSE performance of the algorithms for noisy scenario for (a) 2-dimensional and (b) 3-dimensional location vectors. this subsection, we show that {P Ω (D i )} ∞ 1 , sequence of matrices generated by LRM-CG at sampled points, converges to P Ω (D). For example, if D = By choosing γ = min i γ i , we get the desired result. Now, what remains is to show that lim i→∞ P E (D i ) − P E (D) F = 0 under (32). 
From (30) and (31), we have 0 ≤ χ ≤ 1, and thus we need to consider two cases: 1) χ < 1 case: In this case, one can easily show that 1 > (1 − γ(1 − χ^2))^{1/2}. Using this together with (32), we have

where e_i is the standard basis of R^n. For example,

Lemma 3.9: If an element of the location vectors x_i is i.i.d. and uniform on the unit interval, then a) If k = 2 (2-dimensional Euclidean space),

Theorem 3.10: Under the assumption A1, A2, and A3, the output sequence {D_i = g(Y_i)}_{i=1}^∞ of LRM-CG converges globally to the Euclidean distance matrix D (lim_{i→∞} ‖D_i − D‖_F = 0) with the probability at least

Fig. 7. The sampling parameter p gets close to 1 as r increases. Here, elements of x_i are i.i.d. random variables according to the uniform distribution over the unit interval.

at random, then the n × n matrix with rank k can be recovered with overwhelming probability as long as the number of measurements m follows m = O(kn^{1.2} log(n)). The analysis in these works is based on the assumption that observed entries are sampled i.i.d. (and follow a Bernoulli or uniform distribution). In contrast, our analysis does not require an independence assumption among the sampled entries of D, since the elements of D are related. In other words, the random variables δ_ij do not need to be independent. For example, consider the scenario illustrated in Fig. 9. Since the sensor node 4 is located inside the triangle formed by three sensor nodes (nodes 1, 2, and 3), one can see that d_14 ≤ max(d_12, d_13). Thus, if d_12 and d_13 are already known (i.e., d_12 ≤ r, d_13 ≤ r), then so is d_14. In other words, P(δ_14 = 1 | δ_12 = δ_13 = 1) = 1, while P(δ_14 = 1) is not necessarily one. In our work, we do not put any assumption on the independence of the entries

Lemma 3.13: For a given matrix A, if the diagonal entries are zeros (i.e., a_ii = 0 for all i)

Fig. 8. The Euclidean distance matrix D can be recovered with overwhelming probability in (a) 2D and (b) 3D Euclidean space when r is large.
with the probability at least 1 − exp − (1 − mt) log 1≥ 1, provided that 0<mt<p<1. Proof: See Appendix J. Proof of Theorem 3.10: Let A = D i − D. Then from Lemma 3.13, we have log 1 = 0 ( 10with the probability at least 1 − exp − (1 − mt) satisfying m ≥ 1 and 0<m< p t . Combining this with lim i→∞ P E (D i ) − P E (D) F Theorem 3.3), we can conclude that lim i→∞ D i − D F = 0, with the probability at least 1 − exp − (1 − c) log 1−c 1−p + c log c p , where c = mt. i→∞ gradf (Yi) F = 0), so does the corresponding sequence of the Euclidean gradient ( lim i→∞ ∇Yf (Yi) F = 0). Fig. 10 . 10The condition (45) holds true with overwhelming probability in (a) 2D and (b) 3D Euclidean space when the radio communication range r is large. Since Q T Q = I, Q is an element of the Stiefel manifold Q = {A : A T A = I, A ∈ R n×k }. The tangent vector of Q at the point Q is given by [19, Example 3.5.2] is because Sym(A) + Skew(A) = A and (65) is because < Skew(C), Sym(D) >= 0 also a linear function and thus Dg(Y)[H] = g(H), and (d) is due to (4). A 2 F 2). Let Ω = {(i, j) : a ij = 0} (i.e., Ω is the index set of nonzero entries of A), and N = 2 ⌊log 2 |Ω|⌋ (|Ω| is the cardinality of Ω). Also, let Ω be a subset of Ω such that| Ω| = N, ) 1 − p + p exp(−Nǫa 2 min ) , random variable A i and M = 2 q (q ≥ 1), (b) is because a min = min (i,j)∈Ω |a ij |, and (c) is because | Ω| = N. δ We first denote B = g(Y) − D, and then express B = ij < B, e i e T j > e i e T j , which givesP E (B) = i =j δ ij < B, e i e T j > e i e T j . Recalling that l(A) = 2eye(Sym(A)1) − 2Sym(A), we have ∇ Y f (Y) = l(P E (ij s ij l(e i e T j ), where s ij =< B, e i e T j > and (a) is because l(αC + βD) = αl(C) + βl(D). 
Now, if we let I = ∇ Y f (Y) 2 F − c 2 gradf (Y) 2 F , then we have I = i =j u =v δ ij δ uv s ij s uv < l(e i e T j ), l(e u e T v ) > −c 2 i =j u =v δ ij δ uv s ij s uv < P T Y Y (l(e i e T j )), P T Y Y (l(e u e T v )) > (a) = i =j u =v δ ij δ uv s ij s uv < l(e i e T j ), l(e u e T v ) > −c 2 i =j u =v δ ij δ uv s ij s uv < P T Y Y (l(e i e T j )), l(e u e T v ) > = i =j u =v δ ij δ uv s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) > ≤ i =j u =v δ ij δ uv |s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), =j u =v (δ 2 ij + δ 2 uv )|s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) > | = 1 2 i =j u =v (δ ij + δ uv )|s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) |s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) |s ij s uv < (I − c 2 P T Y Y )l(e u e T v ), l(e i e T j ) > | |s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) |s ij s uv < l(e u e T v ), (I − c 2 P T Y Y )l(e i e T j ) |s ij s uv < (I − c 2 P T Y Y )l(e i e T j ), l(e u e T v ) > |, where (a) is because < P T Y Y (E), F >=< P T Y Y (E), P T Y Y (F) > + < P T Y Y (E), P ⊥ T Y Y (F) >=< P T Y Y (E), P T Y Y (F) >, (b) is because x 2 + y 2 ≥ 2xy (x,y ≥ 0), and (c) is because (I − c 2 P T Y Y ) Recalling that l(A) = 2eye(Sym(A)1) − 2Sym(A), we have l(e i e T j ) = 2eye(Sym(e i e T j )1) − 2Sym(e i e T j ) = eye(e i e T j 1 + e j e T i 1) − e i e T j − e j e T i (a) = eye(e i + e j ) − e i e T j − e j e T i = e i e T i + e j e T j − e i e T j − e j e T i = (e i − e j )(e i − e j ) T , t2 n a i )((1 − p) exp(−t2 n a i ) + p) i )((1 − p) exp(−t2 n a min ) + p),where (a) follows from the Markov's inequality, (b) is because δ i ≤ 1 for all i, ) 1/M for positive random variables X i and M = 2 q (q ≥ 1), (d) is because δ i is Bernoulli random variable with P (δ i = 1) = p, and (e) is because a min = min exp(−t2 n a min )+p), then the minimum of g(t) is obtainedAugust 4, 2017 DRAFTat t * = 1/(2 n a min ) ln((1 − mǫ)(1 − p)/(mǫp)) (m = ( N i=1 a i − ǫ)/(2 n ǫa min 
)). This yields the bound

exp{−[mǫ log(mǫ/(1 − p)) + (1 − mǫ) log((1 − mǫ)/p)]}

when 0 < mǫ < 1 − p, which establishes the lemma.

TABLE I. EXPERIMENTAL RESULTS FOR EUCLIDEAN DISTANCE MATRIX COMPLETION.

Size (n × n)   | Euclidean dimension | r   | time (s) | Number of iterations | MSE_a
500 × 500      | 2                   | 0.5 | 9        | 90                   | 1.0 × 10^{-6}
500 × 500      | 3                   | 0.7 | 10       | 92                   | 3.4 × 10^{-6}
1,000 × 1,000  | 2                   | 0.5 | 38       | 79                   | 1.3 × 10^{-6}
1,000 × 1,000  | 3                   | 0.7 | 45       | 98                   | 1.4 × 10^{-6}
5,000 × 5,000  |                     |     |          |                      |

obs requires 9k|E| flops (|E| is the number of the observed entries of D_obs). In addition, since it takes (9k + 2)|E| flops to compute Sym(

APPENDIX D
PROOF OF LEMMA 2.7

Proof: In general, the Euclidean gradient ∇_Y f(Y) can be obtained by taking partial derivatives with respect to each coordinate of the Euclidean space, since ∇_Y f(Y) is interpreted as a matrix whose inner product with an arbitrary matrix H becomes the Frechet differential Df(Y)[H] of f at Y [36]; that is,

S is an orthogonal Stiefel manifold embedded in R^{n×k} [20, Theorem 5.12].

A tangent vector on a smooth Riemannian manifold is a generalization of the notion of a tangent vector to curves and surfaces in Euclidean space. The collection of all tangent vectors at each point of a smooth manifold forms a tangent space at that point, which is indeed a vector space.

There are a number of ways to choose β_i. See, e.g., [24], [28], [29], [30].

One can also find the matlab source code at http://islab.snu.ac.kr/publication.html.

Since Z* ∈ Y, Z* ⪰ 0 and also Σ* = K^T Z* K ⪰ 0 and rank(Σ*) = rank(K^T Z* K) = rank(Z*) = k.

Let A and B be two linear operators in R^{n×n}. If <A(A), B> = <A, B(B)>, we say A and B are adjoint to each other in R^{n×n}. Further, if A ≡ B, then we say it is a self-adjoint operator.
After some manipulations, we have […]. Let […], where h₁(u) and h₂(u) are given by […]. Denoting p₁(r) = h₁(1) + h₂(r²), we get the desired result for k = 2. Similarly, when the sensor nodes are located in three-dimensional Euclidean space (k = 3), after some manipulations we have […], where […]

REFERENCES
[1] L. Nguyen, S. Kim, and B. Shim, "Localization in internet of things network: Matrix completion approach," Proc. Inform. Theory Applicat. Workshop, 2016.
[2] M. Delamo, S. Felici-Castell, J. J. Perez-Solano, and A. Foster, "Designing an open source maintenance-free environmental monitoring application for wireless sensor networks," J. Syst. Softw., vol. 103, pp. 238-247, May 2015.
[3] G. Hackmann, W. Guo, G. Yan, Z. Sun, C. Lu, and S. Dyke, "Cyber-physical codesign of distributed structural health monitoring with wireless sensor networks," IEEE Trans. Parallel Distrib. Syst., vol. 25, pp. 63-72, Jan. 2014.
[4] A. Pal, "Localization algorithms in wireless sensor networks: Current approaches and future challenges," Netw. Protocols Algorithms, vol. 2, no. 1, pp. 45-74, 2010.
[5] V. J. Hodge, S. O'Keefe, M. Weeks, and A. Moulds, "Wireless sensor networks for condition monitoring in the railway industry: A survey," IEEE Trans. Intell. Transp. Syst., vol. 16, pp. 1088-1106, Jun. 2015.
[6] P. Rawat, K. D. Singh, H. Chaouchi, and J. M. Bonnin, "Wireless sensor networks: a survey on recent developments and potential synergies," J. Supercomput., vol. 68, no. 1, pp. 1-48, Apr. 2014.
[7] W. S. Torgerson, "Multidimensional scaling: I. theory and method," Psychometrika, vol. 17, no. 4, pp. 401-419, 1952.
[8] R. Parker and S. Valaee, "Vehicular node localization using received-signal-strength indicator," IEEE Trans. Veh. Technol., vol. 56, pp. 3371-3380, Nov. 2007.
[9] H. C. So, Y. T. Chan, and F. K. Chan, "Closed-form formulae for time-difference-of-arrival estimation," IEEE Trans. Signal Process., vol. 56, pp. 2614-2620, Jun. 2008.
[10] Y. Shang, W. Ruml, Y. Zhang, and M. Fromherz, "Localization from mere connectivity," in Proc. ACM Symp. Mobile Ad Hoc Netw. Comput., Annapolis, Maryland, USA, Jun. 2003, pp. 201-212.
[11] M. Fazel, "Matrix rank minimization with applications," Ph.D. dissertation, Standford Univ., Standford, CA, 2002.
[12] E. J. Candes and B. Recht, "Exact matrix completion via convex optimization," Found. Comput. Math., vol. 6, pp. 717-772, 2009.
[13] J. F. Cai, E. J. Candes, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM J. Optimiz., vol. 20, no. 4, pp. 1956-1982, 2010.
[14] K. Lee and Y. Bresler, "Admira: Atomic decomposition for minimum rank approximation," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4402-4416, 2010.
[15] Z. Wang, M. J. Lai, Z. Lu, W. Fan, H. Davulcu, and J. Ye, "Orthogonal rank-one matrix pursuit for low rank matrix completion," SIAM J. Sci. Comput., no. 1, pp. A488-A514, 2015.
[16] Z. Wen, W. Yin, and Y. Zhang, "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm," Math. Program. Comput., no. 4, pp. 333-361, 2012.
[17] T. Hastie, R. Mazumder, J. D. Lee, and R. Zadeh, "Matrix completion and low-rank svd via fast alternating least squares," J. Mach. Learning Research, no. 16, pp. 3367-3402, 2015.
[18] B. Vandereycken, "Low-rank matrix completion by riemannian optimization," SIAM J. Optimiz., vol. 23, no. 2, pp. 1214-1236, 2013.
[19] P. A. Absil, R. Mahony, and R. Sepulchre, Optimization Algorithms on Matrix Manifolds. Princeton Univ. Press, 2008.
[20] J. Lee, Introduction to Smooth Manifolds, 2nd ed. New York, NY: Springer, 2013, vol. 218.
[21] U. Helmke and J. B. Moore, Optimization and Dynamical Systems. London: Springer-Verlag, 1994.
[22] B. Mishra, G. Meyer, S. Bonnabel, and R. Sepulchre, "Fixed-rank matrix factorizations and riemannian low-rank optimization," Comput. Stat., no. 3-4, pp. 591-621, 2014.
[23] P. A. Absil and J. Malick, "Projection-like retractions on matrix manifolds," SIAM J. Optimiz., vol. 22, pp. 135-158, 2012.
[24] Y. H. Dai, "Nonlinear conjugate gradient methods," Wiley Encyclopedia of Operations Research and Manage. Sci., 2011.
[25] L. Armijo, "Minimization of functions having lipschitz continuous first partial derivatives," Pacific J. Math., no. 1, 1966.
[26] C. T. Kelley, Iterative Methods for Optimization. Philadelphia, PA: Frontiers in Applied Mathematics, 1999.
[27] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," NBS, 1952.
[28] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," J. Comput., no. 2, pp. 149-154, 1964.
[29] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM J. Optimiz., no. 1, pp. 177-182, 1999.
[30] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM J. Optimiz., no. 1, pp. 170-192, 2005.
[31] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," IEEE Trans. Pattern Anal. Mach. Intell., no. 9, pp. 2117-2130, Sep. 2013.
[32] P. Wolfe, "Convergence conditions for ascent methods," SIAM Rev., no. 2, pp. 226-235, 1969.
[33] H. Sato and T. Iwai, "A new, globally convergent riemannian conjugate gradient method," Optim. J. Math. Program. Oper. Res., no. 4, pp. 1011-1031, 2015.
[34] E. J. Candes and T. Tao, "The power of convex relaxation: Near-optimal matrix completion," IEEE Trans. Inf. Theory, no. 5, pp. 2053-2080, May 2010.
[35] B. Recht, "A simpler approach to matrix completion," J. Mach. Learning Research, vol. 12, pp. 3413-3430, Dec. 2011.
[36] W. Cheney, Analysis for Applied Mathematics. New York: Springer, 2013.
[37] V. Hutson, J. Pym, and M. Cloud, Applications of Functional Analysis and Operator Theory. Elsevier, 2005.
[]
[ "Bootstrapping MN and Tetragonal CFTs in Three Dimensions", "Bootstrapping MN and Tetragonal CFTs in Three Dimensions" ]
[ "Andreas Stergiou \nTheoretical Division\nMS B285\nLos Alamos National Laboratory\n87545Los AlamosNMUSA\n" ]
[ "Theoretical Division\nMS B285\nLos Alamos National Laboratory\n87545Los AlamosNMUSA" ]
[]
Conformal field theories (CFTs) with MN and tetragonal global symmetry in d = 2 + 1 dimensions are relevant for structural, antiferromagnetic and helimagnetic phase transitions. As a result, they have been studied in great detail with the ε = 4 − d expansion and other field theory methods.The study of these theories with the nonperturbative numerical conformal bootstrap is initiated in this work. Bounds for operator dimensions are obtained and they are found to possess sharp kinks in the MN case, suggesting the existence of full-fledged CFTs. Based on the existence of a certain large-N expansion in theories with MN symmetry, these are argued to be the CFTs predicted by the ε expansion. In the tetragonal case no new kinks are found, consistently with the absence of such CFTs in the ε expansion. Estimates for critical exponents are provided for a few cases describing phase transitions in actual physical systems. In two particular MN cases, corresponding to theories with global symmetry groups O(2) 2 S 2 and O(2) 3 S 3 , a second kink is found. In the O(2) 2 S 2 case it is argued to be saturated by a CFT that belongs to a new universality class relevant for the structural phase transition of NbO 2 and paramagnetic-helimagnetic transitions of the rare-earth metals Ho and Dy. In the O(2) 3 S 3 case it is suggested that the CFT that saturates the second kink belongs to a new universality class relevant for the paramagnetic-antiferromagnetic phase transition of the rare-earth metal Nd.
10.21468/scipostphys.7.1.010
[ "https://arxiv.org/pdf/1904.00017v1.pdf" ]
90,234,066
1904.00017
196ba09215a33b9eeaeb161b456ac134ec34336e
Bootstrapping MN and Tetragonal CFTs in Three Dimensions
April 2019 (29 Mar 2019)
Andreas Stergiou
Theoretical Division, MS B285, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

Contents
1. Introduction and discussion of results
Introduction and discussion of results In recent years it has become clear that the numerical conformal bootstrap as conceived in [1] 1 is an indispensable tool in our quest to understand and classify conformal field theories (CFTs). Its power has already been showcased in the 3D Ising [3] and O(N ) models [4,5], and recently it has suggested the existence of a new cubic universality class in 3D, referred to as C 3 or Platonic [6,7]. Now that the method has showed its strength, it is time for it to be applied to the plethora of examples of CFTs in d = 3 suggested by the ε = 4 − d expansion [8][9][10]. This is of obvious importance, for the bootstrap gives us nonperturbative information that is useful both for comparing with experiments as well as in testing the validity of field theory methods such as the ε expansion in the ε → 1 limit. In this work we apply the numerical conformal bootstrap to CFTs with global symmetry groups that are semidirect products of the form K n S n , where K is either O(m) or the dihedral group D 4 of eight elements, i.e. the group of symmetries of the square. These cases have been 1 See [2] for a recent review. analyzed in detail with the ε expansion and other field theory methods due to their importance for structural, antiferromagnetic and helimagnetic phase transitions. This provides ample motivation for their study with the bootstrap, with the hope of resolving some of the controversies in the literature. One of the cases we analyze in detail in this work is that of O(2) 2 S 2 symmetry. Such theories are relevant for frustrated models with noncollinear order-see [9,Sec. 11.5], [11] and references therein. Monte Carlo simulations as well as the ε expansion and the fixed-dimension expansion have been used in the literature. Disagreements both in experimental as well as theoretical results described in [11] and [12] paint a rather disconcerting picture. In this work we observe a clear kink in a certain operator dimension bound-see Fig. 
1 below. Following standard intuition, we attribute this kink to the presence of a CFT with O(2) 2 S 2 symmetry. Using existing results in the literature, namely [4], we can exclude the possibility that this kink is saturated by the CFT of two decoupled O(2) models. Obtaining the spectrum on the kink as explained in [13], we are able to provide estimates for the critical exponents β and ν that are frequently quoted in the literature. 2 We find β = 0.293(3) , ν = 0.566 (6) . (1.1) These results suggest that the ε expansion at order ε 4 perhaps underperforms [14, Table II]. Experimental results for β for XY stacked triangular antiferromagnets are slightly lower and for the helimagnets (spiral magnets) Ho and Dy higher [9, Table 37]. Also, our result for β is below the value measured in the structural phase transition of NbO 2 [15]. Another case of interest is that of CFTs with O(2) 3 S 3 symmetry. Here we again find a kink-see Fig. 2 below-and for the CFT that saturates it we obtain, with a spectrum analysis, β = 0.301(3) , ν = 0.581 (6) . (1.2) Just like in the previous paragraph, we do not find good agreement with results of the ε expansion [14, Table II]. A CFT with O(2) 3 S 3 symmetry is supposed to describe the antiferromagnetic phase transition of Nd [16], but the experimental result for β in [17] is incompatible with our β in (1.2). In both the O(2) 2 S 2 and O(2) 3 S 3 cases we just discussed, we find that the stability of our theory, as measured by the scaling dimension of the next-to-leading scalar singlet, S , is not in question. More specifically, in both cases the scaling dimension of S is slightly below four, while marginality is of course at three. Therefore, the ε expansion appears to fail quite dramatically, for it predicts that S , an operator quartic in φ, has dimension slightly above three [14, Table I]. 
In fact, the purported closeness of the dimension of S to three according to the ε expansion has contributed to controversies in the literature regarding the nature of the stable fixed point, with arguments that it may be that of decoupled O(2) models-see section 3 below.² The bootstrap shows that the fully-interacting O(2)² ⋊ S₂ and O(2)³ ⋊ S₃ CFTs are stable. It is not clear from our discussion so far that our bootstrap bounds are saturated by the CFT predicted by the ε expansion. This is almost certain, however, as we now explain. Our bootstrap results suggest that there is a well-defined large-m expansion in O(m)^n ⋊ S_n theories. This was verified by the authors of [10] for the fully interacting O(m)^n ⋊ S_n theory of the ε expansion-see v6 of [10] on the arXiv. The important point here is that since the large-m results of the ε-expansion theory reproduce the behavior we see in our bootstrap bounds, we conclude that the kinks we observe are indeed due to the theory predicted by the ε expansion. As we alluded to above, experimental results for phase transitions in the helimagnets Ho and Dy as well as the structural phase transition of NbO₂ differ from those in XY stacked triangular antiferromagnets and the helimagnet Tb [12]. Our conclusions contradict the suggestion of [18,12] […]

² In terms of the dimensions of the order parameter φ and the leading scalar singlet S it is β = ∆_φ/(3 − ∆_S) and ν = 1/(3 − ∆_S).

[…] a homomorphism f : H → Aut(N), f : h → f(h) = f_h. The action of f_h on N is given by conjugation, f_h : N → N, f_h : n → hnh⁻¹. (By definition hnh⁻¹ ∈ N since N ⊴ G.) With this definition, f is a homomorphism, i.e. f_{h₁} f_{h₂} = f_{h₁h₂}. One can show that, up to isomorphisms, N, H and f uniquely determine G.
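The semidirect-product data (N, H, f) described above can be made concrete in a toy example. The sketch below is our own illustration (not from the text): it takes N = Z₃ and H = Z₂ acting by inversion, which assembles the six-element dihedral group, and brute-forces the group axioms using the standard semidirect-product multiplication and inverse rules.

```python
from itertools import product

# N = Z_3 (written additively), H = Z_2 acting on N by inversion: f_h(x) = (-1)^h x mod 3.
# This realizes the dihedral group of six elements as a semidirect product Z_3 x| Z_2.
N, H = 3, 2
f = lambda h, x: x % N if h == 0 else (-x) % N

def mul(a, b):
    # (n, h)(n', h') = (n f_h(n'), h h'), with both factors written additively
    (n1, h1), (n2, h2) = a, b
    return ((n1 + f(h1, n2)) % N, (h1 + h2) % H)

def inv(a):
    # (n, h)^{-1} = (f_{h^{-1}}(n^{-1}), h^{-1})
    n1, h1 = a
    hinv = (-h1) % H
    return (f(hinv, (-n1) % N), hinv)

G = list(product(range(N), range(H)))
e = (0, 0)
# brute-force the group axioms
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in G for b in G for c in G)
assert all(mul(a, inv(a)) == e and mul(inv(a), a) == e for a in G)
# the product is nonabelian, so this is a genuinely semidirect (not direct) product
assert any(mul(a, b) != mul(b, a) for a in G for b in G)
```

Since f is a nontrivial homomorphism here, the resulting group is not the direct product Z₃ × Z₂; the last assertion exhibits this noncommutativity.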
The multiplication of two elements (n, h) and (n′, h′) of G is given by
\[
(n, h)(n', h') = (n f_h(n'), h h'), \qquad (2.1)
\]
the identity element is (e_N, e_H), with e_N the identity element of N and e_H that of H, and the inverse of (n, h) is given by
\[
(n, h)^{-1} = (f_{h^{-1}}(n^{-1}), h^{-1}). \qquad (2.2)
\]
Note that a direct product is a special case of a semidirect product where f is the trivial homomorphism, i.e. the homomorphism that sends every element of H to the identity automorphism of N. In this work we analyze CFTs with global symmetry of the form K^n ⋊ S_n, where K^n denotes the direct product of n groups K and S_n the permutation group of n elements. In this case the action of the homomorphism f : S_n → Aut(K^n) is to permute the K's in K^n, i.e. f_σ : (k_1, ..., k_n) → (k_{σ(1)}, ..., k_{σ(n)}), with σ an element of S_n and k_i, i = 1, ..., n, an element of the i-th K in K^n.³

The first example we analyze is that of the MN_{m,n} CFT. By this we refer to the CFT with global symmetry MN_{m,n} = O(m)^n ⋊ S_n. The vector representation is furnished by the operator φ_i, i = 1, ..., mn. The crucial group-theory problem is of course to decompose φ_i(x_1)φ_j(x_2)φ_k(x_3)φ_l(x_4) into invariant subspaces in order to derive the set of crossing equations that constitutes the starting point for our numerical analysis. Invariant tensors help us in this task. As far as the OPE is concerned we have
\[
\phi_i \times \phi_j \sim \delta_{ij}\, S + X_{(ij)} + Y_{(ij)} + Z_{(ij)} + A_{[ij]} + B_{[ij]}, \qquad (2.3)
\]
where S is the singlet, X, Y, Z are traceless-symmetric and A, B antisymmetric. The first step to doing that is to construct the invariant tensors of the group under study. This way of thinking, in terms of invariant theory, was recently applied to the ε expansion in [10], and it turns out to be very useful when thinking about the problem from the bootstrap point of view.

Invariant tensors

For the MN_{m,n} CFT there are two four-index primitive invariant tensors [10].
They can be defined as follows:
\[
\gamma_{ijkl}\,\phi_i\phi_j\phi_k\phi_l = (\phi_1^2 + \cdots + \phi_m^2)^2 + (\phi_{m+1}^2 + \cdots + \phi_{2m}^2)^2 + \cdots + (\phi_{m(n-1)+1}^2 + \cdots + \phi_{mn}^2)^2, \qquad (2.4a)
\]
\[
\omega_{ijkl}\,\phi_i\phi_j\phi_k\phi_l = (\phi_1\phi_2 - \phi_2\phi_1)^2 + (\phi_3\phi_4 - \phi_4\phi_3)^2 + \cdots + (\phi_{mn-1}\phi_{mn} - \phi_{mn}\phi_{mn-1})^2. \qquad (2.4b)
\]
The tensor γ is fully symmetric, while the tensor ω satisfies
\[
\omega_{ijkl} = \omega_{jikl}, \qquad \omega_{ijkl} = \omega_{klij}, \qquad \omega_{ijkl} + \omega_{ikjl} + \omega_{iljk} = 0. \qquad (2.5)
\]
A non-primitive invariant tensor with four indices is defined by
\[
\xi_{ijkl}\,\phi_i\phi_j\phi_k\phi_l = (\phi_1^2 + \phi_2^2 + \cdots + \phi_{mn}^2)^2, \qquad (2.6)
\]
which respects O(mn) symmetry. One can verify that (repeated indices are always assumed to be summed over their allowed values)
\[
\gamma_{iijk} = \tfrac{1}{3}(m+2)\,\delta_{jk}, \qquad \omega_{iijk} = (m-1)\,\delta_{jk}, \qquad (2.7)
\]
and
\[
\gamma_{ijmn}\gamma_{klmn} = \tfrac{1}{9}(m+8)\,\gamma_{ijkl} + \tfrac{2}{27}(m+2)\,\omega_{ijkl},
\]
\[
\gamma_{ijmn}\omega_{klmn} = \tfrac{1}{3}(m-1)\,\gamma_{ijkl} + \tfrac{2}{9}(m+2)\,\omega_{ijkl},
\]
\[
\omega_{ijmn}\omega_{klmn} = (m-1)\,\gamma_{ijkl} + \tfrac{1}{3}(2m-5)\,\omega_{ijkl},
\]
\[
\omega_{imjn}\omega_{kmln} = \tfrac{1}{4}(m-1)\,\gamma_{ijkl} + \tfrac{1}{6}(m+2)\,\omega_{ijkl} + \tfrac{3}{2}\,\omega_{ikjl}. \qquad (2.8)
\]

Projectors and crossing equation

With the help of (2.7) and (2.8) it can be shown that the tensors
\[
P^S_{ijkl} = \tfrac{1}{mn}\,\delta_{ij}\delta_{kl}, \qquad (2.9a)
\]
\[
P^X_{ijkl} = \tfrac{1}{m}\,\gamma_{ijkl} + \tfrac{2}{3m}\,\omega_{ijkl} - \tfrac{1}{mn}\,\delta_{ij}\delta_{kl}, \qquad (2.9b)
\]
\[
P^Y_{ijkl} = \bigl(1 - \tfrac{1}{m}\bigr)\gamma_{ijkl} - \tfrac{1}{3}\bigl(1 + \tfrac{2}{m}\bigr)\omega_{ijkl}, \qquad (2.9c)
\]
\[
P^Z_{ijkl} = -\gamma_{ijkl} + \tfrac{1}{3}\,\omega_{ijkl} + \tfrac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}), \qquad (2.9d)
\]
\[
P^A_{ijkl} = \tfrac{1}{3}(\omega_{ijkl} + 2\omega_{ikjl}), \qquad (2.9e)
\]
\[
P^B_{ijkl} = -\tfrac{1}{3}(\omega_{ijkl} + 2\omega_{ikjl}) + \tfrac{1}{2}(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}), \qquad (2.9f)
\]
satisfy
\[
P^I_{ijmn}P^J_{mnkl} = P^I_{ijkl}\,\delta^{IJ}, \qquad \textstyle\sum_I P^I_{ijkl} = \delta_{ik}\delta_{jl}, \qquad P^I_{ijkl}\,\delta_{ik}\delta_{jl} = d^I_r, \qquad (2.10)
\]
where d^I_r is the dimension of the representation indexed by I, with
\[
\{d^S_r, d^X_r, d^Y_r, d^Z_r, d^A_r, d^B_r\} = \{1,\ n-1,\ \tfrac{1}{2}(m-1)(m+2)n,\ \tfrac{1}{2}m^2 n(n-1),\ \tfrac{1}{2}mn(m-1),\ \tfrac{1}{2}m^2 n(n-1)\}.
\]
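The trace identities (2.7), the products (2.8), and the projector relations (2.10)-(2.11) can be cross-checked numerically once explicit index expressions for γ and ω are chosen. The realization below, built from the same-block indicator B (B_ik = 1 iff i and k sit in the same O(m) factor), is our own reconstruction: it reproduces the defining polynomials (2.4), the symmetries (2.5) and the traces (2.7), but it is not spelled out in the text.

```python
import numpy as np
from itertools import product

m, n = 2, 3            # illustration: MN_{2,3}; any small m, n works
N = m * n
B = np.zeros((N, N))   # B[i,j] = 1 iff i and j belong to the same O(m) factor
for a in range(n):
    B[a*m:(a+1)*m, a*m:(a+1)*m] = 1.0
d = np.eye(N)

# gamma: full symmetrization of the block quartic in (2.4a)
g4 = (np.einsum('ij,kl,ik->ijkl', d, d, B)
      + np.einsum('ik,jl,ij->ijkl', d, d, B)
      + np.einsum('il,jk,ij->ijkl', d, d, B)) / 3.0
# omega: the block tensor with the symmetries (2.5) and trace (m-1)delta_{jk}
w4 = (np.einsum('ij,kl,ik->ijkl', d, d, B)
      - 0.5 * (np.einsum('ik,jl,ij->ijkl', d, d, B)
               + np.einsum('il,jk,ij->ijkl', d, d, B)))

# traces (2.7)
assert np.allclose(np.einsum('iijk->jk', g4), (m + 2) / 3.0 * d)
assert np.allclose(np.einsum('iijk->jk', w4), (m - 1) * d)

# products (2.8)
con = lambda T, U: np.einsum('ijmn,klmn->ijkl', T, U)
assert np.allclose(con(g4, g4), (m + 8)/9.0 * g4 + 2*(m + 2)/27.0 * w4)
assert np.allclose(con(g4, w4), (m - 1)/3.0 * g4 + 2*(m + 2)/9.0 * w4)
assert np.allclose(con(w4, w4), (m - 1) * g4 + (2*m - 5)/3.0 * w4)
assert np.allclose(np.einsum('imjn,kmln->ijkl', w4, w4),
                   (m - 1)/4.0 * g4 + (m + 2)/6.0 * w4
                   + 1.5 * np.einsum('ikjl->ijkl', w4))

# projectors (2.9) and their relations (2.10)-(2.11)
sym2  = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))
asym2 = 0.5 * (np.einsum('ik,jl->ijkl', d, d) - np.einsum('il,jk->ijkl', d, d))
dd = np.einsum('ij,kl->ijkl', d, d) / N
P = {'S': dd,
     'X': g4/m + 2*w4/(3*m) - dd,
     'Y': (1 - 1/m)*g4 - (1 + 2/m)*w4/3,
     'Z': -g4 + w4/3 + sym2,
     'A': (w4 + 2*np.einsum('ikjl->ijkl', w4)) / 3,
     'B': -(w4 + 2*np.einsum('ikjl->ijkl', w4)) / 3 + asym2}
dims = {'S': 1, 'X': n - 1, 'Y': (m-1)*(m+2)*n/2,
        'Z': m*m*n*(n-1)/2, 'A': m*n*(m-1)/2, 'B': m*m*n*(n-1)/2}
for a, b in product(P, P):
    prod_ab = np.einsum('ijmn,mnkl->ijkl', P[a], P[b])
    assert np.allclose(prod_ab, P[a] if a == b else 0.0)
for a in P:
    assert np.isclose(np.einsum('ijij->', P[a]), dims[a])
assert np.allclose(sum(P.values()), np.einsum('ik,jl->ijkl', d, d))
```

All assertions pass for generic small (m, n), confirming that the six projectors are orthogonal, idempotent, complete, and have the traces listed in (2.11).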
(2.11) The dimensions d X r , d Y r , d Zφ i (x 1 )φ j (x 2 )φ k (x 3 )φ l (x 4 ) = 1 (x 2 12 x 2 34 ) ∆ φ I O I λ 2 O I P I ijkl g ∆ I , I (u, v) ,(2.12) where the sum over I runs over the representations S, X, Y, Z, A, B, x ij = x i − x j , λ 2 O I are squared OPE coefficients and g ∆ I , I (u, v) are conformal blocks 4 that are functions of the conformallyinvariant cross ratios u = x 2 12 x 2 34 x 2 13 x 2 24 , v = x 2 14 x 2 23 x 2 13 x 2 24 . (2.13) The crossing equation can now be derived. With F ± ∆, (u, v) = v ∆ φ g ∆, (u, v) ± u ∆ φ g ∆, (v, u) ,(2.14) we find 5 S + λ 2 O            F − ∆, 0 0 0 F + ∆, 0            + X + λ 2 O            −F − ∆, F − ∆, 0 0 −F + ∆, F + ∆,            + Y + λ 2 O            0 m−1 n F − ∆, F − ∆, 0 0 − m+2 2n F + ∆,            + Z + λ 2 O            0 0 0 F − ∆, − 1 2 F + ∆, 1 2n F + ∆,            + A − λ 2 O            0 0 1 m F − ∆, 0 0 1 2n F + ∆,            + B − λ 2 O            −F − ∆, 1 n F − ∆, 0 F − ∆, 1 2 F + ∆, − 1 2n F + ∆,            =            0 0 0 0 0 0            . (2.15) The signs that appear as superscripts in the various irrep symbols indicate the spins of the operators we sum over in the corresponding term: even when positive and odd when negative. MN anisotropy The MN m,n fixed points were first studied in [20,21,14] and more recently in [10,22]. The relevant Lagrangian is 6 L = 1 2 ∂ µ φ i ∂ µ φ i + 1 8 (λξ ijkl + 1 3 g γ ijkl )φ i φ j φ k φ l . (3.1) In the ε expansion below d = 4 (3.1) has four inequivalent fixed points. They are 1. Gaussian (λ = g = 0), 4 We define conformal blocks using the conventions of [19]. 5 In (2.15) we omit, for brevity, to label the F ∆, 's and λ 2 O 's with the appropriate index I. The appropriate labeling, however, is obvious from the overall sum in each term. 6 Compared to couplings λ, g of [10, Sec. 
5.2.2] we have λ here = λ there − m+2 3(mn+2) g there and g here = g there . O(mn) (λ > 0, g = 0) 3. n decoupled O(m) models (λ = 0, g > 0), 4. n coupled O(m) models with symmetry MN m,n = O(m) n S n (λ = 0, g > 0). 7 These fixed points are known to be physically relevant for m = 2 and n = 2, 3. As already mentioned in the introduction, the MN 2,2 fixed point has applications to frustrated spin systems with noncollinear order [9,Sec. 11.5]. Additionally, it has been argued to describe the structural phase transition of NbO 2 (niobium dioxide) and paramagnetic-helimagnetic transitions in the rare-earth metals Ho (holmium), Dy (dysprosium) and Tb (terbium). The MN 2,3 fixed point is relevant for the antiferromagnetic phase transitions in K 2 IrCl 6 (potassium hexachloroiridate), TbD 2 (terbium dideuteride) and Nd (neodymium) [16]. The MN 2,2 CFT is equivalent to a theory with O(2) 2 /Z 2 symmetry [9,10,22]. An MN 2,2 fixed point independent of the one found in the ε expansion has been suggested in [23], arising after resummation of the perturbative six-loop beta functions. The stability of the MN m,n fixed point for m = 2 and n = 2, 3 has been supported by higherloop ε expansion calculations [21,14]. However, there exist higher-loop calculations based on the fixed-dimension expansion-see [9] and references therein-indicating that the stable fixed point is actually that of n decoupled O(2) models. As mentioned in the introduction, our numerical results indicate that the MN 2,2 and MN 2,3 theories are both stable. Tetragonal symmetry The tetragonal CFT [9,10] has global symmetry R n = D 4 n S n , where D 4 is the eight-element dihedral group. For n = 0 R 0 = {e}, where e is the identity element, and for n = 1 R 1 = D 4 . The order of R n is ord(R n ) = 8 n n!. Note that R n is a subgroup of the hypercubic group C N = Z 2 N S N , N = 2n, whose order is ord(C N ) = 2 N N !. It is easy to see that ord(C N )/ ord(R n ) = (2n − 1)!!, which is an integer for any integer n 0. 
The number of irreps of the group R n for n = 0, 1, 2, 3, 4, 5, . . . is 1, 5, 20, 65, 190, 506, . . . , respectively. 8 Among the irreps of R n one always finds a 2n-dimensional one; we will refer to this as the vector representation φ i , i = 1, . . . , 2n. In this work we analyze bootstrap constraints on the four-point function of the vector operator φ i . A standard construction of the character table shows that the group R 2 has eight one- 7 Although the theory of n decoupled O(m) models in item 3 on the list also has symmetry MNm,n, we will never characterize it that way; we will reserve that characterization for the fully-interacting case in 4. 8 These numbers have been obtained with the use of the freely available software GAP [24]. dimensional, six two-dimensional and six four-dimensional irreps. 9 In this case we may write 10 4 φ i × 4 φ j ∼ δ ij 1 S + 2 W (ij) + 1 X (ij) + 2 Y (ij) + 4 Z (ij) + 2 A [ij] + 4 B [ij] . (4.1) S is the singlet. The dimensions of the various irreps are given by the number over their symbol. W, X, Y, Z are two-index symmetric and traceless, while A, B are two-index antisymmetric. Invariant tensors In the tetragonal case there are three primitive invariant tensors with four indices, defined by δ ijkl φ i φ j φ k φ l = φ 4 1 + φ 4 2 + · · · + φ 4 2n , (4.2a) ζ ijkl φ i φ j φ k φ l = 2(φ 2 1 φ 2 2 + φ 2 3 φ 2 4 + · · · + φ 2 2n−1 φ 2 2n ) , (4.2b) ω ijkl φ i φ j φ k φ l = (φ 1 φ 2 − φ 2 φ 1 ) 2 + (φ 3 φ 4 − φ 4 φ 3 ) 2 + · · · + (φ 2n−1 φ 2n − φ 2n φ 2n−1 ) 2 . (4.2c) The tensors δ, ζ are fully symmetric, while the tensor ω is the same as that in (2.4b) for m = 2. 
It can be verified that these satisfy
\[
\delta_{iijk} = 3\zeta_{iijk} = \omega_{iijk} = \delta_{jk}, \qquad (4.3)
\]
and
\[
\delta_{ijmn}\delta_{klmn} = \delta_{ijkl}, \qquad \delta_{ijmn}\zeta_{klmn} = \tfrac{1}{3}\zeta_{ijkl} + \tfrac{2}{9}\omega_{ijkl}, \qquad \delta_{ijmn}\omega_{klmn} = \zeta_{ijkl} + \tfrac{2}{3}\omega_{ijkl},
\]
\[
\zeta_{ijmn}\zeta_{klmn} = \tfrac{1}{9}\delta_{ijkl} + \tfrac{4}{9}\zeta_{ijkl} - \tfrac{4}{27}\omega_{ijkl}, \qquad \zeta_{ijmn}\omega_{klmn} = \tfrac{1}{3}\delta_{ijkl} - \tfrac{2}{3}\zeta_{ijkl} + \tfrac{2}{9}\omega_{ijkl},
\]
\[
\omega_{ijmn}\omega_{klmn} = \delta_{ijkl} + \zeta_{ijkl} - \tfrac{1}{3}\omega_{ijkl}, \qquad \omega_{imjn}\omega_{kmln} = \tfrac{1}{4}\delta_{ijkl} + \tfrac{1}{4}\zeta_{ijkl} + \tfrac{2}{3}\omega_{ijkl} + \tfrac{3}{2}\omega_{ikjl}. \qquad (4.4)
\]

⁹ Character tables for a wide range of finite groups can be easily generated using GAP [24].
¹⁰ Of course these S, X, Y, Z, A, B have nothing to do with the ones of section 2.

To verify that there are only three invariant polynomials of R_n made out of the components of the vector φ_i, we have computed the Molien series for n = 2, 3, 4.¹¹ To do this, we think of R_n as represented by 2n × 2n matrices acting on the 2n-component vector φ_i^T. Using those matrices, which represent the group elements g_i ∈ G as ρ(g_i), i = 1, ..., ord(G), we can then explicitly compute the Molien series. The Molien formula is
\[
M(t) = \frac{1}{\mathrm{ord}(G)} \sum_{i=1}^{\mathrm{ord}(G)} \frac{1}{\det(\mathbb{1} - t\,\rho(g_i))}, \qquad (4.5)
\]
where 𝟙 is the identity matrix of appropriate size. It is obvious that the summands in (4.5) only depend on the conjugacy class, so the sum can be taken to be over conjugacy classes with the appropriate weights. For n = 2, 3, 4 (4.5) gives, respectively,
M 2 (t) = t 4 − t 2 + 1 (t 4 + 1) 2 (t 2 + 1) 2 (t 2 − 1) 4 , M 3 (t) = t 16 − t 14 + t 12 + t 8 + t 4 − t 2 + 1 (t 4 + t 2 + 1) 2 (t 4 − t 2 + 1) 2 (t 4 + 1)(t 2 + 1) 3 (t 2 − 1) 6 , M 4 (t) = (t 20 − t 18 + t 14 + t 12 − t 10 + t 8 + t 6 − t 2 + 1)(t 8 − t 6 + t 4 − t 2 + 1) (t 8 + 1)(t 4 − t 2 + 1)(t 4 + 1) 2 (t 2 + t + 1) 2 (t 2 − t + 1) 2 (t 2 + 1) 4 (t 2 − 1) 8 , Projectors and crossing equation If we now define P S ijkl = 1 2n δ ij δ kl , (4.8a) P W ijkl = 1 2 (δ ijkl − ζ ijkl ) − 1 3 ω ijkl , (4.8b) P X ijkl = 1 2 (δ ijkl + ζ ijkl ) + 1 3 ω ijkl − 1 2n δ ij δ kl , (4.8c) P Y ijkl = ζ ijkl − 1 3 ω ijkl , (4.8d) P Z ijkl = −δ ijkl − ζ ijkl + 1 3 ω ijkl + 1 2 (δ ik δ jl + δ il δ jk ) , (4.8e) P A ijkl = 1 3 (ω ijkl + 2ω ikjl ) , (4.8f) P B ijkl = − 1 3 (ω ijkl + 2ω ikjl ) + 1 2 (δ ik δ jl − δ il δ jk ) , (4.8g) we may verify, using (4.3) and (4.4), the projector relations P I ijmn P J mnkl = P I ijkl δ IJ , I P I ijkl = δ ik δ jl , P I ijkl δ ik δ jl =d I r ,(4.9) whered I r is the dimension of the representation indexed by I, with {d S r ,d W r ,d X r ,d Y r ,d Z r ,d A r ,d B r } = {1 , n, n − 1, n, 2n(n − 1), n, 2n(n − 1)} . (4.10) The generalization of (4.1), valid for any n 2, is 2n φ i × 2n φ j ∼ δ ij 1 S + n W (ij) + n−1 X (ij) + n Y (ij) + 2n(n−1) Z (ij) + n A [ij] + 2n(n−1) B [ij] . (4.11) The projectors (4.8a-g) allow us to express the four-point function of interest in a conformal block decomposition in the 12 → 34 channel: φ i (x 1 )φ j (x 2 )φ k (x 3 )φ l (x 4 ) = 1 (x 2 12 x 2 34 ) ∆ φ I O I λ 2 O I P I ijkl g ∆ I , I (u, v) , (4.12) where the sum over I runs over the representations S, W, X, Y, Z, A, B. 
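Both the tensor identities (4.3)-(4.4) and the invariant count implied by the Molien series (one quadratic and three quartic invariants of R_n) can be checked by brute force. The explicit index expressions for δ, ζ, ω below are our own reconstruction from the defining quartics (4.2); the invariant count averages traces of symmetric powers over the 128 elements of R₂ instead of expanding the full rational function (4.5).

```python
import numpy as np
from itertools import product

n = 2                      # two D4 blocks; the vector phi has 2n = 4 components
N = 2 * n
d = np.eye(N)
C = np.zeros((N, N))       # C[i,j] = 1 iff i, j are the two members of one block (i != j)
for a in range(n):
    C[2*a, 2*a+1] = C[2*a+1, 2*a] = 1.0
B = d + C                  # same-block indicator

delta4 = np.einsum('ij,ik,il->ijkl', d, d, d)          # 1 iff i = j = k = l
zeta4 = (np.einsum('ij,kl,ik->ijkl', d, d, C)
         + np.einsum('ik,jl,ij->ijkl', d, d, C)
         + np.einsum('il,jk,ij->ijkl', d, d, C)) / 3.0
omega4 = (np.einsum('ij,kl,ik->ijkl', d, d, B)
          - 0.5 * (np.einsum('ik,jl,ij->ijkl', d, d, B)
                   + np.einsum('il,jk,ij->ijkl', d, d, B)))

# traces (4.3)
for T, c in [(delta4, 1.0), (zeta4, 1.0/3.0), (omega4, 1.0)]:
    assert np.allclose(np.einsum('iijk->jk', T), c * d)

# products (4.4)
c4 = lambda T, U: np.einsum('ijmn,klmn->ijkl', T, U)
assert np.allclose(c4(delta4, delta4), delta4)
assert np.allclose(c4(delta4, zeta4), zeta4/3 + 2*omega4/9)
assert np.allclose(c4(delta4, omega4), zeta4 + 2*omega4/3)
assert np.allclose(c4(zeta4, zeta4), delta4/9 + 4*zeta4/9 - 4*omega4/27)
assert np.allclose(c4(zeta4, omega4), delta4/3 - 2*zeta4/3 + 2*omega4/9)
assert np.allclose(c4(omega4, omega4), delta4 + zeta4 - omega4/3)
assert np.allclose(np.einsum('imjn,kmln->ijkl', omega4, omega4),
                   delta4/4 + zeta4/4 + 2*omega4/3
                   + 1.5 * np.einsum('ikjl->ijkl', omega4))

# invariant count for R_2: average of Tr Sym^d(rho(g)) over the group
rot = lambda k: np.round(np.array([[np.cos(k*np.pi/2), -np.sin(k*np.pi/2)],
                                   [np.sin(k*np.pi/2),  np.cos(k*np.pi/2)]]))
D4 = [rot(k) for k in range(4)] + [rot(k) @ np.diag([1.0, -1.0]) for k in range(4)]
swap = np.zeros((N, N)); swap[0:2, 2:4] = swap[2:4, 0:2] = np.eye(2)
group = []
for g1, g2, p in product(D4, D4, [np.eye(N), swap]):
    blk = np.zeros((N, N)); blk[0:2, 0:2] = g1; blk[2:4, 2:4] = g2
    group.append(p @ blk)
assert len(group) == 8**n * 2      # ord(R_2) = 8^2 * 2! = 128

def sym_trace(deg, A):             # Tr Sym^deg(A) from power sums p_k = Tr A^k
    p = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, 5)]
    if deg == 2:
        return (p[0]**2 + p[1]) / 2
    return (p[0]**4 + 6*p[0]**2*p[1] + 3*p[1]**2 + 8*p[0]*p[2] + 6*p[3]) / 24

inv2 = sum(sym_trace(2, g) for g in group) / len(group)
inv4 = sum(sym_trace(4, g) for g in group) / len(group)
assert round(inv2) == 1 and round(inv4) == 3   # one quadratic, three quartic invariants
```

The counts match the t² and t⁴ coefficients of M₂(t), and the three quartic invariants are exactly the ones generated by δ_ijkl, ζ_ijkl and ξ_ijkl.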
For the crossing equation we find 12 S + λ 2 O               F − ∆, 0 0 0 0 F + ∆, F + ∆,               + W + λ 2 O               0 F − ∆, 0 0 0 −F + ∆, 0               + X + λ 2 O               − 1 n F − ∆, F − ∆, F − ∆, −F − ∆, 0 (1 − 1 n )F + ∆, − 1 n F + ∆,               + Y + λ 2 O               0 0 F − ∆, 0 0 −F + ∆, 0               + Z + λ 2 O               2F − ∆, −2F − ∆, −2F − ∆, 2F − ∆, F − ∆, 0 −F + ∆,               + A − λ 2 O               0 0 0 F − ∆, 0 F + ∆, 0               + B − λ 2 O               0 0 0 0 F − ∆, 0 F + ∆,               =               0 0 0 0 0 0 0               . (4.13) Let us make a comment about (4.13). We observe that we obtain the same crossing equation if we exchange the second and third line in all vectors and at the same time relabel W + ↔ Y + . This implies, for example, that operator dimension bounds on the leading scalar W operator and the leading scalar Y operator will be identical. Furthermore, if we work out the spectrum on the Wand the Y -bound, then all operators in the solution will have the same dimensions in both cases (except for the relabeling W + ↔ Y + ). The reason for this is that there exists a transformation of φ i that permutes the projectors P W and P X . 13 Indeed, if φ i → 1 √ 2 (φ i + φ i+1 ) , i odd and φ i → 1 √ 2 (φ i−1 − φ i ) , i even , (4.14) then δ ij → δ ij , δ ijkl → 1 2 (δ ijkl + 3ζ ijkl ) , ζ ijkl → 1 2 (δ ijkl − ζ ijkl ) and ω ijkl → ω ijkl . (4.15) Under (4.15) we obviously have P W ↔ P Y . Let us remark here that something similar happens in the N = 2 cubic theory studied in [6,Sec. 6], again due to the transformation (4.14) that exchanges two projectors. 14 With the crossing equation (4.13) we can now commence our numerical bootstrap explorations. 
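The statement that the field redefinition (4.14) permutes the projectors via (4.15), mixing δ and ζ while leaving ω inert, can also be verified numerically. As above, the explicit index expressions for δ, ζ, ω are our own reconstruction from the defining quartics (4.2).

```python
import numpy as np

n = 2                      # two D4 blocks; the vector phi has 2n = 4 components
N = 2 * n
d = np.eye(N)
C = np.zeros((N, N))       # C[i,j] = 1 iff i, j are the two members of one block (i != j)
for a in range(n):
    C[2*a, 2*a+1] = C[2*a+1, 2*a] = 1.0
B = d + C

delta4 = np.einsum('ij,ik,il->ijkl', d, d, d)
zeta4 = (np.einsum('ij,kl,ik->ijkl', d, d, C)
         + np.einsum('ik,jl,ij->ijkl', d, d, C)
         + np.einsum('il,jk,ij->ijkl', d, d, C)) / 3.0
omega4 = (np.einsum('ij,kl,ik->ijkl', d, d, B)
          - 0.5 * (np.einsum('ik,jl,ij->ijkl', d, d, B)
                   + np.einsum('il,jk,ij->ijkl', d, d, B)))

# the redefinition (4.14): a Hadamard-type O(2) reflection acting in each block
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
O = np.kron(np.eye(n), H)

def xform(T):
    # transform all four indices of the invariant tensor
    return np.einsum('abcd,ai,bj,ck,dl->ijkl', T, O, O, O, O)

# (4.15): delta and zeta mix into each other, omega is invariant
assert np.allclose(xform(delta4), 0.5 * (delta4 + 3.0 * zeta4))
assert np.allclose(xform(zeta4), 0.5 * (delta4 - zeta4))
assert np.allclose(xform(omega4), omega4)
```

Since P^W and P^Y are built from the combinations δ − ζ and ζ (up to ω pieces), this transformation exchanges the two projectors, which is why the W and Y bounds coincide.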
Before that, however, let us first summarize results of the ε expansion for theories with tetragonal anisotropy.

Tetragonal anisotropy

Theories with tetragonal anisotropy were first studied with the ε expansion a long time ago in [16] and later [25], and they were revisited recently in [10, 22]. A standard review is [9, Sec. 11.6]. The Lagrangian one starts with is^15

L = ½ ∂_µφ_i ∂^µφ_i + ⅛ (λ ξ_{ijkl} + ⅓ g_1 δ_{ijkl} + ⅓ g_2 ζ_{ijkl}) φ_i φ_j φ_k φ_l .  (5.1)

For g_1 = g_2 = g this reduces to (3.1). The theory (5.1) in d = 4 − ε has six inequivalent fixed points.^16 They are

1. Gaussian (λ = g_1 = g_2 = 0),
2. 2n decoupled Ising models (λ = g_2 = 0, g_1 > 0),
3. n decoupled O(2) models (λ = 0, g_1 = g_2 > 0),
4. O(2n) (λ > 0, g_1 = g_2 = 0),
5. hypercubic with symmetry C_{2n} = Z_2^{2n} ⋊ S_{2n} (λ > 0, g_1 > 0, g_2 = 0),^17
6. n coupled O(2) models with symmetry MN_{2,n} = O(2)^n ⋊ S_n (λ > 0, g_1 = g_2 > 0).^18

Note that in the ε expansion there is no R_n symmetric fixed point. According to the ε expansion the stable fixed point is the MN_{2,n} symmetric one we discussed in section 3.

Numerical results

The numerical results in this paper have been obtained with the use of PyCFTBoot [19] and SDPB [26].

^13 This was suggested to us by Hugh Osborn.
^14 In the N = 2 cubic case, which corresponds to n = 1 here, in which case the ζ tensor does not exist, we can show that δ_{ij} → δ_{ij} and δ_{ijkl} → −δ_{ijkl} + ½ (δ_{ij} δ_{kl} + δ_{ik} δ_{jl} + δ_{il} δ_{jk}).
^15 Compared to the couplings λ, g_1, g_2 of [10, Sec. 7] we have λ_here = λ_there − (2/(3(n+1))) g_there , g_here …
^16 Fixed points physically equivalent to those in items 2 and 5 on the list are also found in other positions in coupling space, related to the ones given in the list by the field redefinition in (4.14) [9, 10].
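The hypercubic symmetry C_{2n} = Z_2^{2n} ⋊ S_{2n} of fixed point 5 has a concrete realization as signed permutation matrices, which makes its structure easy to check numerically. The sketch below (an illustration, not from the paper) enumerates all 2n × 2n signed permutation matrices for 2n = 4 (i.e. n = 2), confirms that there are 2^{2n} (2n)! of them, and spot-checks closure under matrix multiplication.

```python
# The hyperoctahedral group Z_2^k x| S_k realized as k x k signed
# permutation matrices; its order is 2^k * k!. Checked for k = 4.
from itertools import permutations, product
from math import factorial

k = 4  # k = 2n with n = 2
elements = set()
for perm in permutations(range(k)):
    for signs in product((1, -1), repeat=k):
        # row i carries the sign signs[i] in column perm[i], zeros elsewhere
        m = tuple(tuple(signs[i] if j == perm[i] else 0 for j in range(k))
                  for i in range(k))
        elements.add(m)

assert len(elements) == 2**k * factorial(k)  # 384 elements for k = 4

def matmul(a, b):
    return tuple(tuple(sum(a[i][l] * b[l][j] for l in range(k))
                       for j in range(k)) for i in range(k))

# closure spot-check: products of signed permutation matrices stay in the set
sample = sorted(elements)[:40]
assert all(matmul(a, b) in elements for a in sample for b in sample)
print(len(elements))
```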
We use nmax = 9, mmax = 6, kmax = 36 in PyCFTBoot, and we include spins up to ℓ_max = 26.

^17 The theory of 2n decoupled Ising models in item 2 on the list has symmetry C_{2n} as well. However, we reserve the C_{2n} characterization for the theory in item 5.
^18 The theory of n decoupled O(2) models in item 3 on the list has symmetry MN_{2,n} as well. However, we reserve the MN_{2,n} characterization for the theory in item 6.

Continuing our investigation of the MN_{2,2} theory for larger ∆_φ we obtain Fig. 3. There we observe the presence of a second kink. Although not as convincing as the kink for smaller ∆_φ in the same theory, it is tempting to associate this kink with the presence of an actual CFT. This is further supported by the results from our spectrum analysis, which give us the critical exponents (1.3) that match experimental results very well, as mentioned in the introduction.

Let us mention here that the spectrum analysis consists of obtaining the functional α right at the boundary of the allowed region (on the disallowed side) and looking at its action on the vectors V_{∆,ℓ} of F^±_{∆,ℓ} that appear in the crossing equation

Σ_{all sectors} λ^2_O V_{∆,ℓ} = −V_{0,0} ,

where V_{0,0} is the vector associated with the identity operator. Zeroes of α · V_{∆,ℓ} appear for (∆, ℓ)'s of operators in the spectrum of the CFT that saturates the kink and provide a solution to the crossing equation. More details for this procedure can be found in [13] and [6, Sec. 3.2]. For the determination of critical exponents we simply find the dimension that corresponds to the first zero of α · V^S_{∆,0}.

For the MN_{2,3} theory we also find a second kink; see Fig. 4. Another physical quantity one can study in a CFT is the central charge C_T, i.e.
the coefficient in the two-point function of the stress-energy tensor:

⟨T_{µν}(x) T_{ρσ}(0)⟩ = (C_T / S_d^2) (1/(x^2)^d) I_{µν,ρσ}(x) ,  (6.1)

where S_d = 2π^{d/2}/Γ(d/2), I_{µν,ρσ}(x) = ½ (I_{µρ} I_{νσ} + I_{µσ} I_{νρ}) − (1/d) η_{µν} η_{ρσ}, and I_{µν} = η_{µν} − 2 x_µ x_ν / x^2.

Tetragonal

The bound on the leading scalar in the singlet sector in the R_n theory is identical, for the cases checked, to the bound obtained for the leading scalar singlet in the O(2n) model. The bound on the leading scalar in the X sector is identical, again for the cases checked, to the bound on the leading scalar in the X sector of the MN_{2,n} theory. Both these symmetry enhancements are allowed, and they show that if a tetragonal CFT exists, then its leading scalar singlet operator has dimension in the allowed region of the bound of the leading scalar singlet in the O(2n) model. A similar comment applies to the leading scalar X operator and the bound on the leading scalar X operator of the MN_{2,n} theory.

Let us focus on the bound of the leading scalar in the W sector, shown in Fig. 7. It turns out that the W-bound is the same for all n checked, even for n very large. It is also identical to the V-bound in [6, Fig. 14].

Fig. 7: Upper bound on the dimension of the first scalar W operator in the φ_i × φ_j OPE as a function of the dimension of φ. The area above the curve is excluded. This bound applies to all R_n theories checked.

The coincidence of the W bound with that of [6, Fig. 14] is ultimately due to the fact that the N = 2 "cubic" theory has global symmetry D_4. Indeed, taking n decoupled copies of the D_4 theory leads to a theory with symmetry R_n. The leading scalar operator V, whose dimension is bounded in the D_4 theory in [6, Fig. 14], gives rise to a direct-sum representation that is reducible under the action of R_n. That representation splits into two irreps of R_n, namely our W and X, and it is easy to see that, if the R_n theory is decoupled, the leading scalar operator in the irrep W must have the same dimension as V of [6, Fig. 14].
Hence, the corresponding bounds have a chance to coincide, and indeed they do. If a fully interacting R_n theory exists, then the dimension of the leading scalar W operator of that theory is in the allowed region of Fig. 7. We point out here that the putative theory that lives on the bound of [6, Fig. 14] is not predicted by the ε expansion. That theory is currently under investigation with a mixed-correlator bootstrap [28].

To see if a fully interacting R_n theory exists, we have obtained bounds for the leading scalar and spin-one operators in other sectors. Unfortunately, our (limited) investigation has not uncovered any features that could signify the presence of hitherto unknown CFTs with R_n global symmetry.

Conclusion

In this paper we have obtained numerical bootstrap bounds for three-dimensional CFTs with global symmetry O(m)^n ⋊ S_n and D_4^n ⋊ S_n, where D_4 is the dihedral group of eight elements. The O(m)^n ⋊ S_n case displays the most interesting bounds. We have found clear kinks that appear to correspond to the theories predicted by the ε expansion, and have observed that the ε expansion appears to be unsuccessful in predicting the critical exponents and other observables with satisfactory accuracy in the ε → 1 limit.

Experiments in systems that are supposed to be described by CFTs with O(2)^2 ⋊ S_2 symmetry have yielded two sets of critical exponents [12]. Having found two kinks in a certain bound for such CFTs, we conclude that there are two distinct universality classes with O(2)^2 ⋊ S_2 global symmetry. Our critical-exponent computations in these two different theories are given in (1.1) and (1.3).

For theories with O(2)^3 ⋊ S_3 symmetry we also find two kinks. The corresponding critical exponents are given in (1.2) and (1.4). The CFT that lives on the second kink, with critical exponents (1.4), is the one with which we can reproduce experimental results. This is not the CFT predicted by the ε expansion.
A more complete study of the second set of kinks that appear in our bounds would be of interest. Note that the kinks we find do not occur in dimension bounds for singlet scalar operators, so we consider it unlikely (although we cannot exclude it) that the second kinks correspond to a theory with a different global symmetry group, as has been observed in a few other cases [29].

Beyond the examples mentioned or studied in this work, bootstrap studies of CFTs with O(2) × O(N) and O(3) × O(N) symmetry with N > 2 have been performed in [30, 31], where evidence for a CFT not seen in the ε expansion was presented. Such CFTs have been suggested to be absent in perturbation theory but arise after resummations of perturbative beta functions. Examples have been discussed in O(2) × O(N) frustrated spin systems [23, 32, 33]. These examples have been criticized in [34]. However, the results of [31] for the O(2) × O(3) case are in good agreement with those of [23, 33], lending further support to the suggestion that new fixed points actually exist. Our results (1.3) are also in good agreement with the corresponding determinations of critical exponents in [23, 33].

The study of more examples with numerical conformal bootstrap techniques is necessary in order to examine the conditions under which perturbative field theory methods may fail to predict the presence of CFTs or to calculate the critical exponents and other observables with accuracy. Examples of critical points examined with the ε expansion in [9, 10, 22, 35] constitute a large unexplored set. The generation of crossing equations for a wide range of finite global symmetry groups was recently automated [36]. This provides a significant reduction of the amount of work required for one to embark on new and exciting numerical bootstrap explorations.

It has been suggested that in frustrated systems the phase transitions are of weakly first order. Note that our determinations of the correlation-length critical exponent ν are in remarkable agreement with experiments. Some tension exists between our results for the order-parameter critical exponent β and the corresponding experimental measurements.
It is worth noting that some experimental results violate the unitarity condition β ≥ ½ ν.

For CFTs with symmetry D_4^n ⋊ S_n we have not managed to obtain any bounds with features not previously found in the literature or not corresponding to a symmetry enhancement to O(2)^n ⋊ S_n. The ε expansion does not find a fixed point with D_4^n ⋊ S_n symmetry. The lack of kinks in our plots, combined with the lack of CFTs with such symmetry in the ε expansion, suggests that they do not exist in d = 3. However, bootstrap studies of D_4^n ⋊ S_n CFTs in larger regions of parameter space are necessary before any final conclusions can be reached.

This paper is organized as follows. In the next section we describe in detail the relevant group theory associated with the global symmetry group O(m)^n ⋊ S_n and derive the associated crossing equation. In section 3 we briefly mention results of the ε expansion for theories with O(m)^n ⋊ S_n symmetry and some of the physical systems such theories are expected to describe at criticality. In section 4 we turn to the group theory of the global symmetry group D_4^n ⋊ S_n and we derive the crossing equation for this case. In section 5 we mention some aspects of the application of the ε expansion to theories with D_4^n ⋊ S_n symmetry. Finally, we present our numerical results in section 6 and conclude in section 7.

2. MN symmetry

Let us recall some basic facts about semidirect products. To have a well-defined semidirect product G = N ⋊ H, with N, H subgroups of G, H proper and N normal, i.e. H ⊂ G and N ◁ G, we need to specify the action of H on N via the group of automorphisms of N. This action is defined by a map f : H → Aut(N).

If one thinks of the symmetry breaking O(mn) → MN_{m,n}, then the irreducible representations (irreps) X, Y, Z stem from the traceless-symmetric irrep of O(mn), while A, B stem from the antisymmetric irrep of O(mn).
The way to figure out explicitly how the O(mn) representations decompose under the action of the MN_{m,n} group is by constructing the appropriate projectors. The dimensions d̃^I_r are as expected from the results of [10, Eq. (5.95)]. Knowledge of the projectors (2.9a–f) allows the derivation of the corresponding crossing equation in the usual way. The four-point function can be expressed in a conformal block decomposition in the 12 → 34 channel; the resulting expressions are valid for any n ≥ 2.

Expanding the Molien series we find^11

M_2(t) = 1 + t^2 + 3t^4 + 4t^6 + 8t^8 + O(t^10) ,
M_3(t) = 1 + t^2 + 3t^4 + 5t^6 + 10t^8 + O(t^10) ,
M_4(t) = 1 + t^2 + 3t^4 + 5t^6 + 11t^8 + O(t^10) .

^11 For n = 3, 4 the computation of the Molien series was performed with GAP [24].

For SDPB we use the options --findPrimalFeasible and --findDualFeasible, and we choose precision = 660, dualErrorThreshold = 10^−20 and default values for other parameters.

6.1. MN

For theories with MN_{m,n} symmetry the bound on the leading scalar singlet is the same as the bound on the leading scalar singlet of the O(mn) model. We will thus focus on bounds on the leading scalar in the X sector, which we have found to display the most interesting behavior. Let us mention here that in the theory of n decoupled O(m) models the dimension of the leading scalar in the X sector is the same as the dimension of the leading scalar in the two-index traceless-symmetric irrep of O(m). Based on the results of [4] we can see that the theory of n decoupled O(m) models is located deep in the allowed region of our corresponding X-bounds below. For some theories with m = n the bounds are shown in Fig. 1. The form of these bounds is rather suggestive regarding the large m, n behavior of the MN_{m,n} theories. Recall that in the O(N) models as N → ∞ we have ∆_S → 2. There is another case where a type of large N expansion exists, namely in the O(m) × O(n) theories [27]. There, for fixed m one can find a well-behaved expansion at large n.
Of course m and n are interchangeable in the O(m) × O(n) example, but in our MN_{m,n} case it is not clear if we should expect the large-N behavior to arise due to m or due to n. It is perhaps not surprising that it is in fact due to m. Keeping m fixed and increasing n does not have a significant effect on the location of the kink (see Fig. 2). On the other hand, keeping n fixed and raising m causes the kink to move toward the point (1/2, 2); see Fig. 2. (After these bootstrap results were obtained, the authors of [10] realized that the large-m expansion was easy to obtain in the ε expansion and they updated the arXiv version of [10] to include the relevant formulas. The anomalous dimension of X is equal to ε at leading order in 1/m, and so ∆_X = d − 2 + ε + O(1/m) = 2 + O(1/m).)

Fig. 1: Upper bound on the dimension of the first scalar X operator in the φ_i × φ_j OPE as a function of the dimension of φ. Areas above the curves are excluded in the corresponding theories.

Fig. 2: Upper bound on the dimension of the first scalar X operator in the φ_i × φ_j OPE as a function of the dimension of φ. Areas above the curves are excluded in the corresponding theories.

Fig. 3: Upper bound on the dimension of the first scalar X operator in the φ_i × φ_j OPE as a function of the dimension of φ in the MN_{2,2} theory. The area above the curve is excluded.

Fig. 4: Upper bound on the dimension of the first scalar X operator in the φ_i × φ_j OPE as a function of the dimension of φ in the MN_{2,3} theory. The area above the curve is excluded.

The second kink in the MN_{2,3} theory is more pronounced than in the MN_{2,2} case. A spectrum analysis for the theory that lives on this second kink yields the critical exponents (1.4), in good agreement with the measurement of [17].

Fig. 5: Central charge values in the MN_{2,2} theory assuming that the dimension of the leading scalar X operator lies on the bound in Fig. 3.

Fig. 6: Central charge values in the MN_{2,3} theory assuming that the dimension of the leading scalar X operator lies on the bound in Fig. 4.

In Figs.
5 and 6 we obtain values of the central charge of the MN_{2,2} and MN_{2,3} theories assuming that the leading scalar X operator lies on the bound in Figs. 3 and 4, respectively. The free theory of mn scalars has central charge C_T^free = mn C_T^scalar = (3/2) mn. We observe two local minima in Figs. 5 and 6, located at ∆_φ's very close to those of the kinks in Figs. 3 and 4. We consider this a further indication of the existence of the CFTs we have associated with the kinks in Figs. 3 and 4. The critical exponents we obtain at these kinks match very well the experimental results. It would be of great interest to examine further the conditions under which the renormalization-group flow is driven to one or the other CFT.

Prompted by these disagreements, we have explored theories with O(2)^2 ⋊ S_2 symmetry in a larger region of parameter space. The idea is that perhaps XY stacked triangular antiferromagnets and the helimagnet Tb are not in the same universality class as NbO_2 and the helimagnets Ho and Dy, although at criticality both these theories have O(2)^2 ⋊ S_2 global symmetry. We find support for this suggestion due to a second kink in our bound and a second local minimum in the central charge (see Figs. 3 and 5 below). Although this kink is not as sharp as the one described above, a spectrum analysis yields the exponents (1.3): β = 0.355(5), ν = 0.576(8). The critical exponent β in (1.2) is not in good agreement with that measured for the antiferromagnetic phase transition of Nd in [17]. Exploration of theories with O(2)^3 ⋊ S_3 global symmetry in a larger part of the parameter space reveals a second kink and a second local minimum in the central charge, much like in the O(2)^2 ⋊ S_2 case (see Figs. 4 and 6 below).
The numbers in (1.3) are in good agreement with experiments on paramagnetic-helimagnetic transitions in Ho and Dy [9, Table 37] and with the structural phase transition of NbO_2 [15]. At the second kink we find

β = 0.394(5) , ν = 0.590(8) ,  (1.4)

in good agreement with the measurement of [17].

The result of our analysis is that there exist two CFTs with O(2)^2 ⋊ S_2 symmetry and two CFTs with O(2)^3 ⋊ S_3 symmetry. In the O(2)^2 ⋊ S_2 case, the first CFT, with critical exponents given in (1.1), is relevant for XY stacked triangular antiferromagnets and the helimagnet Tb. The second, with critical exponents given in (1.3), is relevant for the structural phase transition of NbO_2 and the helimagnets Ho and Dy. In the case of O(2)^3 ⋊ S_3 symmetry we only found an experimental determination of the critical exponent β, in Nd, in the literature [17]. It agrees very well with the exponent in (1.4), computed for the CFT that saturates the second kink. We should mention here that all CFTs appear to have only one relevant scalar singlet, which in experiments would correspond to the temperature. The ε expansion finds only one CFT in each case, and does not appear to compute the critical exponents and the eigenvalues of the stability matrix with satisfactory accuracy.

^ This type of semidirect product is an example of a wreath product, for which the standard notation is K ≀ S_n.
^12 In (4.13) we omit, for brevity, labeling the F_{∆,ℓ}'s and λ^2_O's with the appropriate index I. The appropriate labeling, however, is obvious from the overall sum in each term.
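As a small consistency check (not in the original text), the exponent pairs quoted above satisfy the unitarity condition β ≥ ½ ν mentioned in the conclusion, which follows from the standard scaling relation β = ν ∆_φ together with the three-dimensional unitarity bound ∆_φ ≥ ½:

\[
(1.3):\quad \beta = 0.355 \;\geq\; \tfrac{1}{2}\,\nu = 0.288\,, \qquad
(1.4):\quad \beta = 0.394 \;\geq\; \tfrac{1}{2}\,\nu = 0.295\,.
\]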
References

- R. Rattazzi, V. S. Rychkov, E. Tonni & A. Vichi, "Bounding scalar operator dimensions in 4D CFT", JHEP 0812, 031 (2008), arXiv:0807.0004 [hep-th].
- D. Poland, S. Rychkov & A. Vichi, "The Conformal Bootstrap: Theory, Numerical Techniques, and Applications", Rev. Mod. Phys. 91, 015002 (2019), arXiv:1805.04405 [hep-th].
- S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin & A. Vichi, "Solving the 3D Ising Model with the Conformal Bootstrap", Phys. Rev. D86, 025022 (2012), arXiv:1203.6064 [hep-th]; S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin & A. Vichi, "Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents", J. Stat. Phys. 157, 869 (2014), arXiv:1403.4545 [hep-th]; F. Kos, D. Poland & D. Simmons-Duffin, "Bootstrapping Mixed Correlators in the 3D Ising Model", JHEP 1411, 109 (2014), arXiv:1406.4858 [hep-th].
- F. Kos, D. Poland & D. Simmons-Duffin, "Bootstrapping the O(N) vector models", JHEP 1406, 091 (2014), arXiv:1307.6856 [hep-th]; F. Kos, D. Poland, D. Simmons-Duffin & A. Vichi, "Bootstrapping the O(N) Archipelago", JHEP 1511, 106 (2015), arXiv:1504.07997 [hep-th]; F. Kos, D. Poland, D. Simmons-Duffin & A. Vichi, "Precision Islands in the Ising and O(N) Models", JHEP 1608, 036 (2016), arXiv:1603.04436 [hep-th].
- A. Stergiou, "Bootstrapping hypercubic and hypertetrahedral theories in three dimensions", JHEP 1805, 035 (2018), arXiv:1801.07127 [hep-th].
- S. R. Kousvos & A. Stergiou, "Bootstrapping Mixed Correlators in Three-Dimensional Cubic Theories", arXiv:1810.10015 [hep-th].
- K. G. Wilson & J. B. Kogut, "The Renormalization group and the epsilon expansion", Phys. Rept. 12, 75 (1974).
- A. Pelissetto & E. Vicari, "Critical phenomena and renormalization group theory", Phys. Rept. 368, 549 (2002), cond-mat/0012164.
- H. Osborn & A. Stergiou, "Seeking fixed points in multiple coupling scalar theories in the ε expansion", JHEP 1805, 051 (2018), arXiv:1707.06165 [hep-th].
- H. Kawamura, "Universality of phase transitions of frustrated antiferromagnets", Journal of Physics: Condensed Matter 10, 4707 (1998), cond-mat/9805134.
- B. Delamotte, D. Mouhanna & M. Tissier, "Nonperturbative renormalization group approach to frustrated magnets", Phys. Rev. B69, 134413 (2004), cond-mat/0309101.
- S. El-Showk & M. F. Paulos, "Bootstrapping Conformal Field Theories with the Extremal Functional Method", Phys. Rev. Lett. 111, 241601 (2013), arXiv:1211.2810 [hep-th].
- A. I. Mudrov & K. B. Varnashev, "Critical behavior of certain antiferromagnets with complicated ordering: Four-loop ε-expansion analysis", Phys. Rev. B 64, 214423 (2001), cond-mat/0111330.
- R. Pynn & J. D. Axe, "Unusual critical crossover behaviour at a structural phase transformation", Journal of Physics C: Solid State Physics 9, L199 (1976).
- D. Mukamel, "Physical Realizations of n ≥ 4 Vector Models", Phys. Rev. Lett. 34, 481 (1975); D. Mukamel & S. Krinsky, "ε-expansion analysis of some physically realizable n ≥ 4 vector models", J. Phys. C8, L496 (1975); D. Mukamel & S. Krinsky, "Physical realizations of n ≥ 4-component vector models. II. ε-expansion analysis of the critical behavior", Phys. Rev. B 13, 5078 (1976); P. Bak & D. Mukamel, "Physical realizations of n ≥ 4-component vector models. III. Phase transitions in Cr, Eu, MnS_2, Ho, Dy, and Tb", Phys. Rev. B13, 5086 (1976).
- P. Bak & B. Lebech, ""Triple-q" Modulated Magnetic Structure and Critical Behavior of Neodymium", Phys. Rev. Lett. 40, 800 (1978).
- M. Tissier, B. Delamotte & D. Mouhanna, "XY frustrated systems: Continuous exponents in discontinuous phase transitions", Phys. Rev. B67, 134422 (2003), cond-mat/0107183.
- C. Behan, "PyCFTBoot: A flexible interface for the conformal bootstrap", Commun. Comput. Phys. 22, 1 (2017), arXiv:1602.02810 [hep-th].
- N. A. Shpot, "Critical behavior of the mn component field model in three-dimensions", Phys. Lett. A133, 125 (1988); N. A. Shpot, "Critical behavior of the mn component field model in three-dimensions. 2: Three loop results", Phys. Lett. A142, 474 (1989).
- A. I. Mudrov & K. B. Varnashev, "Critical thermodynamics of three-dimensional MN component field model with cubic anisotropy from higher loop ε expansion", J. Phys. A34, L347 (2001), cond-mat/0108298; A. Mudrov & K. Varnashev, "On critical behavior of phase transitions in certain antiferromagnets with complicated ordering", JETP Lett. 74, 279 (2001), cond-mat/0109338.
- S. Rychkov & A. Stergiou, "General Properties of Multiscalar RG Flows in d = 4 − ε", SciPost Phys. 6, 008 (2019), arXiv:1810.10541 [hep-th].
- A. Pelissetto, P. Rossi & E. Vicari, "The Critical behavior of frustrated spin models with noncollinear order", Phys. Rev. B63, 140414 (2001), cond-mat/0007389.
- "GAP - Groups, Algorithms, and Programming, Version 4.10.0", The GAP Group (2018), https://www.gap-system.org.
- A. I. Mudrov & K. B. Varnashev, "Stability of the three-dimensional fixed point in a model with three coupling constants from the ε expansion: Three-loop results", Phys. Rev. B 57, 5704 (1998).
- D. Simmons-Duffin, "A Semidefinite Program Solver for the Conformal Bootstrap", JHEP 1506, 174 (2015), arXiv:1502.02033 [hep-th].
- H. Kawamura, "Renormalization-group analysis of chiral transitions", Phys. Rev. B 38, 4916 (1988); A. Pelissetto, P. Rossi & E. Vicari, "Large n critical behavior of O(n) × O(m) spin models", Nucl. Phys. B607, 605 (2001), hep-th/0104024.
- S. R. Kousvos & A. Stergiou, "To appear", arXiv:19xx.xxxxx [hep-th].
- D. Poland, D. Simmons-Duffin & A. Vichi, "Carving Out the Space of 4D CFTs", JHEP 1205, 110 (2012), arXiv:1109.5176 [hep-th]; Y. Nakayama, "Bootstrap experiments on higher dimensional CFTs", Int. J. Mod. Phys. A33, 1850036 (2018), arXiv:1705.02744 [hep-th]; Z. Li, "Solving QED_3 with Conformal Bootstrap", arXiv:1812.09281 [hep-th].
- Y. Nakayama & T. Ohtsuki, "Approaching the conformal window of O(n) × O(m) symmetric Landau-Ginzburg models using the conformal bootstrap", Phys. Rev. D89, 126009 (2014), arXiv:1404.0489 [hep-th].
- Y. Nakayama & T. Ohtsuki, "Bootstrapping phase transitions in QCD and frustrated spin systems", Phys. Rev. D91, 021901 (2015), arXiv:1407.6195 [hep-th].
- P. Calabrese, P. Parruccini & A. I. Sokolov, "Chiral phase transitions: Focus driven critical behavior in systems with planar and vector ordering", Phys. Rev. B66, 180403 (2002), cond-mat/0205046.
- P. Calabrese, P. Parruccini, A. Pelissetto & E. Vicari, "Critical behavior of O(2) × O(N) symmetric models", Phys. Rev. B70, 174439 (2004), cond-mat/0405667.
- B. Delamotte, M. Dudka, Yu. Holovatch & D. Mouhanna, "About the relevance of the fixed dimension perturbative approach to frustrated magnets in two and three dimensions", Phys. Rev. B82, 104432 (2010), arXiv:1009.1492 [cond-mat.stat-mech]; B. Delamotte, M. Dudka, Yu. Holovatch & D. Mouhanna, "Analysis of the 3d massive renormalization group perturbative expansions: a delicate case", Cond. Matt. Phys. 13, 43703 (2010), arXiv:1012.3739 [cond-mat.stat-mech].
- R. B. A. Zinati, A. Codello & G. Gori, "Platonic Field Theories", arXiv:1902.05328 [hep-th].
- M. Go & Y. Tachikawa, "autoboot: A generator of bootstrap equations with global symmetry", arXiv:1903.10522 [hep-th].
Co-optimization of Energy and Reserve with Incentives to Wind Generation: Case Study

Yves Smeers, Member, IEEE, Sebastian Martin, Member, IEEE, and José A. Aguado, Member, IEEE
Université Catholique de Louvain (UCL), Belgium · Department of Electrical Engineering, University of Malaga, Malaga, Spain

Abstract—This case study presents an analysis and quantification of the impact of the lack of co-optimization of energy and reserve in the presence of high penetration of wind energy. The methodology is developed in a companion paper, Part I. Two models, with and without co-optimization, are confronted. The modeling of reserve, the incentive to renewables, and the calibration of the model are inspired by the Spanish market. A sensitivity analysis is performed on configurations that differ by generation capacity, ramping capability, and market parameters (available wind, Feed-in Premium to wind, generators' risk aversion, and reserve requirement). The models and the case study are purely illustrative, but the methodology is general.

DOI: 10.1109/tpwrs.2021.3114376
arXiv: 2107.09636
Index Terms—co-optimization, energy and reserve, complementarity conditions, market equilibrium

NOTATION

Only the terms used in this Part II are included in this section. For a complete list of terms see Part I.

A. Indices and sets
g, G       Dispatchable generators, g ∈ G.
w, WT      Wind turbines, w ∈ WT.
k, Ω       Index and set for scenarios, k ∈ Ω.

B. Parameters
Ā/A        Upper/lower factor for required upward reserve (p.u.).
B          Lower bound factor for committed downward reserve with respect to the committed upward reserve (p.u.).
R̄g/Rg      Upward/downward ramping slope with respect to the generation capacity (p.u.).
Cg         Slope of generation cost for dispatchable generator g (€/MWh).
FiP        Premium to the scheduled wind generation (€/MWh).
My         Balancing reserve factor for wind turbines (p.u.).
Mx         Balancing reserve factor for dispatchable generators (p.u.).

C. Variables
Primal variables
d          Energy sales of the firm in day-ahead (MWh).
ru_g/rd_g  Committed upward/downward reserve capacity from dispatchable generator g (MW).

I. CASE STUDY: DESCRIPTION AND DATA

The case study is inspired by a phenomenon that affected the dynamics of European generation in the period 2008-2013 [1] and still persists in moderate form today [2]-[4]. Existing flexible Combined Cycle Gas Turbines (CCGT), necessary to the system because of their contribution to reliability and flexibility, became unprofitable, with the consequence that special non-market arrangements had to be found to retain them in the system. This situation was summarized in [5]: "There's plenty of flexibility, but so far it has no value."
The statement can be interpreted in two ways: either the system is awash with flexibility and its short-term price is effectively null, which is a transient phenomenon; alternatively, the market does not price flexibility in a correct way, which is a matter of market design. We use the models COM (Co-Optimization) and EQM (Equilibrium Model) developed in Part I to assess whether co-optimization of energy and reserve would have significantly affected the cash flow generated by these flexible units. The analysis is illustrative but the methodology can easily be scaled up and is only limited by the capabilities of commercial optimization codes to solve the underpinning two-stage stochastic programs.

We structure the analysis around two questions:
1) Can one characterize a range of situations where flexibility, expressed in terms of reserve requirements, has effectively no value and, alternatively, conditions where it has value? If so, can one characterize situations that retain flexible equipment in the system?
2) Does market design (co-optimization) have an impact on the above? And if so, how does it depend on market parameters such as wind condition µ, Feed-in Premium FiP to wind, generators' risk aversion Ψ, and reserve requirement My?

This Section I of Part II introduces a case study that considers a number of configurations of generation capacity (High, Low, and Very Low), ramping capacity (High, Low), and market parameters (wind availability µ, FiP, risk aversion Ψ, reserve requirement My). Section II contains a discussion of the numerical results. The conclusions in Section III close the paper.

A. Energy demand

The price and energy demand are endogenous and related by a linear inverse demand function, (1):

    Price = Γ0 − Φ0 · (Energy Demand in Day-Ahead)    (1)

where Γ0 = 209.78 (€/MWh) and Φ0 = 0.0056 (€/(MW²·h)), based on a calibration on historical data from the Spanish system. The confidence level for Conditional Value at Risk (CVaR) is Θ = 0.95.

B. Generation, Ramping and Market Configurations

Three configurations of generation capacity are considered and presented in Table I. Wind capacity (22.57 GW) and the demand function are identical in all cases. Dispatchable generation capacities take three values: 28.5 GW (High), 25.35 GW (Low), and 23.35 GW (Very Low). These values can be interpreted as stages in the retirement of CCGT to restore profitability of the remaining capacity.

The parameter Λ ∈ [0, 1] is described in detail in Part I. It is used for testing the capability of the system to manage forecast errors on wind generation. Λ sets the ramping capacity already committed in the scheduled generation to move from one period to the next. For instance, for the upward reserve the capacity already committed (not available for flexibility) is R̄g · Λ · x_g. Two configurations for ramping capability are considered: Low (Λ = 1) and High (Λ = 0). Λ takes the same value for all dispatchable generators in each test. The characteristics of the six possible combinations of generation and ramping, and the numbers of the figures reporting the results, are given in Table II. The reference situation is taken as Low ramping with Low generation. We also consider configurations with low generation and high ramping to develop a story line.

Four parameters further characterize the market configurations: wind condition µ, FiP to wind, generators' risk aversion Ψ ∈ [0, 1], and reserve requirement My by renewable generation. Eight configurations, listed in Table III, are considered in the case study. We refer to those configurations in the figures by using the abbreviated names in the column heads of Table III.

C. Reserve Modeling

Reserve is modeled using what we call balancing reserve factors (MW of reserve/MW scheduled) [6]: Mx = 0.02 for conventional generators in all the configurations, and My for wind generators. The values of these parameters are calibrated on the historical observations in the Spanish market.
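The calibrated inverse demand curve of Section I-A can be illustrated with a short sketch. The price level used below is the base equilibrium price quoted later in Section II; the inversion helper is our own convenience function for illustration, not part of the paper's model code.

```python
# Linear inverse demand, eq. (1): price = GAMMA0 - PHI0 * d
GAMMA0 = 209.78   # EUR/MWh, calibrated on Spanish historical data
PHI0 = 0.0056     # EUR/(MW^2 * h)

def price(d):
    """Day-ahead price implied by energy sales d (MWh)."""
    return GAMMA0 - PHI0 * d

def demand(p):
    """Inverse of price(): day-ahead demand clearing at price p (EUR/MWh)."""
    return (GAMMA0 - p) / PHI0

# Example: the base equilibrium price quoted in Section II is 78.15 EUR/MWh
d = demand(78.15)
print(round(d, 2), round(price(d), 2))   # prints: 23505.36 78.15
```

Note that the implied demand (≈ 23.5 GWh) is below the day-ahead demand base value, consistent with the base price being the highest equilibrium price among the Low Generation configurations.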
For wind turbines we consider two cases that reflect the forecast error due to the time between commitment and real time:
• My = 0.15, for a short time horizon, around 6 h.
• My = 0.60, for a long time horizon, around 24 h.
A longer forecasting time horizon implies greater uncertainty and larger forecast errors. Using the balancing factors, the reserve required by the Transmission System Operator (TSO) is given by (2),

    Mx · Σ_{g∈G} x_g + My · Σ_{w∈WT} y_w    (2)

where x_g is the scheduled dispatchable generation by g and y_w is the scheduled wind generation by turbine w. The constraints for the upward, Σ_{g∈G} ru_g, and downward, Σ_{g∈G} rd_g, reserves are also inspired by the Spanish Grid Code [7]. The upward reserve must remain between 90% (A = 0.9) and 110% (Ā = 1.1) of the reserve required by the TSO, (2). The downward reserve must remain between 40% (B = 0.4) and 100% of the value of the upward committed reserve. The value for the committed reserves corresponds here to the sum of the primary and secondary reserves.

D. Uncertain Parameters

The available wind generation is the only uncertain parameter in the analysis. The values of µ and the scenarios are reported in Table IV; all these values are given as a percentage of the rated wind generation capacity, that is 22573 MW, as indicated in Table I. We consider the wind variability through the expected value of wind µ, and also the forecast errors through 12 scenarios for each value of µ. We consider three values of µ that correspond to low wind (µ = 4.01%), average wind (µ = 23.83%), and high wind (µ = 65.00%). All scenarios have the same probability, 1/12 ≈ 8.33%.

The following describes the construction of the wind scenarios taking into account the forecast errors. We use a Beta distribution to model the wind power forecast error. Let the load factor for wind generation be q = (Power output)/(Rated power) ∈ [0, 1]; then according to [8], [9] this load factor fits a Beta distribution:

    f(q) = [Γ(α+β)/(Γ(α)Γ(β))] · q^(α−1) · (1 − q)^(β−1),  q ∈ [0, 1],

where Γ(α+β)/(Γ(α)Γ(β)) is a scale factor such that ∫₀¹ f(x) dx = 1, and the parameters α and β are directly related to the mean µ and the standard deviation σ of that distribution: α = µ²(1−µ)/σ² − µ and β = α·(1/µ − 1). The analysis of empirical data shows that σ fits a linear function of µ, σ(µ) = k1·µ + k2 [8]-[10], where the coefficients k1 and k2 depend mainly on the time horizon and the geographic dispersion of the wind turbines. Here we use the expression given in [10] for large-scale generation and a time horizon of 24 h, and assume the same expression for a horizon of 6 h: σ = µ/5 + 1/50 (in per unit).

To build the scenarios, we divide the range [0, 1] for the load factor into segments and associate each scenario with a segment. Here we consider 12 scenarios. Let k be the index for the extreme points of each segment, z_k ∈ [0, 1]; then the range [0, 1] is discretized using 13 points, 0 = z_1 < z_2 < … < z_13 = 1. The value and the probability of scenario k are:
a) Value of scenario k: µ(k) = ∫_{z_k}^{z_{k+1}} [Γ(α+β)/(Γ(α)Γ(β))] · x^α · (1 − x)^(β−1) dx, that is, the expected value on the segment that defines the scenario.
b) Probability of scenario k: pr(k) = ∫_{z_k}^{z_{k+1}} [Γ(α+β)/(Γ(α)Γ(β))] · x^(α−1) · (1 − x)^(β−1) dx, the integral of the probability density function of the Beta distribution on the segment associated with the scenario.
The points 0 = z_1 < z_2 < … < z_13 = 1 are selected to get segments of equal probability: 1/12 = ∫_{z_k}^{z_{k+1}} [Γ(α+β)/(Γ(α)Γ(β))] · x^(α−1) · (1 − x)^(β−1) dx, k = 1, 2, …, 12.

E. Models Implementation and Solving

COM is a quadratic programming problem with linear constraints, made of 367 equations and 284 variables.
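The equal-probability scenario construction of Section I-D can be sketched numerically. One simplification to note: the paper integrates the Beta density exactly over the segments, while this sketch (Python, standard library only) approximates the 12 equal-probability segments and their conditional means by Monte Carlo sampling; the µ- and σ-fits are taken from the text, the sampling shortcut is ours.

```python
import random

def beta_params(mu, sigma):
    # Method-of-moments fit used in the paper:
    #   alpha = mu^2 (1 - mu) / sigma^2 - mu,   beta = alpha * (1/mu - 1)
    alpha = mu ** 2 * (1 - mu) / sigma ** 2 - mu
    return alpha, alpha * (1 / mu - 1)

def wind_scenarios(mu, n_scen=12, n_samples=60_000, seed=7):
    # sigma fitted linearly to mu, as in the text: sigma = mu/5 + 1/50
    sigma = mu / 5 + 1 / 50
    a, b = beta_params(mu, sigma)
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(a, b) for _ in range(n_samples))
    size = n_samples // n_scen
    # Equal-probability segments become equal-count groups of sorted draws;
    # each scenario value approximates the conditional mean of the load
    # factor on its segment, each with probability 1/12.
    return [sum(draws[k * size:(k + 1) * size]) / size for k in range(n_scen)]

scens = wind_scenarios(0.2383)  # average-wind case, mu = 23.83 % of rated capacity
print(len(scens), [round(s, 3) for s in scens])
```

The 12 returned values are increasing load factors whose average recovers µ, mirroring Table IV up to the Monte Carlo approximation.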
As expected, it is solved with no difficulty by standard off-the-shelf solvers such as CPLEX, CONOPT and MINOS under GAMS [11]. The computation time is around 0.78 seconds per problem on a machine with an Intel i7-5820K CPU @ 3.30 GHz and 32 GB RAM under Debian 4.19.171-2 x86_64. We cross-checked the model with two alternative formulations using the Karush-Kuhn-Tucker (KKT) conditions of COM, solving them with PATH, also under GAMS: one formulation with handwritten KKT conditions, and the other provided by the automatic generation of the dual by Extended Mathematical Programming in GAMS. We got the same results with the three formulations.

EQM is a linear complementarity problem that modifies COM's KKT conditions (see Part I). This problem misses the perfect arbitraging between reserve and energy of COM. An interesting question is whether this is reflected in solving capabilities. And indeed, a direct application of PATH (under GAMS) to EQM failed in some cases, while PATH never failed on the corresponding COM problems. We accordingly used an iterative approach that interacts PATH with a linear subproblem calculating the CVaR to update the risk-neutral probabilities at each iteration. This worked in all studied cases, but it is at this stage a heuristic. Each EQM problem is made of 636 equations and 636 variables, and each linear subproblem for the CVaR contains 14 equations and 13 variables. The computation time to solve each EQM using the iterative approach is ≈ 1.86 s, using GAMS [11] on a machine with an Intel i7-5820K CPU @ 3.30 GHz and 32 GB RAM under Debian.

COM is a convex problem that, except for degeneracy, always has a unique solution. Scalability is not an issue, as the (very high) capabilities of LP commercial codes set the limits. The situation is different for EQM, which is not amenable to optimization. One needs to verify the existence of an equilibrium and the possibility of multiple solutions; one must also examine scalability.
A full answer to these questions goes beyond the scope of this paper, but the following gives the intuition. Existence of a solution to EQM can be proved by a homotopy argument [12]. One can show that a solution of COM is a solution of a modified EQM, and that the EQM that one wants to solve can be obtained by a continuous transformation of COM. A standard reasoning of degree theory then implies that the uniqueness of the solution of COM implies a finite number of isolated solutions of EQM.

These mathematical properties have an economic interpretation. As argued in Part I, COM differs from EQM by the fact that co-optimization takes into account the opportunity cost on reserves of decisions on energy. The continuous transformation of COM into EQM amounts to progressively decreasing the impact of this opportunity cost. Multiplicity of solutions also has an economic interpretation: co-optimization tries to minimize the excess demand (positive or negative) by a search over both energy and reserve prices. It is easy to show that this excess demand is a "monotone" (that is, well-behaved) mapping of these energy and reserve prices. We argued in Part I that the separation of energy and reserve (dropping co-optimization) implies that one needs to find a zero of the excess demand by only playing with reserve prices, letting the market find the corresponding energy production and price through a separate auction. The problem is that the resulting excess demand for reserve is not necessarily monotone (can be badly behaved) in the price of reserve, which is the source of a multiplicity of solutions.

This has consequences for scalability. There is now a lot of experience (originating in Hogan's work on Project Independence in the seventies [13]) in solving these problems by sequences of optimization problems. Except for the need to resort to an iterative procedure, scalability is only restricted by the possibilities offered by commercial codes. But this remains a heuristic.
A final question is whether multiplicity of solutions can occur in practice. As argued later in the case study, multiplicity of solutions can indeed occur. Former European Union (EU) experience with the separation of the clearing of energy and transmission showed that awkward situations could happen in practice where the clearing of transmission and energy were incompatible (transmission rights and energy flows were in opposite directions). This pattern already reappeared with the recent separation of energy transmission between the UK and the EU due to Brexit. The analysis and interpretation of multiplicity of solutions goes beyond this paper.

II. CASE STUDY: RESULTS AND DISCUSSION

Samples of results for the eight configurations listed in Table III are depicted in Figs. 1 to 5. The configurations differ by the parameters (µ, FiP, Ψ, My) and are referred to by the column names in Table III. The selected results are expressed relative to base values (names in parentheses refer to the figures):
2) (Equil. price), with base value 78.15 €/MWh, which corresponds to the highest equilibrium price for the Low Generation configurations. Using that base value, the maximum value of variable cost for generators, max_g {Cg}, is around 55.5%. This means an (Equil. price) lower than 55.5% is an incentive for plant retiring.
3) (Gross welf.), gross welfare, with base value 3552878.90 €. It is computed as the objective function of COM for both COM and EQM.
4) (Net welf.), net welfare, with the same base value as (Gross welf.), is obtained by subtracting the FiP payment, Σ_{w∈WT} y_w·FiP, from the (Gross welf.).
5) (Consum. pay.): consumer payment for energy, (Γ0 − Φ0·d)·d, with base value 1837531.10 €. This payment goes to the generator.
6) (Reser. cap. pay.): payment for committed reserve capacities, Σ_{g∈G} (ru_g·(κ − γ) + rd_g·(κ − γ)), with base value 1837531.10 €, the same as for the (Consum. pay.), so both percentages can be added to get a global payment. Because reserve is a service managed by the TSO in EQM, this payment goes from the consumer to the TSO when it needs to incentivize the generator to provide reserve.

Because the Spanish system includes an upper bound on reserve, we also introduce penalties, γ̄ and γ, on the generator in order to induce it to remain within these bounds. In that case, these penalties are levied by the TSO and rebated to the consumer as a reduction of fixed access charges.

A tentative motivation for the TSO's upper bound on reserve is briefly discussed here. Because of zero marginal cost, margins accruing from wind generation are higher than those from fossil fuel plants. This effect is further enhanced by the FiP. There is thus an incentive to bid wind instead of fossil plants in Day-Ahead (DA). The Real Time (RT) correction in case of discrepancies between DA forecast and RT realization mitigates this incentive, but to an extent that is difficult to foresee ex ante. There may thus remain an incentive to bid wind higher than expectation and to keep fossil capacities for reserve. The TSO may want to restrict this practice: it can do so by setting a reserve requirement and adding some interval (from 90 to 110%) of that value for the generator to choose. This [90, 110]% is referred to as the TSO's interval in the following.

The common EU wisdom is that the electricity market is a commodity (energy) matter and that services (here reserve) are another business. This is reflected in the separation of the Power eXchanges and TSOs. Market conditions where energy prices are insufficient for remunerating plants that are necessary for the functioning of the system suggest exploring whether services (reserve) provided by the generators could constitute another valuable source of revenue. This underpins the case study, with the subsidiary question of whether co-optimization of energy and reserve could modify the value of the reserves.
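A small sketch of the reserve-requirement band of Section I-C makes the discussion concrete: depending on where the committed upward reserve falls relative to the TSO's [90, 110]% interval around the requirement of eq. (2), its value is positive, zero, or negative. The scheduled quantities and committed reserves below are hypothetical illustration values, not case-study data.

```python
def required_reserve(x, y, Mx=0.02, My=0.15):
    # TSO reserve requirement, eq. (2): Mx * sum(x_g) + My * sum(y_w)
    return Mx * sum(x) + My * sum(y)

def classify_upward_reserve(ru_total, R, A_lo=0.9, A_hi=1.1):
    # Committed upward reserve must stay within [A_lo * R, A_hi * R]
    if ru_total <= A_lo * R:
        return "scarce: lower bound binds, reserve is remunerated"
    if ru_total >= A_hi * R:
        return "excessive: upper bound binds, reserve value is negative"
    return "interior: abundant byproduct, zero shadow value"

# Hypothetical dispatch: 20 GW fossil and 5 GW wind scheduled (in MW)
R = required_reserve([20_000.0], [5_000.0])   # 0.02*20000 + 0.15*5000 = 1150 MW
print(round(R, 1), classify_upward_reserve(1_000.0, R))
```

The three branches of the classifier correspond to situations iii), ii) and i) discussed below.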
Three different situations can be identified:
i) Reserve is available as an abundant byproduct of generation and hence has "no value";
ii) Reserve is excessive, it hits the upper value of the TSO's interval, and has a negative value. Increasing the upper bound to a level where it is no longer binding will extend the range where it has "no value";
iii) The case of most interest is when reserve is effectively a scarce resource; it hits the lower value (90%) of the TSO's interval and should be remunerated.

A. High Generation Capacity (28.5 GW) and High Ramping Capability (Λ = 0): HH (Fig. 1)

This case (HH) fully reflects the 2013 statement "There's plenty of flexibility but so far it has no value" [5]. Electricity prices (Equil. price) are lower than or equal to the highest fuel cost of the CCGT in most cases and hence will also not cover the fixed operating costs of all plants. The problem is structural and due to a legacy of flexible generation capacity that exceeds the need for reserve and generation. This is a short-term problem, as the excess capacity will eventually retire. But it creates a stranded asset issue that needs to be managed: one needs an ordered exit in order to avoid a simultaneous dismantling of all the excess capacity. In contrast, and assuming that the unprofitable plants are retained through some special contractual arrangement, the consumer benefits from the disequilibrium: its bill (Consum. pay.) is low
Existing differences are real but small. The striking feature of the situation is the dramatic disequilibrium in the generation system, that indeed prevailed in Europe before massive capacity impairment. B. High Generation Capacity (28.5 GW) and Low Ramping Capability (Λ = 1): HL (Fig. 2) This case (LH) slightly differs from the previous one (HH). It changes the value of energy and reserves and moderately increases the differences between the results of EQM and COM that remain limited to the same items: electricity demand, equilibrium price and reserve capacity payment. Electricity price (Equil. price) increases compared to HH but remains generally too low to cover both fuel costs and fixed operating costs, except when demand for reserve increases because of imprecise forecast (high M y cases in Res. H. and Res. L.) or low wind (Wind. L.). Low wind reduces demand because it implies higher fossil generation, with the aggravation that low ramping also contributes to the reduction of demand in DA (92.92% in HL compared to 95.08% in HH). This is neither a scarcity of capacity nor a market power effect: it is more profitable to use capacity for reserve of wind generation than directly for fossil generation. Recall that hindering that pattern has been mentioned above as one possible justification of the upper bound of the TSO's interval ([90,110]% reserve requirement). The reserve behavior is mostly analogous to the one in HH, Fig. 1, but a bit tighter. This is due to the lower flexibility that reduces the contribution of capacity to reserve. Generally speaking, reserve remains in the "no value" area with (Reser. cap. pay.) negative or zero. An exception occurs with high requirement for reserve capacity (Res.L and Res.H) and risk aversion high (Risk H.) where (Reser. cap. pay.) is zero or positive. Reserve is now higher or equal to the lower bound of the TSO's interval. The energy price (Equil. price) in high reserve demand (Res. H. and Res. L.) 
comfortably exceeds the fuel cost of the plants, which should make them able to cover their fixed operating costs. The same is true in low wind (Wind. L.). The more efficient use of reserve in COM than in EQM observed in HH is mostly confirmed with reserve being now positive with high reserved demand (Res. H.), except for one case that deserves a particular attention. Configuration (Res. L.) with high demand for reserve shows a reserve capacity payment higher in COM than in EQM, which is an anomaly with respect to all other results. We conducted a deeper investigation of (Res. L.), using a homotopy reasoning from configuration (FiP. L.) to (Res. L.). This amounts to parameterize the problem from the low to the high demand for reserve, M y = 15% → M y = 60%, (all other parameters remaining equal). The results show a discontinuous behavior for a demand of reserve M y around 46.5% where PATH switches from an isolated equilibrium to an other one where it remains till M y = 60%. Note that the results remain within basic principles, in other words EQM does not dominate COM in terms of welfare, which would have contradicted basic economics. It is simply an illustration that the lack of co-optimization, that is the separate clearing of energy and reserve, creates uncertainty (and counter intuitive behaviors) in the outcome of the market and that these are worth further exploration. The general conclusion of these two cases (HH and HL), high generation with high/low ramping, is that except in the configurations of high reserve demand, and low wind one should expect some generation capacity to be retired in order to restore financial sustainability. The next cases examine this situation. C. Low Generation Capacity (25.35 GW) and Low Ramping Capability (Λ = 1): LL (Fig. 3) A reduction of capacity is the expected logical consequence of the loss of profitability of generation observed in Figs. 1 (HH) and 2 (HL). 
This reduction should in principle increase the price of energy, as well as the revenue accruing from, now possibly scarce, reserves. This is effectively what happens in (LL), at least for energy: the price (Equil. price) is now sufficiently higher than the 55.5% benchmark to suggest that it can support fuel and fixed operating costs. Inducing investment is obviously another matter. The only exception to this general finding occurs with high wind (Wind H.), where the energy price is still a low 52.65%.

Reserve also looks better, even though it generally remains abundant, as observed in the profile of the reserve capacity payment (Reser. cap. pay.), which remains negative or zero in six configurations, some of them associated with excessive reserves. Positive values occur for high reserve demand (Res. H. and Res. L.): the reserve capacity payment in those cases is effectively a payment from the consumer to the TSO, and eventually to the generator, that contributes to the value of the plant.

Other phenomena are worth mentioning. The payment for excessive reserve that reduces the value of capacities decreases compared to the case with higher generation capacity (HL): excess reserve diminishes when excess generation capacity decreases. Similarly, the positive value of the reserve capacity increases the contribution to the plants' profit, with (Reser. cap. pay.) becoming a non-negligible fraction of the revenue of the plants when the demand for reserve is high (Res. L. and Res. H.). Also, EQM and COM now perform differently, with COM significantly reducing the cost of reserve capacity. This has consequences on the electricity price and the payment for services by the consumers (both lower in COM than in EQM), implying a (modestly) higher welfare in COM. Co-optimization of energy and reserve thus appears useful if the demand for reserve is high. This suggests further reducing the capacity to test a possibly general scarcity of reserves, which is discussed next.
We conclude with two cases of a further reduction of capacity, both with very low dispatchable generation (23.35 GW): one with low ramping (VLL), Fig. 4, the other with high ramping (VLH), Fig. 5. Most of the phenomena observed in Fig. 4 for VLL are similar to those discussed for LL, Fig. 3, just a bit stronger. The price of energy (Equil. price) increases and the negative value of the reserve capacity payment (Reser. cap. pay.) persists, but decreases in the configurations (Wind L.) and (Risk L.). Reserve thus remains non-constraining in these cases. The co-optimization of energy and reserve in COM improves the efficiency of the market by avoiding diverting too much capacity from energy to reserve. Also, the share of reserve drastically increases in importance in the remuneration of plants, with significant differences of efficiency between EQM and COM.

A striking result is that these effects disappear with a high ramping capacity, as shown in Fig. 5 for VLH. The valuation of reserves and the role of market design are now completely different. The price of electricity (Equil. price) remains high enough to support the fuel and fixed operating costs of the plants: this is a result of the lower capacity. But reserves lose all their value and return to the pattern depicted in Fig. 1 for HH.

III. CONCLUSION

The dramatic impact of wind penetration on power markets is now well recognized. Wind decreases the price of the commodity and damages the economics of conventional and flexible generators. While this may seem like a good step towards the energy transition, this creates stranded costs for plant owners. It also raises an issue of vulnerability of the energy market if flexible capacity is suddenly retired from the system, because it can no longer cover its fixed operating costs, and alternatives are not yet available. The phenomenon is now well understood; it was most flagrant in the period 2008-2013 in the EU, and recent documents show that it is still rampant.
Possibly less understood is the extent to which the demand for reserve by the TSO for accommodating wind can create a countervailing demand for services that could mitigate or substitute the pressure coming from low energy prices, whether in the residual conventional system or its replacement. Still less understood might be the possible impact of market design on that countervailing demand. More specifically, the EU develops a strong renewable policy in a regime of separation of energy and services. Because there exist clear arbitrage possibilities between energy and reserve, a relevant objective is to try to characterize the demand for reserve services in the current evolution of reduced fossil generation, and whether the EU separation of energy and reserve has an impact on that demand compared to a co-optimization design. This work attempts to provide some insight on this issue.

Leaving aside transmission, we construct two models that embed energy and reserve in two market organizations: EQM keeps energy and reserve separate and COM co-optimizes them. The models are not meant to represent a particular market, but try to satisfy some degree of realism by taking inspiration from a European situation. The analysis is conducted by referring to the (costly) evolution of fossil generation since the beginning of this century, and in particular in the period 2008-2013. We find that there is no pricing power in reserve when generation capacity is abundant and fossil plants are unable to cover their fixed and variable operating costs on the energy market. Market design (COM or EQM) is then also irrelevant. But a high demand for reserve due to wind, combined with a reduction of economically redundant generation capacity, restores plant profitability, reveals the value of reserve, and shows the relevance of co-optimization, at least in a basic situation of little flexibility capability.
The revenue accruing from reserve significantly contributes to the viability of flexible plants in COM and EQM, but because of a more efficient use of reserves, COM decreases the price to the consumer and increases demand and welfare. These advantages, however, look fragile to an increase of reserve capability: increasing the reserve potential of existing plants could quickly decrease their value both in COM and EQM and possibly be self-defeating for an investor. This result obviously requires further analysis. It should be combined with the relaxation of important simplifications made in the representation of the EU system idiosyncrasies, which could only be removed at the cost of considerable technical difficulties. This will be treated in a future work.

NOMENCLATURE

Γ0: Constant of the inverse demand function (€/MWh).
Θ: Confidence level for firm Conditional Value at Risk (CVaR), Θ ∈ (0, 1).
Λ: Parameter for ramping availability, Λ ∈ [0, 1].
Φ0: Slope of the inverse demand function (€/(MWh)²).
Ψ: Level of risk aversion, Ψ ∈ [0, 1]; zero is risk neutral.
xg: Scheduled generation of dispatchable unit g (MW).
yw: Scheduled generation of wind turbine w (MW).

Dual variables (all in €/MWh):
γ/γ̄: Upper bound on committed upward/downward reserve.
κ/κ̄: Lower bound on committed upward/downward reserve.

Y. Smeers is with the Center for Operations Research and Econometrics (CORE) at Université Catholique de Louvain (UCL), Belgium (e-mail: [email protected]). S. Martin and J. A. Aguado are with the Department of Electrical Engineering, University of Malaga, Malaga, Spain (e-mail: [email protected]; [email protected]). Their work was supported in part by the Spanish Ministry of Economy and Competitiveness through the project ENE2016-80638-R, and in part by the University of Málaga (Campus de Excelencia Internacional Andalucía Tech).

To facilitate comparisons, results are expressed in percentage of a base value, as explained below.
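The nomenclature's CVaR confidence level Θ and risk-aversion weight Ψ can be made concrete with a small sketch. This uses one common discrete CVaR convention (average of the worst 1−Θ share of profit outcomes) and a linear blend with the expectation; the paper's exact formulation may differ:

```python
def cvar(profits, theta):
    """Average of the worst (1 - theta) fraction of profit outcomes."""
    xs = sorted(profits)                      # worst outcomes first
    k = max(1, round((1 - theta) * len(xs)))  # size of the tail
    return sum(xs[:k]) / k

def risk_adjusted_value(profits, theta, psi):
    """Blend of expected profit and CVaR; psi = 0 is risk neutral."""
    mean = sum(profits) / len(profits)
    return (1 - psi) * mean + psi * cvar(profits, theta)

profits = [100.0] * 9 + [-100.0]  # one bad scenario out of ten
print(risk_adjusted_value(profits, theta=0.9, psi=0.0))  # 80.0 (risk neutral)
print(risk_adjusted_value(profits, theta=0.9, psi=1.0))  # -100.0 (fully risk averse)
```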
COM and EQM results are respectively represented in continuous and dashed lines in each figure. The selected results and the base values are listed and briefly described below:
1) (Demand Day-Ahead), with base value 30120.39 MWh.

Fig. 1. Results for High generation and High ramping (HH). COM is in continuous line and EQM is in dashed line.

and it receives a rebate on access charges (Reser. cap. pay.); this can obviously not last. The supply of reserve is excessive in almost all configurations, with just a few cases where it remains within the TSO's interval. The following elaborates on this general situation; it applies to both COM and EQM. Reserves hit the upper bound of the TSO's interval and are thus excessive in five configurations. They are signaled by negative reserve capacity payments (Reser. cap. pay.). The three other configurations have zero reserve capacity payment, with their reserve remaining within the TSO's interval. They occur in case of risk aversion (Risk H.) and high reserve requirement, My = 60%, which corresponds to an imprecise DA forecast of RT wind (Res. H. and Res. L.).

Fig. 2. Results for High generation and Low ramping (HL). COM is in continuous line and EQM is in dashed line.

D. Very Low Generation Capacity (23.35 GW) and Low (Λ = 1) Ramping Capacity: VLL (Fig. 4) / High (Λ = 0) Ramping Capacity: VLH (Fig. 5)

Fig. 3. Results for Low generation and Low ramping (LL). COM is in continuous line and EQM is in dashed line.

The reserve capacity payment (Reser. cap. pay.) becomes highly negative in the configurations for different wind (Wind L., Wind H.), FiP (FiP L., FiP H.), and risk-neutral generators (Risk L.), and close to zero (or zero) with high risk aversion (Risk H.) and high reserve requirements (Res. L., Res. H.). This suggests a very unstable market for reserve, at least in this model.

TABLE I: SUMMARY OF GENERATION CONFIGURATIONS
g  Tech.      Xg High (MW)  Xg Low (MW)  Xg Very Low (MW)  Cg (€/MWh)  Rg = R̄g (% of Xg)
1  CCGT           0.00        3274.98       1274.98          44.31        53.33
2  CCGT           0.00        2056.58       2056.58          43.88        53.33
3  CCGT       10632.70        2153.50       2153.50          43.45        53.33
4  Nuclear     1519.23        1519.23       1519.23          10.91         2.08
5  Nuclear     6053.35        6053.35       6053.35          10.29         2.08
6  Coal        2035.89        2035.89       2035.89          37.50        20.00
7  Coal        5119.13        5119.13       5119.13          38.44        25.00
8  Coal        1198.12        1198.12       1198.12          19.77        25.00
9  Coal        1945.51        1945.51       1945.51          20.24        25.00
   Wind       22573.00       22573.00      22573.00            -            -
   Dispatch.  28503.93       25356.29      23356.29            -            -
   Total      51076.93       47929.29      45929.29            -            -

TABLE II: SUMMARY OF DATA FOR GENERATION AND RAMPING CONFIGURATIONS
Config.                          Fig.  Disp. Gen. (GW)  Wind Gen. (GW)  Ramp. Λ (p.u.)
HH (High Gen. High Ramp.)          1       28.50            22.57           0.00
HL (High Gen. Low Ramp.)           2       28.50            22.57           1.00
LL (Low Gen. Low Ramp.)            3       25.35            22.57           1.00
VLL (Very Low Gen. Low Ramp.)      4       23.35            22.57           1.00
VLH (Very Low Gen. High Ramp.)     5       23.35            22.57           0.00

TABLE III: SUMMARY OF VALUES FOR MARKET CONFIGURATIONS
                   Wind L.  Wind H.  FiP L.  FiP H.  Risk L.  Risk H.  Res. L.  Res. H.
µ, wind (%)          4.01    65.00    23.83   23.83    23.83    23.83    23.83    23.83
FiP (€/MWh)         30.00    30.00     0.00   80.00    30.00    30.00     0.00    80.00
Ψ, risk aversion     0.40     0.40     0.40    0.40     0.00     1.00     0.40     0.40
My (%)              15.00    15.00    15.00   15.00    15.00    15.00    60.00    60.00

TABLE IV: SCENARIOS FOR AVAILABLE WIND (% OF RATED CAPACITY, BASE 22.57 GW)
Scen. k  µ=4.01  µ=23.83  µ=65.00    Scen. k  µ=4.01  µ=23.83  µ=65.00
1         0.59    12.64    35.55     7         3.66    24.09    67.81
2         1.20    16.13    46.56     8         4.27    25.60    71.13
3         1.69    18.12    52.44     9         5.00    27.27    74.52
4         2.15    19.76    56.99     10        5.93    29.24    78.18
5         2.62    21.24    60.89     11        7.27    31.86    82.46
6         3.12    22.66    64.44     12       10.57    37.34    89.05

Verifying that statement would require a full profile of wind availability (multi-period) and hence a more developed model; our statement is thus only a judgmental comparison.
Fig. 4. Results for Very Low generation and Low ramping (VLL). COM is in continuous line and EQM is in dashed line.

Fig. 5. Results for Very Low generation and High ramping (VLH). COM is in continuous line and EQM is in dashed line.
Yves Smeers received the M.S. and Ph.D. degrees from Carnegie Mellon University, Pittsburgh, PA, in 1971 and 1972, respectively. He is currently a Professor Emeritus and a member of the Center for Operations Research and Econometrics, at Université Catholique de Louvain, Louvain-la-Neuve, Belgium.

Sebastián Martín (S'08, M'14) was born in Málaga, Spain, in 1980. He graduated from the University of Granada, Spain, as a Civil Engineer in 2003. He received Industrial Engineer and Ph.D. degrees from the University of Málaga, Spain, in 2007 and 2014, respectively. He is currently a teaching assistant at the University of Málaga.
His research interests include optimization theory, methodologies for teaching, and economics of electric energy systems.

José Antonio Aguado (M'01) received the Ingeniero Eléctrico and Ph.D. degrees from the University of Málaga, Málaga, Spain, in 1997 and 2001, respectively. Currently, he is a full Professor and Head of the Department of Electrical Engineering at the University of Málaga. His research interests include operation and planning of smart grids and numerical optimization techniques.
Title: Updating Weight Values for Function Point Counting
Authors: Wei Xia (IT Department, HSBC Bank Canada, Vancouver, Canada); Danny Ho (NFA Estimation Inc., London, Canada); Luiz Fernando Capretz (Dept. of Electrical & Computer Engineering, University of Western Ontario, London, Canada); Faheem Ahmed (College of Information Technology, Emirates University, Al-Ain, U.A.E.)
Venue: International Journal of Hybrid Intelligent Systems, IOS Press
DOI: 10.3233/HIS-2009-0061
arXiv: 2005.11218

Abstract: While software development productivity has grown rapidly, the weight values assigned to count standard Function Points (FP), created at IBM twenty-five years ago, have never been updated. This obsolescence raises critical questions about the validity of the weight values; it also creates other problems such as ambiguous classification, crisp boundaries, and subjective, locally defined weight values. All of these challenges reveal the need to calibrate FP in order to reflect both the specific software application context and the trend of today's software development techniques more accurately. We have created an FP calibration model that incorporates the learning ability of neural networks as well as the capability of capturing human knowledge using fuzzy logic. The empirical validation using the ISBSG Data Repository (release 8) shows an average improvement of 22% in the accuracy of software effort estimations with the new calibration.

Index Terms: Function point analysis, software size measure, effort prediction model, software estimation
Updating Weight Values for Function Point Counting
International Journal of Hybrid Intelligent Systems, 6(1), IOS Press, January 2019. DOI: 10.3233/HIS-2009-0061. Copyright IOS Press.

INTRODUCTION

Accurate software estimation is a crucial part of any software project, so that the project can be priced adequately and resources allocated appropriately. If management underestimates the actual cost, the organization can lose money on the project; even worse, an inaccurate estimate can ruin a small software company.
Conversely, if management overestimates, the client may decide that, on the basis of cost-benefit analysis or return on investment, there is no point in having the software built. Alternatively, the client may contract another company whose estimate is more reasonable. Additionally, the client certainly wants to know when the project will be delivered. If management is unable to keep to its schedule, then at best the organization loses credibility, and at worst penalty clauses are invoked. In all cases, the managers responsible for the software estimation have a lot of explaining to do. Hence it is clear that accurate software estimation is vital.

Therefore, many models for estimating software development effort have been proposed: COCOMO [1,2], SLIM [3], and Function Point (FP) [4] are arguably the most popular. These models can be considered algorithmic models, that is, pre-specified formulas for estimating development effort that are calibrated from historical data. COCOMO and SLIM assume Source Lines of Code (SLOC) as a major input. FP, however, is based on higher-level features of a software project, such as the size of files, the types of transactions, and the number of reports; these features facilitate estimation early in the software life cycle.

Algorithmic effort prediction models are limited by their inability to cope with vagueness and imprecision in the early stages of the software life cycle. Although software engineering has been influenced by a number of ideas from different fields [5], software estimation models combining algorithmic models with machine learning approaches, such as neural networks and fuzzy logic, have been viewed with scepticism by the majority of software managers [6]. Srinivasan and Fisher [7] illustrate a neural network learning approach to estimating software development effort known as Back-propagation.
They indicate possible advantages of the approach relative to traditional models, but also point out limitations that motivate continued research. Furthermore, MacDonell [8] also considers the applicability of fuzzy logic modelling methods to the task of software source code sizing, and suggests that fuzzy predictive models can outperform their traditional regression-based counterparts, particularly with refinement using data and knowledge. Ahmed, Saliu and AlGhamdi [9] discuss an adaptive fuzzy logic framework for software effort prediction that incorporates experts' knowledge; they demonstrate the capabilities of the framework through empirical validation carried out on artificial data sets and on the COCOMO public database of completed projects. Xu and Khoshgoftaar [10] present a fuzzy identification cost estimation modeling technique to deal with linguistic data and automatically generate fuzzy membership functions and rules; they report that their model provides a method of cost estimation that is significantly better than COCOMO.

Briefly, neural network techniques are based on the principle of learning from historical data, whereas fuzzy logic is a method used to make rational decisions in an environment of uncertainty and vagueness. However, fuzzy logic alone does not enable learning from the historical database of software projects. Once the concept of fuzzy logic is incorporated into the neural network, the result is a neuro-fuzzy system that combines the advantages of both techniques. Neuro-fuzzy cost estimation models are more appropriate when uncertainties and imprecise information are accounted for. Huang et al. [11] present a general framework that combines neuro-fuzzy techniques with algorithmic approaches, and show that such a combination can result in an average improvement of 15% for software estimation accuracy.
Huang et al. [12] used a neuro-fuzzy approach as a front-end to COCOMO; this combination yielded better estimation results than COCOMO alone. The research presented in this paper extends that general framework to the FP counting method.

FP is a metric for measuring software size that was first proposed by Albrecht [4] at IBM in 1979. The introduction of FP represented a major step in comparison with SLOC counting, because FP focuses on system "functionality" rather than on calculating SLOC. Based on the FP theory, other variations, such as International FP Users Group (IFPUG) [13], COSMIC [14], and Mark II [15], were created.

Research on the combination of the machine learning approach with FP was conducted by Finnie, Wittig and Desharnais [16], who compared three estimation techniques using FP as an estimate of system size. The models considered are based on regression analysis, artificial neural networks and case-based reasoning. Although regression models performed poorly on the given data set, the authors observed that both artificial neural networks and case-based reasoning appeared to be valuable for software estimation models. Hence, they concluded that case-based reasoning is appealing because of its similarity to the expert judgement approach and for its potential in supporting human judgement. Additionally, Yau and Tsoi [17] introduced a fuzzified FP analysis model to help software size estimators express their judgment, using the fuzzy B-spline membership function to derive their assessment values. Their weakness involved the use of limited in-house software, which significantly limits the validation of their model. Lima, Farias and Belchior [18] also proposed the application of concepts and properties from fuzzy set theory in order to perform fuzzy FP analysis; a prototype that automates the calculation of FPs using the fuzzified model was created.
However, as in the case of Yau and Tsoi [17], the calibration was done using a small database comprised of legacy systems, developed mainly in Natural-2, Microsoft Access and Microsoft Visual Basic, a fact which limits the generality of their work. A new FP weight system using an artificial neural network was established by Al-Hajri et al. [19]. Their research was similar to ours in that they also used the data set provided by the International Software Benchmarking Standards Group (ISBSG) to validate their model. In their work, they replaced the original complexity table with a new table obtained through neural network training. Although their results are quite accurate, the correlation is still unsatisfactory due to the wide variation of data points with many outliers.

Previous research projects have indicated that the combination of machine learning approaches and algorithmic models yields a more accurate prediction of software costs and effort, which is competitive with traditional algorithmic estimators. However, our proposed neuro-fuzzy model goes even further: it is a unique combination of statistical analysis, neural networks and fuzzy logic. Specifically, we obtained an equation from statistical analysis, defined a suite of fuzzy sets to represent human judgement, and used a neural network to learn from a comprehensive historical database of software projects. First, we will briefly introduce the theory of function points, and then in the next section, we will illustrate its weaknesses and limitations.

The unadjusted FP (UFP) is obtained from the weighted counts of the five function components (Table I). The Value Adjustment Factor (VAF) is derived from the 14 General System Characteristics (GSC) listed in Table II as

VAF = 0.65 + 0.01 × Σ Ci   (Equation 2)

where Ci is the Degree of Influence (DI) rating of each GSC. Finally, FP is calculated by the multiplication of UFP and VAF, as expressed in Equation 3:

FP = UFP × VAF   (Equation 3)

Despite its well-recognized usefulness as a software metric, FP has its own drawbacks and weaknesses. The next section expands on these difficulties, particularly the problems with the current FP complexity weight system.
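Equations 2 and 3 can be sketched in Python using the standard IFPUG component weights; the component counts and GSC ratings below are made-up examples:

```python
# Standard IFPUG weights for the five function components by complexity class.
WEIGHTS = {
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
}

def function_points(counts, gsc_ratings):
    """counts: {component: {complexity: how many}}; gsc_ratings: 14 DI values (0-5)."""
    ufp = sum(WEIGHTS[comp][cx] * n                 # UFP: weighted component counts
              for comp, by_cx in counts.items()
              for cx, n in by_cx.items())
    assert len(gsc_ratings) == 14
    vaf = 0.65 + 0.01 * sum(gsc_ratings)            # Equation 2
    return ufp * vaf                                # Equation 3: FP = UFP x VAF

fp = function_points({"ILF": {"average": 2}, "EI": {"low": 5}}, [3] * 14)
print(fp)  # UFP = 35, VAF = 1.07
```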
Section 3 proposes an FP calibration model, termed the neuro-fuzzy FP (NFFP) model, which overcomes those problems. In section 4 we describe the experimental methodology and discuss the results of performance evaluation, as well as the reliability and validity of the proposed model. Finally, the last section summarizes the conclusions of this work.

PROBLEMS WITH THE FP COMPLEXITY WEIGHT SYSTEM

The FP complexity weight system refers to all the complexity calculations and weight values expressed in FP. Five problems with the FP complexity weight system are identified. Problems 1 and 2 are rooted in the classification of the UFP, whereas problems 3, 4 and 5 arise from the weight values of UFP.

Problem 1: Ambiguous Complexity Classification

Each of the five FP function components (ILF, EIF, EI, EO and EQ) is classified as low, average or high, according to the complexity matrices that are based on the counts of each component's associated files, such as DET, RET and FTR. Such complexity classification is easy to perform, but the categorization itself can be ambiguous. For example, Table III shows one software project with two ILF, A and B, where A has 50 DET while B has 20 DET. According to the complexity matrix, both A and B are classified as having the same complexity and are assigned the identical weight value of 10. However, A has 30 more DET than B and is therefore more complex. Despite their difference, they are assigned the same complexity weight, thus exposing the problem of ambiguous classification.

Problem 2: Crisp Boundary in Complexity Classification

The boundary between two different complexity classifications is very crisp. An example is given in Table III, where one software project has two ILF, B and C. Both B and C have 3 RET, but B has 20 DET, while C has 19 DET. B is classified as average by the complexity matrix and assigned a weight of 10, whereas C is classified as low and assigned a weight of 7.
B has only one more DET than C, and they both have the same number of RET. However, B has been classified as average and assigned three more weight units than C, because the boundary between B and C is very crisp, with no smooth transition between the values.

Problem 3: Weight Values Obsolete

The weight values of unadjusted FP [4] are said to reflect the size of the software. They prompt the question: are these weight values still applicable to today's software, or are they obsolete?

Problem 4: Weight Values Defined Subjectively

The weight values of unadjusted FP were determined by Albrecht by "debate and trial" [4], based on his experience and knowledge. Albrecht contributed significantly to the theory of FP; nevertheless, with no actual follow-up projects to justify his values, the question remains whether the weight values were defined subjectively, without convincing support, or whether they reflect objective data.

Problem 5: Weight Values Defined Locally

The weight values of unadjusted FP were decided based on the study of data processing systems at IBM. The assignment of weight values was restricted to only one organization and to only a limited number of software types; however, this set of weight values is applied universally and is not limited to one organization or one type of software. Thus, one cannot be sure that weight values defined locally can be applied on a global basis.

Remarks on the Encountered Problems

The existing weight system of FP does not measure complexity perfectly. Problems 1 and 2 show that it does not fully reflect software complexity under a specific application context; it definitely needs calibration. Problems 3, 4 and 5 show that it does not reflect the trends of today's software and needs further calibration as well. In an attempt to address these problems, this paper proposes a novel FP calibration model.
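The examples in Problems 1 and 2 can be reproduced with the standard IFPUG complexity matrix for an ILF (a sketch; A, B and C here stand in for the ILF of Table III, all assumed to have 3 RET):

```python
def ilf_complexity(det, ret):
    """Standard IFPUG complexity matrix for an Internal Logical File."""
    if ret == 1:
        row = ("low", "low", "average")
    elif ret <= 5:
        row = ("low", "average", "high")
    else:
        row = ("average", "high", "high")
    col = 0 if det <= 19 else (1 if det <= 50 else 2)  # DET bands: 1-19, 20-50, 51+
    return row[col]

ILF_WEIGHT = {"low": 7, "average": 10, "high": 15}

# Problem 1: A (50 DET) and B (20 DET) land in the same cell -> both weigh 10.
# Problem 2: C (19 DET) is only one DET below B, yet drops to weight 7.
for name, det in [("A", 50), ("B", 20), ("C", 19)]:
    print(name, ILF_WEIGHT[ilf_complexity(det, ret=3)])
```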
FP CALIBRATION MODEL USING NEURO-FUZZY

Technical View of the Model

The neuro-fuzzy FP model is a unique approach that incorporates FP measurements with neural network, fuzzy logic and statistical regression techniques; a technical view of this model is depicted in Figure 1. The first component, statistical regression, is a mathematical technique used to represent the relationship between selected values and observed values from the statistical data. Secondly, the neural network technique is based on the principle of learning from previous data. This neural network is trained with a series of inputs and desired outputs from the training data so as to minimize the prediction error [20]. Once the training is complete and the appropriate weights for the network links are determined, new inputs are presented to the neural network to predict the corresponding estimation of the response variable. The final component of our model, fuzzy logic, is a technique used to make rational decisions in an environment of uncertainty and imprecision [21], [22]. It is rich in its capability to represent the human linguistic ability with the terms of fuzzy set, fuzzy membership function, fuzzy rules, and the fuzzy inference process [23]. Once the concept of fuzzy logic is incorporated into the neural network, the result is a neuro-fuzzy system that combines the advantages of both techniques [24].

Layered Architecture

The neuro-fuzzy FP model consists of three layers: the input, processing and output layers.

Fuzzy Complexity Measurement System

The fuzzy part of the neuro-fuzzy FP model is composed of two fuzzy complexity measurement systems: the pre-defined system in the input layer and the adjusted system in the output layer. The two fuzzy systems share the same structure, but the pre-defined system uses the original unadjusted FP weight values whereas the adjusted system uses the calibrated weight values. Using fuzzy logic in a comprehensive way produces an exact complexity degree of each FP component from the associated file numbers and overcomes the classification problems (Problems 1 and 2) identified earlier.
Using fuzzy logic in a comprehensive way produces an exact complexity degree of each FP component with the associated file numbers and overcomes We define three new linguistic terms: small, medium and large, to express the inputs qualitatively. For example, if an ILF has one RET, we assume that the ILF's RET input is Input Layer Processing Layer Output Layer Equation Regression International Journal of Hybrid Intelligent Systems, 6( The fuzzy sets are defined to represent the linguistic terms in the complexity matrix. (Table IV), all the inputs and outputs of the five FP function components can be represented by fuzzy sets. The five components share the same fuzzy set structure but have different parameters in fuzzy membership functions; the parameters are assigned to make the boundary more gradual according to the original complexity matrices. An example of the complexity matrix for ILF/EIF is given to illustrate the fuzzy set structure and the definition of parameters. As illustrated in Figure 3(a), the inputs of DET determine the coordinates of the trapezoid, where a and d locate the "feet" of the trapezoid, and b and c locate the "shoulders". Figure 3(b) shows the outputs of the complexity matrix of ILF/EIF that are represented in the fuzzy sets of low, average, and high, and Table V demonstrates the parameters of the fuzzy membership functions. The parameters a, b, and c (a<b<c) define the coordinates of the triangle, where a and c locate the "feet" of the triangle, and b locates the "peak". Using a similar method, we can define the fuzzy sets for the remaining function components (EI, EO and EQ) and fuzzify all of the inputs and outputs in the FP complexity weight matrices. The fuzzy inference process using the Mamdani approach [26] is applied based on the fuzzy sets and rules. For each fuzzy rule, we apply the minimum method to evaluate the "AND" operation and obtain the antecedent result as a single number. 
The antecedent of each rule implies the consequence using the minimum method. All of the consequences are aggregated into an output fuzzy set using the maximum method. Eventually, the output fuzzy set is defuzzified to a single number using the centroid calculation method, which takes the center of gravity of the final fuzzy space and produces a clear output value.

Extraction of the Estimation Equation

The aim of this section is to establish an equation that can estimate the software cost in terms of work effort and then be used in the neural network training as an activation function. Although similar estimation equations are proposed in [27] and [28], we decided to create a new equation based on the ISBSG Data Repository, release 8.

Data Preparation

In order to obtain a reasonable equation, the raw data must be filtered by several criteria, since it is necessary to ensure that the model is built on the basis of a reliable data set. According to the information provided by the ISBSG [29], only the data points whose quality ratings are A or B should be considered. Hence, we discarded the C and D rating projects and were left with the remaining 1,445 projects. Next, we wanted to ensure that further analysis is based on the most widely used counting method. Although FP has several variations of counting methods, including IFPUG [13], COSMIC [14] and Mark II [15], the IFPUG method is the most popular counting standard, used by 90% of the projects (1,827 out of 2,027) in the whole ISBSG data set. Hence, only the projects counted using the IFPUG method were selected. The work efforts of the projects are recorded at different resource levels, including level one (development team), level two (development support team), level three (computer operations) and level four (end user or client). To ensure the reliability of further analysis, we ensured that our data was based on the same resource level as the majority of projects.
Thus, we chose to have the projects recorded at the first resource level, a level which covers 70% (1,433 out of 2,027) of the projects in the whole ISBSG data set. Development type is the final criterion on which we based our analysis. There are three major development types of projects in the ISBSG data set: new development (838 projects), enhancement (1,132 projects), and redevelopment (55 projects). However, there is one utility development project and one purchase package project that do not belong to any of the three major development types. The new development and redevelopment projects can be classified into one large group, whose results are calculated using the original FP equation (Equation 3) of the first section. However, the enhancement projects are calculated using a very different equation [13] (see Equation 4); these projects should be treated as a separate case study.

FP_enhance = (UFP_add + UFP_change) x VAF_after + UFP_delete x VAF_before (Equation 4)

The new development and redevelopment projects that were based on the original FP equation were considered. In order for the weight values to be further calibrated using the neural network, the neuro-fuzzy FP model considers the projects that offer the 15 categories of unadjusted FP, in other words, the five function components with low, average and high classifications. Furthermore, in order to add flexibility to the model, we considered only the projects that also provided the 14 GSC rating values. The application of all of these criteria results in a significant decrease in the number of projects. As a result, a subset of 409 projects was obtained where the quality rating is A or B, the counting method is IFPUG, the effort resource level is one, and the development type is new or redevelopment. A further reduction of the projects to those that provide the 15 unadjusted FP categories and 14 GSC rating values resulted in a 184 project data set that was used to build the equation.
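The successive selection criteria can be expressed as a small filtering pipeline; the record field names below are illustrative stand-ins, not the actual ISBSG column names.

```python
def filter_projects(projects):
    # Apply the four selection criteria in sequence (field names are hypothetical).
    kept = [p for p in projects if p["quality_rating"] in ("A", "B")]
    kept = [p for p in kept if p["count_method"] == "IFPUG"]
    kept = [p for p in kept if p["resource_level"] == 1]
    kept = [p for p in kept if p["dev_type"] in ("new", "redevelopment")]
    return kept

sample = [
    {"quality_rating": "A", "count_method": "IFPUG", "resource_level": 1, "dev_type": "new"},
    {"quality_rating": "C", "count_method": "IFPUG", "resource_level": 1, "dev_type": "new"},
    {"quality_rating": "B", "count_method": "COSMIC", "resource_level": 1, "dev_type": "enhancement"},
]
print(len(filter_projects(sample)))  # -> 1
```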
Similarly, Angelis, Stamelos and Morisio [30] conducted research on the ISBSG data repository and encountered the same problem; they used ISBSG - release 6, which contains 789 projects, but had only 63 projects left after applying the filtering criteria.

Statistical Analysis

After applying the filtering criteria, the data obtained was transformed to satisfy the assumptions for regression analysis; the positive relationship between the work effort and size has been reported in [27], [28], and [31]. FP is a functional software size that is calculated by multiplying the UFP and the VAF, as shown in Equation 3. However, the definition of VAF, which is supposed to reflect the technical complexity, has been criticized for overlapping and for being outdated [30], [32]. The unadjusted FP has been standardized as an unadjusted functional size measurement through the International Organization for Standardization (ISO) [33], but the VAF has not. The ISBSG - release 8 data field description document recommends using the field of "Normalized Work Effort" as the project work effort. Thus, we have chosen UFP as the software size, and the normalized work effort as the effort in the statistical analysis. The statistical regression analysis assumes that the underlying data are normally distributed; however, our original data is highly skewed. To approximate a normal distribution, we apply a logarithmic transformation to these variables in order to decrease the large values and to bring the data closer together. After the transformation, ln UFP and ln Work Effort are approximately normally distributed. The regression then yields Equation 5, ln Effort = α · ln UFP + β, and its equivalent form, Equation 6, Effort = A · UFP^B, where α, β, A, and B are all coefficients calculated from the regression process. Certain post regression analysis was done to check the validity of the regression. It is observed that the residuals are normally distributed, independent, with a constant variance and having a mean value that equals zero.
Therefore, the assumptions for statistical regression analysis are satisfied and we can conclude that Equation 5 and its equivalent form, Equation 6, are valid.

Remarks on Equation Extraction

Though of simple form, Equation 6 is derived from the filtered data set and analyzed by a reliable statistical procedure that includes logarithmic transformation, statistical regression and post regression analysis. The equation does not include any special ISBSG parameters, and thus, it can estimate FP-oriented projects and can be extended to include cost drivers in future works.

Neural Network Learning Model

The neural network technique is used in the processing layer of the neuro-fuzzy FP model to learn the weight values of unadjusted FP and to calibrate FP so that it reflects the trend of current software. The weight values obtained from learning via the neural network are then utilized in the adjusted fuzzy complexity weight system.

Network Structure

The neural network used in the neuro-fuzzy FP model is a typical multi-layer feed-forward network whose structure is depicted in Figure 5. The output layer has only one neuron, Z, that receives the outputs from neuron Y of the middle layer and neuron X16 of the input layer. The activation function of neuron Z is based on Equation 6, with Z = v2 · Y. The output of neuron Y represents the software complexity. This complexity should be proportional to the project work effort, which is based on the common sense notion that the more complex the software, the more effort should be put in. Overall, the equation obtained by statistical analysis has a sound mathematical ground and a solid explanation. It is used as the activation function in the neural network, and thus, the infamous problem of the traditional neural network behaving like a black-box is avoided. Based on the neural network structure, a back-propagation learning algorithm is conducted in order to obtain the calibrated weight values of UFP. The purpose of this algorithm is to minimize the prediction difference between the estimated and actual efforts.
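The back-propagation step just described can be sketched as plain stochastic gradient descent on the squared prediction error. This is an illustrative reconstruction, not the paper's exact algorithm: it assumes the power-law activation Effort = A · UFP^B implied by Equations 5 and 6, and the coefficients A and B, the learning rate, and the synthetic data below are all hypothetical.

```python
# Schematic calibration of the 15 UFP weights by gradient descent
# (illustrative reconstruction; A, B, learning rate and data are hypothetical).
# Assumed model: UFP = sum(w[i] * x[i]);  Effort = A * UFP**B.

ORIGINAL = [3, 4, 6,  4, 5, 7,  3, 4, 6,  7, 10, 15,  5, 7, 10]  # Table II: EI, EO, EQ, ILF, EIF

def calibrate(projects, w, A, B, lr=1e-8, epochs=10):
    for _ in range(epochs):
        for x, actual in projects:
            ufp = sum(wi * xi for wi, xi in zip(w, x))
            if ufp <= 0:
                continue
            err = A * ufp ** B - actual              # dE/dpred for E = err**2 / 2
            grad_ufp = err * A * B * ufp ** (B - 1)  # chain rule through the power law
            for i, xi in enumerate(x):
                w[i] -= lr * grad_ufp * xi
        # Monotonic constraint: Low < Average < High within each component.
        for c in range(5):
            w[3*c + 1] = max(w[3*c + 1], w[3*c] + 1e-3)
            w[3*c + 2] = max(w[3*c + 2], w[3*c + 1] + 1e-3)
    return w

# Tiny synthetic demo: efforts generated from slightly lower "true" weights.
A, B = 50.0, 0.8
true_w = [0.8 * v for v in ORIGINAL]
counts = [[(i + k) % 4 for i in range(15)] for k in range(10)]
data = [(x, A * sum(wi * xi for wi, xi in zip(true_w, x)) ** B) for x in counts]
w = calibrate(data, list(ORIGINAL), A, B)
```

With the tiny learning rate the weights move only slightly, but each step reduces the training error while the clamp keeps each component's Low < Average < High ordering intact.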
Given N projects, the prediction difference can be expressed as an error signal E, where Zn is the estimated effort of the nth project and Zdn is the actual effort of the nth project, the desired output. The learning procedure is subject to monotonic constraints; in other words, the UFP weight values must satisfy Low < Average < High.

Remarks on Neural Network Learning

The neural network part of the model is designed to calibrate UFP weight values and to solve the three problems with the FP complexity weight system mentioned in Section 2, which include Problem 3 (weight values obsolete), Problem 4 (weight values defined subjectively), and Problem 5 (weight values defined locally). The new calibrated weight values overcome Problems 4 and 5, because they are acquired from the ISBSG Data Repository - release 8, which is compiled from dozens of countries and covers a broad range of the software industry. Furthermore, Problem 3 is also addressed because 75% of the projects are fewer than five years old.

Post-Tuned Fuzzy Weight Measurement System

The calibrated weight values obtained from the neural network learning are imported into the adjusted fuzzy weight measurement system and are specifically applied to the output fuzzy sets; an example of the adjusted output membership functions of External Inputs is given in Figure 6. The adjusted weight measurement system can be used to count FP for new projects more accurately. Also, the pre-defined and adjusted fuzzy measurements constitute the complete fuzzy measurement system of the neuro-fuzzy FP model. ISBSG - release 8 is a large and wide-range project data repository, so the calibrated FP weight values learned from this repository reflect the maturity level of the software industry at that time. However, software development is a rapidly growing industry and these calibrated weight values will not reflect tomorrow's software.
In the future, when modern project data is available, the FP weight values will again need to be re-calibrated to reflect the latest software industry trend. Our study is a data-driven type of research where we extracted a model based on known facts. The proposed model is more meaningful for small projects, which are actually the most common type of projects in the software industry. This limitation is due to the characteristics of the ISBSG data set used in this study, and it may raise concerns about the validity, specifically with large projects. In reality, there are more small projects than large ones, and even the large projects tend to be subdivided into smaller projects, so that they become easier to manage. Although the proposed approach has some potential threats to the model's validity, we followed appropriate research procedures by conducting and reporting tests to guarantee the reliability of the study, and certain measures were also taken to ensure its validity.

CONCLUSIONS

FP as a software size metric is an important topic in the software engineering domain. FP Analysis is a process used to calculate software functional size. Currently, the most pervasive version is regulated in the Counting Practices Manual - version 4.2, which was released by the International Function Point User Group (IFPUG) [13]. Counting FP requires the identification of five types of functional components: Internal Logical Files (ILF), External Interface Files (EIF), External Inputs (EI), External Outputs (EO) and External Inquiries (EQ). Each functional component is classified as a certain complexity based on its associated file numbers, such as Data Element Types (DET), File Types Referenced (FTR) and Record Element Types (RET). The complexity matrices for the five components are shown in Table I, and Table II illustrates how each function component is then assigned a weight according to its complexity.
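The crisp classification just described can be read directly off the ILF/EIF matrix (Table I); a minimal lookup:

```python
def ilf_complexity(ret, det):
    # ILF/EIF complexity matrix (Table I): rows RET 1 / 2-5 / 6+,
    # columns DET 1-19 / 20-50 / 51+.
    row = 0 if ret == 1 else (1 if ret <= 5 else 2)
    col = 0 if det <= 19 else (1 if det <= 50 else 2)
    matrix = [["low", "low", "average"],
              ["low", "average", "high"],
              ["average", "high", "high"]]
    return matrix[row][col]

print(ilf_complexity(3, 20))  # -> average
```

Note the crisp boundaries: ilf_complexity(3, 50) is "average" while ilf_complexity(3, 51) is "high", which is exactly the jump (Problem 2) that the fuzzy approach smooths out.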
The Unadjusted Function Point (UFP) is calculated with Equation 1, where Wij are the complexity weights and Zij are the counts for each function component:

UFP = Σi Σj (Wij x Zij) (Equation 1)

These weight values have never been updated since being introduced in 1979 and are applied universally. In contrast, software development methods have evolved steadily since 1979, and today's software differs drastically from what it was over two decades ago. Such an imbalance suggests that the original weight values need to be re-calibrated.

Figure 1 - Overview of the FP Calibration Model

a) Input layer: The input layer of the model consists of the pre-defined fuzzy complexity measurement system based on experts' experience, which is subjective knowledge, and the project data provided by ISBSG [25], considered to be objective knowledge. The pre-defined system produces an exact complexity degree for each function component of FP. The project data imported in this layer is used to extract an estimation equation by means of the statistical regression technique and to train the neural network in the processing layer. b) Processing layer: An equation that estimates work effort is derived in the processing layer by analyzing the project data imported from the input layer using a statistical regression technique. The equation is then applied in the neural network as an activation function. Also, the neural network learning block calibrates the weight values of unadjusted FP by learning from the historical project data. c) Output layer: The calibrated weight values of unadjusted FP (UFP) are generated from the neural-network learning block. They are used in the output layer to adjust the parameters of the pre-defined fuzzy complexity measurement system, producing an adjusted fuzzy complexity measurement system. The pre-defined and the post-tuned (adjusted) fuzzy measurement systems combine to form the fuzzy logic component of the model.

Figure 2 - Layered Architecture of FP Calibration Model
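Equation 1 with the Table II weights reduces to a short weighted sum; a minimal sketch:

```python
# Unadjusted Function Points (Equation 1): UFP = sum over components and
# complexity levels of W_ij * Z_ij, using the original IFPUG weights (Table II).
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

def ufp(counts):
    # counts: {component: {complexity level: number of functions of that kind}}
    return sum(WEIGHTS[comp][lvl] * n
               for comp, levels in counts.items()
               for lvl, n in levels.items())

example = {"EI": {"low": 2, "high": 1}, "ILF": {"average": 3}}
print(ufp(example))  # 2*3 + 1*6 + 3*10 = 42
```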
As mentioned in the first section, the five FP function components are classified according to the complexity matrices. The inputs in the original weight matrices are the numbers of files associated with each function component, and the output is the component's complexity classification: low, average or high. Each input and output in the linguistic complexity matrix is represented by a fuzzy set named after its linguistic term. The membership grade is captured through the membership functions of each fuzzy set. There are several basic types of fuzzy membership functions [24]; the trapezoidal and the triangular types of membership functions are selected because the complexity increases linearly with the file numbers and also because these types of membership functions are appropriate for preserving the values. The inputs are of the trapezoidal type and the outputs are of the triangular type. All of the original complexity matrices are thus unified into the one linguistic complexity matrix (Table IV).

Figure 3 - Neuro-Fuzzy FP Model Fuzzy Sets

The relationship between the work effort and the size is illustrated using two-dimensional graphs, as shown in Figures 4(a) and 4(b), before and after the logarithmic transformation respectively.

Figure 4 - Graphs of UFP and Work Effort: (a) Work Effort and UFP, (b) ln Work Effort and ln UFP

The regression process applied to the data set is automated by the statistical software package SPSS version 12. An equation in the form of Equation 5 was calculated, and its equivalent form is shown in Equation 6:

ln Effort = α · ln UFP + β (Equation 5)
It contains UFP as the only predictor and excludes VAF, a parameter that has received much criticism.¹ The network consists of three layers: input, middle, and output. The input layer is composed of 16 neurons, denoted as Xi, with i ranging in value from 1 to 16. Among this group of 16, neurons X1 to X15 represent the three complexity ratings of the five unadjusted FP function components. The inputs of these 15 neurons are the numbers of their respective function components. They are denoted by codes such as NINLOW (number of low External Inputs), NFLAVG (number of average Internal Logical Files), and NQUHGH (number of high External Inquiries), and are described in detail in Table VII. These 15 neurons are all connected to neuron Y in the middle layer and are associated with the weights wi, with i ranging in value from 1 to 15. Neuron X16 is a bias node with a constant input of one and is connected to neuron Z in the output layer with an associated weight of v2, which represents the coefficient A in Equation 6.

Figure 5 - Neural Network Structure of the Neuro-Fuzzy FP Model

Table VII - Notation of Neuron Xi Inputs

In the middle layer, neuron Y receives the outputs from the 15 neurons in the input layer and is then connected to neuron Z in the output layer.

Figure 6 - Post-tuned Fuzzy Sets for External Inputs

4. MODEL ASSESSMENT

Five experiments were conducted to validate the model. For each experiment, the original data set (184 projects) was randomly separated into 100 training data points and 84 test data points. The outliers are the abnormal project data points with large noise that may distort the training result.
Thus, we used the training data set, excluding the outliers, to calibrate the UFP weight values, and used the rest of the data points to test the model. The calibrated UFP weight values obtained from the five experiments are listed in Table VIII, with the original weight values given for comparison. The observation of lower weight values after calibration means that less work effort is needed to accomplish the same complex software component. This is in accordance with the fact that the overall productivity of the software industry has been continuously increasing since Function Points was invented. The validation results of the neuro-fuzzy FP model with the empirical data repository (ISBSG - release 8) show a 22% improvement in software cost estimation. This result suggests that the original unadjusted FP weight values require updated calibration for more accurate cost estimations. This paper provides a framework to calibrate the complexity weight values and solves the problems with the FP weights mentioned in Section 2. The fuzzy part of the neuro-fuzzy FP model produces an exact complexity degree for each functional component by processing its associated file numbers using fuzzy logic. This part of the model overcomes two problems with the unadjusted FP complexity classification: ambiguous classification (Problem 1) and crisp boundary (Problem 2), as described in sub-section 3.3. The neural network part of the neuro-fuzzy FP model calibrates UFP weight values using the ISBSG Data Repository - release 8, which contains 2,027 projects from dozens of countries and covers a broad range of project types from many industries, with 75% of the projects being less than five years old. This part of the model overcomes three problems with the unadjusted FP complexity weight values: obsolete weight values (Problem 3), weight values defined subjectively (Problem 4), and weight values defined locally (Problem 5), as laid out in sub-sections 3.5 and 3.6.
Finally, sub-section 3.4 presents a new equation to estimate the software cost in work effort, initially derived from the ISBSG Data Repository - release 8. It is further improved with the reliable filtered data set and analyzed by a reliable statistical procedure. This equation fulfills the requirement of being flexible and integral, and has the potential to involve other cost drivers in future research.

Table I - COMPLEXITY MATRIX FOR FP FUNCTION COMPONENTS

ILF/EIF:  RET 1: DET 1-19 Low, DET 20-50 Low, DET 51+ Avg;  RET 2-5: Low, Avg, High;  RET 6+: Avg, High, High.
EI:  FTR 0-1: DET 1-4 Low, DET 5-15 Low, DET 16+ Avg;  FTR 2: Low, Avg, High;  FTR 3+: Avg, High, High.
EO/EQ:  FTR 0-1: DET 1-5 Low, DET 6-19 Low, DET 20+ Avg;  FTR 2-3: Low, Avg, High;  FTR 4+: Avg, High, High.

Table II - FUNCTION COMPONENT COMPLEXITY WEIGHT ASSIGNMENT

Component                  Low  Average  High
External Inputs            3    4        6
External Outputs           4    5        7
External Inquiries         3    4        6
Internal Logical Files     7    10       15
External Interface Files   5    7        10

International Journal of Hybrid Intelligent Systems, 6(1):1-14, January 2019, IOS Press DOI: https://doi.org/10.3233/HIS-2009-0061

Once calculated, UFP is multiplied by a Value Adjustment Factor (VAF), which takes into account the supposed contribution of technical and quality requirements. The VAF is calculated from the 14 General System Characteristics (GSC) using Equation 2; the GSC include the characteristics used to evaluate the overall complexity of the software. Table III shows a software project with two ILF, A and B. Both A and B have 3 RET, but A has 50

Table III - PROBLEM 1 (AMBIGUOUS CLASSIFICATION) AND PROBLEM 2 (CRISP BOUNDARY)

Similarly, if an EI has six DETs, we assume that the EI's DET input is medium. Also, we use the linguistic terms low, average and high for the output, which is the same as in the original matrices.
For example, the linguistic terms of low, average, and high are used to describe the weight values of 7, 10 and 15 of ILF, respectively. Thus, the linguistic terms defined are consistent with the original complexity matrices. We unified all of the complexity matrices for the five FP components into one equivalent linguistic complexity matrix, shown in Table IV.

Table IV - LINGUISTIC COMPLEXITY MATRIX

Input 1 \ Input 2   Small    Medium   Large
Small               Low      Low      Average
Medium              Low      Average  High
Large               Average  High     High

The inputs of DET and RET are represented in the fuzzy sets of small, medium, and large, and Table V shows the parameters of the membership functions. The parameters a, b, c, and d (a < b ≤ c < d) define the coordinates of the trapezoid.

Table V - NEURO-FUZZY FP MODEL FUZZY SETS PARAMETERS

3.3.1. Problems Tackled with the Proposed Approach

If we apply our method to the previous examples illustrating the FP complexity Problems 1 and 2, we obtain three weight values that are much more accurate. Table VI shows the original weight values and the fuzzy weight values of ILF A, B and C, therefore demonstrating that the new fuzzy weight values solve both Problem 1 (ambiguous classification) and Problem 2 (crisp boundary).

Table VI - PROBLEMS SOLVED USING FUZZY LOGIC

The validation results of the five experiments are assessed by the Mean Magnitude of Relative Error (MMRE) for estimation accuracy; MMRE for n projects is defined below. The results are listed in Table IX, where "Improvement %" is the MMRE improvement in percentage for each experiment. Based on the MMRE assessment results, an average of 22% cost estimation improvement has been achieved with the Neuro-Fuzzy Function Points Calibration model.
The MMRE after calibration is around 100%, which is still relatively large and is due to the absence of well-defined cost drivers like the COCOMO factors; unfortunately, ISBSG release 8 does not have data on cost drivers. For n projects,

MMRE = (1/n) · Σ(i=1..n) |Estimated_i - Actual_i| / Actual_i

Table VIII - Calibrated UFP Weight Values

Component                  Low (Original / Calibrated)  Average (Original / Calibrated)  High (Original / Calibrated)
External Inputs            3 / 1.8                      4 / 2.9                          6 / 5.4
External Outputs           4 / 3.3                      5 / 3.3                          7 / 6.2
External Inquiries         3 / 1.8                      4 / 2.9                          6 / 5.4
Internal Logical Files     7 / 5.4                      10 / 9.8                         15 / 14.9
External Interface Files   5 / 4.6                      7 / 6.9                          10 / 10

Table IX - MMRE Validation Results

                       Exp.1  Exp.2  Exp.3  Exp.4  Exp.5
MMRE Original          1.38   1.58   1.57   1.39   1.42
MMRE Calibrated        1.10   1.28   1.17   1.03   1.11
Improvement %          20%    19%    25%    26%    22%
Average Improvement %: 22%

The validation results of the five experiments are also assessed by the Prediction at level p (PRED) criteria, i.e. PRED(p) = k / N, where N is the total number of projects and k is the number of projects with an absolute relative error within p. Four PRED criteria are assessed in this work, namely Pred 25, Pred 50, Pred 75 and Pred 100. Table X lists the PRED assessment results.

Figure 7 plots the comparison of the original and the calibrated PRED results, where the overall improvement is observed: the line with square signs (calibrated) is above the line with diamond signs (original).

Figure 7 - PRED Validation Results Comparison

4.1 Weakness of the Model

Threats to validity are conditions that limit the researcher's ability to generalize the results of the experiment to industrial practice, which was the case with this study. Specific measures were taken to support validity; for example, a random sampling technique was used to draw samples from the population in order to conduct experiments, and filtering was applied to the ISBSG data set.
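For reference, the two assessment criteria used above can be computed directly; the sample numbers here are made up for illustration.

```python
def mmre(estimated, actual):
    # Mean Magnitude of Relative Error: (1/n) * sum of |est_i - act_i| / act_i.
    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)

def pred(estimated, actual, p):
    # PRED(p) = k / N: the fraction of the N projects whose magnitude of
    # relative error is within p (given as a fraction, e.g. 0.25 for Pred 25).
    k = sum(1 for e, a in zip(estimated, actual) if abs(e - a) / a <= p)
    return k / len(actual)

est = [110, 150, 80, 400]
act = [100, 100, 100, 200]
print(round(mmre(est, act), 2))  # -> 0.45
print(pred(est, act, 0.25))      # -> 0.5
```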
Five experiments were conducted by drawing five different random samples in order to generalize the results. "Post hoc" analysis of effect size and power reinforced the validity of the experiments by yielding a large effect size. The proposed calibration of the FP elements' weights was applied to the ISBSG data set to monitor the effectiveness of the approach; a potential threat to the validity of this study involved the question of whether or not similar results would be obtained with an entirely different sample. In this investigation, we calibrated the weights of the five FP elements using only the ISBSG data set, which raised a threat to the validity of the calibration process. The ISBSG data set contains projects using different function point counting techniques, such as IFPUG, COSMIC and Mark II. Because 90% of the sample used the IFPUG counting method, we restricted our experiments to IFPUG projects. This decision may lead to the question as to whether the proposed model's outcome will be valid if the model is used with the other two types of FP counting technique besides IFPUG.

Table X - PRED Validation Results

           Average Original  Average Calibrated  Average Improvement
Pred 25    13%               12%                 0%
Pred 50    23%               27%                 4%
Pred 75    40%               46%                 6%
Pred 100   60%               67%                 8%

Footnote 1: A test was done to select VAF as a predictor and include it in the equation, but the stepwise regression process automated by SPSS discarded VAF from the equation due to its poor performance.

REFERENCES

[1] B. Boehm, Software Engineering Economics, Prentice Hall, 1981.
[2] B. Boehm, E. Horowitz, R. Madachy, D. Reifer, B. Clark, B. Steece, A. Brown, S. Chulani, and C. Abts, Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
[3] L. H. Putnam and W. Myers, Measures of Excellence, Prentice Hall, 1992.
[4] A. Albrecht, "Measuring Application Development Productivity," Proceedings of the Joint SHARE/GUIDE/IBM Application Development Symposium, October 1979, pp. 83-92.
[5] S. Shapiro, "Splitting the Difference: The Historical Necessity of Synthesis in Software Engineering," IEEE Annals of the History of Computing, vol. 19, no. 1, pp. 20-54, 1997.
[6] A. Idri, T. M. Khosgoftaar and A. Abran, "Can Neural Networks Be Easily Interpreted in Software Cost Estimation?," IEEE International Conference on Fuzzy Systems, pp. 1162-1167, 2002.
[7] K. Srinivasan and D. Fisher, "Machine Learning Approaches to Estimating Software Development Effort," IEEE Transactions on Software Engineering, vol. 21, no. 2, pp. 126-137, 1995.
[8] S. G. MacDonell, "Software Source Code Sizing Using Fuzzy Logic Modeling," Information and Software Technology, vol. 45, pp. 389-404, 2003.
[9] M. A. Ahmed, M. O. Saliu and J. AlGhamdi, "Adaptive Fuzzy-Logic Based Framework for Software Development Effort Prediction," Information and Software Technology, vol. 47, pp. 31-48, 2005.
[10] Z. Xu and T. M. Khoshgoftaar, "Identification of Fuzzy Models of Software Cost Estimation," Fuzzy Sets and Systems, vol. 145, pp. 141-163, 2004.
[11] X. Huang, D. Ho, J. Ren and L. F. Capretz, "A Soft Computing Framework for Software Effort Estimation," Soft Computing Journal, Springer, vol. 10, no. 2, pp. 170-177, 2006.
[12] X. Huang, D. Ho, J. Ren and L. F. Capretz, "Improving the COCOMO Model with a Neuro-Fuzzy Approach," Applied Soft Computing Journal, Elsevier Science, vol. 7, pp. 29-40, 2007.
[13] IFPUG, Function Point Counting Practices Manual, International Function Point Users Group, January 2004.
[14] COSMIC, COSMIC Full Function Point Measurement Manual, Common Software Measurement International Consortium, January 2003.
[15] C. Symons, "Function Point Analysis: Difficulties and Improvements," IEEE Transactions on Software Engineering, vol. 14, no. 1, pp. 2-11, January 1988.
[16] G. R. Finnie, G. E. Wittig and J-M. Desharnais, "A Comparison of Software Effort Estimation Techniques: Using Function Points with Neural Networks, Case-Based Reasoning and Regression Models," Journal of Systems Software, vol. 39, pp. 281-289, 1997.
[17] C. Yau and H-L. Tsoi, "Modelling the Probabilistic Behaviour of Function Point Analysis," Information and Software Technology, vol. 40, pp. 59-68, 1998.
[18] O. S. Lima, P. F. M. Farias and A. D. Belchior, "Fuzzy Modeling for Function Points Analysis," Software Quality Journal, vol. 11, pp. 149-166, 2003.
[19] M. A. Al-Hajri, A. A. A. Ghani, M. S. Sulaiman and M. H. Selamat, "Modification of Standard Function Point Complexity Weights System," Journal of Systems and Software, vol. 74, pp. 195-206, 2005.
[20] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1998.
[21] L. A. Zadeh, "Fuzzy Logic," Computer, vol. 21, pp. 83-93, April 1988.
[22] T. J. Ross, Fuzzy Logic with Engineering Applications, 2nd ed., John Wiley & Sons, 2004.
[23] L. A. Zadeh, "Fuzzy Sets," Information and Control, vol. 8, pp. 338-353, March 1965.
[24] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, 1997.
[25] ISBSG, Data Repository - release 8, International Software Benchmarking Standards Group, 2004.
[26] E. H. Mamdani, "Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis," IEEE Transactions on Computers, vol. 26, no. 12, pp. 1182-1191, 1977.
[27] A. J. Albrecht and J. E. Gaffney, "Software Function, Source Lines of Code and Development Effort Prediction: A Software Science Validation," IEEE Transactions on Software Engineering, vol. 9, no. 6, pp. 639-648, June 1983.
[28] J. E. Matson, B. E. Barrett, and J. M. Mellichamp, "Software Development Cost Estimation Using Function Points," IEEE Transactions on Software Engineering, vol. 20, no. 4, pp. 275-287, April 1994.
[29] ISBSG, Guidance on the Use of the ISBSG Data, International Software Benchmarking Standards Group, 2004.
[30] L. Angelis, I. Stamelos, and M. Morisio, "Building a Software Cost Estimation Model Based on Categorical Data," Proceedings of the 7th International Symposium on Software Metrics, IEEE Computer Society, 2001.
[31] C. F. Kemerer, "An Empirical Validation of Software Cost Estimation Models," Communications of the ACM, vol. 30, no. 5, pp. 416-429, May 1987.
[32] C. Symons, "Come Back Function Point Analysis (Modernised) All Is Forgiven!," <http://www.gifpa.co.uk/library/Papers/Symons/fesmafpa2001/paper.html>, 2001.
[33] ISO, IFPUG 4.1 Unadjusted Functional Size Measurement Method - Counting Practices Manual, International Organization for Standardization, ISO/IEC 20926, 2003.
[]
[ "PoS(LL2014)073 Progress on the infrared structure of multi-particle gauge theory amplitudes", "PoS(LL2014)073 Progress on the infrared structure of multi-particle gauge theory amplitudes" ]
[ "Lorenzo Magnea [email protected] \nUniversity of Torino\nINFN\nSezione di Torino\nWeimarGermany\n" ]
[ "University of Torino\nINFN\nSezione di Torino\nWeimarGermany" ]
[ "Loops and Legs in Quantum Field Theory -LL" ]
I will review some of the recent intense activity concerning infrared and collinear divergences in gauge theory amplitudes. The central quantity in these studies is the multi-particle soft anomalous dimension matrix, which is completely known at two loops for both massless and massive particles, and whose properties are currently being studied at three-loops and beyond. I will describe how, in the massless case, the simple dipole-like structure of the anomalous dimension up to two loops can be exploited in the high-energy limit to study effects that go beyond the standard form of Regge factorization. Furthermore, I will briefly review some of the techniques that have recently been developed to compute the soft anomalous dimension at high orders in perturbation theory, and I will give some examples of applications, including a result valid to all orders in perturbation theory for a specific class of diagrams.
10.22323/1.211.0073
[ "https://pos.sissa.it/211/073/pdf" ]
118,696,856
1408.0682
12f2a15bb0391382e080b3c8ddb54e71e1433454
PoS(LL2014)073

Progress on the infrared structure of multi-particle gauge theory amplitudes

Lorenzo Magnea* ([email protected])
University of Torino and INFN, Sezione di Torino

Presented at "Loops and Legs in Quantum Field Theory" (LL 2014), Weimar, Germany, 2014.
* Speaker.

Introduction

The interest in the structure of infrared divergences of non-abelian gauge theory amplitudes has revived in recent years, for several reasons, both phenomenological and theoretical. A practical concern is the fact that infrared singularities must be understood in great detail in order to construct efficient subtraction algorithms, which are needed to compute infrared-safe quantities at non-trivial orders in perturbation theory.
The need to provide precise predictions for complex processes such as jet production at the LHC has driven a vast effort to push these calculations beyond NLO [1], focusing interest on the infrared structure of scattering amplitudes at high orders. Another issue of great relevance for phenomenology is the need to resum large logarithmic corrections, which jeopardize the reliability of the perturbative expansion for many cross sections of physical interest. The technology of resummation is based upon the knowledge of soft and collinear singularities [2,3]: indeed, while actual divergences cancel in physical cross sections, they leave behind logarithmic enhancements, which can be computed to all orders in perturbation theory with increasing accuracy, provided the structure of singularities is sufficiently well understood. Resummations are well developed for cross sections involving only two colored particles at tree level, such as electroweak vector boson production, DIS, or Higgs production [4]. The needs of the LHC have, however, generated interest in the possibility of providing resummed predictions for more complicated multi-particle processes. These predictions in turn require a deeper understanding of the infrared divergences of multi-particle amplitudes.

Finally, it must be noted that the study of long-distance singularities is of great interest also from a purely theoretical point of view. One aspect of this interest is the role played by singularities in the scattering amplitudes of N = 4 Super Yang-Mills theory, which have been the focus of massive investigations and impressive progress in the past several years [5].
In this case the quantum (super)conformal invariance of the theory makes the very idea of an S-matrix rather precarious: indeed, scattering amplitudes are, to a large extent, defined by their infrared regularization, and infrared divergences in fact determine the entire structure of the perturbative expansion, at least up to the conformally invariant 'remainder functions' which start arising at two loops and with at least six colored particles [6,7]. The all-order knowledge of infrared divergences for planar amplitudes has been instrumental in providing ideas and checking results in this field [8], and it is to be hoped that the same will happen in the non-planar case as well.

More generally, infrared singularities provide a window into the all-order behavior of perturbative gauge theories: they are sufficiently simple and universal to be computable with great accuracy, yet they retain highly non-trivial information about dynamics. Multi-particle amplitudes are especially interesting in this regard, since they display, at the level of the anomalous dimension, highly non-trivial correlations between color and kinematic degrees of freedom, which must ultimately be reflected (in QCD) by the patterns of parton evolution which prepare colored particles for hadronization.

In what follows, I will focus mostly on progress which has been achieved in the last couple of years on two fronts: on the one hand, the application of our understanding of infrared structure to the high-energy limit, where one can control contributions going beyond the simplest form of Regge factorization [9,10], making predictions at two and three loops [11]; on the other hand, the development and application of computational tools for the calculation of the soft anomalous dimension at high orders [12,13], and in particular at three loops. This calculation, which would have looked unfeasible just a few years ago, appears now within reach.

On the structure of infrared divergences

The distinguishing character of soft and collinear radiation is universality. Divergences arise from emissions that are unsuppressed at large distances and long times, and thus happen on energy scales which are drastically different from those associated with hard processes. In other words, infrared radiation does not possess the resolving power to probe the hard scattering: it happens 'much later' or 'much earlier', so that quantum-mechanical interference with the short-distance process is suppressed by powers of the hard scale. While this picture is rather intuitive and physically appealing, the structure of perturbation theory makes proving it very difficult [2,14,15]. When the dust settles, one is left with a simple factorization formula, which applies to all gauge theory amplitudes, and is exact for infrared divergent contributions. One writes [16,17]

\[
  \mathcal{M}\left(\frac{p_i}{\mu},\alpha_s\right) \,=\, Z\left(\frac{p_i}{\mu},\alpha_s\right)\,
  \mathcal{H}\left(\frac{p_i}{\mu},\alpha_s\right)\,,
  \qquad (2.1)
\]

where the scattering amplitude M, involving n colored particles with momenta p_i, is a vector in color space, while Z is a universal operator encoding infrared divergences, acting upon the finite vector of "matching coefficients" H. The infrared operator Z obeys a renormalization group equation which, in dimensional regularization [18] and for d = 4 − 2ε > 4, has a very simple solution, expressed in terms of an anomalous dimension matrix Γ as

\[
  Z\left(\frac{p_i}{\mu},\alpha_s\right) \,=\, \mathcal{P}\exp\left[\,\frac{1}{2}\int_0^{\mu^2}
  \frac{d\lambda^2}{\lambda^2}\;\Gamma\left(\frac{p_i}{\lambda},\alpha_s(\lambda^2)\right)\right]\,.
  \qquad (2.2)
\]

Quite naturally, the matrix Γ is the focus of all investigations in this field: its knowledge completely solves the perturbative infrared problem for all gauge theory scattering amplitudes. Currently, Γ is known at two loops, both for massless [19,20] and for massive [21,22,23,24,25] particles. In the massless case, the problem has an extra symmetry, since the momenta of all hard particles can be independently rescaled without affecting infrared emissions.
This symmetry puts strong constraints on the soft anomalous dimension matrix: a simple solution to these constraints is the dipole formula [26,27],

\[
  \Gamma_{\rm dip}\left(\frac{p_i}{\lambda},\alpha_s(\lambda^2)\right) \,=\,
  \frac{1}{4}\;\widehat{\gamma}_K\big(\alpha_s(\lambda^2)\big)\,\sum_{(i,j)}\,
  \ln\left(\frac{-\,s_{ij}}{\lambda^2}\right)\,{\bf T}_i\cdot{\bf T}_j
  \,-\,\sum_i\,\gamma_i\big(\alpha_s(\lambda^2)\big)\,,
  \qquad (2.3)
\]

stating that only two-particle color and kinematic correlations contribute to the soft anomalous dimension. In Eq. (2.3), T_i is a color generator in the representation of particle i, and the running coupling enters only through the anomalous dimensions γ_i, responsible for collinear emissions from hard parton i, and γ_K, which is the (light-like) cusp anomalous dimension [28,29], rescaled by the quadratic Casimir eigenvalue of the relevant color representation. As a consequence, so long as Eq. (2.3) applies, the path-ordering operator P can be omitted in Eq. (2.2). The dipole formula is exact at two loops, and it is known that it can only receive corrections starting with three-loop four-point amplitudes, due to the existence at that order of conformally invariant cross-ratios of momenta which automatically satisfy the rescaling constraints. Evidence for the existence of such corrections, albeit at four loops, was recently uncovered in Ref. [30]. Further corrections could arise, starting at four loops, if the cusp anomalous dimension γ_K(α_s) were to contain terms proportional to higher-rank Casimir eigenvalues, at that order or beyond. The nature of the corrections to the dipole formula is currently the subject of intense investigations [31,32].

For massive particles, the simplicity arising from the extra rescaling symmetry is lost, and indeed the two-loop calculation shows that tripole correlations among colored particles do arise.
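For orientation, consider the dipole formula for just two colored particles, as in Drell-Yan or Higgs production; this standard special case is added here for illustration, under the assumption that the sum in Eq. (2.3) runs over ordered pairs i ≠ j. Color conservation, T_1 + T_2 = 0, gives T_1 · T_2 = −T_1² = −C_2, so Eq. (2.3) collapses to the familiar Sudakov form,

\[
  \Gamma_{\rm dip} \,=\, -\,\frac{1}{2}\,C_2\;\widehat{\gamma}_K\big(\alpha_s(\lambda^2)\big)\,
  \ln\left(\frac{-\,s_{12}}{\lambda^2}\right) \,-\, \gamma_1 \,-\, \gamma_2\,,
\]

with a single color-diagonal cusp term plus collinear anomalous dimensions, and no matrix structure left.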
Denoting the Minkowski angle between hard particles i and j by

\[
  \gamma_{ij} \,=\, \frac{2\,p_i\cdot p_j}{\sqrt{p_i^2\,p_j^2}} \,\equiv\,
  -\,\alpha_{ij}-\frac{1}{\alpha_{ij}}\,,
  \qquad (2.4)
\]

and then defining ξ_ij ≡ log α_ij, one can express three-particle correlations in the two-loop soft anomalous dimension as

\[
  \Gamma^{(2)}_{\rm trip}\left(\xi_{mn}\right) \,=\, {\rm i}\, f_{abc}\,\sum_{i,j,k}\,
  {\bf T}_i^a\,{\bf T}_j^b\,{\bf T}_k^c\;F^{(2)}\left(\xi_{ij},\xi_{jk},\xi_{ki}\right)\,,
  \qquad (2.5)
\]

where [22]

\[
  F^{(2)}\left(\xi_{ij},\xi_{jk},\xi_{ki}\right) \,=\, \frac{4}{3}\,\sum_{i,j,k}\,\epsilon_{ijk}\;
  g\left(\xi_{ij}\right)\,\xi_{jk}\,\coth\xi_{jk}\,.
  \qquad (2.6)
\]

Remarkably, Eq. (2.6) has a factorized form in terms of relatively simple functions of individual cusp angles: indeed, g is a function of uniform weight w = 2, while the factor ξ_ij coth ξ_ij is proportional to the one-loop non-light-like cusp anomalous dimension. Clearly, also in the massive case, there is a structural simplicity which has yet to be fully understood.

On the high-energy limit

As a first example of an application of our understanding of long-distance singularities, let us consider the high-energy limit. Focusing on four-point amplitudes, this is the limit in which s → ∞, while |t| remains limited. To the extent that the dipole formula, Eq. (2.3), applies, the infrared operator Z in this limit simplifies considerably, taking on the form [9,10,11,33]

\[
  Z\left(\frac{s}{\mu^2},\frac{t}{\mu^2},\alpha_s\right) \,=\,
  Z_{1,{\bf R}}\left(\frac{t}{\mu^2},\alpha_s\right)\,Z_S\left(\frac{s}{t},\alpha_s\right)
  \,+\, O\left(\frac{t}{s}\right)\,,
  \qquad (3.1)
\]

where Z_S is a matrix carrying all the leading-power energy dependence, while Z_{1,R} is a color-singlet factor. Introducing 'Mandelstam' combinations of color generators, T_s ≡ T_1 + T_2, and similarly for T_t and T_u, one can write the operator Z_S as

\[
  Z_S\left(\frac{s}{t},\alpha_s\right) \,=\, \exp\left\{ K(\alpha_s)\left[
  \left(\log\frac{s}{-t} - {\rm i}\,\frac{\pi}{2}\left(1+\kappa_{ab}\right)\right){\bf T}_t^2
  \,+\, {\rm i}\,\frac{\pi}{2}\left({\bf T}_s^2-{\bf T}_u^2+\kappa_{ab}\,{\bf T}_t^2\right)
  \right]\right\}\,.
  \qquad (3.2)
\]

The factor κ_ab distinguishes quark and gluon amplitudes, with a, b = q, g: one has κ_gg = κ_qg = 1, while κ_qq = 4/N_c² − 1, due to the different symmetry properties of the color factors of the respective amplitudes.
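A useful identity when manipulating the Mandelstam color operators in Eq. (3.2) follows from color conservation; it is standard color algebra, added here for reference. For a four-point amplitude, Σ_i T_i = 0 implies

\[
  {\bf T}_s^2+{\bf T}_t^2+{\bf T}_u^2 \,=\,
  3\,{\bf T}_1^2+{\bf T}_2^2+{\bf T}_3^2+{\bf T}_4^2
  +2\,{\bf T}_1\cdot\left({\bf T}_2+{\bf T}_3+{\bf T}_4\right)
  \,=\, \sum_{i=1}^{4}\,{\bf T}_i^2\,,
\]

since T_2 + T_3 + T_4 = −T_1; the sum of the three Mandelstam Casimirs is thus a c-number, equal to the sum of the quadratic Casimir eigenvalues of the external particles.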
K(α_s), on the other hand, is given by a scale integral of the cusp anomalous dimension [34,35,36,37],

\[
  K\left(\alpha_s\right) \,=\, -\,\frac{1}{4}\int_0^{\mu^2}\frac{d\lambda^2}{\lambda^2}\;
  \widehat{\gamma}_K\big(\alpha_s(\lambda^2)\big)\,.
  \qquad (3.3)
\]

Finally, the interesting feature of the color-singlet operator Z_{1,R}, whose explicit expression can be found in [11], is that it can be written as a product of factors, each associated with one of the external legs. Distinguishing again quark and gluon scattering, one can write

\[
  Z^{\,ab}_{1,{\bf R}}\left(\frac{t}{\mu^2},\alpha_s\right) \,=\,
  \left[Z^{\,a}_{1,{\bf R}}\left(\frac{t}{\mu^2},\alpha_s\right)\right]^2
  \left[Z^{\,b}_{1,{\bf R}}\left(\frac{t}{\mu^2},\alpha_s\right)\right]^2\,,
  \qquad (3.4)
\]

making it natural to think of each external-leg contribution as a 'jet' factor.

These results of infrared factorization at high energy must be compared with the expressions one gets from Regge factorization [38,39], under the common assumption that the only singularities of these amplitudes in the complex angular momentum plane should be Regge poles. Using these methods one finds that the t-channel color-octet components of quark and gluon amplitudes can be written as [40,41]

\[
  \mathcal{M}^{[8]}_{ab}\left(\frac{s}{\mu^2},\frac{t}{\mu^2},\alpha_s\right) \,=\,
  2\pi\alpha_s\, H^{(0),[8]}_{ab}\,\times\,
  C_a\left(\frac{t}{\mu^2},\alpha_s\right)
  \left[A^+\left(\frac{s}{t},\alpha_s\right)+\kappa_{ab}\,A^-\left(\frac{s}{t},\alpha_s\right)\right]
  C_b\left(\frac{t}{\mu^2},\alpha_s\right)
  \,+\, R^{[8]}_{ab}\left(\frac{s}{\mu^2},\frac{t}{\mu^2},\alpha_s\right)
  \,+\, O\left(\frac{t}{s}\right)\,,
  \qquad (3.5)
\]

where the functions C_a are impact factors, which depend on the identity of the particles undergoing the high-energy scattering, H^{(0),[8]}_{ab} is the matrix element for tree-level octet exchange, and the (logarithmic) energy dependence is contained in the 'Regge trajectory' factors

\[
  A^{\pm}\left(\frac{s}{t},\alpha_s\right) \,=\,
  \left(\frac{-s}{-t}\right)^{\alpha(t)} \pm\, \left(\frac{s}{-t}\right)^{\alpha(t)}\,.
  \qquad (3.6)
\]

We have also included in Eq. (3.5) an octet 'remainder' function R^{[8]}_{ab}, accounting for the fact that Eq. (3.5) is not exact at leading power, but only to leading-logarithmic accuracy, and at NLL for the real part of the amplitude [42]. As we will see below, infrared factorization allows us to extract useful information on the remainder, going beyond the contributions of Regge poles.

Comparing factorizations

The simplest way to compare the two high-energy factorizations, Eq. (2.1) and Eq. (3.5), is to expand the matrix element in powers of the coupling and in powers of the high-energy logarithm, as

\[
  \mathcal{M}^{[j]}\left(\frac{s}{\mu^2},\frac{t}{\mu^2},\alpha_s\right) \,=\, 4\pi\alpha_s\,
  \sum_{n=0}^{\infty}\,\sum_{i=0}^{n}\,\left(\frac{\alpha_s}{\pi}\right)^n
  \ln^i\left(\frac{s}{-t}\right)\,\mathcal{M}^{(n),i,[j]}\left(\frac{t}{\mu^2}\right)\,,
  \qquad (4.1)
\]

and to perform similar expansions for the various factors on the right-hand sides of Eq. (2.1) and Eq. (3.5). Thus, for any function involving high-energy logarithms, (n) will denote the perturbative order, i the power of log(s/|t|), and [j] the color representation in a t-channel basis. Even before expanding, however, a direct comparison of Eq. (2.1) and Eq. (3.5) at leading-logarithmic (LL) level shows that the Regge trajectory α(t) must be strictly related to the cusp integral, Eq. (3.3). More precisely, one finds that if an amplitude, at tree level and in the high-energy limit, is dominated by t-channel exchange of a particle in representation [j] of the gauge group, then that particle will undergo the process of 'Reggeization', and the divergent contributions to its Regge trajectory will be given by [9,10,36]

\[
  \alpha^{[j]}\left(\frac{t}{\mu^2},\alpha_s(\mu^2)\right) \,=\,
  C_2^{[j]}\;K\big(\alpha_s(-t)\big)\,,
  \qquad (4.2)
\]

where C_2^{[j]} is the quadratic Casimir eigenvalue of representation [j].

A more detailed analysis allows one to begin identifying the terms that break the Regge-pole-based factorization, Eq. (3.5), starting at two loops and at NNLL accuracy. Doing this is quite interesting, since at this level a more general picture of Reggeization is expected to arise, involving Regge cuts in the complex angular momentum plane. A precise identification of the contributions to the amplitude which violate Eq. (3.5) may provide key information to jump-start a new kind of high-energy resummation. In this spirit, one observes that at two loops Eq. (2.1) suggests a natural expression for the impact factors,

\[
  C^{(2)}_a \,=\, \frac{1}{2}\,Z^{(2)}_{1,{\bf R},aa} \,-\,
  \frac{1}{8}\left(Z^{(1)}_{1,{\bf R},aa}\right)^2 \,+\,
  \frac{1}{4}\,Z^{(1)}_{1,{\bf R},aa}\,
  {\rm Re}\left[H^{(1),0,[8]}_{aa}\big/H^{(0),[8]}_{aa}\right]
  \,+\, O\left(\epsilon^0\right)\,,
  \qquad (4.3)
\]

simply arising from the expansion of the 'jet factors' in Eq.
(3.4), acting on the finite factors H^{(n),[8]}. A direct matching of the two factorizations, however, gives a much more intricate expression for the impact factor, which evidently breaks universality, involving mixing with non-octet components and depending on the identities of all particles involved in the scattering. The expressions derived from infrared factorization allow for a precise identification of the non-universal terms, which are naturally assigned to the 'remainder' function R^{[8]}. We can check the consistency of our approach by computing the newly defined remainders for quark and gluon amplitudes, which yields double poles of the form

\[
  R^{(2),0,[8]}_{qq} \,=\, \frac{\pi^2}{4\,\epsilon^2}\left(1-\frac{3}{N_c^2}\right)\,,\qquad
  R^{(2),0,[8]}_{gg} \,=\, -\,\frac{3\,\pi^2}{2\,\epsilon^2}\,,\qquad
  R^{(2),0,[8]}_{qg} \,=\, -\,\frac{\pi^2}{4\,\epsilon^2}\,.
  \qquad (4.4)
\]

Next, we can construct a function measuring the discrepancy between the predictions of Regge factorization for the quark-gluon amplitude, which are based on universality, and the actual matrix elements. We find

\[
  \Delta^{(2),0,[8]} \,\equiv\, \frac{\mathcal{M}^{(2),0}_{qg}}{H^{(0),[8]}_{qg}} \,-\,
  \left[\,C^{(2)}_q + C^{(2)}_g + C^{(1)}_q\,C^{(1)}_g
  - \frac{\pi^2}{4}\left(1+\kappa_{qg}\right)\left(\alpha^{(1)}\right)^2\right]
  \,=\, \frac{1}{2}\,R^{(2),0,[8]}_{qg} \,-\,
  \frac{1}{4}\left(R^{(2),0,[8]}_{qq}+R^{(2),0,[8]}_{gg}\right)
  \,=\, \frac{\pi^2}{\epsilon^2}\;\frac{3}{16}\;\frac{N_c^2+1}{N_c^2}\,,
  \qquad (4.5)
\]

which precisely reproduces the result of Ref. [43], where a failure of Eq. (3.5) was first observed.

Moving on to three loops, one finds, as expected, that the breaking of universality occurs at the level of single-logarithmic terms. Indeed, if one attempts to find an expression for the three-loop Regge trajectory using soft-collinear ingredients, one finds a set of non-universal terms, involving both color mixing and process-dependent contributions. Following our general strategy, we assign these terms to the remainder functions. This gives a prediction for singular, single-logarithmic terms in three-loop quark and gluon amplitudes, which break Regge universality.
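As an illustration of how Eqs. (3.3) and (4.2) work at lowest order, one can evaluate the cusp integral explicitly; this is a standard check added here for clarity, and the normalization γ̂_K = 2 α_s/π + O(α_s²) is an assumption of this sketch. Using the lowest-order d-dimensional running coupling α_s(λ²) = α_s(µ²) (λ²/µ²)^{−ε},

\[
  K\left(\alpha_s\right) \,=\, -\,\frac{1}{4}\int_0^{\mu^2}\frac{d\lambda^2}{\lambda^2}\;
  \frac{2\,\alpha_s(\lambda^2)}{\pi} \,+\, O\left(\alpha_s^2\right)
  \,=\, \frac{\alpha_s(\mu^2)}{2\pi\,\epsilon} \,+\, O\left(\alpha_s^2\right)\,,
\]

so that, for octet exchange, Eq. (4.2) reproduces the divergent part of the one-loop gluon Regge trajectory, α^{[8]} = C_A α_s(−t)/(2πε) + O(α_s²).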
As an example, the triple-pole contributions are given by

\[
  R^{(3),1,[8]}_{qq} \,=\, \frac{\pi^2}{\epsilon^3}\;\frac{2 N_c^2-5}{12\,N_c}\,,\qquad
  R^{(3),1,[8]}_{gg} \,=\, -\,\frac{\pi^2}{\epsilon^3}\;\frac{2}{3}\,N_c\,,\qquad
  R^{(3),1,[8]}_{qg} \,=\, -\,\frac{\pi^2}{\epsilon^3}\;\frac{N_c}{24}\,.
  \qquad (4.6)
\]

Single-pole contributions can be computed as well, but they are, as might be expected, considerably lengthier, involving interference with finite two-loop contributions.

We conclude this section by noting that, just as infrared factorization provides important information in the high-energy limit, one may also use Regge factorization to extract constraints on the finite parts of the amplitudes, which are not in principle controlled by Eq. (2.1). Specifically, and interestingly, one may show [33] that the LL and NLL octet hard parts of quark and gluon amplitudes vanish in dimensional regularization to all orders in perturbation theory. More precisely, one finds that

\[
\begin{aligned}
  {\rm Im}\, H^{(n),n,[8]} \,&=\, 0\,, \qquad &
  {\rm Re}\, H^{(n),n,[8]} \,&=\, O\left(\epsilon^n\right)\,, \\
  {\rm Im}\, H^{(n),n-1,[8]} \,&=\, O\left(\epsilon^n\right)\,, \qquad &
  {\rm Re}\, H^{(n),n-1,[8]} \,&=\, O\left(\epsilon^{n-2}\right)\,,
\end{aligned}
\qquad (4.7)
\]

where the only finite-order information used is the vanishing of the one-loop octet hard part. Eq. (4.7) reinforces the idea that high-energy logarithms are infrared in nature: thus, infrared-finite high-energy logarithms must come from the interference of soft and collinear functions with lower-order contributions subleading in ε. These constraints, discussed in greater detail in [33], are the subject of ongoing investigations.

Weaving webs

Another direction of investigation which has seen remarkable progress in recent years is the development of techniques for the direct calculation of the soft anomalous dimension. The key element of these techniques is the fact that soft-gluon contributions to the infrared operator Z can be expressed in terms of correlators of Wilson lines, and such correlators are known to exponentiate. Furthermore, diagrammatic rules exist that allow one to compute directly the logarithm of the correlator, considerably simplifying the task of extracting the anomalous dimension.

For abelian gauge theories, the exponent is very simple, involving only connected diagrams [44], and thus only correlations induced by matter loops. For non-abelian theories, in the case of amplitudes involving only two hard partons, this rule generalizes [45,46,47] in a relatively simple way: the exponent is built out of diagrams, called 'webs', which are two-Wilson-line-irreducible, i.e. which cannot be partitioned into disconnected subdiagrams by cutting only the two Wilson lines. These diagrams then enter the exponent with modified color factors, which can be recursively computed from the ordinary Feynman rules. The general case of multi-particle non-abelian amplitudes is considerably more intricate [48,49,50,51,52,53]. In particular, the concept of 'web' must be rather drastically generalized: webs are in this case sets of Feynman diagrams, which are closed under the operation of permuting the order of their attachments to each Wilson line. Writing each diagram D as the product of its color factor C(D) and its kinematic integral F(D), one finds that each web contributes to the logarithm of the correlator through the combination

\[
  W \,=\, \sum_{D,\,D'\,\in\,W}\,\mathcal{F}(D)\;R_{D D'}\;C(D')\,,
  \qquad (5.1)
\]

where the sum runs over all diagrams D, D' contributing to the web W, and R_{DD'} is a 'web mixing matrix', whose entries are rational numbers of combinatorial origin.

The physically interesting information in the Wilson-line correlator is contained in its ultraviolet divergences, which can be mapped to the infrared divergences of the original scattering amplitude. Extracting these divergences is non-trivial, due to the highly singular nature of these correlators, which contain their own infrared divergences, as well as collinear divergences in the massless case.
To disentangle singularities of different physical origin, it is useful to consider non-lightlike Wilson lines (which are of course also directly related to the infrared singularities of massive colored particles), and to introduce a mass scale m as an infrared regulator. Dimensional regularization then controls the UV singularities which are the target of the calculation. One considers therefore the regulated correlator

\[
  \mathcal{S}\left(\gamma_{ij},\alpha_s(\mu),\epsilon,\frac{m}{\mu}\right) \,\equiv\,
  \langle 0 |\, \Phi^{(m)}_{\beta_1}\otimes\Phi^{(m)}_{\beta_2}\otimes\ldots\otimes
  \Phi^{(m)}_{\beta_L}\,| 0 \rangle\,,
  \qquad (5.2)
\]

where γ_ij are the cusp angles defined in Eq. (2.4), and each Wilson line Φ_{β_i}, pointing in the direction β_i defined by the momentum p_i of the i-th particle, is regulated at large distances by an exponential regulator [12], as

\[
  \Phi^{(m)}_{\beta_i} \,=\, \mathcal{P}\exp\left[\,{\rm i}\,g\,\mu^\epsilon
  \int_0^\infty d\lambda\;\beta_i\cdot A(\lambda\beta_i)\;
  {\rm e}^{\,-\,m\,\lambda\sqrt{\beta_i^2}}\,\right]\,,
  \qquad (5.3)
\]

where we take momenta to be time-like, β_i² > 0. The multiplicative renormalizability of Wilson-line correlators now ensures that a UV-finite version of S can be constructed,

\[
  \mathcal{S}_{\rm ren.}\left(\gamma_{ij},\alpha_s(\mu^2),\epsilon,\frac{m}{\mu}\right) \,=\,
  \mathcal{S}\left(\gamma_{ij},\alpha_s(\mu^2),\epsilon,\frac{m}{\mu}\right)\,
  Z\left(\gamma_{ij},\alpha_s(\mu^2),\epsilon\right)\,,
  \qquad (5.4)
\]

with the matrix renormalization factor Z containing all the physically relevant information. The matrix nature of Z entails a further subtlety: in order to properly subtract the UV divergences of S, the expression for Z at high orders must include commutators of lower-order contributions, as dictated by the Baker-Campbell-Hausdorff formula. The structure of these commutator counterterms is easily worked out: for example, up to three loops, one finds that the renormalization factor Z can be expressed in terms of the soft anomalous dimension Γ as

\[
\begin{aligned}
  Z\left(\alpha_s,\epsilon\right) \,=\, \exp\bigg[\;
  & \alpha_s\,\frac{1}{2\epsilon}\,\Gamma^{(1)}
  \,+\,\alpha_s^2\left(\frac{1}{4\epsilon}\,\Gamma^{(2)}
  -\frac{b_0}{4\epsilon^2}\,\Gamma^{(1)}\right)
  & (5.5) \\
  & +\,\alpha_s^3\left(\frac{1}{6\epsilon}\,\Gamma^{(3)}
  +\frac{1}{48\,\epsilon^2}\left[\Gamma^{(1)},\Gamma^{(2)}\right]
  -\frac{1}{6\epsilon^2}\left(b_0\,\Gamma^{(2)}+b_1\,\Gamma^{(1)}\right)
  +\frac{b_0^2}{6\epsilon^3}\,\Gamma^{(1)}\right)\bigg]\,,
  & (5.6)
\end{aligned}
\]

where Γ^(n) is the n-th-order perturbative coefficient of Γ(α_s).
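The commutator counterterms appearing in Eq. (5.5) are exactly the ones required to combine products of exponentials of non-commuting matrices into a single exponential; for reference, the standard Baker-Campbell-Hausdorff identity (added here for clarity, not part of the original text) reads

\[
  {\rm e}^{X}\,{\rm e}^{Y} \,=\, \exp\left[\,X+Y+\frac{1}{2}\left[X,Y\right]
  +\frac{1}{12}\Big(\big[X,[X,Y]\big]+\big[Y,[Y,X]\big]\Big)+\ldots\right]\,.
\]

When Γ^(1) and Γ^(2) fail to commute, reorganizing the lower-order subtractions into the single exponential of Eq. (5.5) generates precisely the [Γ^(1), Γ^(2)] term at order α_s³.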
One sees that in the non-abelian case the logarithm of Z involves multiple UV poles, even in the conformal limit in which the coefficients b_n of the β function vanish. Armed with these tools, computing Γ is a well-defined, if lengthy and rather intricate, task. It is sufficient to compute the regularized soft function S, which can be done directly at the level of the exponent, as

\[
  \mathcal{S}\left(\alpha_s,\epsilon\right) \,=\, \exp\Big[\,w\left(\alpha_s,\epsilon\right)\Big]
  \,=\, \exp\left[\,\sum_{n=1}^{\infty}\,\sum_{k=-n}^{\infty}\,
  \alpha_s^n\,\epsilon^k\;w^{(n,k)}\right]\,.
  \qquad (5.7)
\]

The anomalous dimension is now constructed in terms of combinations of the perturbative coefficients w^{(n,k)}, including commutator counterterms. Up to three loops, one gets

\[
\begin{aligned}
  \Gamma^{(1)} \,&=\, -\,2\,w^{(1,-1)}\,, \\
  \Gamma^{(2)} \,&=\, -\,4\,w^{(2,-1)} \,-\, 2\left[w^{(1,-1)},w^{(1,0)}\right]\,, \\
  \Gamma^{(3)} \,&=\, -\,6\,w^{(3,-1)}
  \,+\,\frac{3}{2}\,b_0\left[w^{(1,-1)},w^{(1,1)}\right]
  \,+\,3\left[w^{(1,0)},w^{(2,-1)}\right]
  \,+\,3\left[w^{(2,0)},w^{(1,-1)}\right] \\
  &\quad\;+\left[w^{(1,0)},\left[w^{(1,-1)},w^{(1,0)}\right]\right]
  \,-\,\left[w^{(1,-1)},\left[w^{(1,-1)},w^{(1,1)}\right]\right]\,.
\end{aligned}
\qquad (5.8)
\]

Notice that, while Γ^(n) is given, as expected, by a combination of single-pole and β-function contributions, positive powers of ε in the logarithm of the regularized correlator also play a role. Notice also that the matrix w(α_s, ε) has a non-trivial structure: order by order, it is the sum of all relevant webs, each of which in general contributes to several color structures. Furthermore, most webs have multiple UV poles, and are therefore not directly physical: they depend on the IR cutoff m. Only when the appropriate commutator subtractions have been included is one left with a physical contribution to Γ, corresponding to a single UV pole, and thus independent of m.

Multiple gluon exchange webs

I would like to conclude this contribution by giving a few details about a class of webs which has recently received a lot of attention [12,13,54]. These are the webs which are generated if one computes the soft function S with a path integral weighted by just the quadratic terms in the action,

\[
  \mathcal{S}\left(\gamma_{ij},\alpha_s(\mu),\epsilon,\frac{m}{\mu}\right)\bigg|_{\rm MGEW}
  \,\equiv\, \int\left[\mathcal{D}A\right]\,
  \Phi^{(m)}_{\beta_1}\otimes\Phi^{(m)}_{\beta_2}\otimes\ldots\otimes\Phi^{(m)}_{\beta_L}\;
  \exp\Big[\,{\rm i}\,S_0[A]\Big]\,.
  \qquad (6.1)
\]

In diagrammatic terms, these webs involve graphs in which all gluons attach directly to the Wilson lines, and there are no three- or four-gluon vertices: they are called 'Multiple Gluon Exchange Webs' (MGEWs). While their graphical appearance is abelian, it is important to realize that, due to the importance of the ordering of gluon insertions on the Wilson lines, they have a true non-abelian character, and they contribute to the same color structures as fully connected webs involving gluon self-interactions.

MGEWs are sufficiently well understood that several of their properties are known or conjectured to all orders in perturbation theory, and in fact, as we will mention below, certain diagrams in this class are explicitly known for any number of gluons. To begin with, kinematic factors for diagrams contributing to MGEWs admit a universal parametrization, which leads to an explicit, completely general integral representation. One can write, for any diagram D contributing to a MGEW at n loops,

\[
  \mathcal{F}^{(n)}(D) \,=\, \kappa^n\;\Gamma(2 n \epsilon)
  \int_0^1 \prod_{k=1}^{n} dx_k\;\gamma_k\,P_\epsilon\left(x_k,\gamma_k\right)\;
  \phi^{(n)}_D\left(x_i;\epsilon\right)\,.
  \qquad (6.2)
\]

Here κ = −g²/(8π²) + O(ε), γ_k is the cusp angle subtended by the k-th gluon, and x_k is an angle-like variable measuring the degree of collinearity of the k-th gluon with the emitting and the absorbing Wilson lines. P_ε(x_k, γ_k) is a function arising from the coordinate-space gluon propagator, given by

\[
  P_\epsilon\left(x,\gamma\right) \,\equiv\,
  \Big[\,x^2+(1-x)^2-x\,(1-x)\,\gamma\,\Big]^{-1+\epsilon}\,;
  \qquad (6.3)
\]

finally, the kernel φ^{(n)}_D(x_i; ε) contains the information about the ordering of the gluon attachments on each Wilson line, which is specific to diagram D; one may write

\[
  \phi^{(n)}_D\left(x_i;\epsilon\right) \,=\, \int_0^1 \prod_{k=1}^{n-1} dy_k\,
  \left(1-y_k\right)^{-1+2\epsilon}\,y_k^{-1+2k\epsilon}\;
  \Theta_D\Big[\{x_k,y_k\}\Big]\,,
  \qquad (6.4)
\]

where Θ_D[{x_k, y_k}] is a product of Heaviside functions enforcing the proper ordering on each line.

The integral representation in Eq. (6.2) is sufficiently general and manageable to allow for calculations at very high orders. To substantiate this, consider the special class of highly symmetric n-loop MGE diagrams illustrated in Fig.
1 for n = 6, which were called 'Escher staircases' in [49]. At each order there are two such diagrams, enantiomers of each other. As is apparent from their graphical structure, these diagrams have no subdivergences, and thus directly yield a single pole, contributing to the anomalous dimension without the need for subtractions. Remarkably, these diagrams can be computed for any n, and give a very simple result. To display it, define

\[
  S_n\left(x_i\right) \,\equiv\, \prod_{i=1}^{n}\,\frac{1-x_i}{x_i}\,,
  \qquad (6.5)
\]

as well as

\[
  \theta_+(n) \,\equiv\, \theta\Big[\,S_n\left(x_i\right)-1\Big]\,,
  \qquad (6.6)
\]

and θ_−(n) ≡ 1 − θ_+(n), which define the overall orientation of the staircase. In terms of these simple ingredients, one finds that the kernel (6.4) of the staircase diagrams, for any n, can be written in the remarkably compact form

\[
  \phi^{(n)}_S\left(x_i;0\right) \,=\, \frac{1}{(n-1)!}\;\theta_+(n)\,
  \Big[\log S_n\left(x_i\right)\Big]^{\,n-1}\,.
  \qquad (6.7)
\]

This form was used in Ref. [13] to prove an all-order result for a physical contribution to the anomalous dimension: one finds that the coefficient of a specific color structure appearing in the MGEWs which feature the staircase diagrams must vanish for any n.

In general, beyond simple and symmetric cases like the staircase diagrams, one must realize that computing the integrals in Eq. (6.2) is only the first step in a more articulate procedure, and several further steps are needed in order to get from individual diagrams to the anomalous dimension: diagrams must be combined into webs, which in general comprise several color structures, and, importantly, commutator counterterms must be subtracted. Only at that point do multiple poles cancel, and one can construct contributions to the physical anomalous dimension. Starting from Eq. (6.2), and building up the contribution to the anomalous dimension corresponding to web W and color structure j, after subtraction of counterterms one finds an ε-independent expression of the form

\[
  F^{(n)}_{W,j}\left(\alpha_i\right) \,=\, \int_0^1 \prod_{k=1}^{n} dx_k\;
  p_0\left(x_k,\alpha_k\right)\;
  \mathcal{G}^{(n)}_{W,j}\Big(x_i,\,q(x_i,\alpha_i)\Big)\,,
  \qquad (6.8)
\]

where the function p_0 is the ε → 0 limit of Eq. (6.3), now expressed in terms of the variable α defined in Eq. (2.4), while q(x, α) is the logarithm of the quadratic form in x appearing in Eq. (6.3). This logarithm arises when Eq. (6.3) is expanded in powers of ε and combined with the higher-order poles arising in individual diagrams. At this level, and at this level only, the structure of the answer simplifies drastically, and its analytic properties become apparent. One finds, for all webs computed so far, up to four loops and five external legs, that the function F^{(n)}_{W,j} obeys the following key properties, which were conjectured in [12] to be valid for all MGEWs: first, F^{(n)} is factorized, that is, it is a sum of terms, each of weight 2n − 1, and each given by a product of functions depending on individual cusp angles; second, the symbol of F^{(n)} is built out of a very simple alphabet, comprising only the letters {α_i, 1 − α_i²}, with i = 1, ..., n. These general insights, paired with high-order and all-order examples, suggest that the problem of MGEWs may be solvable in closed form.

In parallel, efforts are continuing to compute other, more connected webs contributing to the three-loop soft anomalous dimension, for both massless and massive particle scattering. The remarkable progress that has been achieved in the past few years makes it likely that this problem, too, will be completely solved in the not too distant future, a statement that would have been considered quite implausible at previous iterations of this Conference.
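Eqs. (6.5)-(6.7) are simple enough to evaluate directly; the snippet below is an illustrative numerical sketch (not part of the original text) implementing S_n(x_i) and the staircase kernel φ^(n)_S(x_i; 0), including the step function θ_+(n) that selects the staircase orientation. The sample point x = [0.2, 0.3, 0.25] is a hypothetical choice.

```python
import math

def S_n(x):
    """Eq. (6.5): product of (1 - x_i)/x_i over the n gluon variables."""
    prod = 1.0
    for xi in x:
        prod *= (1.0 - xi) / xi
    return prod

def staircase_kernel(x):
    """Eq. (6.7): phi^(n)_S(x_i; 0) = theta_+(n) [log S_n(x_i)]^(n-1) / (n-1)!."""
    n = len(x)
    s = S_n(x)
    if s <= 1.0:          # theta_+(n) = theta[S_n(x_i) - 1] vanishes here
        return 0.0
    return math.log(s) ** (n - 1) / math.factorial(n - 1)

# For x_i < 1/2 every factor (1 - x_i)/x_i exceeds 1, so theta_+ fires;
# reflecting x_i -> 1 - x_i inverts S_n and selects the mirror staircase.
x = [0.2, 0.3, 0.25]                            # n = 3, hypothetical point
print(staircase_kernel(x))                      # nonzero: [log S_3]^2 / 2!
print(staircase_kernel([1 - xi for xi in x]))   # 0.0: opposite orientation
```

The reflection check makes the enantiomer structure of the two staircases at each order concrete: exactly one of the pair contributes at any given point in the x_i integration region.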
D(x_i; ε) contains the information about the ordering of gluon attachments on each Wilson line, which is specific to diagram D.

Figure 1: Example of an Escher staircase with six external legs.

Acknowledgements

Work supported by MIUR (Italy), under contract 2010YJ2NYW_006, and by the University of Torino and the Compagnia di San Paolo under contract ORTO11TPXK. The author thanks Vittorio Del Duca, Giulio Falcioni, Einan Gardi, Mark Harley, Chris White and Leonardo Vernazza for the fruitful collaborations that yielded some of the results reviewed here.

References

[1] See, for example, J. Currie, A. Gehrmann-De Ridder, E. W. N. Glover and J. Pires, JHEP 1401 (2014) 110, [arXiv:1310.3993 [hep-ph]].
[2] G. F. Sterman, in *Boulder 1995, QCD and beyond*, p. 327, [hep-ph/9606312].
[3] L. Magnea, Pramana 72 (2009) 69, [arXiv:0806.3353 [hep-ph]].
[4] E. Gardi and G. Grunberg, Nucl. Phys. B 794 (2008) 61, [arXiv:0709.2877 [hep-ph]].
[5] H. Elvang and Y.-t. Huang, arXiv:1308.1697 [hep-th].
[6] V. Del Duca, C. Duhr and V. A. Smirnov, JHEP 1003 (2010) 099, [arXiv:0911.5332 [hep-ph]].
[7] A. B. Goncharov, M. Spradlin, C. Vergu and A. Volovich, Phys. Rev. Lett. 105 (2010) 151605, [arXiv:1006.5703 [hep-th]].
[8] Z. Bern, L. J. Dixon and V. A. Smirnov, Phys. Rev. D 72 (2005) 085001, [hep-th/0505205].
[9] V. Del Duca, C. Duhr, E. Gardi, L. Magnea and C. D. White, JHEP 1112 (2011) 021, [arXiv:1109.3581 [hep-ph]].
[10] V. Del Duca, C. Duhr, E. Gardi, L. Magnea and C. D. White, Phys. Rev. D 85 (2012) 071104, [arXiv:1108.5947 [hep-ph]].
[11] V. Del Duca, G. Falcioni, L. Magnea and L. Vernazza, Phys. Lett. B 732 (2014) 233, [arXiv:1311.0304 [hep-ph]].
[12] E. Gardi, JHEP 1404 (2014) 044, [arXiv:1310.5268 [hep-ph]].
[13] G. Falcioni, E. Gardi, M. Harley, L. Magnea and C. D. White, arXiv:1407.3477 [hep-ph].
[14] L. J. Dixon, L. Magnea and G. F. Sterman, JHEP 0808 (2008) 022, [arXiv:0805.3515 [hep-ph]].
[15] I. Feige and M. D. Schwartz, arXiv:1403.6472 [hep-ph].
[16] T. Becher and M. Neubert, Phys. Rev. Lett. 102 (2009) 162001, [arXiv:0901.0722 [hep-ph]].
[17] E. Gardi and L. Magnea, Nuovo Cim. C 32N5-6 (2009) 137, [arXiv:0908.3273 [hep-ph]].
[18] L. Magnea and G. F. Sterman, Phys. Rev. D 42 (1990) 4222.
[19] S. M. Aybat, L. J. Dixon and G. F. Sterman, Phys. Rev. Lett. 97 (2006) 072001, [hep-ph/0606254].
[20] S. M. Aybat, L. J. Dixon and G. F. Sterman, Phys. Rev. D 74 (2006) 074004, [hep-ph/0607309].
[21] N. Kidonakis, Phys. Rev. Lett. 102 (2009) 232003, [arXiv:0903.2561 [hep-ph]].
[22] A. Ferroglia, M. Neubert, B. D. Pecjak and L. L. Yang, Phys. Rev. Lett. 103 (2009) 201601, [arXiv:0907.4791 [hep-ph]].
[23] A. Ferroglia, M. Neubert, B. D. Pecjak and L. L. Yang, JHEP 0911 (2009) 062, [arXiv:0908.3676 [hep-ph]].
[24] A. Mitov, G. F. Sterman and I. Sung, Phys. Rev. D 82 (2010) 034020, [arXiv:1005.4646 [hep-ph]].
[25] Y.-T. Chien, M. D. Schwartz, D. Simmons-Duffin and I. W. Stewart, Phys. Rev. D 85 (2012) 045010, [arXiv:1109.6010 [hep-th]].
[26] E. Gardi and L. Magnea, JHEP 0903 (2009) 079, [arXiv:0901.1091 [hep-ph]].
[27] T. Becher and M. Neubert, JHEP 0906 (2009) 081, [arXiv:0903.1126 [hep-ph]].
[28] G. P. Korchemsky and A. V. Radyushkin, Phys. Lett. B 171 (1986) 459.
[29] G. P. Korchemsky and A. V. Radyushkin, Nucl. Phys. B 283 (1987) 342.
[30] S. Caron-Huot, arXiv:1309.6521 [hep-th].
[31] L. J. Dixon, E. Gardi and L. Magnea, JHEP 1002 (2010) 081, [arXiv:0910.3653 [hep-ph]].
[32] V. Ahrens, M. Neubert and L. Vernazza, JHEP 1209 (2012) 138, [arXiv:1208.4847 [hep-ph]].
[33] V. Del Duca, G. Falcioni, L. Magnea and L. Vernazza, in preparation.
[34] G. P. Korchemsky, Phys. Lett. B 325 (1994) 459, [hep-ph/9311294].
[35] I. A. Korchemskaya and G. P. Korchemsky, Nucl. Phys. B 437 (1995) 127, [hep-ph/9409446].
[36] I. A. Korchemskaya and G. P. Korchemsky, Phys. Lett. B 387 (1996) 346, [hep-ph/9607229].
[37] L. Magnea, Nucl. Phys. B 593 (2001) 269, [hep-ph/0006255].
[38] P. D. B. Collins, "An Introduction to Regge Theory and High-Energy Physics", Cambridge University Press, Cambridge, 1977.
[39] V. Del Duca, hep-ph/9503226.
[40] I. I. Balitsky, L. N. Lipatov and V. S. Fadin, in *Leningrad 1979, Proceedings, Physics Of Elementary Particles*, Leningrad, 1979, p. 109.
[41] V. S. Fadin and L. N. Lipatov, Nucl. Phys. B 406 (1993) 259.
[42] V. S. Fadin, R. Fiore, M. G. Kozlov and A. V. Reznichenko, Phys. Lett. B 639 (2006) 74, [hep-ph/0602006].
[43] V. Del Duca and E. W. N. Glover, JHEP 0110 (2001) 035, [hep-ph/0109028].
[44] D. R. Yennie, S. C. Frautschi and H. Suura, Annals Phys. 13 (1961) 379.
[45] G. F. Sterman, AIP Conf. Proc. 74 (1981) 22.
[46] J. G. M. Gatheral, Phys. Lett. B 133 (1983) 90.
[47] J. Frenkel and J. C. Taylor, Nucl. Phys. B 246 (1984) 231.
[48] E. Laenen, G. Stavenga and C. D. White, JHEP 0903 (2009) 054, [arXiv:0811.2067 [hep-ph]].
[49] E. Gardi, E. Laenen, G. Stavenga and C. D. White, JHEP 1011 (2010) 155, [arXiv:1008.0098 [hep-ph]].
[50] E. Gardi and C. D. White, JHEP 1103 (2011) 079, [arXiv:1102.0756 [hep-ph]].
[51] E. Gardi, J. M. Smillie and C. D. White, JHEP 1109 (2011) 114, [arXiv:1108.1357 [hep-ph]].
[52] E. Gardi, J. M. Smillie and C. D. White, JHEP 1306 (2013) 088, [arXiv:1304.7040 [hep-ph]].
[53] A. Mitov, G. Sterman and I. Sung, Phys. Rev. D 82 (2010) 096010, [arXiv:1008.0099 [hep-ph]].
[54] J. M. Henn and T. Huber, JHEP 1309 (2013) 147, [arXiv:1304.6418 [hep-th]].
[]
[]
[ "Asma Bahamyirou \nFaculté de Pharmacie\nUniversité de Montréal\n\n", "Mireille E Schnitzer \nFaculté de Pharmacie\nUniversité de Montréal\n\n" ]
[ "Faculté de Pharmacie\nUniversité de Montréal\n", "Faculté de Pharmacie\nUniversité de Montréal\n" ]
[]
Administrative data, or non-probability sample data, are increasingly being used to obtain official statistics due to their many benefits over survey methods. In particular, they are less costly, provide a larger sample size, and are not reliant on the response rate. However, it is difficult to obtain an unbiased estimate of the population mean from such data due to the absence of design weights. Several estimation approaches have been proposed recently using an auxiliary probability sample which provides representative covariate information of the target population. However, when this covariate information is high-dimensional, variable selection is not a straightforward task even for a subject matter expert. In the context of efficient and doubly robust estimation approaches for estimating a population mean, we develop two data-adaptive methods for variable selection using the outcome adaptive LASSO and a collaborative propensity score, respectively. Simulation studies are performed in order to verify the performance of the proposed methods versus competing methods. Finally, we present an analysis of the impact of Covid-19 on Canadians. Les sources de données administratives ou les échantillons non probabilistes sont de plus en plus considérés en pratique pour obtenir des statistiques officielles vu le gain qu'on en tire (moindre coût, grande taille d'échantillon, etc.) et le déclin des taux de réponse. Toutefois, il est difficile d'obtenir des estimations sans biais provenant de ces bases de données à cause du poids d'échantillonnage manquant. Des méthodes d'estimation ont été proposées récemment qui utilisent l'information auxiliaire provenant d'un échantillon probabiliste représentatif de la population cible. En présence de données de grande dimension, il est difficile d'identifier les variables auxiliaires qui sont associées au mécanisme de sélection.
Dans ce travail, nous développons une procédure de sélection de variables en utilisant le LASSO adaptatif et un score de propension collaboratif. Des études de simulation ont été effectuées en vue de comparer différentes approches de sélection de variables. Pour terminer, nous avons présenté une application sur l'impact de la COVID-19 sur les Canadiens.
null
[ "https://arxiv.org/pdf/2103.15218v1.pdf" ]
232,404,301
2103.15218
1d93903dd65ac3ac3d7532f96a9b2f376e7fb18f
28 Mar 2021 Asma Bahamyirou Faculté de Pharmacie Université de Montréal Mireille E Schnitzer Faculté de Pharmacie Université de Montréal arXiv:2103.15218v1 [stat.ME]

Data Integration through outcome adaptive LASSO and a collaborative propensity score approach

Keywords: Non-probability sample; Probability sample; Outcome adaptive LASSO; Inverse weighted estimators

Administrative data, or non-probability sample data, are increasingly being used to obtain official statistics due to their many benefits over survey methods. In particular, they are less costly, provide a larger sample size, and are not reliant on the response rate. However, it is difficult to obtain an unbiased estimate of the population mean from such data due to the absence of design weights. Several estimation approaches have been proposed recently using an auxiliary probability sample which provides representative covariate information of the target population. However, when this covariate information is high-dimensional, variable selection is not a straightforward task even for a subject matter expert. In the context of efficient and doubly robust estimation approaches for estimating a population mean, we develop two data-adaptive methods for variable selection using the outcome adaptive LASSO and a collaborative propensity score, respectively. Simulation studies are performed in order to verify the performance of the proposed methods versus competing methods. Finally, we present an analysis of the impact of Covid-19 on Canadians. Les sources de données administratives ou les échantillons non probabilistes sont de plus en plus considérés en pratique pour obtenir des statistiques officielles vu le gain qu'on en tire (moindre coût, grande taille d'échantillon, etc.) et le déclin des taux de réponse. Toutefois, il est difficile d'obtenir des estimations sans biais provenant de ces bases de données à cause du poids d'échantillonnage manquant.
Des méthodes d'estimation ont été proposées récemment qui utilisent l'information auxiliaire provenant d'un échantillon probabiliste représentatif de la population cible. En présence de données de grande dimension, il est difficile d'identifier les variables auxiliaires qui sont associées au mécanisme de sélection. Dans ce travail, nous développons une procédure de sélection de variables en utilisant le LASSO adaptatif et un score de propension collaboratif. Des études de simulation ont été effectuées en vue de comparer différentes approches de sélection de variables. Pour terminer, nous avons présenté une application sur l'impact de la COVID-19 sur les Canadiens.

INTRODUCTION

Administrative data, or non-probability sample data, are increasingly being used in practice to obtain official statistics due to their many benefits over survey methods (lower cost, larger sample size, no reliance on the response rate). However, it is difficult to obtain unbiased estimates of population parameters from such data due to the absence of design weights. For example, the sample mean of an outcome in a non-probability sample would not necessarily represent the population mean of the outcome. Several approaches have been proposed recently using an auxiliary probability sample which provides representative covariate information of the target population. For example, one can estimate the mean outcome in the probability sample by using an outcome regression based approach. Unfortunately, this approach relies on the correct specification of a parametric outcome model. Valliant & Dever (2011) used inverse probability weighting to adjust a volunteer web survey to make it representative of a larger population. Elliott & Valliant (2017) proposed an approach to model the indicator representing inclusion in the non-probability sample by adapting Bayes' rule. Rafei et al. (2020) extended the Bayes' rule approach using Bayesian Additive Regression Trees (BART).
Chen (2016) proposed to calibrate non-probability samples using probability samples with the least absolute shrinkage and selection operator (LASSO). In the same context, Beaumont & Chu (2020) proposed a tree-based approach for estimating the propensity score, defined as the probability that a unit belongs to the non-probability sample. Wisniowski et al. (2020) developed a Bayesian approach for integrating probability and non-probability samples for the same goal. Doubly robust semiparametric methods such as the augmented inverse propensity weighted (AIPW) estimator (Robins, Rotnitzky and Zhao, 1994) and targeted minimum loss-based estimation (TMLE; van der Laan & Rubin, 2006; van der Laan & Rose, 2011) have been proposed to reduce the potential bias in the outcome regression based approach. The term doubly robust comes from the fact that these methods require estimation of both the propensity score model and the outcome expectation conditional on covariates, of which only one needs to be correctly modeled to allow for consistent estimation of the parameter of interest. Chen, Li & Wu (2019) developed doubly robust inference with non-probability survey samples by adapting the Newton-Raphson procedure to this setting. Reviews and discussions of related approaches can be found in Beaumont (2020) and Rao (2020). Chen, Li & Wu (2019) considered the situation where the auxiliary variables are given, i.e. where the set of variables to include in the propensity score model is known. However, in practice or in high-dimensional data, variable selection for the propensity score may be required, and it is not a straightforward task even for a subject matter expert. In order to have unbiased estimation of the population mean, it is important to control for the variables that influence selection into the non-probability sample and are also causes of the outcome (VanderWeele & Shpitser, 2011).
Studies have shown that including instrumental variables (those that affect selection into the non-probability sample but not the outcome) in the propensity score model leads to inflation of the variance of the estimator relative to estimators that exclude such variables (Schisterman et al., 2009; Schneeweiss et al., 2009). However, including variables that are only related to the outcome in the propensity score model will increase the precision of the estimator without affecting bias (Brookhart et al., 2006; Shortreed & Ertefaie, 2017). Using the Chen, Li & Wu (2019) estimator for doubly robust inference with a non-probability sample, Yang, Kim & Song (2020) proposed a two-step approach for variable selection for the propensity score using the smoothly clipped absolute deviation (SCAD; Fan & Li, 2001). Briefly, they used SCAD to select variables for the outcome model and the propensity score model separately. Then, the union of the two sets is taken to obtain the final set of selected variables. To the best of our knowledge, their paper is the first to investigate a variable selection method in this context. In causal inference, multiple variable selection methods have been proposed for the propensity score model. We consider two in particular. Shortreed & Ertefaie (2017) developed the outcome adaptive LASSO (OALASSO). This approach uses the adaptive LASSO (Zou, 2006) but with weights in the penalty term that are the inverse of the estimated covariate coefficients from a regression of the outcome on the treatment and the covariates. Benkeser, Cai & van der Laan (2019) proposed a collaborative TMLE (CTMLE) that is robust to extreme values of propensity scores in causal inference. Rather than estimating the true propensity score, this method instead fits a model for the probability of receiving the treatment (or being in the non-probability sample, in our context) conditional on the estimated conditional mean outcome.
Because the treatment model is conditional on a single-dimensional covariate, this approach avoids the challenges related to variable and model selection in the propensity score model. In addition, it relies on only sub-parametric rates of convergence of the outcome model predictions. In this paper, we firstly propose a variable selection approach in high-dimensional covariate settings by extending the outcome adaptive LASSO (Shortreed & Ertefaie, 2017). The gain of the present proposal relative to the existing SCAD estimator (Yang, Kim & Song, 2020) is that the OALASSO can accommodate both the outcome and the selection mechanism in a one-step procedure. Secondly, we adapt the Benkeser, Cai & van der Laan (2019) collaborative propensity score to our setting. Finally, we perform simulation studies in order to verify the performance of our two proposed estimators and compare them with the existing SCAD estimator for the estimation of the population mean. The remainder of the article is organized as follows. In Section 2, we define our setting and describe our proposed estimators. In Section 3, we present the results of the simulation study. We present an analysis of the impact of Covid-19 on Canadians in Section 4. A discussion is provided in Section 5.

Methods

In this section, we present the two proposed estimators in our setting: 1) an extension of the OALASSO for the propensity score (Shortreed & Ertefaie, 2017), and 2) the application of Benkeser, Cai & van der Laan's (2020) alternative propensity score.

The framework

Let U = {1, 2, ..., N} be indices representing members of the target population. Define {X, Y} as the auxiliary and response variables, respectively, where X = (1, X^(1), X^(2), ..., X^(p)) is a vector of covariates (plus an intercept term) for an arbitrary individual. The finite target population data consist of {(X_i, Y_i), i ∈ U}. Let the parameter of interest be the finite population mean µ = N^{-1} Σ_{i∈U} Y_i.
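To see why this parameter cannot be estimated by a naive average over a selected subset, consider a toy simulation (all distributions below are illustrative, not from the paper): when inclusion in a non-probability sample depends on a covariate that also drives Y, the unweighted sample mean over the selected units is biased for the finite population mean.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.normal(size=N)                  # covariate driving both selection and outcome
y = x + rng.normal(size=N)              # outcome
mu = y.mean()                           # finite population mean (target parameter)

p = 1.0 / (1.0 + np.exp(-(x - 1.0)))    # selection probability, increasing in x
delta = rng.binomial(1, p)              # inclusion indicator for the selected sample
naive = y[delta == 1].mean()            # unweighted mean over the selected units
```

Here `naive` overshoots `mu` by roughly 0.6, because units with large x (and hence large Y) are over-represented among the selected units.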
Let A be the set of indices for the non-probability sample and B the set for the probability sample. As illustrated in Figure 1, A and B are possibly overlapping subsets of U. Let d_i = 1/π_i be the design weight for unit i, where π_i = P(i ∈ B) is known. The data corresponding to B consist of observations {(X_i, d_i) : i ∈ B} with sample size n_B. The data corresponding to the non-probability sample A consist of observations {(X_i, Y_i) : i ∈ A} with sample size n_A. Define ∆_i = I(i ∈ A) and I_{B,i} = I(i ∈ B) as the sample membership indicators, and let O_i = {X_i, ∆_i, I_{B,i}, ∆_i Y_i, I_{B,i} d_i} represent the i-th subject's data realization. Let p_i = P(∆_i = 1 | X_i) be the propensity score (the probability of the unit belonging to A). In order to identify the target parameter, we assume the following conditions in the finite population: (1) ignorability, such that the selection indicator ∆ and the response variable Y are independent given the set of covariates X (i.e. ∆ ⊥ Y | X), and (2) positivity, such that p_i > ε > 0 for all i. Note that assumption (1) implies that E(Y|X) = E(Y|X, ∆ = 1), which means that the conditional expectation of the outcome can be estimated using only the non-probability sample A. Assumption (2) guarantees that all units have a non-zero probability of belonging to the non-probability sample.

Estimation of the propensity score

Assume for now that the propensity score follows a logistic regression model, with p_i = p(X_i, β) = exp(X_i^T β)/{1 + exp(X_i^T β)}. The true parameter value β_0 is defined as the maximizer of the population log-likelihood

$$\ell(\beta) = \sum_{i=1}^{N} \left[ \Delta_i \log\{p(X_i,\beta)\} + (1-\Delta_i)\log\{1-p(X_i,\beta)\} \right],$$

with the summation taken over the target population. One can rewrite this in terms of the observed data as

$$\beta_0 = \arg\max_\beta \left\{ \sum_{i\in A} X_i^T\beta - \sum_{i=1}^{N} \log\big(1+e^{X_i^T\beta}\big) \right\}. \qquad (1)$$

Equation (1) cannot be solved directly since X has not been observed for all units in the finite population.
However, using the design weights of the probability sample B, β_0 can be estimated by maximizing the pseudo log-likelihood

$$\hat\beta = \arg\max_\beta \, L(\beta), \qquad L(\beta) = \sum_{i\in A} X_i^T\beta - \sum_{i\in B} d_i \log\big(1+e^{X_i^T\beta}\big). \qquad (2)$$

Let X_B be the matrix of auxiliary information (the design matrix) of the sample B. Define

$$U(\beta) = \frac{\partial L(\beta)}{\partial \beta} = \sum_{i\in A} X_i - \sum_{i\in B} d_i p_i X_i,$$

the gradient of the objective, and

$$H(\beta) = -\sum_{i\in B} d_i p_i (1-p_i) X_i X_i^T = X_B^T S_B X_B,$$

its Hessian, where S_i = -d_i p_i(1-p_i) and S_B = diag(S_i ; i ∈ B). The parameter β in equation (2) can be obtained through the Newton-Raphson iterative procedure proposed in Chen, Li & Wu (2019), by setting

$$\beta^{(t+1)} = \beta^{(t)} - H\{\beta^{(t)}\}^{-1}\, U\{\beta^{(t)}\}.$$

Variable selection for the propensity score

In a high-dimensional setting, suppose that an investigator would like to choose relevant auxiliary variables for the propensity score that could help to reduce the selection bias and the standard error when estimating the finite population mean. In the causal inference context of estimating the average treatment effect, Shortreed & Ertefaie (2017) proposed the OALASSO to select amongst the X^(j)s in the propensity score model. They penalized the negative of the objective above with the adaptive LASSO penalty (Zou, 2006), where the coefficient-specific weights are the inverses of estimated outcome regression coefficients representing the association between the outcome Y and the corresponding covariate. In our setting, let α_j denote the true coefficients of a regression of Y on X. The parameters β = (β_0, β_1, ..., β_p), corresponding to the covariate coefficients in the propensity score, can be estimated by minimizing the penalized negative pseudo log-likelihood:

$$\hat\beta = \arg\min_\beta \left\{ -\sum_{i\in A} X_i^T\beta + \sum_{i\in B} d_i \log\big(1+e^{X_i^T\beta}\big) + \lambda \sum_{j=1}^{p} \hat\omega_j |\beta_j| \right\}, \qquad (3)$$

where ω̂_j = 1/|α̂_j|^γ for some γ > 0 and α̂_j is a √n-consistent estimator of α_j.
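A direct numerical sketch of the Newton-Raphson iteration for the unpenalized pseudo log-likelihood in (2) might look as follows (variable names are ours; the logistic model is assumed):

```python
import numpy as np

def estimate_beta(XA, XB, d, tol=1e-10, max_iter=100):
    """Newton-Raphson for the design-weighted pseudo log-likelihood
    L(beta) = sum_A x_i'beta - sum_B d_i log(1 + exp(x_i'beta)).
    XA: covariates (with intercept column) of the non-probability sample A;
    XB: covariates of the probability sample B; d: design weights for B."""
    beta = np.zeros(XB.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-XB @ beta))       # p_i on sample B
        U = XA.sum(axis=0) - XB.T @ (d * p)        # gradient of L
        H = -(XB.T * (d * p * (1.0 - p))) @ XB     # Hessian of L
        step = np.linalg.solve(H, U)
        beta = beta - step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

Since L is concave in β, the iteration converges rapidly from the zero vector in well-conditioned problems.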
Consider first the situation where variable selection is not needed (λ = 0). Chen, Li & Wu (2019) proposed to estimate β by the Newton-Raphson iterative procedure. Replacing the sum over A by its design-weighted analogue over B, one can rewrite the gradient as

$$U(\beta) = \sum_{i\in B} (\Delta_i d_i - p_i d_i) X_i = X_B^T(\Sigma_B - Z_B),$$

with vectors Z_B = (p_i d_i ; i ∈ B) and Σ_B = (∆_i d_i ; i ∈ B). The Newton-Raphson update step can then be written as

$$\beta^{(t+1)} = \beta^{(t)} - (X_B^T S_B X_B)^{-1} X_B^T (\Sigma_B - Z_B) = (X_B^T S_B X_B)^{-1} X_B^T S_B Y^*, \qquad (4)$$

where Y^* = X_B β^{(t)} - S_B^{-1}(Σ_B - Z_B). Equation (4) is equivalent to the estimator of the weighted least squares problem with Y^* as the new working response and S_i = -d_i p_i(1-p_i) as the weight associated with unit i. Thus, in our context as well, we can select the important variables in the propensity score by solving a weighted least squares problem penalized with an adaptive LASSO penalty.

Implementation of OALASSO

We now describe how our proposal can be implemented in a two-stage procedure. In the first stage, we construct the pseudo-outcome using the Newton-Raphson estimate of β defined in equation (2) and the probability sample B. In the second stage, using sample B, we solve a weighted penalized least squares problem with the pseudo-outcome as the response variable. The selected variables correspond to the non-zero coefficients of the adaptive LASSO regression. The proposed algorithm for estimating the parameters β in equation (3) with a given value of λ is as follows:

Algorithm 1 OALASSO for propensity score estimation
1: Use the Newton-Raphson algorithm for the unpenalized logistic regression in Chen, Li & Wu (2019) to estimate β̂ in (2).
2: Obtain the estimated propensity score p̂_i = p(X_i, β̂) for each unit.
3: Construct an estimate of the new working response Y^* by plugging in the estimated β̂.
4: Select the useful variables by following steps (a)-(d) below:
(a) Define S_i = -d_i p̂_i(1 - p̂_i) for each unit in B.
(b) Run a parametric regression of Y on X using sample A.
Obtain α̌_j, the estimated coefficient of X^(j), j = 1, ..., p.
(c) Define the adaptive LASSO weights ω̂_j = 1/|α̌_j|^γ, j = 1, ..., p, for γ > 0.
(d) Using sample B, run a LASSO regression of Y^* on X with ω̂_j as the penalty factor associated with X^(j) for the given λ:

$$\hat\beta = \arg\min_\beta \sum_{i\in B} S_i (Y_i^* - \beta^T X_i)^2 + \lambda \sum_{j=1}^{p} \hat\omega_j |\beta_j|.$$

(e) The non-zero coefficient estimates from (d) are the selected variables.
5: The final estimate of the propensity score is p̃_i = p(X_i, β̃) = exp(X_i^T β̃)/{1 + exp(X_i^T β̃)}.

For the adaptive LASSO tuning parameters, we choose γ = 1 (the nonnegative garrote; Yuan & Lin, 2007), and λ is selected using V-fold cross-validation in the sample B. The sampling design needs to be taken into account when creating the V folds, in the same way that we form random groups for variance estimation (Wolter, 2007). For cluster or stratified sampling, for example, all elements in a cluster or stratum should be placed in the same fold.

SCAD variable selection for the propensity score

Yang, Kim & Song (2020) proposed a two-step approach for variable selection using SCAD. In the first step, they used SCAD to select relevant variables for the propensity score and the outcome model, respectively. Denote by C_p (respectively C_m) the selected set of relevant variables for the propensity score (respectively the outcome model). The final set of variables used for estimation is C = C_p ∪ C_m. Horvitz & Thompson (1952) proposed the idea of weighting observed values by inverse probabilities of selection in the context of sampling methods. The same idea is used to estimate the population mean in the missing outcome setting. Recall that p(X) = P(∆ = 1|X) is the propensity score. In order to estimate the population mean, the units in the non-probability sample A are assigned the weights w_i = 1/p̃_i, where p̃_i = p(X_i, β̃) is the estimated propensity score obtained using Algorithm 1.
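With estimated propensity scores in hand, the IPW estimator built from these weights, and the AIPW estimator presented in the following subsection, reduce to a few lines each. The sketch below gives only the point estimators (the variance estimators are omitted; function names are ours):

```python
import numpy as np

def ipw_mean(yA, pA):
    """IPW estimator: weighted mean of Y over sample A with w_i = 1/p_i."""
    w = 1.0 / pA
    return np.sum(w * yA) / np.sum(w)

def aipw_mean(yA, pA, mA, mB, dB):
    """AIPW point estimator of Chen, Li & Wu (2019):
    (1/N_A) sum_A (y_i - m(x_i))/p_i + (1/N_B) sum_B d_i m(x_i),
    where mA, mB are outcome-model predictions on samples A and B."""
    NA = np.sum(1.0 / pA)     # hat N_A
    NB = np.sum(dB)           # hat N_B
    return np.sum((yA - mA) / pA) / NA + np.sum(dB * mB) / NB
```

Note that when the outcome model fits A perfectly (mA equal to yA), the AIPW estimate collapses to the design-weighted mean of the predictions over B.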
Inverse weighted estimators

The inverse probability weighted (IPW) estimator for the population mean is given by

$$\hat\mu_n^{IPW} = \frac{1}{\hat N}\sum_{i\in A} w_i Y_i, \qquad \hat N = \sum_{i\in A} w_i.$$

For the estimation of the variance, we use the variance estimator proposed by Chen, Li & Wu (2019), given by

$$\hat V(\hat\mu_n^{IPW}) = \frac{1}{\hat N_A^2}\sum_{i\in A}(1-\hat p_i)\left(\frac{Y_i - \hat\mu_n^{IPW}}{\hat p_i} - \hat b_2^T X_i\right)^2 + \hat b_2^T \hat D\, \hat b_2,$$

with D̂ = N̂_B^{-2} V_p(Σ_{i∈B} d_i p̂_i X_i), where V_p(·) denotes the design-based variance of the total under the probability sampling design for B, and

$$\hat b_2 = \left[\sum_{i\in A}\left(\frac{1}{\hat p_i}-1\right)(Y_i - \hat\mu_n^{IPW})\, X_i^T\right]\left[\sum_{i\in B} d_i \hat p_i (1-\hat p_i)\, X_i X_i^T\right]^{-1}.$$

Augmented inverse probability weighting

Doubly robust semi-parametric methods such as AIPW (Scharfstein, Rotnitzky & Robins, 1999) or targeted minimum loss-based estimation (TMLE; van der Laan & Rubin, 2006; van der Laan & Rose, 2011) have been proposed to potentially reduce the error resulting from misspecified outcome regressions while also avoiding total dependence on the propensity score model specification. We denote m(X) = E(Y|X) and let m̂(X) be an estimate of m(X). Under the current setting, the AIPW estimator proposed in Chen, Li & Wu (2019) for µ is

$$\hat\mu_n^{AIPW} = \frac{1}{\hat N_A}\sum_{i\in A}\frac{Y_i - \hat m(X_i)}{p(X_i,\hat\beta)} + \frac{1}{\hat N_B}\sum_{i\in B} d_i\, \hat m(X_i),$$

where N̂_A = Σ_{i∈A} 1/p(X_i, β̂) and N̂_B = Σ_{i∈B} d_i. Its variance estimator is

$$\hat V(\hat\mu_n^{AIPW}) = \frac{1}{\hat N_A^2}\sum_{i\in A}(1-\hat p_i)\left(\frac{Y_i - \hat m(X_i) - \hat H_N}{\hat p_i} - \hat b_3^T X_i\right)^2 + \hat W,$$

with Ĥ_N = N̂_A^{-1} Σ_{i∈A} {Y_i − m̂(X_i)}/p̂_i, t̂_i = p̂_i X_i^T b̂_3 + m̂(X_i) − N̂_B^{-1} Σ_{i∈B} d_i m̂(X_i), and Ŵ = N̂_B^{-2} V_p(Σ_{i∈B} d_i t̂_i), where V_p(·) again denotes the design-based variance of the total under the probability sampling design for B, and

$$\hat b_3 = \left[\sum_{i\in A}\left(\frac{1}{\hat p_i}-1\right)\{Y_i - \hat m(X_i) - \hat H_N\}\, X_i^T\right]\left[\sum_{i\in B} d_i \hat p_i (1-\hat p_i)\, X_i X_i^T\right]^{-1}.$$

Simulation study

Data generation and parameter estimation

We consider a simulation setting similar to that of Chen, Li & Wu (2019), but add 40 pure binary noise covariates (unrelated to the selection mechanism or the outcome) to our set of covariates. We generate a finite population F_N = {(X_i, Y_i) : i = 1,
..., N} with N = 10,000, where Y is the outcome variable and X = {X^(1), ..., X^(p)}, p = 44, represents the auxiliary variables. Define Z_1 ∼ Bernoulli(0.5), Z_2 ∼ Uniform(0, 2), Z_3 ∼ Exponential(1) and Z_4 ∼ χ²(4). The observed outcome Y is Gaussian with mean θ = 2 + 0.6X^(1) + 0.6X^(2) + 0.6X^(3) + 0.6X^(4), where X^(1) = Z_1, X^(2) = Z_2 + 0.3X^(1), X^(3) = Z_3 + 0.2{X^(1) + X^(2)}, and X^(4) = Z_4 + 0.1{X^(1) + X^(2) + X^(3)}; that is, Y = θ + ε with ε ∼ N(0, 1). From the finite population, we select a probability sample B of size n_B ≈ 500 under Poisson sampling with probability π ∝ {0.25 + X^(2) + 0.03Y}. We also consider three scenarios for selecting a non-probability sample A with inclusion indicator ∆ ∼ Bernoulli(p):

• Scenario 1 considers a situation in which the confounders X^(1) and X^(2) (common causes of inclusion and the outcome) have a weaker relationship with inclusion (∆ = 1) than with the outcome: P(∆ = 1|X) = expit{−2 + 0.3X^(1) + 0.3X^(2) − X^(5) − X^(6)}.

• Scenario 2 considers a situation in which both confounders X^(1) and X^(2) have a weaker relationship with the outcome than with inclusion: P(∆ = 1|X) = expit{−2 + X^(1) + X^(2) − X^(5) − X^(6)}.

• Scenario 3 involves a stronger association between the instrumental variables X^(5) and X^(6) and inclusion: P(∆ = 1|X) = expit{−2 + X^(1) + X^(2) − 1.8X^(5) − 1.8X^(6)}.

To evaluate the performance of our method in a nonlinear setting (Scenario 4), we simulate a fourth setting following exactly Kang & Schafer (2007). In this scenario, we generate independent Z^(i) ∼ N(0, 1), i = 1, ..., 4. The observed outcome is generated as Y = 210 + 27.4Z^(1) + 13.7Z^(2) + 13.7Z^(3) + 13.7Z^(4) + ε, where ε ∼ N(0, 1), and the true propensity model is P(∆ = 1 | Z) = expit{−Z^(1) + 0.5Z^(2) − 0.25Z^(3) − 0.1Z^(4)}.
However, the analyst observes the variables $X^{(1)} = \exp\{Z^{(1)}/2\}$, $X^{(2)} = Z^{(2)}/[1 + \exp\{Z^{(1)}\}] + 10$, $X^{(3)} = \{Z^{(1)}Z^{(3)}/25 + 0.6\}^3$, and $X^{(4)} = \{Z^{(2)} + Z^{(4)} + 20\}^2$ rather than the $Z^{(j)}$s. The parameter of interest is the population mean $\mu_0 = N^{-1}\sum_{i=1}^{N} Y_i$. Under each scenario, we use a correctly specified outcome regression model for the estimation of m(X). For the estimation of the propensity score, we perform logistic regression with all 44 auxiliary variables as main terms, LASSO, and OALASSO, respectively. For the Benkeser method, we also use logistic regression for the propensity score. Because the fourth scenario involves model selection but not variable selection, we only compare logistic regression with the Benkeser method for the propensity score. We fit a misspecified model and the highly adaptive LASSO (Benkeser & van der Laan, 2016) for the outcome model. The performance of each estimator is evaluated through the percent bias (%B), the mean squared error (MSE) and the coverage rate (COV), computed as

$\%B = \frac{1}{R}\sum_{r=1}^{R}\frac{\hat{\mu}_r - \mu}{\mu} \times 100, \qquad MSE = \frac{1}{R}\sum_{r=1}^{R}(\hat{\mu}_r - \mu)^2, \qquad COV = \frac{1}{R}\sum_{r=1}^{R} I(\mu \in CI_r),$

respectively, where $\hat{\mu}_r$ is the estimator computed from the r-th simulated sample, $CI_r = (\hat{\mu}_r - 1.96\sqrt{\hat{v}_r},\ \hat{\mu}_r + 1.96\sqrt{\hat{v}_r})$ is the confidence interval with $\hat{v}_r$ the estimated variance using the method proposed by Chen, Li & Wu (2019) for the r-th simulation sample, and R = 1000 is the total number of simulation runs.

Results

Tables 2, 3 and 4 contain the results for the first three scenarios. In all three, the IPW estimators performed the worst overall in terms of percent bias. Similar to Chen, Li & Wu (2019), the coverage rates of IPW were suboptimal in all scenarios and the standard error was substantially underestimated. The AIPW estimator, implemented with logistic regression, LASSO and OALASSO for the propensity score, performed very well in all scenarios, with unbiased estimates and coverage rates close to the nominal 95%.
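The three evaluation metrics above can be computed directly from the replicate estimates; a small sketch (function and variable names are ours, not from the paper):

```python
import math


def simulation_metrics(estimates, variances, mu, z=1.96):
    """Percent bias, MSE, and empirical coverage of Wald-type
    confidence intervals over R simulation replicates."""
    r = len(estimates)
    pct_bias = 100.0 * sum((m - mu) / mu for m in estimates) / r
    mse = sum((m - mu) ** 2 for m in estimates) / r
    cov = sum(
        1 for m, v in zip(estimates, variances)
        if m - z * math.sqrt(v) <= mu <= m + z * math.sqrt(v)
    ) / r
    return pct_bias, mse, cov
```

Coverage here counts the fraction of replicates whose Wald interval contains the true mean, exactly as in the COV formula.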
In comparison to IPW and AIPW with logistic regression, incorporating the LASSO or the OALASSO did not improve the bias, but did lower the variance and allowed for better standard error estimation. The Benkeser method slightly increased the bias of AIPW and had underestimated standard errors, leading to lower coverage. The Yang method had the highest bias compared to the other implementations of AIPW and greatly overestimated the standard error in all three scenarios. For the first three scenarios, Figure 2 displays the percent selection of each covariate (1, ..., 44), defined as the percentage of estimated coefficients that are non-zero throughout the 1000 generated datasets. Overall, the LASSO tended to select the true predictors of inclusion: $X^{(1)}$, $X^{(2)}$, $X^{(5)}$ and $X^{(6)}$. For example, in Scenario 2, the confounders ($X^{(1)}$, $X^{(2)}$) were selected in around 94% of simulations and the instruments ($X^{(5)}$, $X^{(6)}$) around 90%. However, the percent selection of the pure causes of the outcome ($X^{(3)}$, $X^{(4)}$) was around 23%. On the other hand, when OALASSO was used for the propensity score, the percent selection of the confounders ($X^{(1)}$, $X^{(2)}$) was around 98% and of the instruments ($X^{(5)}$, $X^{(6)}$) was 64%. However, the percent selection of the pure causes of the outcome ($X^{(3)}$, $X^{(4)}$) increased to 83%. When using Yang's proposed selection method, $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$ were selected 100 percent of the time. Table 5 contains the results of the Kang and Schafer (2007) setting. AIPW with HAL for the outcome model and either the collaborative propensity score (AIPW-Benkeser method) or the propensity score from logistic regression with main terms (AIPW-Logistic (2)) achieved lower percent bias and MSE compared to IPW. However, when the outcome model was misspecified, AIPW with logistic regression (AIPW-Logistic (1)) performed like IPW. In this scenario, the true outcome expectation and the propensity score functionals were nonlinear, making typical parametric models misspecified.
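The mechanism that lets OALASSO keep outcome predictors while dropping instruments is the coefficient-specific penalty weight of the adaptive lasso (Zou, 2006; Shortreed & Ertefaie, 2017): covariates deemed unrelated to the outcome receive a large weight and are shrunk exactly to zero. A toy proximal-gradient sketch of that mechanism (illustrative only; this is not the solver used in the paper, and the function names are ours):

```python
import math


def soft_threshold(z, t):
    """Proximal operator of the weighted L1 penalty (soft-thresholding)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0


def adaptive_lasso_logistic(X, y, pen_weights, lam=0.1, step=0.1, iters=300):
    """Toy adaptive-lasso logistic regression via proximal gradient
    descent.  pen_weights[j] scales the L1 penalty on coefficient j;
    a large weight drives that coefficient to exactly zero, which is
    the mechanism OALASSO uses to drop instrument-like covariates."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        # gradient of the average logistic loss
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * x for b, x in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += (mu - yi) * xi[j] / n
        # proximal step with coordinate-specific (adaptive) thresholds
        beta = [
            soft_threshold(beta[j] - step * grad[j],
                           step * lam * pen_weights[j])
            for j in range(p)
        ]
    return beta
```

In an actual OALASSO fit, `pen_weights[j]` would be set to something like |β̂_j| from the outcome model raised to a negative power, so that covariates with weak outcome associations receive heavy penalties.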
Consistent estimation of the outcome expectation can be obtained by using flexible models. The collaborative propensity score was able to reduce the dimension of the space and collect the necessary information using the estimated conditional mean outcome for unbiased estimation of the population mean, with a coverage rate close to nominal.

Table 2: Scenario 1: Estimates taken over 1000 generated datasets. %B (percent bias), MSE (mean squared error), MC SE (Monte Carlo standard error), SE (mean standard error) and COV (percent coverage). IPW-Logistic: IPW with logistic regression for propensity score; IPW-LASSO: IPW with LASSO regression for propensity score; IPW-OALASSO: IPW with OALASSO regression for propensity score; AIPW-Logistic: AIPW with logistic regression for propensity score; AIPW-LASSO: AIPW with LASSO regression for propensity score; AIPW-OALASSO: AIPW with OALASSO regression for propensity score; AIPW-Benkeser: AIPW with the collaborative propensity score; AIPW-Yang: Yang's proposed AIPW.

Table 3: Scenario 2: Estimates taken over 1000 generated datasets; metrics and abbreviations as in Table 2.

Table 4: Scenario 3: Estimates taken over 1000 generated datasets; metrics and abbreviations as in Table 2.
Data Analysis

In this section, we apply our proposed method to a survey which was conducted by Statistics Canada to measure the impacts of COVID-19 on Canadians. The main topic was to determine the level of trust Canadians have in others (elected officials, health authorities, other people, businesses and organizations) in the context of the COVID-19 pandemic. Data was collected from May 26 to June 8, 2020. The dataset was completely non-probabilistic, with a total of 35,916 individuals responding and a wide range of basic demographic information collected from participants along with the main topic variables. The dataset is referred to as Trust in Others (TIO). We consider the Labor Force Survey (LFS) as a reference dataset, which consists of $n_B$ = 89,102 subjects with survey weights. This dataset does not have measurements of the study outcome variables of interest; however, it contains a rich set of auxiliary information in common with the TIO. Summaries (unadjusted sample means for TIO and design-weighted means for LFS) of the common covariates are listed in Tables 8 and 9 in the appendix. It can be seen that the distributions of the common covariates between the two samples are different. Therefore, using TIO only to obtain any estimate about the Canadian population may be subject to selection bias. We apply the proposed methods and the sample mean to estimate the population mean of two response variables.
Both of these variables were assessed as ordinal: $Y_1$, "trust in decisions on reopening, provincial/territorial government" (1: cannot be trusted at all; 2; 3: neutral; 4; 5: can be trusted a lot); and $Y_2$, "when a COVID-19 vaccine becomes available, how likely is it that you will choose to get it?" (1: very likely; 2: somewhat likely; 3: somewhat unlikely; 4: very unlikely; 7: don't know). $Y_1$ was converted to a binary outcome which equals 1 for a value less than or equal to 3 (neutral) and 0 otherwise. The same type of conversion was applied for $Y_2$, which equals 1 for a value less than or equal to 2 (somewhat likely) and 0 otherwise. We used logistic regression, outcome-adaptive group LASSO (Wang & Leng, 2008; Hastie et al., 2008; as we have categorical covariates), and the Benkeser method for the propensity score. We also fit group LASSO for the outcome regression when implementing AIPW. Each categorical covariate in Tables 8 and 9 was converted to binary dummy variables. Using 5-fold cross-validation, the group LASSO variable selection procedure identified all available covariates in the propensity score model. Table 6 below presents the point estimate, the standard error and the 95% Wald-type confidence intervals. For estimating the standard error, we used the variance estimator for IPW and the asymptotic variance for AIPW proposed in Chen, Li & Wu (2019). For both outcomes, we found significant differences in estimates between the naive sample mean and our proposed methods for both AIPW with OA group LASSO and the Benkeser method. For example, the adjusted estimates for $Y_1$ suggested that, on average, at most 40% (using either outcome-adaptive group LASSO or the Benkeser method) of the Canadian population have no trust at all or are neutral in regards to decisions on reopening taken by their provincial/territorial government, compared to 43% if we had used the naive mean.
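The conversion of categorical covariates to binary dummy variables mentioned above can be sketched as follows (a hypothetical helper, not the pipeline actually used for the TIO/LFS analysis):

```python
def to_dummies(values, drop_first=True):
    """Expand one categorical column into 0/1 dummy columns.
    With drop_first=True the first (sorted) level becomes the
    reference category, avoiding collinearity with an intercept."""
    levels = sorted(set(values))
    kept = levels[1:] if drop_first else levels
    return {lvl: [1 if v == lvl else 0 for v in values] for lvl in kept}
```

For a group-penalized model, the dummy columns produced from one categorical covariate would then be penalized together as a single group.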
The adjusted estimates for $Y_2$ suggested that at most 80% using the Benkeser method (or 82% using outcome-adaptive group LASSO) of the Canadian population are very or somewhat likely to get the vaccine, compared to 83% if we had used the naive mean. On the other hand, there were no significant differences between OA group LASSO and group LASSO compared to the naive estimator. The package IntegrativeFPM (Yang, 2019) threw errors during application, which is why it is not included.

Discussion

In this paper, we proposed an approach to variable selection for propensity score estimation through penalization when combining a non-probability sample with a reference probability sample. We also illustrated the application of the collaborative propensity score method of Benkeser, Cai & van der Laan (2020) with AIPW in this context. Through the simulations, we studied the performance of the different estimators and compared them with the method proposed by Yang. We showed that the LASSO and the OALASSO can reduce the standard error and mean squared error in a high-dimensional setting. The collaborative propensity score produced good results, but the related confidence intervals were suboptimal as the true propensity score is not estimated there. Overall, in our simulations, we have seen that doubly robust estimators generally outperformed the IPW estimators. Doubly robust estimators incorporate the outcome expectation in such a way that can help to reduce the bias when the propensity score model is not correctly specified. Our observations point to the importance of using doubly robust methodologies in this context. In our application, we found statistically significant differences in the results between our proposed estimator and the corresponding naive estimator for both outcomes. This analysis used the variance estimator proposed by Chen, Li & Wu (2019), which relies on the correct specification of the propensity score model for IPW estimators.
For future research, it would be quite interesting to develop a variance estimator that is robust to propensity score misspecification and that can be applied to the Benkeser method. Other possible future directions include post-selection variance estimation in this setting.

Figure 1: Population and observed samples.

The parameter β can be estimated using either the Newton-Raphson algorithm in Chen, Li & Wu (2019) or our proposed OALASSO. One can also use the alternative propensity score proposed by Benkeser, Cai & van der Laan (2020), therefore replacing $\hat{p}_i = p(X_i, \hat{\beta})$ by the estimated probability of belonging to the non-probability sample conditional on the estimated outcome regression, $P\{\Delta = 1 \mid \hat{m}(X_i)\}$.

Table 5: Scenario 4 (non-linear model setting): Estimates taken over 1000 generated datasets. %B (percent bias), MSE (mean squared error), MC SE (Monte Carlo standard error), SE (mean standard error) and COV (percent coverage). IPW-Logistic: IPW with logistic regression for propensity score; AIPW-Logistic (1): AIPW with logistic regression for propensity score and a misspecified model for the outcome; AIPW-Logistic (2): AIPW with logistic regression for propensity score and HAL for the outcome model; AIPW-Benkeser: AIPW with the collaborative propensity score.

Figure 2: Percent selection of each variable into the propensity score model over 1000 simulations under scenarios 1-3.

Table 1: Observed data structure (columns: Sample, $X^{(1)}, \ldots, X^{(p)}$, Y, ∆, d). The observed data can be represented as O = {X, ∆, I_B, ∆Y, I_B d}, where ∆ is the indicator which equals 1 if the unit belongs to the nonprobability sample A and 0 otherwise, I_B is the indicator which equals 1 if the unit belongs to the probability sample B and 0 otherwise, and d is the design weight.

Table 6: Point estimate, standard error and 95% Wald confidence interval. IPW-Logistic (Grp LASSO / OA Grp LASSO): IPW with logistic regression (group LASSO / outcome-adaptive group LASSO) for propensity score; AIPW-Logistic (Grp LASSO / OA Grp LASSO): AIPW with logistic regression (group LASSO / outcome-adaptive group LASSO) for propensity score; AIPW-Benkeser: AIPW with the collaborative propensity score.

Y_1:
Sample mean: 0.430 (0.002); 95% CI 0.424-0.435
IPW-Logistic: 0.382 (0.024); 95% CI 0.330-0.430
IPW-Grp LASSO: 0.383 (0.024); 95% CI 0.335-0.431
IPW-OA Grp LASSO: 0.386 (0.024); 95% CI 0.340-0.433
AIPW-Logistic: 0.375 (0.022); 95% CI 0.328-0.415
AIPW-Grp LASSO: 0.372 (0.014); 95% CI 0.344-0.401
AIPW-OA Grp LASSO: 0.373 (0.014); 95% CI 0.348-0.403
AIPW-Benkeser: 0.401 (0.002); 95% CI 0.396-0.406

Y_2:
Sample mean: 0.830 (0.001); 95% CI 0.826-0.834
IPW-Logistic: 0.820 (0.013); 95% CI 0.794-0.847
IPW-Grp LASSO: 0.810 (0.013); 95% CI 0.784-0.836
IPW-OA Grp LASSO: 0.808 (0.013); 95% CI 0.784-0.833
AIPW-Logistic: 0.810 (0.013); 95% CI 0.784-0.837
AIPW-Grp LASSO: 0.796 (0.012); 95% CI 0.774-0.819
AIPW-OA Grp LASSO: 0.796 (0.011); 95% CI 0.775-0.818
AIPW-Benkeser: 0.788 (0.003); 95% CI 0.783-0.794

Table 8: Distributions of common covariates from the two samples (TIO, LFS):
At least College / CEGEP / other non-university certificate: 30568 (85.10%) TIO, 41038 (50.06%) LFS
At least University certificate or diploma below bachelor: 23544 (65.55%) TIO, 22826 (30.50%) LFS

Table 9: Distributions of common covariates from the two samples (TIO, LFS).

ACKNOWLEDGEMENTS

Table 7: Distributions of common covariates from the two samples.

References

Bang, H. & Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61, 962-972.

Beaumont, J. F. (2020). Les enquêtes probabilistes sont-elles vouées à disparaître pour la production de statistiques officielles? Survey Methodology, 46(1), 1-30.
Beaumont, J. F. & Chu, K. (2020). Statistical data integration through classification trees. Report paper for ACSM.

Benkeser, D. & van der Laan, M. J. (2016). The highly adaptive LASSO estimator. In 2016 IEEE International Conference on Data Science and Advanced Analytics, IEEE, 689-696.

Benkeser, D., Cai, W. & van der Laan, M. J. (2020). A nonparametric super-efficient estimator of the average treatment effect. Statistical Science, 35(3), 484-495.

Brookhart, M. A., Schneeweiss, S., Rothman, K. J., Glynn, R. J., Avorn, J. & Sturmer, T. (2006). Variable selection for propensity score models. American Journal of Epidemiology, 163, 1149-1156.

Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5-32.

Chen, K. T. (2016). Using LASSO to Calibrate Non-probability Samples using Probability Samples. Dissertations and Theses (Ph.D. and Master's).

Chen, Y., Li, P. & Wu, C. (2019). Doubly robust inference with non-probability survey samples. Journal of the American Statistical Association, 1-11.

Chu, J., Benkeser, D. & van der Laan, M. J. (2020). Robust inference on the average treatment effect using the outcome highly adaptive lasso. Biometrics, 76, 109-118.

Cole, S. R. & Hernan, M. A. (2008). Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology, 168(6), 656-664.

Elliott, M. R. & Valliant, R. (2017). Inference for nonprobability samples. Statistical Science, 32(2), 249-264.

Fan, J. & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96, 1348-1360.

Gruber, S. & van der Laan, M. J. (2010). An application of collaborative targeted maximum likelihood estimation in causal inference and genomics. International Journal of Biostatistics, 6(1), 18.

Hastie, T., Tibshirani, R. & Friedman, J. (2008). The Elements of Statistical Learning. Springer.

Horvitz, D. G. & Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47, 663-685.

Kang, J. D. Y. & Schafer, J. L. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22(4), 523-539.

Lee, B. K., Lessler, J. & Stuart, E. A. (2011). Weight trimming and propensity score weighting. PLoS One, 6(3).

Rafei, A., Flannagan, A. C. & Elliott, M. R. (2020). Big data for finite population inference: Applying quasi-random approaches to naturalistic driving data using Bayesian Additive Regression Trees. Journal of Survey Statistics and Methodology, 8, 148-180.

Rao, J. N. K. (2020). On making valid inferences by integrating data from surveys and other sources. The Indian Journal of Statistics, https://doi.org/10.1007/s13571-020-00227-w.

Robins, J. M., Rotnitzky, A. & Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427), 846-866.

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688-701.

Scharfstein, D. O., Rotnitzky, A. & Robins, J. M. (1999). Adjusting for nonignorable dropout using semiparametric nonresponse models (with discussion and rejoinder). Journal of the American Statistical Association, 1096-1120 (1121-1146).

Schisterman, E. F., Cole, S. & Platt, R. W. (2009). Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology, 20, 488.

Schneeweiss, S., Rassen, J. A., Glynn, R. J., Avorn, J., Mogun, H. & Brookhart, M. A. (2009). High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology, 20, 512.

Shortreed, S. M. & Ertefaie, A. (2017). Outcome-adaptive lasso: Variable selection for causal inference. Biometrics, 73(4), 1111-1122.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58, 267-288.

Valliant, R. & Dever, J. A. (2011). Estimating propensity adjustments for volunteer web surveys. Sociological Methods & Research, 40, 105-137.

van der Laan, M. J. & Gruber, S. (2010). Collaborative double robust targeted maximum likelihood estimation. The International Journal of Biostatistics, 6, 1-68.

van der Laan, M. J. & Rose, S. (2011). Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Series in Statistics, Springer.

van der Laan, M. J. & Rubin, D. (2006). Targeted maximum likelihood learning. International Journal of Biostatistics, 2.

VanderWeele, T. J. & Shpitser, I. (2011). A new criterion for confounder selection. Biometrics, 67(4), 1406-1413.

Wang, H. & Leng, C. (2008). A note on adaptive group lasso. Computational Statistics & Data Analysis, 52(12), 5277-5286.

Wisniowski, A., Sakshaug, J. W., Ruiz, D. A. P. & Blom, A. G. (2020). Integrating probability and nonprobability samples for survey inference. Journal of Survey Statistics and Methodology, 8, 120-147.

Wolter, K. M. (2007). Introduction to Variance Estimation. Springer Series in Statistics.

Yang, S., Kim, J. K. & Song, R. (2020). Doubly robust inference when combining probability and non-probability samples with high-dimensional data. Journal of the Royal Statistical Society: Series B, 82(2), 445-465.

Yang, S. (2019). IntegrativeFPM R package. https://github.com/shuyang1987/IntegrativeFPM/.

Yuan, M. & Lin, Y. (2007). On the non-negative garrotte estimator. Journal of the Royal Statistical Society: Series B, 69, 143-161.

Zou, H. (2006). The adaptive LASSO and its oracle properties. Journal of the American Statistical Association, 101.
[ "EventGraD: Event-Triggered Communication in Parallel Machine Learning" ]
[ "Soumyadip Ghosh es:[email protected] \nDepartment of Electrical Engineering\nUniversity of Notre Dame\nUSA\n", "Bernardo Aquino \nDepartment of Electrical Engineering\nUniversity of Notre Dame\nUSA\n", "Vijay Gupta \nDepartment of Electrical Engineering\nUniversity of Notre Dame\nUSA\n" ]
[ "Department of Electrical Engineering\nUniversity of Notre Dame\nUSA", "Department of Electrical Engineering\nUniversity of Notre Dame\nUSA", "Department of Electrical Engineering\nUniversity of Notre Dame\nUSA" ]
[]
Communication in parallel systems imposes significant overhead which often turns out to be a bottleneck in parallel machine learning. To relieve some of this overhead, in this paper, we present EventGraD -an algorithm with event-triggered communication for stochastic gradient descent in parallel machine learning. The main idea of this algorithm is to modify the requirement of communication at every iteration in standard implementations of stochastic gradient descent in parallel machine learning to communicating only when necessary at certain iterations. We provide theoretical analysis of convergence of our proposed algorithm. We also implement the proposed algorithm for data-parallel training of a popular residual neural network used for training the CIFAR-10 dataset and show that EventGraD can reduce the communication load by up to 60% while retaining the same level of accuracy. In addition, EventGraD can be combined with other approaches such as Top-K sparsification to decrease communication further while maintaining accuracy.
10.1016/j.neucom.2021.08.143
[ "https://arxiv.org/pdf/2103.07454v2.pdf" ]
232,223,071
2103.07454
dbd03371282c39ad743e6f975e91586532abee3d
EventGraD: Event-Triggered Communication in Parallel Machine Learning

Soumyadip Ghosh*, Bernardo Aquino, Vijay Gupta
Department of Electrical Engineering, University of Notre Dame, USA
Email addresses: [email protected] (Soumyadip Ghosh), [email protected] (Bernardo Aquino), [email protected] (Vijay Gupta)
* Corresponding author

December 10, 2021 (8 Dec 2021). Published article doi: 10.1016/j.neucom.2021.08.143.
Keywords: Machine Learning, Event-Triggered Communication, Parallel Computing

Abstract. Communication in parallel systems imposes significant overhead which often turns out to be a bottleneck in parallel machine learning. To relieve some of this overhead, in this paper, we present EventGraD, an algorithm with event-triggered communication for stochastic gradient descent in parallel machine learning. The main idea of this algorithm is to modify the requirement of communication at every iteration in standard implementations of stochastic gradient descent in parallel machine learning to communicating only when necessary at certain iterations. We provide theoretical analysis of convergence of our proposed algorithm. We also implement the proposed algorithm for data-parallel training of a popular residual neural network used for training the CIFAR-10 dataset and show that EventGraD can reduce the communication load by up to 60% while retaining the same level of accuracy. In addition, EventGraD can be combined with other approaches such as Top-K sparsification to decrease communication further while maintaining accuracy.

Introduction

Artificial intelligence in general, and machine learning in particular, is revolutionizing many aspects of our life [1].
Machine learning (ML) algorithms in various applications have achieved significant benefits through training of a large number of parameters using huge data sets. Focus has now shifted to ensuring that these algorithms can be executed for complex problems in a reasonable amount of time. Initial speed-ups in ML algorithms were due to better algorithm design (e.g. using mini-batches) or hardware (e.g. the introduction of graphics processing units (GPUs)). However, to stay relevant, machine learning must continue to scale up in the size and complexity of the application problem. The challenge is both in the large number of parameters that need to be trained and the consequent large amount of data that needs to be processed. An obvious answer is to go from one processing element, which may have neither the memory nor the computational capability needed for machine learning implementations to solve complex problems, to multiple processing elements (sometimes referred to as parallel or distributed implementations) [2,3]. For instance, there has been a lot of recent interest in machine learning using artificial neural networks on large-scale clusters such as supercomputers [4,5]. Both data-parallel (in which the dataset is divided among multiple processors, with each processor having a copy of the entire neural network model) and model-parallel (in which the neural network model is divided among multiple processors, with each processor having access to the entire dataset) architectures have been considered [6]. For some applications such as federated learning, which involves edge devices such as smartphones or smart speakers, distributed training is often the only choice due to data privacy concerns [7]. One of the biggest challenges of training in any parallel or distributed environment is the overhead associated with communication between different processors or devices.
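For reference, the per-iteration synchronization that standard data-parallel training performs can be sketched in a couple of lines (a framework-agnostic Python sketch; real systems do this with collective operations such as all-reduce):

```python
def average_parameters(replicas):
    """One synchronization step of data-parallel training: every
    processor's parameter vector is replaced by the element-wise
    average across all n processors (here, a list of equal-length
    lists, one per processor)."""
    n = len(replicas)
    return [sum(col) / n for col in zip(*replicas)]
```

Performing this exchange at the end of every iteration is exactly the per-iteration messaging cost that communication-avoiding methods target.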
In high performance computing clusters, communication of messages over networks often takes a lot of time, consumes significant power and can lead to network congestion [8,9,10]. Specifically for parallel machine learning, the processors need to exchange the weights and biases with each other during training before moving to the next training iteration. For example, in a data-parallel architecture, the weights and biases of the different processors are averaged with each other (either directly or through a central parameter server) before executing the next training iteration. Such an exchange usually happens by message passing at the end of every iteration. As the number of processing elements increases, it is widely known that this communication becomes a major bottleneck in these implementations [11,12,13,14]. Consequently, there has been a lot of research aimed at reducing communication in parallel machine learning [11,12,13,14,15]. This rich stream of work has suggested various ways of reducing the size or number of messages as means of alleviating the communication overhead.

In this paper, we propose a novel algorithm to reduce communication in parallel machine learning involving artificial neural networks. Specifically, we utilize the idea of event-triggered communication from control theory to design a class of communication-avoiding machine learning algorithms. In this class of algorithms, communication among the processing elements occurs intermittently and only on an as-needed basis. This leads to a significant reduction in the number of messages communicated among the processing elements. Note that algorithms to reduce communication have been proposed in other applications of parallel computing as well, such as parallel numerical simulation of partial differential equations [16,17,18].
The core idea of our algorithm is to exchange the neural network parameters (the weights and biases) only when a certain criterion related to the utility of the information being communicated is satisfied, i.e., in an event-triggered fashion. We present both theoretical analysis and experimental demonstration of this algorithm. Experimentally, we show that the algorithm can yield the same accuracy as standard implementations while transmitting 60% fewer messages among processors, using a popular residual neural network on the CIFAR-10 dataset. Our implementation is open-source and available at [19]. A reduction in the number of messages implies a reduction in both the time and energy overhead of communication and can prevent congestion in the network. Theoretically, we show that our algorithm has a bound on the convergence rate (in terms of number of iterations) of the order $O\!\left(\frac{1}{\sqrt{Kn}} + \frac{G(K-1)}{\sqrt{K}} + \frac{G_{1/2}^2(K-1)}{\sqrt{K}}\right)$, where $K$ is the number of iterations, $n$ is the number of processors, and $G(K)$ and $G_{1/2}(K)$ are terms related to a bound on the threshold of event-triggered communication. In particular, if the bound on the threshold is chosen to be a sequence that decreases geometrically with the iteration number, the bound on the convergence rate becomes of the order $O\!\left(\frac{1}{\sqrt{Kn}} + \frac{1}{\sqrt{K}}\right)$, which is similar to the asymptotic convergence rate of parallel stochastic gradient descent in general. An earlier version of this algorithm, without theoretical results, was experimentally demonstrated as a proof of concept on the smaller MNIST dataset in [20]. In contrast, this paper contains a comprehensive theoretical treatment of the algorithm with additional experiments on the CIFAR-10 dataset. In our previous work [18], we considered event-triggered communication in the different domain of parallel numerical partial differential equation solvers and highlighted implementation challenges similar to those in this paper.
However, that work considered a fixed threshold without any mathematical treatment, unlike the adaptive threshold and theoretical convergence results studied in this paper. While we focus on data-parallel stochastic gradient descent for the theoretical analysis and experimental verification of the algorithm, the idea of event-triggered communication can be applied to model-parallel and hybrid configurations, and can be extended to other training algorithms as well, such as Adam, RMSProp, etc. Similarly, event-triggered communication can also be used in federated learning, where communication can have a more severe overhead due to the geographical separation between the devices involved in training, such as smartphones.

The paper is organised as follows. Section 2 surveys related work and Section 3 introduces the necessary background. The proposed algorithm is introduced in Section 4, with theoretical analysis in Section 5 and implementation details in Section 6. Section 7 contains the experimental results, followed by the conclusion in Section 8. For notational convenience, we use the abbreviation PE for a processing element, i.e., one core of a processor.

Related Work

In this section, we review some communication-efficient strategies for distributed training of neural networks from the literature. We primarily focus on parallel stochastic gradient descent [21,22]. We also review some works on event-triggered communication and then highlight our specific contributions on using event-triggered communication for parallel training of neural networks.

Parameter Server - A popular approach for parallelization of stochastic gradient descent is the centralized parameter server approach, where multiple workers compute the gradients on their assigned sub-datasets and send them to a central parameter server.
The parameter server updates the neural network model parameters using the individual gradients and sends the updated parameters back to the workers, who then move on to the next iteration. The original approach results in a synchronized algorithm. This requirement of synchronization was relaxed by the Hogwild algorithm [23] where the worker PEs can send gradients to the parameter server asynchronously without any lock step. Elastic Averaging SGD proposed by [24] reduces communication by introducing the notion of an elastic period of communication between the workers and the parameter server. Other approaches have also been proposed in the literature [25]. There have been studies to reduce communication with the parameter server in the context of federated learning as well [26,27,28]. However, the parameter server approach often suffers from poor scalability due to the dependence on a central node, which can become a bottleneck. AllReduce -Another popular approach for parallelization of stochastic gradient descent does not consider a centralized parameter server. Rather, every PE maintains a copy of the model and the PEs average the parameters of the model by communicating in an all-to-all fashion among themselves using a reduction mechanism commonly known as AllReduce [29]. Since such all-to-all communication incurs a lot of overhead, a lot of research has focused on reducing this overhead by using optimized variants. The authors in [14] have proposed one-bit quantization where each gradient update is quantized to 1-bit, resulting in a reduction of the data volume that needs to be communicated. Threshold quantization was developed in [30] where only those gradient updates that are greater than a specified threshold are transmitted after encoding them with a fixed value. A hybrid approach combining both 1-bit and threshold quantization was given in the adaptive quantization proposed in [31]. 
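To make the flavor of these quantization schemes concrete, here is a minimal sketch of 1-bit gradient quantization with an error-feedback residual. This is an illustration in the spirit of the schemes above; the function name and the details are my own, not code from the cited works.

```python
import numpy as np

# Sketch: each gradient entry is transmitted as its sign, scaled by the mean
# magnitude of the (error-corrected) gradient, so only 1 bit per entry plus
# one scalar per tensor needs to be sent. The quantization error is kept
# locally and folded into the next step (error feedback).
def quantize_1bit(grad, residual):
    corrected = grad + residual            # fold in error from the last step
    scale = np.mean(np.abs(corrected))     # one scalar sent per tensor
    q = scale * np.sign(corrected)         # 1 bit per entry (the sign)
    return q, corrected - q                # residual carried to the next step

rng = np.random.default_rng(0)
g = rng.normal(size=8)
q, r = quantize_1bit(g, np.zeros_like(g))
```

By construction, the quantized message plus the retained residual exactly reconstructs the corrected gradient, which is why the error does not accumulate across iterations.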
Deep Gradient Compression in [13] compresses the size of gradients and accumulates the quantization error and momentum to maintain accuracy. Several approaches have been proposed to minimize communication by reducing the precision of gradients, e.g., using half precision (16-bit) for training [32] and mixed precision [33]. Sparsified methods that communicate only the top-k most significant values have been proposed by [34,35]. Combining the two methods of quantization and sparsification is presented in [36]. Reduction with neighbors -Instead of averaging the parameters using all-to-all communication among all the PEs via AllReduce, another approach has been proposed where the averaging is done only with the neighboring PEs in the topology in which the PEs are connected [37]. This approach uses ideas from consensus algorithms which is now widely studied in many different communities [38]. We would like to point out that there is confusion in literature about the term "decentralized" -some works call the AllReduce approach involving averaging among all PEs as decentralized because of the absence of a central parameter server [39], while others call the approach that involves averaging with just the neighboring PEs as decentralized because it does not require any central operation on all the PEs in the topology [37,40]. We adopt the latter usage and call the algorithms that require averaging with just the neighboring PEs as decentralized. While it might seem that such a scheme will converge slower than a centralized approach due to the delayed dissemination of information to all the nodes, the authors in [40] showed that the convergence rate is similar in order to the centralized approach after the same number of iterations provided the number of iterations is sufficiently large. 
While [40] considers that the neighbors of a particular PE remain fixed across iterations, there are interesting gossip algorithms [41,42,43] that choose neighbors randomly and exchange information. More recently, the authors in [15] propose an error-compensated communication compression mechanism in this context using bit-clipping that reduce communication costs. Event-Triggered Communication -Event-Triggered Communication has been proposed as a mechanism for reducing communication in networked control systems [44,45]. Methods employing event-triggered communication in consensus algorithms have been variously proposed, e.g., [46,47,48]. Closely related to consensus is the problem of distributed optimization where there have been various event-triggered approaches proposed [49,50,51]. Of particular relevance is [52] which suggests an event-triggered communication scheme with an adaptive threshold of communication that is dependent on the state of the last trigger instant in a continuous time control system. Our Contribution -The decentralized parallel stochastic gradient descent in [40] still considers that communication of parameters with neighbor PEs happens at every iteration. Since the values of the parameters may not change significantly in every iteration, communication at every iteration may not be necessary. Thus the main idea behind the algorithms presented in this paper is to communicate these parameters in events only when their values change by a certain threshold. We consider the scenario of data-parallel training of a neural network in a high performance computing (HPC) cluster where there are fixed neighboring PEs for every PE and show that communicating in events with neighboring PEs reduces the number of messages passed in the network. Decreasing the message count decreases the overall data to be communicated and as pointed out in literature [11,12,13,14,15], reducing the data to be communicated reduces the overhead associated with communication. 
More concretely, the contributions of our work are:

• We propose an event-triggered communication algorithm where the neural network parameters, i.e., the weights and biases, are communicated only when their norm changes by some threshold. The threshold is chosen in an adaptive manner based on the rate of change of the parameters.

The paper closest to ours seems to be [53], where the authors considered a federated learning scenario, proposed an event-triggered communication scheme for the model parameters based on thresholds dependent on the learning rate, and showed a reduction in communication for distributed training. Compared to that work, we consider an adaptive threshold rather than selecting the same threshold across all parameters. In particular, the threshold adapts to the local slope of a parameter and can thus adjust to the parameter's evolution, which depends on factors such as the type of the parameter, the neural network model and the dataset. Hence the adaptive threshold makes our algorithm robust to different neural network models and different datasets. Our theoretical results are based on a generic bound on the threshold, unlike [53], which provides a bound for a particular form of threshold dependent on the learning rate. Further, we highlight the implementation challenges of event-triggered communication in an HPC environment, which differs from the federated learning setting considered in [53] that usually involves wireless communication.

Problem Formulation

This section lays the mathematical preliminaries for the main algorithm introduced in the next section. We consider a decentralized communication graph $(V, W)$, where $V$ denotes the set of $n$ PEs and $W \in \mathbb{R}^{n\times n}$ is the symmetric doubly stochastic adjacency matrix. $W_{ii}$ corresponds to the weight of the state of the $i$-th PE, while $W_{ij}$ corresponds to the weight of the state of the $j$-th PE on the state of the $i$-th PE.
We assume that $W$ represents a ring topology that remains fixed throughout the iterations. The objective of data-parallel training of any neural network can be expressed as

\[ \min_{x\in\mathbb{R}^N} f(x) = \frac{1}{n}\sum_{i=1}^{n}\underbrace{\mathbb{E}_{\xi\sim\mathcal{D}}\,F_i(x;\xi)}_{=:f_i(x)}, \tag{1} \]

where $\mathcal{D}$ is the sampling distribution, considered to be the same in every PE. Further, the neural network in every PE is considered to have $N$ parameters; these parameters are the weights and biases of the model. For the mathematical formulation, define the concatenations of all local parameters $X_k$, random samples $\xi_k$, stochastic gradients $\partial F(X_k;\xi_k)$ and expected gradients $\partial f(X_k)$ as

\begin{align*}
X_k &:= [x_{k,1}\ \cdots\ x_{k,n}] \in \mathbb{R}^{N\times n},\\
\xi_k &:= [\xi_{k,1}\ \cdots\ \xi_{k,n}] \in \mathbb{R}^{n},\\
\partial F(X_k;\xi_k) &:= [\nabla F_1(x_{k,1};\xi_{k,1})\ \nabla F_2(x_{k,2};\xi_{k,2})\ \cdots\ \nabla F_n(x_{k,n};\xi_{k,n})] \in \mathbb{R}^{N\times n},\\
\partial f(X_k) &:= [\nabla f_1(x_{k,1})\ \nabla f_2(x_{k,2})\ \cdots\ \nabla f_n(x_{k,n})] \in \mathbb{R}^{N\times n}.
\end{align*}

The algorithm is said to converge to an $e$-approximate solution if

\[ \frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 \le e. \]

The training algorithm for the decentralized stochastic gradient descent of [40] can then be expressed as

\[ X_{k+1} = X_k W - \gamma\,\partial F(X_k;\xi_k), \tag{2} \]

where $\gamma$ is the step size or learning rate. From (2), it is clear that values from neighbor PEs are needed to calculate the values in a particular PE. Thus the parameters of the neural network, i.e., the weights and biases, are communicated between neighboring PEs after every iteration. For details on how to choose $W$ optimally, the reader is referred to [54]. Usually the entries of $W$ are taken to be $\frac{1}{N_i+1}$, where $N_i$ is the number of neighbors of the $i$-th PE; this means that the parameters at the $i$-th PE are averaged with those of its neighbors after every iteration. For the ring topology that we assume, $N_i = 2$ for all $i$. After training concludes, the models in all the PEs are usually averaged to produce one model, which is then evaluated on the test dataset.
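Update (2) can be exercised on a toy problem. The sketch below (my own illustration, not the paper's code) builds the ring-topology matrix with the uniform weights 1/(N_i+1) = 1/3 mentioned above, and runs the decentralized step with hypothetical local objectives f_i(x) = ½‖x − c_i‖², whose gradient is simply x − c_i:

```python
import numpy as np

def ring_weight_matrix(n):
    """Symmetric doubly stochastic W for a ring of n PEs, weights 1/3."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def dpsgd_step(X, W, grads, gamma):
    # update (2): mix parameters with neighbors, then take a gradient step
    return X @ W - gamma * grads

n, N, gamma = 4, 3, 0.1
rng = np.random.default_rng(1)
centers = rng.normal(size=(N, n))   # column i: optimum of PE i's local loss
W = ring_weight_matrix(n)
X = np.zeros((N, n))                # assumption 4: start from X_0 = 0
for _ in range(500):
    X = dpsgd_step(X, W, X - centers, gamma)   # grad of 0.5*||x - c_i||^2
```

With a doubly stochastic $W$, the column average $X\mathbf{1}_n/n$ evolves toward the minimizer of the average objective (the mean of the $c_i$), while the mixing step shrinks the disagreement between PEs.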
This algorithm from [40], named D-PSGD in that paper, is stated in pseudo code in Algorithm A. Since communication between neighboring PEs happens regularly after every iteration, we refer to it as the algorithm with regular communication. We modify this algorithm to include event-triggered communication as proposed in the next section.

Our algorithm works as follows. Every PE tracks the changes in the parameters of its model. When the norm of a particular parameter in a PE has changed by some threshold, that parameter is sent to the neighboring PEs. At other iterations, that particular parameter is not sent, and the neighbors continue updating their own models using the last received version of that parameter. To describe the algorithm mathematically, first define the matrix of previously communicated values $\hat{X}_k$ as

\[ \hat{X}_k := [\hat{x}_{k,1}\ \cdots\ \hat{x}_{k,n}] \in \mathbb{R}^{N\times n}. \tag{3} \]

Each $\hat{x}_{k,i}$ is a vector corresponding to the $N$ parameters, i.e.,

\[ \hat{x}_{k,i} = [\hat{x}_{k,i,1}\ \ldots\ \hat{x}_{k,i,N}] \in \mathbb{R}^{N}. \tag{4} \]

The event-triggered condition can now be expressed as

\[ \hat{x}_{k+1,i,I} = \begin{cases} x_{k+1,i,I} & \text{if } \left\|\hat{x}_{k,i,I} - x_{k+1,i,I}\right\| \ge \delta_{k,i,I},\\[2pt] \hat{x}_{k,i,I} & \text{if } \left\|\hat{x}_{k,i,I} - x_{k+1,i,I}\right\| < \delta_{k,i,I}, \end{cases} \]

where $\delta_{k,i,I}$ is the threshold for the $I$-th parameter in the $i$-th PE at the $k$-th iteration. Consequently, the training algorithm gets modified from (2) to

\[ X_{k+1} = \hat{X}_k W - \gamma\,\partial F\big(\hat{X}_k;\xi_k\big), \tag{5} \]

which represents our algorithm with event-triggered communication. The pseudo code is specified in Algorithm B. The threshold is chosen adaptively as

\[ \delta_{k,i,I} = \underbrace{\frac{\left\|\hat{x}_{k,i,I} - x_{k+1,i,I}\right\|}{k-\hat{k}}}_{\text{Slope}} \times h, \tag{6} \]

where $\hat{k}$ is the iteration corresponding to $\hat{x}_{k,i,I}$, i.e., when the last value was communicated. The intuition behind making the threshold dependent on the slope is to choose it according to the trend of evolution of the parameter. This helps save as much communication as possible while ensuring that communication does not stop entirely, i.e., it still happens once in a while.
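The trigger rule and the slope-based threshold (6) can be sketched for a single parameter as follows. This is my own toy illustration, not the paper's implementation: it operates on the scalar norm of one parameter, and the initial threshold of 0 (so the first change always triggers) is an assumption.

```python
def next_threshold(sent_norm, new_norm, k_sent, k_now, h):
    # slope since the last communicated value, scaled by the horizon h, as in (6)
    slope = abs(new_norm - sent_norm) / max(k_now - k_sent, 1)
    return slope * h

norms = [1.0, 1.4, 1.9, 2.1, 2.15, 2.17, 2.18]  # norm of one parameter per iteration
h = 1                                            # look-ahead horizon (hyperparameter)
delta, k_sent, sent_norm = 0.0, 0, norms[0]
events = []                                      # iterations at which we communicate
for k, nk in enumerate(norms[1:], start=1):
    if abs(nk - sent_norm) >= delta:             # event: change exceeded threshold
        events.append(k)
        delta = next_threshold(sent_norm, nk, k_sent, k, h)
        k_sent, sent_norm = k, nk
```

In this trace the parameter changes quickly at first (events fire, and the threshold rises with the slope) and then flattens, after which no further events fire; only a fraction of the iterations involve communication.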
If a parameter is changing fast, it will soon satisfy the criterion for communication, so a high threshold (due to the high slope) is suitable. However, if the parameter is changing slowly, there might be a long period before the next communication happens, which might slow down convergence of the overall algorithm; hence the threshold is decreased (due to the low slope) to incentivize communication. The horizon $h$ is a hyperparameter chosen by the user. Its purpose is to serve as a look-ahead for calculating the next threshold. It might seem that $h$ requires tuning as well, thereby nullifying its advantages over a static threshold. However, the same value of $h$ can be chosen for the different parameters because the threshold is already modulated by the slope. If the neural network model is changed, e.g., in the depth, width or type of layers, the threshold will adjust accordingly. Choosing a different dataset where the data follows a different distribution is also likely to change the evolution of the neural network parameters, which the adaptive threshold can capture. Thus the adaptive threshold selection mechanism plays a huge role in keeping our algorithm EventGraD portable across multiple models and multiple datasets. The slope is calculated between these two points, and is then multiplied by the horizon to obtain the new adaptive threshold.

Analysis

The theoretical convergence properties of the proposed algorithm are studied in this section. Consider the error, or difference, between the last communicated state and the current state:

\[ \epsilon_{k,i,I} = \hat{x}_{k,i,I} - x_{k,i,I}, \tag{7} \]
\[ \implies E_k = \hat{X}_k - X_k, \tag{8} \]

where $E_k = [\epsilon_{k,1}\ \ldots\ \epsilon_{k,n}] \in \mathbb{R}^{N\times n}$. According to our algorithm, the error is bounded by the corresponding threshold:

\[ \left\|\epsilon_{k,i,I}\right\| \le \delta_{k,i,I}. \tag{9} \]

Since $\delta_{k,i,I}$ is different for different $i$ and different $I$, considering the different values in any theoretical analysis seems intractable.
Rather, we consider the following assumption:

Assumption 1. The thresholds $\delta_{k,i,I}$ can be bounded by a function dependent only on $k$ as
\[ \left\|\delta_{k,i,I}\right\|^2 \le g(k). \tag{10} \]

Assumption 1 makes the analysis of the convergence properties of the algorithm feasible by considering a bound on the thresholds for all parameters in all PEs. Further, we consider the following assumptions, which are usually used in the analysis of SGD algorithms.

3. Bounded variance: The variance of the stochastic gradient $\mathbb{E}_{i\sim\mathcal{U}([n])}\mathbb{E}_{\xi\sim\mathcal{D}_i}\left\|\nabla F_i(x;\xi)-\nabla f(x)\right\|^2$ is bounded for any $x$, with $i$ sampled uniformly from $\{1,\ldots,n\}$ and $\xi$ from the distribution $\mathcal{D}_i$. That is, there are constants $\sigma$ and $\varsigma$ such that
\begin{align*}
\mathbb{E}_{\xi\sim\mathcal{D}_i}\left\|\nabla F_i(x;\xi)-\nabla f_i(x)\right\|^2 &\le \sigma^2, \quad \forall i, \forall x,\\
\mathbb{E}_{i\sim\mathcal{U}([n])}\left\|\nabla f_i(x)-\nabla f(x)\right\|^2 &\le \varsigma^2, \quad \forall x.
\end{align*}

4. We start from $X_0 = 0$, without loss of generality.

Let
\[ C_1 = 1 - \frac{\gamma}{2} - \frac{72\gamma^3 L^2}{C_2(1-\sqrt{\rho})^2}, \qquad C_2 = 1 - \frac{36n\gamma^2 L^2}{(1-\sqrt{\rho})^2}, \qquad G(K) = \sum_{k=0}^{K} g(k), \qquad G_{1/2}(K) = \sum_{k=0}^{K}\sqrt{g(k)}. \tag{11} \]

Theorem 1. Considering the assumptions, we obtain the following convergence rate for the algorithm:
\begin{align}
&\frac{C_1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{\gamma-\gamma^2 L}{2K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\frac{\partial f(X_k)\mathbf{1}_n}{n}\right\|^2 \nonumber\\
&\quad\le \frac{f(0)-f^*}{K} + \frac{\gamma^2 L\sigma^2}{2n} + \left(12C_2^{-1}\gamma^3 nL^2(2L^2+1) + \frac{3\gamma L^2+L+1}{2K} + \frac{72\gamma^3 L^4}{KC_2(1-\sqrt{\rho})^2}\right)G(K-1) \nonumber\\
&\qquad + C_2^{-1}\gamma\rho L^2\,G_{1/2}^2(K-1) + \frac{2n\gamma^3\sigma^2 L^2}{C_2(1-\rho)} + \frac{18n\gamma^3\varsigma^2 L^2}{C_2(1-\sqrt{\rho})^2}. \tag{12}
\end{align}

Proof. Provided in the appendix.

Theorem 1 bounds the convergence of the average of the models in all PEs. In order to obtain a cleaner result, we consider an appropriate learning rate and then state the following corollary. Let
\[ C_3 = \frac{(1-\sqrt{\rho})^2(2L^2+1)}{6\rho L^2}, \qquad C_4 = \frac{7L^2+L+1}{2}. \tag{13} \]

Corollary 1. With an appropriate choice of the learning rate $\gamma$, we have the following convergence rate:
\[ \frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 \le \big(2(f(0)-f^*)+L\big)\left(\frac{1}{K}+\frac{1}{\sqrt{Kn}}\right) + \left(\frac{2C_3}{\sqrt{K}}+\frac{2C_4}{K}\right)G(K-1) + \frac{2}{\sqrt{K}}\,G_{1/2}^2(K-1), \]
provided the total number of iterations $K$ is large enough; in particular,
\[ K \ge \frac{4n^3 L^2}{\sigma^3}\left(f(0)-f^*+\frac{L}{2}\right)\left(\frac{\sigma^2}{1-\rho}+\frac{9\varsigma^2}{(1-\sqrt{\rho})^2}\right), \qquad K \ge \frac{72L^2 n^2}{\sigma^2(1-\sqrt{\rho})^2}, \qquad K \ge \frac{\sqrt{n}(L+1)}{2\rho L^2(\sqrt{n}+\sigma^2)}. \]

Proof. Provided in the appendix.
Corollary 1 shows that the bound on the convergence rate depends on the threshold-related terms $G(K)$ and $G_{1/2}(K)$. When $K$ is large enough, the $\frac{1}{K}$ terms decay faster than the $\frac{1}{\sqrt{K}}$ terms, and therefore the convergence rate is of the order $O\!\left(\frac{1}{\sqrt{Kn}} + \frac{G(K-1)}{\sqrt{K}} + \frac{G_{1/2}^2(K-1)}{\sqrt{K}}\right)$. Note that a threshold of 0 reduces to the regular algorithm in [40]: with $g(k) = 0$ we obtain $G(k) = 0$ and $G_{1/2}(k) = 0$, and hence the rate of convergence reduces to $O\!\left(\frac{1}{K}+\frac{1}{\sqrt{Kn}}\right)$, which is consistent with [40]. Now we provide a more concrete bound by choosing $g(k)$ according to a popular event-triggered threshold specified in [55].

Corollary 2. If $g(k)$ is chosen of the form $g(k) = \alpha\beta^k$, where $\alpha, \beta$ are appropriate constants and $0 < \beta < 1$, then the rate of convergence is of the order $O\!\left(\frac{1}{\sqrt{Kn}}+\frac{1}{\sqrt{K}}\right)$.

Proof. We obtain $G(K-1) = \alpha\,\frac{1-\beta^{K}}{1-\beta}$ and $G_{1/2}^2(K-1) = \alpha\left(\frac{1-\sqrt{\beta}^{\,K}}{1-\sqrt{\beta}}\right)^2$. The corollary then follows by noting that $\left(\frac{2C_3}{\sqrt{K}}+\frac{2C_4}{K}\right)\alpha\,\frac{1-\beta^{K}}{1-\beta} \sim O\!\left(\frac{1}{\sqrt{K}}\right)$ and $\frac{2}{\sqrt{K}}\,\alpha\left(\frac{1-\sqrt{\beta}^{\,K}}{1-\sqrt{\beta}}\right)^2 \sim O\!\left(\frac{1}{\sqrt{K}}\right)$ when $K$ is sufficiently large.

Implementation

There are a lot of popular frameworks for machine learning like PyTorch [20].

Results

We perform experiments to evaluate the performance of our algorithm. Not all parameters can be shown in this paper due to space constraints; therefore we show a few parameters which vary in their style of evolution. The adaptive threshold chosen according to (6) is shown in Fig. 5. Since the threshold is proportional to the local slope, we see for parameters 1 and 3 that higher slopes during the early iterations of training lead to higher thresholds, followed by a decrease in threshold as the slope decreases. For parameters 2 and 5, which stay relatively flat, the threshold also follows a flat trend. It is important to note that since every parameter evolves differently, their thresholds have to be chosen accordingly.
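The geometric threshold bound of Corollary 2 can be sanity-checked numerically: under $g(k)=\alpha\beta^k$, the sums $G(K-1)$ and $G_{1/2}(K-1)$ match their geometric closed forms and stay bounded as $K$ grows, so the extra terms in the rate are $O(1/\sqrt{K})$. The constants below are arbitrary illustrative values.

```python
import math

# g(k) = alpha * beta**k with 0 < beta < 1 (Corollary 2's choice)
alpha, beta, K = 0.5, 0.9, 200

# direct sums G(K-1) = sum g(k) and G_{1/2}(K-1) = sum sqrt(g(k))
G = sum(alpha * beta**k for k in range(K))
G_half = sum(math.sqrt(alpha * beta**k) for k in range(K))

# geometric closed forms
G_closed = alpha * (1 - beta**K) / (1 - beta)
G_half_closed = math.sqrt(alpha) * (1 - math.sqrt(beta)**K) / (1 - math.sqrt(beta))
```

Since both sums are bounded by constants independent of $K$ ($\alpha/(1-\beta)$ and $\sqrt{\alpha}/(1-\sqrt{\beta})$ respectively), dividing them by $\sqrt{K}$ indeed yields terms that vanish at the $1/\sqrt{K}$ rate.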
After demonstrating a similar rate of convergence, we focus on the main advantage of the event-triggered algorithm over the regular algorithm: a reduction in the number of messages communicated while attaining similar accuracy. The reduction in messages is quantified by the percentage of the regular algorithm's messages that is sent in the event-triggered algorithm, reported in Table 1. We see that the event-triggered communication algorithm exchanges approximately 40% of the messages of the regular (baseline) communication algorithm while achieving similar accuracy. As a reminder, the regular algorithm is the D-PSGD algorithm specified in [40]. Approaches such as [14,34] are suited to the asynchronous AllReduce architecture, which is different from the decentralized reduction with just neighbors that we deal with. However, to demonstrate how these approaches can be extended to our decentralized scenario and combined with the event-triggered approach, we focus on the sparsification method of Top-K. Specifically, when an event is triggered, we send just the Top-K percent of the elements of a parameter, i.e., of the weight matrix or the bias vector. Note that for a Top-K percent value of K, 2K percent of the data is being sent, because the indices of the Top-K percent elements have to be sent in addition to their values. Table 2 shows the results of combining Top-K percent sparsification with event-triggered communication.

Before comparing the results in Table 2 with Table 1, we note some important points. Firstly, even though all the simulation details of the event-triggered communication are kept the same between Table 1 and Table 2, the percentages of messages sent differ between them. This is because sending just the Top-K elements of a parameter changes the overall evolution of the neural network, which in turn leads to different adaptive thresholds and hence a different sequence of events.
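Selecting the Top-K percent of a triggered parameter can be sketched as below. This is an illustration of the general technique, not the paper's code: the K% largest-magnitude entries are sent together with their indices, which is why roughly 2K% of the full payload is transmitted.

```python
import numpy as np

def top_k_percent(x, pct):
    """Return indices and values of the pct% largest-magnitude entries of x."""
    flat = x.ravel()
    k = max(1, int(len(flat) * pct / 100))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k |values|
    return idx, flat[idx]

rng = np.random.default_rng(2)
w = rng.normal(size=100)          # a flattened weight matrix of one parameter
idx, vals = top_k_percent(w, 10)  # send 10% of values + 10% of indices
```

The receiver reconstructs a sparse update by writing `vals` into a zero tensor at positions `idx`; all other entries keep their last received values.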
Secondly, we have to consider the percentage of overall communication for Top-K sparsification, in contrast to the percentage of messages considered in Table 1. This is because the objective of Top-K sparsification is to reduce the size of each message sent. Note that in Table 1 the percentage of overall communication is equal to the percentage of messages, since entire parameters are sent during events. However, in the case of Top-K in Table 2, the percentage of overall communication is 2K% of the percentage of messages sent. We now see that the accuracy in Table 2 remains almost similar to that of Table 1, but the overall communication is approximately 7% of that of the regular (baseline) communication algorithm. This is in contrast to the overall communication of around 40% of the baseline for the event-triggered algorithm in Table 1. Thus Top-K sparsification combined with event-triggered communication requires around 1/6-th of the communication required by event-triggered communication alone, while maintaining similar accuracy.

Conclusion

Acknowledgements

Appendix

The proofs for the theoretical results in this paper are provided here. First we state some necessary lemmas; Lemma 1 and Lemma 2 are reproduced from [40].

Lemma 1. Using Assumption 2, we obtain
\[ \left\|\frac{\mathbf{1}_n}{n} - W^k e_i\right\|^2 \le \rho^k, \quad \forall i\in\{1,2,\ldots,n\},\ k\in\mathbb{N}. \]

Proof. Let $W^\infty := \lim_{k\to\infty}W^k$. Because of the assumptions, we get $\frac{\mathbf{1}_n}{n} = W^\infty e_i$ for all $i$, since $W$ is doubly stochastic and $\rho < 1$. Thus
\[ \left\|\frac{\mathbf{1}_n}{n} - W^k e_i\right\|^2 = \left\|(W^\infty - W^k)e_i\right\|^2 \le \left\|W^\infty - W^k\right\|^2\left\|e_i\right\|^2 = \left\|W^\infty - W^k\right\|^2 \le \rho^k. \]

Lemma 2. Under Assumption 2, the following holds for all $j$:
\[ \mathbb{E}\left\|\partial f(X_j)\right\|^2 \le \sum_{h=1}^{n} 3\,\mathbb{E}L^2\left\|\sum_{i'=1}^{n}\frac{x_{j,i'}}{n} - x_{j,h}\right\|^2 + 3n\varsigma^2 + 3\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_j\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2. \]

Proof.
The term $\mathbb{E}\left\|\partial f(X_j)\right\|^2$ is bounded as follows:
\begin{align*}
\mathbb{E}\left\|\partial f(X_j)\right\|^2 &\le 3\,\mathbb{E}\left\|\partial f(X_j) - \partial f\!\left(\frac{X_j\mathbf{1}_n}{n}\mathbf{1}_n^{\top}\right)\right\|^2 + 3\,\mathbb{E}\left\|\partial f\!\left(\frac{X_j\mathbf{1}_n}{n}\mathbf{1}_n^{\top}\right) - \nabla f\!\left(\frac{X_j\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2 + 3\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_j\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\\
&\le 3\,\mathbb{E}\left\|\partial f(X_j) - \partial f\!\left(\frac{X_j\mathbf{1}_n}{n}\mathbf{1}_n^{\top}\right)\right\|_F^2 + 3n\varsigma^2 + 3\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_j\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\\
&\le \sum_{h=1}^{n}3\,\mathbb{E}L^2\left\|\sum_{i'=1}^{n}\frac{x_{j,i'}}{n} - x_{j,h}\right\|^2 + 3n\varsigma^2 + 3\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_j\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2.
\end{align*}

Lemma 3. For any two vectors $a, b$, the following is satisfied:
\[ \|a+b\|^2 \le 2\|a\|^2 + 2\|b\|^2. \tag{14} \]

Proof. We start with $\|a+b\|^2 = \|a\|^2 + 2\langle a,b\rangle + \|b\|^2$. Now $\langle a,b\rangle \le \|a\|\,\|b\| \le \frac{\|a\|^2+\|b\|^2}{2}$, where the first step is the Cauchy-Schwarz inequality and the second is the geometric mean-arithmetic mean inequality. Substituting the above, the inequality follows.

Proof of Theorem 1. We begin with $f\big(\frac{X_{k+1}\mathbf{1}_n}{n}\big)$:
\begin{align}
\mathbb{E}f\!\left(\frac{X_{k+1}\mathbf{1}_n}{n}\right) &= \mathbb{E}f\!\left(\frac{\hat{X}_k W\mathbf{1}_n}{n} - \gamma\,\frac{\partial F(\hat{X}_k;\xi_k)\mathbf{1}_n}{n}\right) = \mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n} - \gamma\,\frac{\partial F(\hat{X}_k;\xi_k)\mathbf{1}_n}{n}\right)\nonumber\\
&\le \mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) - \gamma\,\mathbb{E}\left\langle\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right), \frac{\partial f(\hat{X}_k)\mathbf{1}_n}{n}\right\rangle + \frac{\gamma^2 L}{2}\,\mathbb{E}\left\|\frac{\sum_{i=1}^{n}\nabla F_i(\hat{x}_{k,i};\xi_{k,i})}{n}\right\|^2, \tag{15}
\end{align}
where the last step comes from the general Lipschitz property $f(y) \le f(x) + \nabla f(x)^{\top}(y-x) + \frac{L}{2}\|y-x\|^2$. The last term above is the second-order moment of $\frac{1}{n}\sum_{i=1}^{n}\nabla F_i(\hat{x}_{k,i};\xi_{k,i})$.
Now we can write
\[ \mathbb{E}\left\|\frac{\sum_{i=1}^{n}\nabla F_i(\hat{x}_{k,i};\xi_{k,i})}{n}\right\|^2 = \mathbb{E}\left\|\frac{\sum_{i=1}^{n}\nabla F_i(\hat{x}_{k,i};\xi_{k,i}) - \nabla f_i(\hat{x}_{k,i})}{n}\right\|^2 + \mathbb{E}\left\|\frac{\sum_{i=1}^{n}\nabla f_i(\hat{x}_{k,i})}{n}\right\|^2. \tag{16} \]
Applying (16) in (15), the right-hand side of (15) becomes
\[ \mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) - \gamma\,\mathbb{E}\left\langle\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right), \frac{\partial f(\hat{X}_k)\mathbf{1}_n}{n}\right\rangle + \frac{\gamma^2 L}{2}\,\mathbb{E}\left\|\frac{\sum_{i}\nabla F_i(\hat{x}_{k,i};\xi_{k,i}) - \nabla f_i(\hat{x}_{k,i})}{n}\right\|^2 + \frac{\gamma^2 L}{2}\,\mathbb{E}\left\|\frac{\sum_{i}\nabla f_i(\hat{x}_{k,i})}{n}\right\|^2. \tag{17} \]
For the second-to-last term, we can show that
\[ \mathbb{E}\left\|\frac{\sum_{i}\nabla F_i(\hat{x}_{k,i};\xi_{k,i}) - \nabla f_i(\hat{x}_{k,i})}{n}\right\|^2 = \frac{1}{n^2}\sum_{i=1}^{n}\mathbb{E}\left\|\nabla F_i(\hat{x}_{k,i};\xi_{k,i}) - \nabla f_i(\hat{x}_{k,i})\right\|^2. \tag{18} \]
Applying (18) in (17) and using the bounded-variance assumption, (17) is further bounded by
\[ \mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) - \gamma\,\mathbb{E}\left\langle\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right), \frac{\partial f(\hat{X}_k)\mathbf{1}_n}{n}\right\rangle + \frac{\gamma^2 L\sigma^2}{2n} + \frac{\gamma^2 L}{2}\,\mathbb{E}\left\|\frac{\sum_{i}\nabla f_i(\hat{x}_{k,i})}{n}\right\|^2. \tag{19, 20} \]
Using the property $\langle a,b\rangle = \frac{1}{2}\big(\|a\|^2+\|b\|^2-\|a-b\|^2\big)$, we can rewrite the above as
\[ \mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) + \frac{\gamma^2 L\sigma^2}{2n} - \frac{\gamma}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{\gamma^2 L-\gamma}{2}\,\mathbb{E}\left\|\frac{\sum_{i}\nabla f_i(\hat{x}_{k,i})}{n}\right\|^2 + \frac{\gamma}{2}\underbrace{\mathbb{E}\left\|\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) - \frac{\partial f(\hat{X}_k)\mathbf{1}_n}{n}\right\|^2}_{=:T_1}. \tag{21} \]
Using the Lipschitz property again, we bound the first term as
\begin{align}
\mathbb{E}f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) &\le \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \mathbb{E}\left\langle\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right), \frac{E_k\mathbf{1}_n}{n}\right\rangle + \frac{L}{2}\,\mathbb{E}\left\|\frac{E_k\mathbf{1}_n}{n}\right\|^2\nonumber\\
&\le \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{1}{2}\,\mathbb{E}\left\|\frac{E_k\mathbf{1}_n}{n}\right\|^2 + \frac{L}{2}\,\mathbb{E}\left\|\frac{E_k\mathbf{1}_n}{n}\right\|^2\nonumber\\
&= \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{L+1}{2}\,\mathbb{E}\left\|\frac{E_k\mathbf{1}_n}{n}\right\|^2\nonumber\\
&\le \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{L+1}{2}\,\mathbb{E}\left\|E_k\right\|_F^2\left\|\frac{\mathbf{1}_n}{n}\right\|^2\nonumber\\
&\le \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{L+1}{2}\,n\,g(k)\,\frac{1}{n}\nonumber\\
&= \mathbb{E}f\!\left(\frac{X_k\mathbf{1}_n}{n}\right) + \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(\frac{X_k\mathbf{1}_n}{n}\right)\right\|^2 + \frac{L+1}{2}\,g(k), \tag{22}
\end{align}
where we used the inequality $\mathbb{E}\|E_k\|_F^2 \le n\,g(k)$, with $\|\cdot\|_F$ the Frobenius norm. Now we bound $T_1$ as
\[ T_1 = \mathbb{E}\left\|\nabla f\!\left(\frac{\hat{X}_k\mathbf{1}_n}{n}\right) - \frac{\partial f(\hat{X}_k)\mathbf{1}_n}{n}\right\|^2 \le \frac{1}{n^2}\sum_{i=1}^{n}\mathbb{E}\left\|\nabla f_i\!\left(\sum_{j=1}^{n}\frac{\hat{x}_{k,j}}{n}\right) - \nabla f_i(\hat{x}_{k,i})\right\|^2 \le \frac{L^2}{n^2}\sum_{i=1}^{n}\mathbb{E}\underbrace{\left\|\sum_{j=1}^{n}\frac{\hat{x}_{k,j}}{n} - \hat{x}_{k,i}\right\|^2}_{=:\hat{Q}_{k,i}}, \tag{23} \]
where $\hat{Q}_{k,i}$ is the squared distance of the broadcast variable $i$ from the average of all broadcast local variables.
From (7) and Lemma 3, we can conclude that $\hat{Q}_{k,i} \le 2Q_{k,i} + 2Q^{\epsilon}_{k,i}$, i.e.,
\[ T_1 \le \frac{2L^2}{n^2}\Bigg(\underbrace{\sum_{i=1}^{n}\mathbb{E}\left\|\sum_{j=1}^{n}\frac{x_{k,j}}{n} - x_{k,i}\right\|^2}_{\sum_i Q_{k,i}} + \underbrace{\sum_{i=1}^{n}\mathbb{E}\left\|\sum_{j=1}^{n}\frac{\epsilon_{k,j}}{n} - \epsilon_{k,i}\right\|^2}_{\sum_i Q^{\epsilon}_{k,i}}\Bigg), \tag{24} \]
where $\epsilon_{k,i}$ contains $\epsilon_{k,i,I}$ for all parameters $I \in \{1,\ldots,N\}$. Now we can bound $Q^{\epsilon}_{k,i}$ as
\[ Q^{\epsilon}_{k,i} \le n\,g(k), \tag{25} \]
which implies
\[ \hat{Q}_{k,i} \le 2Q_{k,i} + 2n\,g(k). \tag{26} \]
Now we need to find a bound for $Q_{k,i}$:
\begin{align*}
Q_{k,i} &= \mathbb{E}\left\|\sum_{j=1}^{n}\frac{x_{k,j}}{n} - x_{k,i}\right\|^2 = \mathbb{E}\left\|\frac{X_k\mathbf{1}_n}{n} - X_k e_i\right\|^2 \qquad (e_i\ \text{is the one-hot encoded vector})\\
&= \mathbb{E}\left\|\frac{\hat{X}_{k-1}W\mathbf{1}_n - \gamma\,\partial F(\hat{X}_{k-1};\xi_{k-1})\mathbf{1}_n}{n} - \hat{X}_{k-1}We_i + \gamma\,\partial F(\hat{X}_{k-1};\xi_{k-1})e_i\right\|^2\\
&= \mathbb{E}\left\|\frac{X_{k-1}\mathbf{1}_n}{n} + \frac{E_{k-1}\mathbf{1}_n}{n} - \gamma\,\frac{\partial F(\hat{X}_{k-1};\xi_{k-1})\mathbf{1}_n}{n} - X_{k-1}We_i - E_{k-1}We_i + \gamma\,\partial F(\hat{X}_{k-1};\xi_{k-1})e_i\right\|^2\\
&\overset{X_0=0}{=} \mathbb{E}\left\|\sum_{k'=0}^{k-1}\frac{E_{k'}\mathbf{1}_n}{n} - \sum_{k'=0}^{k-1}E_{k'}W^{k-k'}e_i - \gamma\sum_{k'=0}^{k-1}\partial F(\hat{X}_{k'};\xi_{k'})\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2\\
&\le \mathbb{E}\left\|\sum_{k'=0}^{k-1}E_{k'}\left(\frac{\mathbf{1}_n}{n} - W^{k-k'}e_i\right)\right\|^2 + \gamma^2\,\mathbb{E}\left\|\sum_{k'=0}^{k-1}\partial F(\hat{X}_{k'};\xi_{k'})\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2\\
&\le \sum_{k'=0}^{k-1}\mathbb{E}\|E_{k'}\|_F^2\left\|\frac{\mathbf{1}_n}{n} - W^{k-k'}e_i\right\|^2 + 2\sum_{k'\ne k''}\mathbb{E}\|E_{k'}\|_F\left\|\frac{\mathbf{1}_n}{n} - W^{k-k'}e_i\right\|\,\|E_{k''}\|_F\left\|\frac{\mathbf{1}_n}{n} - W^{k-k''}e_i\right\|\\
&\qquad + \gamma^2\,\mathbb{E}\left\|\sum_{k'=0}^{k-1}\partial F(\hat{X}_{k'};\xi_{k'})\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2\\
&\le \sum_{k'=0}^{k-1}n\,g(k')\,\rho + 2\sum_{k'\ne k''}\sqrt{n\,g(k')}\,\sqrt{\rho}\,\sqrt{\rho}\,\sqrt{n\,g(k'')} + \gamma^2\,\mathbb{E}\left\|\sum_{k'=0}^{k-1}\partial F(\hat{X}_{k'};\xi_{k'})\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2\\
&\le n\rho\underbrace{\left(\sum_{k'=0}^{k-1}\sqrt{g(k')}\right)^2}_{=\,G_{1/2}^2(k-1)} + 2\gamma^2\underbrace{\mathbb{E}\left\|\sum_{k'=0}^{k-1}\big(\partial F(\hat{X}_{k'};\xi_{k'}) - \partial f(\hat{X}_{k'})\big)\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2}_{=:T_2}\\
&\qquad + 2\gamma^2\underbrace{\mathbb{E}\left\|\sum_{k'=0}^{k-1}\partial f(\hat{X}_{k'})\left(\frac{\mathbf{1}_n}{n} - W^{k-1-k'}e_i\right)\right\|^2}_{=:T_3},
\end{align*}
where $G_{1/2}(k) = \sum_{k'=0}^{k}\sqrt{g(k')}$ as defined before.
We then bound $T_2$ as follows:
\[
\begin{aligned}
T_2 &= \mathbb{E}\left\|\sum_{k=0}^{K-1}\left(\partial F(X_k;\xi_k)-\partial f(X_k)\right)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|^2
= \sum_{k=0}^{K-1}\mathbb{E}\left\|\left(\partial F(X_k;\xi_k)-\partial f(X_k)\right)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|^2\\
&\le \sum_{k=0}^{K-1}\mathbb{E}\left\|\partial F(X_k;\xi_k)-\partial f(X_k)\right\|_F^2\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2
\le \sum_{k=0}^{K-1} n\sigma^2\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2
\le n\sigma^2\sum_{k=0}^{K-1}\rho^{K-1-k}
\le \frac{n\sigma^2}{1-\rho} \qquad (27)
\end{aligned}
\]
The bound for $T_3$ is as follows:
\[
\begin{aligned}
T_3 &= \mathbb{E}\left\|\sum_{k=0}^{K-1}\partial f(X_k)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|^2\\
&= \underbrace{\sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|^2}_{:=T_4}
+ \underbrace{\sum_{k\neq k'}\mathbb{E}\left\langle \partial f(X_k)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right),\; \partial f(X_{k'})\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k'}e_i\right)\right\rangle}_{:=T_5} \qquad (28)
\end{aligned}
\]
We will bound $T_4$ and $T_5$ separately. $T_4$ is bounded as:
\[
\begin{aligned}
T_4 &= \sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|^2
\le \sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\right\|^2\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2\\
&\overset{\text{Lemma 2}}{\le} 3\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2 + \frac{3n\varsigma^2}{1-\rho}
+ 3\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\left\|\tfrac{\mathbf{1}_n\mathbf{1}_n^{\top}}{n}-W^{K-1-k}\right\|^2 \qquad (29)
\end{aligned}
\]
$T_5$ is bounded as:
\[
\begin{aligned}
T_5 &\le \sum_{k\neq k'}\mathbb{E}\left\|\partial f(X_k)\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right)\right\|\left\|\partial f(X_{k'})\left(\tfrac{\mathbf{1}_n}{n}-W^{K-1-k'}e_i\right)\right\|\\
&\le \sum_{k\neq k'}\mathbb{E}\left[\frac{\left\|\partial f(X_k)\right\|^2+\left\|\partial f(X_{k'})\right\|^2}{2}\right]\rho^{K-1-\frac{k+k'}{2}}
\le \sum_{k\neq k'}\mathbb{E}\left\|\partial f(X_k)\right\|^2\rho^{K-1-\frac{k+k'}{2}}\\
&\overset{\text{Lemma 2}}{\le} \underbrace{3\sum_{k\neq k'}\left(\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\rho^{K-1-\frac{k+k'}{2}}}_{:=T_6}
+ \underbrace{\sum_{k\neq k'} 3n\varsigma^2\rho^{K-1-\frac{k+k'}{2}}}_{:=T_7} \qquad (30)
\end{aligned}
\]
Now $T_6$ is bounded as follows:
\[
\begin{aligned}
T_6 &\overset{(26)}{\le} 6\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2 Q_{k,i}+2n^2L^2 g(k)+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\sum_{k'=k+1}^{K-1}\left(\sqrt{\rho}\right)^{2K-2-k-k'}\\
&\le 6\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2 Q_{k,i}+2n^2L^2 g(k)+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}} \qquad (31)
\end{aligned}
\]
And $T_7$ is bounded as:
\[
T_7 = 6n\varsigma^2\sum_{k<k'}\rho^{K-1-\frac{k+k'}{2}} \qquad (32)
\le 6n\varsigma^2\left(\frac{1}{1-\sqrt{\rho}}\right)^2 \qquad (33)
\]
Now combining $T_6$ and $T_7$ into $T_5$, and then $T_5$ and $T_4$ into $T_3$, we obtain the following upper bound:
\[
\begin{aligned}
T_3 &\le 3\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2 + \frac{3n\varsigma^2}{1-\rho}
+ 3\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\left\|\tfrac{\mathbf{1}_n\mathbf{1}_n^{\top}}{n}-W^{K-1-k}\right\|^2\\
&\quad + 6\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2 Q_{k,i}+2n^2L^2 g(k)+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}
+ \frac{6n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\le 3\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2
+ 3\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\left\|\tfrac{\mathbf{1}_n\mathbf{1}_n^{\top}}{n}-W^{K-1-k}\right\|^2\\
&\quad + 6\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2 Q_{k,i}+2n^2L^2 g(k)+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}
+ \frac{9n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\le 3\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2
+ 3\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\left\|\tfrac{\mathbf{1}_n\mathbf{1}_n^{\top}}{n}-W^{K-1-k}\right\|^2\\
&\quad + 6\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2 Q_{k,i}+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}
+ \frac{9n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2} + 12n^2L^2G(K-1)
\qquad (34)
\end{aligned}
\]
As a reminder, we have defined $G(k)=\sum_{k'=0}^{k}g(k')$ and $G_{1/2}(k)=\sum_{k'=0}^{k}\sqrt{g(k')}$. Now we plug the above bound, along with the bound on $T_2$, to obtain the following:
\[
\begin{aligned}
\sum_{i=1}^{n}\mathbb{E}\,Q_{K,i}
&\le n\rho G_{1/2}^2(K-1) + 24\gamma^2n^2L^2G(K-1) + \frac{2\gamma^2 n\sigma^2}{1-\rho}
+ 6\gamma^2\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\hat{Q}_{k,i}\left\|\tfrac{\mathbf{1}_n}{n}-W^{K-1-k}e_i\right\|^2\\
&\quad + 6\gamma^2\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\left\|\tfrac{\mathbf{1}_n\mathbf{1}_n^{\top}}{n}-W^{K-1-k}\right\|^2
+ 12\gamma^2\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2Q_{k,i}+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\mathbf{1}_n^{\top}\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}
+ \frac{18\gamma^2 n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\le n\rho G_{1/2}^2(K-1) + 24\gamma^2n^2L^2G(K-1) + \frac{2\gamma^2 n\sigma^2}{1-\rho} + \frac{18\gamma^2 n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}
+ 12\gamma^2\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2\left(Q_{k,i}+n\,g(k)\right)\rho^{K-1-k}\\
&\quad + 6\gamma^2\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\rho^{K-1-k}
+ 12\gamma^2\sum_{k=0}^{K-1}\left(\sum_{i=1}^{n}2\,\mathbb{E}\,L^2Q_{k,i}+\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\right)\frac{\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}\\
&\le n\rho G_{1/2}^2(K-1) + 12\gamma^2n^2(2L^2+1)G(K-1) + \frac{2\gamma^2 n\sigma^2}{1-\rho} + \frac{18\gamma^2 n\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}
+ 12\gamma^2\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\left(\rho^{K-1-k}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}\right)\\
&\quad + 12\gamma^2\sum_{k=0}^{K-1}\sum_{i=1}^{n}\mathbb{E}\,L^2Q_{k,i}\left(\rho^{K-1-k}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}\right)
\qquad (35)
\end{aligned}
\]
Let $M_k = \frac{1}{n}\,\mathbb{E}\sum_{i=1}^{n}Q_{k,i}$, i.e., the expected average of $Q_k$ over all nodes.
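The bounds above repeatedly collapse sums of powers of $\rho$ via the geometric series, e.g. $\sum_{k=0}^{K-1}\rho^{K-1-k} \le \frac{1}{1-\rho}$ in (27) and the analogous bound with $\sqrt{\rho}$ in (31). A quick numerical sanity check of these two inequalities (the values of $\rho$ and $K$ below are illustrative):

```python
# Sanity check of the geometric-series bounds used in (27)-(35):
#   sum_{k=0}^{K-1} rho**(K-1-k)       <= 1/(1-rho)
#   sum_{k=0}^{K-1} sqrt(rho)**(K-1-k) <= 1/(1-sqrt(rho))
rho = 0.64   # spectral gap value, illustrative (any 0 < rho < 1 works)
K = 50       # number of iterations, illustrative

s = sum(rho ** (K - 1 - k) for k in range(K))
s_half = sum((rho ** 0.5) ** (K - 1 - k) for k in range(K))

assert s <= 1 / (1 - rho)
assert s_half <= 1 / (1 - rho ** 0.5)
```

The partial sums equal $\frac{1-\rho^{K}}{1-\rho}$ exactly, so the gap to the bound shrinks geometrically in $K$.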
We then adjust the previous equation to obtain a bound on $M_K$ as follows:
\[
\begin{aligned}
M_K &\le n\rho G_{1/2}^2(K-1) + 12\gamma^2 n^2(2L^2+1)G(K-1) + \frac{2n\gamma^2\sigma^2}{1-\rho} + \frac{18n\gamma^2\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\quad + 12\gamma^2\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\left(\rho^{K-1-k}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}\right)
+ 12n\gamma^2L^2\sum_{k=0}^{K-1}M_k\left(\rho^{K-1-k}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k}}{1-\sqrt{\rho}}\right)
\qquad (36)
\end{aligned}
\]
Now we sum it from $k=0$ to $K-1$ to obtain the following:
\[
\begin{aligned}
\sum_{k=0}^{K-1}M_k
&\le Kn\rho G_{1/2}^2(K-1) + 12K\gamma^2 n^2(2L^2+1)G(K-1) + \frac{2Kn\gamma^2\sigma^2}{1-\rho} + \frac{18Kn\gamma^2\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\quad + 12\gamma^2\sum_{k=0}^{K-1}\sum_{k'=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_{k'}\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\left(\rho^{K-1-k'}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k'}}{1-\sqrt{\rho}}\right)
+ 12n\gamma^2L^2\sum_{k=0}^{K-1}\sum_{k'=0}^{K-1}M_{k'}\left(\rho^{K-1-k'}+\frac{2\left(\sqrt{\rho}\right)^{K-1-k'}}{1-\sqrt{\rho}}\right)\\
&\le Kn\rho G_{1/2}^2(K-1) + 12K\gamma^2 n^2(2L^2+1)G(K-1) + \frac{2Kn\gamma^2\sigma^2}{1-\rho} + \frac{18Kn\gamma^2\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\\
&\quad + 12\gamma^2\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2\left(\sum_{k'=0}^{K-1}\rho^{K-1-k'}+\frac{2\sum_{k'=0}^{K-1}\left(\sqrt{\rho}\right)^{K-1-k'}}{1-\sqrt{\rho}}\right)
+ 12n\gamma^2L^2\sum_{k=0}^{K-1}M_k\left(\sum_{k'=0}^{K-1}\rho^{K-1-k'}+\frac{2\sum_{k'=0}^{K-1}\left(\sqrt{\rho}\right)^{K-1-k'}}{1-\sqrt{\rho}}\right)\\
&\le Kn\rho G_{1/2}^2(K-1) + 12K\gamma^2 n^2(2L^2+1)G(K-1) + \frac{2Kn\gamma^2\sigma^2}{1-\rho} + \frac{18Kn\gamma^2\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}
+ \frac{36\gamma^2}{\left(1-\sqrt{\rho}\right)^2}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{36n\gamma^2L^2}{\left(1-\sqrt{\rho}\right)^2}\sum_{k=0}^{K-1}M_k
\qquad (37)
\end{aligned}
\]
Rearranging the terms, the expression becomes:
\[
\left(1-\frac{36n\gamma^2L^2}{\left(1-\sqrt{\rho}\right)^2}\right)\sum_{k=0}^{K-1}M_k
\le Kn\rho G_{1/2}^2(K-1) + 12K\gamma^2 n^2(2L^2+1)G(K-1) + \frac{2Kn\gamma^2\sigma^2}{1-\rho} + \frac{18Kn\gamma^2\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}
+ \frac{36\gamma^2}{\left(1-\sqrt{\rho}\right)^2}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
\qquad (38)
\]
Defining $C_2 = 1-\frac{36n\gamma^2L^2}{\left(1-\sqrt{\rho}\right)^2}$, we can rewrite it as:
\[
\sum_{k=0}^{K-1}M_k
\le C_2^{-1}\left(Kn\rho G_{1/2}^2(K-1) + 12K\gamma^2 n^2(2L^2+1)G(K-1)\right) + \frac{2Kn\gamma^2\sigma^2}{C_2(1-\rho)} + \frac{18Kn\gamma^2\varsigma^2}{C_2\left(1-\sqrt{\rho}\right)^2}
+ \frac{36\gamma^2}{C_2\left(1-\sqrt{\rho}\right)^2}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
\qquad (39)
\]
We also know that $T_1$ is bounded as:
\[
T_1 \le \frac{2L^2}{n^2}\left(\sum_{i=1}^{n}Q_{k,i} + \sum_{i=1}^{n}n\,g(k)\right) \le \frac{2L^2}{n}\left(M_k + n\,g(k)\right)
\qquad (40)
\]
We then input this bound on $T_1$ in (22) and obtain:
\[
\begin{aligned}
\mathbb{E}\,f\!\left(X_{k+1}\tfrac{\mathbf{1}_n}{n}\right)
&\le \mathbb{E}\,f\!\left(X_{k}\tfrac{\mathbf{1}_n}{n}\right)
+ \frac{1}{2}\,\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{L+1}{2}\,g(k) + \frac{\gamma^2L\sigma^2}{2n}
- \frac{\gamma}{2}\,\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{\gamma^2L-\gamma}{2}\,\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2
+ \frac{\gamma L^2}{n}M_k + \gamma L^2 g(k)\\
&\le \mathbb{E}\,f\!\left(X_{k}\tfrac{\mathbf{1}_n}{n}\right)
+ \frac{\gamma L^2}{2}\,\mathbb{E}\left\|\xi_k\tfrac{\mathbf{1}_n}{n}\right\|^2
+ \frac{L+1}{2}\,g(k) + \frac{\gamma^2L\sigma^2}{2n}
- \left(1-\frac{\gamma}{2}\right)\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{\gamma^2L-\gamma}{2}\,\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2
+ \frac{\gamma L^2}{n}M_k + \gamma L^2 g(k)\\
&\le \mathbb{E}\,f\!\left(X_{k}\tfrac{\mathbf{1}_n}{n}\right)
+ \frac{3\gamma L^2 + L + 1}{2}\,g(k) + \frac{\gamma^2L\sigma^2}{2n}
- \left(1-\frac{\gamma}{2}\right)\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{\gamma^2L-\gamma}{2}\,\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2
+ \frac{\gamma L^2}{n}M_k
\qquad (41)
\end{aligned}
\]
Summing from $k=0$ to $k=K-1$ on both sides yields:
\[
\begin{aligned}
&\left(1-\frac{\gamma}{2}\right)\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
- \frac{\gamma^2L-\gamma}{2}\sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2\\
&\quad\le f(0)-f^* + \frac{K\gamma^2L\sigma^2}{2n} + \frac{\gamma L^2}{n}\sum_{k=0}^{K-1}M_k + \frac{3\gamma L^2+L+1}{2}\,G(K-1)\\
&\quad\overset{(39)}{\le} f(0)-f^* + \frac{K\gamma^2L\sigma^2}{2n}
+ C_2^{-1}K\rho\gamma L^2 G_{1/2}^2(K-1)
+ \left(12C_2^{-1}K\gamma^3 nL^2\left(2L^2+1\right) + \frac{3\gamma L^2+L+1}{2}\right)G(K-1)\\
&\qquad + \frac{2K\gamma^3 n\sigma^2L^2}{C_2(1-\rho)} + \frac{18Kn\gamma^3\varsigma^2L^2}{C_2\left(1-\sqrt{\rho}\right)^2}
+ \frac{36\gamma^3L^2}{C_2\left(1-\sqrt{\rho}\right)^2}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
\qquad (42)
\end{aligned}
\]
Rearranging the terms, dividing by $K$, and using the Lipschitz inequality $\mathbb{E}\left\|\nabla f\left(\hat{X}_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2 \le 2\,\mathbb{E}\left\|\nabla f\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2 + 2L^2 g(k)$ (similar to what we have done in (22)), we obtain:
\[
\begin{aligned}
&\frac{1}{K}\left(1-\frac{\gamma}{2}-\frac{72\gamma^3L^2}{C_2\left(1-\sqrt{\rho}\right)^2}\right)\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
+ \frac{\gamma-\gamma^2L}{2K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2\\
&\quad\le \frac{f(0)-f^*}{K} + \frac{\gamma^2L\sigma^2}{2n}
+ \left(12C_2^{-1}\gamma^3 nL^2\left(2L^2+1\right) + \frac{3\gamma L^2+L+1}{2K} + \frac{72\gamma^3L^4}{KC_2\left(1-\sqrt{\rho}\right)^2}\right)G(K-1)\\
&\qquad + C_2^{-1}\gamma\rho L^2 G_{1/2}^2(K-1) + \frac{2n\gamma^3\sigma^2L^2}{C_2(1-\rho)} + \frac{18n\gamma^3\varsigma^2L^2}{C_2\left(1-\sqrt{\rho}\right)^2}
\qquad (43)
\end{aligned}
\]
where $C_1 = 1-\frac{\gamma}{2}-\frac{72\gamma^3 L^2}{C_2\left(1-\sqrt{\rho}\right)^2}$. This completes the proof.

Proof of Corollary 1. First we want to remove the term $\sum_{k=0}^{K-1}\mathbb{E}\left\|\partial f(X_k)\tfrac{\mathbf{1}_n}{n}\right\|^2$ on the left-hand side while maintaining the inequality. For that, the coefficient of that term has to satisfy:
\[
\frac{\gamma-\gamma^2L}{2K} > 0 \;\Longrightarrow\; \gamma < \frac{1}{L}
\qquad (44)
\]
Now we have
\[
\begin{aligned}
\frac{C_1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
&\le \frac{f(0)-f^*}{K} + \frac{\gamma^2L\sigma^2}{2n}
+ \left(12C_2^{-1}\gamma^3 nL^2\left(2L^2+1\right) + \frac{3\gamma L^2+L+1}{2K} + \frac{72\gamma^3L^4}{KC_2\left(1-\sqrt{\rho}\right)^2}\right)G(K-1)\\
&\quad + C_2^{-1}\gamma\rho L^2 G_{1/2}^2(K-1) + \frac{2n\gamma^3\sigma^2L^2}{C_2(1-\rho)} + \frac{18n\gamma^3\varsigma^2L^2}{C_2\left(1-\sqrt{\rho}\right)^2}
\qquad (45)
\end{aligned}
\]
We choose $\gamma = \frac{1}{2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}}$. Also, $\gamma$ should satisfy $\gamma < 1$. In order to satisfy that as well as (44), we enforce
\[
\frac{1}{2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}} < \frac{1}{L+1}
\;\Longrightarrow\;
K > \left(\frac{\sqrt{n}\,(L+1)}{2\rho L^2\sqrt{n}+\sigma}\right)^2
\qquad (46)
\]
This $\gamma$ satisfies the following as well:
\[
\gamma < \frac{1}{\sigma\sqrt{K/n}} \;\Longrightarrow\; \gamma^2 < \frac{n}{\sigma^2 K}
\qquad (47)
\]
Since $\gamma < 1$, we also have $\gamma^3 \le \frac{n}{\sigma^2 K}$. Now if $K \ge \frac{72L^2n^2}{\sigma^2\left(1-\sqrt{\rho}\right)^2}$, we can bound $C_2$ as $C_2 \ge \frac{1}{2}$.
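The step-size choice $\gamma = \frac{1}{2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}}$ and the lower bound on $K$ in (46) can be checked numerically; the constants below are illustrative:

```python
import math

# For any K exceeding the bound in (46), the chosen step size satisfies
# gamma < 1/(L+1), which in turn implies gamma < 1 and condition (44).
L, rho, sigma, n = 2.0, 0.64, 1.0, 8   # illustrative constants

K_min = (math.sqrt(n) * (L + 1) / (2 * rho * L**2 * math.sqrt(n) + sigma)) ** 2
K = int(K_min) + 1                      # smallest integer above the bound
gamma = 1.0 / (2 * rho * L**2 * math.sqrt(K) + sigma * math.sqrt(K / n))

assert gamma < 1 / (L + 1) < 1 / L      # (44) and gamma < 1 both hold
```

The same check can be repeated for any constants satisfying the assumptions; the conclusion only relies on the algebra behind (46).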
Further, we can also bound $C_1$ as follows:
\[
C_1 = 1-\frac{\gamma}{2}-\frac{72\gamma^3L^2}{C_2\left(1-\sqrt{\rho}\right)^2}
\ge 1-\frac{\gamma}{2}-\frac{144L^2\gamma^3}{\left(1-\sqrt{\rho}\right)^2}
\ge \frac{1}{2}
\qquad (48)
\]
Finally we have:
\[
\begin{aligned}
\frac{1}{2K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
&\le \frac{f(0)-f^*+L/2}{K}
+ \left(\frac{\left(1-\sqrt{\rho}\right)^2\left(2L^2+1\right)}{3\left(2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}\right)} + \frac{7L^2+L+1}{2K}\right)G(K-1)\\
&\quad + \frac{2\rho L^2\,G_{1/2}^2(K-1)}{2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}}
+ \frac{4nL^2}{\left(2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}\right)^3}\left(\frac{\sigma^2}{1-\rho}+\frac{9\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\right)\\
&\le \frac{f(0)-f^*+L/2}{K} + \frac{C_3}{\sqrt{K}}\,G(K-1) + \frac{C_4}{K}\,G(K-1) + \frac{1}{\sqrt{K}}\,G_{1/2}^2(K-1)
+ \frac{4n^3L^2}{\sigma^3 K\sqrt{Kn}}\left(\frac{\sigma^2}{1-\rho}+\frac{9\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\right)
\qquad (49)
\end{aligned}
\]
where $C_3=\frac{\left(1-\sqrt{\rho}\right)^2\left(2L^2+1\right)}{6\rho L^2}$ and $C_4=\frac{7L^2+L+1}{2}$. If $K$ is large enough, in particular if
\[
K \ge \frac{4n^3L^2}{\sigma^3\left(f(0)-f^*+L/2\right)}\left(\frac{\sigma^2}{1-\rho}+\frac{9\varsigma^2}{\left(1-\sqrt{\rho}\right)^2}\right),
\]
the last term is bounded by $\frac{f(0)-f^*+L/2}{\sqrt{Kn}}$. Thus the final expression is
\[
\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\|\nabla f\!\left(X_k\tfrac{\mathbf{1}_n}{n}\right)\right\|^2
\le \left(2\left(f(0)-f^*\right)+L\right)\left(\frac{1}{K}+\frac{1}{\sqrt{Kn}}\right)
+ \left(\frac{2C_3}{\sqrt{K}}+\frac{2C_4}{K}\right)G(K-1)
+ \frac{2}{\sqrt{K}}\,G_{1/2}^2(K-1)
\qquad (50)
\]
which completes the proof.

• We derive an expression for a bound on the convergence rate of event-triggered communication based on a generic bound on the adaptive threshold.
• We provide an open-source high performance computing (HPC) implementation of our algorithm using PyTorch and Message Passing Interface (MPI) in C++. We also highlight implementation challenges of this algorithm, particularly the need for advanced features such as one-sided communication, also called remote memory access, and the requirement of the newer PyTorch C++ frontend over its traditional Python frontend. We believe that it is not possible to implement event-triggered communication without remote memory access in any computer network, as elaborated later in Section 6. Our implementation is open-source and available at [19].

Algorithm A: Regular Communication in Data Parallel Machine Learning
for k = 0, 1, 2, ..., K − 1 do
    Randomly sample from dataset in i-th PE
    Compute the local stochastic gradient
    Communicate parameters to neighbors
    Update parameters using (2)
end for
Obtain averaged model from all PEs

4.
Proposed Algorithm: EventGraD

In the decentralized algorithm in (2), the parameters in a PE are exchanged with neighbors in every iteration of the training. This might be a waste of resources since the parameters might not differ a lot in every iteration. Therefore it is possible to relax this requirement of communication with neighbors at every iteration of training. This is the main idea of our algorithm, where communication happens only when necessary, in events.

Figure 1: Illustration of change in norm of parameters over iterations (taken from [18]). The left plot shows the norm of the parameter over iterations at the sender. The right plot shows the norm of that corresponding parameter used at the receiver.

Fig 1 illustrates this phenomenon. As an example, the left plot shows the evolution of the norm of a parameter over training iterations. When this norm changes by more than a threshold (0.1 in Fig 1) from the norm of the previously communicated values, an event for communication is triggered, as marked by an asterisk. The first event of communication is forced to take place at iteration k = 0 for convenience. The right plot shows the corresponding values that the receiving PE uses when averaging its parameter with the parameter from this corresponding sending PE.

Figure 2: Illustration of the slope-based adaptive threshold. The right green star denotes the event of current communication, while the left green star denotes the event of last communication.

Assumption 2. The following assumptions hold:
1. Lipschitz Gradient: All functions f_i(·) have L-Lipschitz gradients.
2. Spectral Gap: Given the symmetric doubly stochastic matrix W, the value ρ := max{ |λ₂(W)|, |λ_n(W)| } satisfies ρ < 1.

Figure 3: Illustration of the difference between two-sided and one-sided communication.

Our simulations are done on CPUs. We use an HPC cluster of nodes, with each node having 2 CPU sockets of AMD's EPYC 24-core 2.3 GHz processor and 128 GB RAM per node.
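The spectral-gap condition in Assumption 2 can be verified for a concrete mixing matrix. For a symmetric doubly stochastic 2×2 matrix the eigenvalues follow directly from the trace and determinant; a small pure-Python check (the mixing weight is illustrative):

```python
import math

def spectral_quantities_2x2(a):
    """Eigenvalues of W = [[a, 1-a], [1-a, a]], which is symmetric and
    doubly stochastic for 0 <= a <= 1. Returns (lambda_1, rho), where
    rho = max(|lambda_2|, |lambda_n|) as in Assumption 2."""
    tr = 2 * a                      # trace of W
    det = a * a - (1 - a) ** 2      # determinant of W
    disc = math.sqrt(tr * tr - 4 * det)
    lam1 = (tr + disc) / 2          # equals 1, since every row sums to one
    lam2 = (tr - disc) / 2
    return lam1, abs(lam2)

lam1, rho = spectral_quantities_2x2(0.75)  # illustrative mixing weight
assert abs(lam1 - 1.0) < 1e-12
assert rho < 1                             # Assumption 2 holds for this W
```

For larger topologies (e.g., the ring used in our experiments) the same quantity can be obtained from any eigenvalue routine; the convergence bound degrades as ρ approaches 1.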
The cluster uses a Mellanox EDR interconnect. The MPI library chosen is Open MPI 4.0.1 compiled with gcc 8.3.0. The version of Libtorch used is 1.5.0. We conduct our experiments on the CIFAR-10 dataset. We choose the residual neural network commonly used for training on CIFAR-10 [57]. Our simulations use the ResNet-18 configuration. For training this network, a learning rate of 0.01 is used with cross-entropy as the loss function and a mini-batch size of 256. The value of the horizon h used to calculate the threshold in (6) is taken to be 1. Note that we performed experiments on the MNIST dataset in our previous work [20].

To illustrate our adaptive event-triggered threshold selection scheme, we look at how the norm of the parameters, i.e., the weights and biases, in the ResNet-18 model change with iterations. Note that the ResNet-18 model has 86 such parameters; some of them are shown in Fig 4. After a few initial oscillations, the change of the values is gradual, which suggests that not all parameters need to be communicated at every iteration. This paves the way for saving on communication of messages by event-triggered communication. The corresponding threshold evolution, as calculated by the equation in (6), is shown in Fig 5.

Figure 4: Plot showing the evolution of the norm of the parameters of the neural network in a certain PE. Note that the parameters may vary in their trend of evolution.

The thresholds in Fig 5 have an oscillatory behavior. This is due to the fact that the parameters in a neural network often have local minor oscillations because of the nature of the stochastic gradient descent algorithm. The parameters in Fig 4 have these local oscillations; however, they are not prominently visible due to the higher scale of the plot. Further, the stochastic nature of the MPI one-sided implementation of the algorithm amplifies the oscillations.

Figure 5: Plot showing the adaptive threshold of the parameters shown in Fig 4. The trend of the threshold is adaptive to the trend of evolution of the corresponding parameter.

It is desired that the threshold reflect the aggregate trend of evolution in the parameter and not the local oscillations. In order to solve this issue, the sender can keep a history of multiple previously communicated events instead of just one previous event. Then the average slope is calculated, which is the mean of the slopes between two consecutive events in that history. This average slope is then multiplied by the horizon to obtain the threshold. The length of the history is a hyperparameter which is similar in notion to the length of a moving average filter. The higher the length, the smoother the trend, but at the cost of increased computational complexity.

Having described details of selecting the threshold, we now look at the experimental convergence properties of the algorithm. We compare our event-triggered communication algorithm proposed in Algorithm B with the regular communication algorithm in Algorithm A from [40]. Fig 6 shows the loss function over epochs for both these algorithms, each repeated for 10 different runs shown by the errorbars. Note that an epoch refers to processing the entire dataset allotted to a PE once, while an iteration refers to processing a batch once. Thus one epoch has multiple iterations, which depends on the size of the batch. From Fig 6, we see that the decay in loss function seems similar for both the algorithms, indicating that they have similar speed of convergence. It is important to observe that the theoretical results in Section 5 deal with a bound on convergence of the average of the parameters in all the PEs, whereas the plot in Fig 6 is concerned with experimental convergence of the loss function.

Figure 6: Plot showing the loss function over epochs.
The experiments have been repeated 10 times to account for variations, which have been considered in the errorbars. It is seen that the event-triggered communication algorithm has an experimental rate of convergence similar to that of the regular communication algorithm.

Table 1 states the accuracy of regular and event-triggered communication, as well as the percentage of messages in event-triggered communication, after training for 20 epochs.

This paper introduces a novel algorithm that reduces communication in parallel training of neural networks. The proposed EventGraD algorithm communicates the model parameters in events, only when the value of the parameter changes by a threshold. The choice of the threshold for triggering events is chosen adaptively based on the slope of the parameter values. The algorithm can be applied to different neural network configurations and different datasets. An asymptotic bound on the rate of convergence is provided. The challenges of implementing this algorithm in a high performance computing cluster, such as the requirement of advanced communication protocols and libraries, are discussed. Experiments on the CIFAR-10 dataset show the superior communication performance of the algorithm while maintaining the same accuracy.

Algorithm B: EventGraD - Event-Triggered Communication in Data Parallel SGD
for k = 0, 1, 2, ..., K − 1 do
    Randomly sample from dataset in i-th PE
    Compute the local stochastic gradient
    for I = 1, 2, ..., N do
        if ‖x_{k,i,I} − x_{k+1,i,I}‖ ≥ δ_{k,i,I} then
            Communicate parameter to neighbors
        end if
    end for
    Update parameters using (5)
end for
Obtain averaged model from all PEs

Choosing the threshold δ_{k,i,I} is a design problem. The efficiency of this algorithm depends on selecting appropriate thresholds. The simplest option would be to choose the same value of the threshold for all the parameters in all the PEs, as was done in [53]. However, selecting the appropriate value would involve a lot of trial and error. Further, when the neural network model changes, the process would have to be repeated all over again.
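The per-parameter trigger in Algorithm B can be sketched as a minimal simulation. The function and variable names below are illustrative, and a fixed threshold is used for simplicity:

```python
def eventgrad_step(x, last_sent, delta):
    """Return the indices of parameters whose drift since the last
    communicated value reaches the threshold, updating last_sent as if
    those parameters were communicated to the neighbors."""
    triggered = []
    for I, value in enumerate(x):
        if abs(value - last_sent[I]) >= delta[I]:
            triggered.append(I)
            last_sent[I] = value   # neighbors now hold this value
    return triggered

x = [0.0, 1.0, 2.0]          # current parameter values in one PE
last_sent = [0.0, 1.0, 2.0]  # values the neighbors currently hold
delta = [0.1, 0.1, 0.1]      # fixed per-parameter thresholds

x[0] += 0.05                 # below threshold: no event
x[2] += 0.25                 # above threshold: event
events = eventgrad_step(x, last_sent, delta)
assert events == [2]
assert last_sent == [0.0, 1.0, 2.25]
```

In the full algorithm the fixed delta above is replaced by the slope-based adaptive threshold described in the text, recomputed at every event.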
More importantly, the parameters in a model and across different PEs would vary differently, and selecting the same threshold for all of them is not desired. Instead, it is better to choose a dynamic threshold that is adaptive to the rate of change of the parameters. A metric that is indicative of the rate of change of a parameter is the local slope of the norm of the parameter. Thus we choose the threshold of a parameter based on the local slope of its norm. Whenever an event of communication is triggered, the slope is calculated between the current value and the last communicated value. This slope is multiplied by a horizon h to calculate the threshold, as illustrated in Fig 2. This threshold will be kept fixed until the next event is triggered, resulting in the calculation of a new threshold. Thus we obtain the threshold rule in (6).

Corollary 1. Under the same assumptions as in Theorem 1, if we set $\gamma = \frac{1}{2\rho L^2\sqrt{K}+\sigma\sqrt{K/n}}$

TensorFlow, CNTK, etc. Almost all of these frameworks support parallel or distributed training. TensorFlow follows the parameter server approach for parallelization. PyTorch provides a module called DistributedDataParallel that implements AllReduce based training. Horovod is another framework developed by Uber that implements an optimized AllReduce algorithm. However, none of these frameworks provide native support for the training involving averaging with just neighbors. Hence we decided to implement the proposed algorithms without using any of the distributed modules in these frameworks.

We use PyTorch and MPI for our implementation. First, we point out why one-sided communication or remote memory access is necessary. Usually, communication in high performance computing networks is two-sided. In other words, the sending PE starts the communication of a message by invoking an MPI Send operation, and then the receiving PE completes the communication and receives the message by invoking an MPI Recv operation [29]. In our event-triggered communication algorithm, the events for communication are dependent on the change in values of the parameters of the sender, which is a local phenomenon. Thus, when an event is triggered in the sending PE, it can issue an MPI Send operation. However, since the intended receiving PE is not aware of when the event is triggered at the sender, it does not know when to issue an MPI Recv operation. So two-sided communication using MPI Send and MPI Recv cannot be used for our algorithm.

Hence we select one-sided communication for our purpose. In one-sided communication, only the sending PE has to know all the parameters of the message for both the sending and receiving side, and it can remotely write to a portion of the memory of the receiver without the receiver's involvement, hence the alternate name of Remote Memory Access [56]. That region of memory in the receiver is called a window and can be publicly accessed. In our case, it is used to store the model parameters from the neighbors. So when an event for communication is triggered in the sending PE, it uses MPI Put to write its model parameters directly into the window of the corresponding neighbor PE. An illustration of one-sided vs two-sided communication is provided in Fig 3.

It is worth noting that PyTorch does not support one-sided communication at this point. Recently, PyTorch released a C++ frontend called Libtorch which can be integrated with traditional C++ MPI implementations. Further, the C++ frontend is more suitable for HPC environments, unlike the Python frontend. Hence we combine the neural network training functionalities of Libtorch with communication routines in MPI to implement our algorithm. For further details on our implementation, the reader is referred to

Table 1: Comparison of Regular Communication vs Event-Triggered Communication after 20 epochs.
The Event-Triggered Communication algorithm drastically reduces the number of messages to be communicated, by around 60%, while maintaining similar accuracy.

Number of PEs   Regular Accuracy   Event-Triggered Accuracy   Percentage of Messages
4               86.5               87                         43.24
8               86.3               86.2                       42.98
16              84.9               84.2                       45.91
32              82.5               81.9                       44.89

In the decentralized environment that we consider, our event-triggered algorithm saves around 60% of the messages as compared to the baseline algorithm while maintaining similar accuracy, thus alleviating the communication overhead. Note that the accuracy of both the regular and event-triggered algorithms decreases as the number of PEs increases. This is because as the ring of PEs gets larger, messages comprising of neural network parameters require more hops to propagate through the entire ring. Hence, given the same number of epochs, the larger ring comprising of more PEs will have lesser accuracy. If the algorithm is run for more epochs on more PEs, the accuracy will not degrade. Additionally, if the number of neighbors of each PE is increased, information can flow sooner, resulting in more accuracy.

A noteworthy feature of our proposed algorithm is that it is complementary to other algorithms for reduced communication that have been proposed in the literature. In other words, our algorithm can be combined with these algorithms. For instance, we can apply the techniques of quantization and sparsification on top of event-triggered communication.

Table 2: Combining Top-K% Sparsification with Event-Triggered (ET) Communication for K = 10. The percent of communication is 2K = 20% of the percent of messages. It is seen that the communication required here is around 1/6-th of that in Event-Triggered (ET) communication without sparsification, while maintaining similar accuracy as in Table 1.

Number of PEs   Sparse ET Accuracy   Percent of Messages   Percent of Communication
4               85.4                 36.6                  7.3
8               85.26                37.7                  7.5
16              83.09                37.7                  7.5

References

Juan M. Górriz, Javier Ramírez, Andrés Ortíz, Francisco J. Martínez-Murcia, Fermin Segovia, John Suckling, Matthew Leming, Yu-Dong Zhang, Jose Ramón Alvarez-Sánchez, Guido Bologna, et al. Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing, 410:237-270, 2020.

Sujatha R. Upadhyaya. Parallel approaches to machine learning: a comprehensive survey. Journal of Parallel and Distributed Computing, 73(3):284-292, 2013.

Tal Ben-Nun and Torsten Hoefler. Demystifying parallel and distributed deep learning: An in-depth concurrency analysis. ACM Computing Surveys (CSUR), 52(4):1-43, 2019.

Steven R. Young, Derek C. Rose, Travis Johnston, William T. Heller, Thomas P. Karnowski, Thomas E. Potok, Robert M. Patton, Gabriel Perdue, and Jonathan Miller. Evolving deep networks using HPC. In Proceedings of the Machine Learning on HPC Environments, pages 1-7, 2017.

Junqi Yin, Shubhankar Gahlot, Nouamane Laanait, Ketan Maheshwari, Jack Morrison, Sajal Dash, and Mallikarjun Shankar. Strategies to deploy and scale deep learning on the Summit supercomputer. In 2019 IEEE/ACM Third Workshop on Deep Learning on Supercomputers (DLS), pages 84-94. IEEE, 2019.

Ron Bekkerman, Mikhail Bilenko, and John Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.

Amine Boulemtafes, Abdelouahid Derhab, and Yacine Challal. A review of privacy-preserving techniques for deep learning. Neurocomputing, 384:21-45, 2020.

Keren Bergman et al. Exascale computing study: Technology challenges in achieving exascale systems. Defense Advanced Research Projects Agency Information Processing Techniques Office (DARPA IPTO), Tech. Rep, 15, 2008.

Robert Lucas, James Ang, Keren Bergman, Shekhar Borkar, William Carlson, Laura Carrington, George Chiu, Robert Colwell, William Dally, Jack Dongarra, et al. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) report: top ten exascale research challenges. Technical report, USDOE Office of Science (SC) (United States), 2014.

Siddhartha Jana, Oscar Hernandez, Stephen Poole, and Barbara Chapman. Power consumption due to data movement in distributed programming models. In European Conference on Parallel Processing, pages 366-378. Springer, 2014.

Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. The Journal of Machine Learning Research, 14(1):3321-3363, 2013.

Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. arXiv preprint arXiv:1610.02132, 2016.

Yujun Lin, Song Han, Huizi Mao, Yu Wang, and Bill Dally. Deep gradient compression: Reducing the communication bandwidth for distributed training. In International Conference on Learning Representations, 2018.

Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Fifteenth Annual Conference of the International Speech Communication Association, 2014.

Bo Liu and Zhengtao Ding. A consensus-based decentralized training algorithm for deep neural networks with communication compression. Neurocomputing, 2021.

A. T. Chronopoulos and Charles William Gear. s-step iterative methods for symmetric linear systems. Journal of Computational and Applied Mathematics, 25(2):153-168, 1989.

Mark Hoemmen. Communication-avoiding Krylov subspace methods. PhD thesis, UC Berkeley, 2010.

Soumyadip Ghosh, Kamal K. Saha, Vijay Gupta, and Gretar Tryggvason. Event-triggered communication in parallel computing. In 2018 IEEE/ACM 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (scalA), pages 1-8. IEEE, 2018b.

Soumyadip Ghosh and Vijay Gupta. EventGraD: Event-triggered communication in parallel stochastic gradient descent. In 2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC) and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications (AI4S), pages 1-8. IEEE, 2020.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595-2603, 2010.

Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.

Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693-701, 2011.

Sixin Zhang, Anna E. Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685-693, 2015.

Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In Advances in Neural Information Processing Systems, pages 2737-2745, 2015.

Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Robust and communication-efficient federated learning from non-iid data. IEEE Transactions on Neural Networks and Learning Systems, 2019.

Yang Chen, Xiaoyan Sun, and Yaochu Jin. Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. IEEE Transactions on Neural Networks and Learning Systems, 2019.

Jinjin Xu, Wenli Du, Ran Cheng, Wangli He, and Yaochu Jin. Ternary compression for communication-efficient federated learning. arXiv preprint arXiv:2003.03564, 2020.

William D. Gropp, William Gropp, Ewing Lusk, and Anthony Skjellum. Using MPI: portable parallel programming with the message-passing interface, volume 1. MIT Press, 1999.

Nikko Strom. Scalable distributed DNN training using commodity GPU cloud computing. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.

Nikoli Dryden, Tim Moon, Sam Ade Jacobs, and Brian Van Essen. Communication quantization for data-parallel training of deep neural networks. In 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), pages 1-8. IEEE, 2016.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737-1746, 2015.

Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. In International Conference on Learning Representations, 2018.

Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems, pages 5973-5983, 2018.

Cèdric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, and Torsten Hoefler. SparCML: High-performance sparse communication for machine learning. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-15, 2019.

Debraj Basu, Deepesh Data, Can Karakus, and Suhas Diggavi. Qsparse-local-SGD: Distributed SGD with quantization, sparsification and local computations. In Advances in Neural Information Processing Systems, pages 14695-14706, 2019.

Kun Yuan, Qing Ling, and Wotao Yin. On the convergence of decentralized gradient descent. SIAM Journal on Optimization, 26(3):1835-1854, 2016.

Reza Olfati-Saber, J. Alex Fax, and Richard M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215-233, 2007.

Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, and Alexander Schwing. Pipe-SGD: A decentralized pipelined SGD framework for distributed deep net training. arXiv preprint arXiv:1811.03619, 2018.

Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent.
Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural In- formation Processing Systems, pages 5330-5340, 2017. Michael Blot, David Picard, Matthieu Cord, Nicolas Thome, arXiv:1611.09726Gossip training for deep learning. arXiv preprintMichael Blot, David Picard, Matthieu Cord, and Nicolas Thome. Gossip training for deep learning. arXiv preprint arXiv:1611.09726, 2016. How to scale distributed deep learning?. Qiaochu Peter H Jin, Forrest Yuan, Kurt Iandola, Keutzer, arXiv:1611.04581arXiv preprintPeter H Jin, Qiaochu Yuan, Forrest Iandola, and Kurt Keutzer. How to scale distributed deep learning? arXiv preprint arXiv:1611.04581, 2016. Jeff Daily, Abhinav Vishnu, Charles Siegel, Thomas Warfel, Vinay Amatya, Gossipgrad, arXiv:1803.05880Scalable deep learning using gossip communication based asynchronous gradient descent. arXiv preprintJeff Daily, Abhinav Vishnu, Charles Siegel, Thomas Warfel, and Vinay Am- atya. Gossipgrad: Scalable deep learning using gossip communication based asynchronous gradient descent. arXiv preprint arXiv:1803.05880, 2018. Event-triggered feedback in control, estimation, and optimization. Michael Lemmon, Networked control systems. SpringerMichael Lemmon. Event-triggered feedback in control, estimation, and optimiza- tion. In Networked control systems, pages 293-358. Springer, 2010. Distributed eventtriggered control for multi-agent systems. Emilio Dimos V Dimarogonas, Karl H Frazzoli, Johansson, IEEE Transactions on Automatic Control. 575Dimos V Dimarogonas, Emilio Frazzoli, and Karl H Johansson. Distributed event- triggered control for multi-agent systems. IEEE Transactions on Automatic Con- trol, 57(5):1291-1297, 2012. Distributed optimal consensus algorithms in multi-agent systems. Aijuan Wang, Tao Dong, Xiaofeng Liao, Neurocomputing. 339Aijuan Wang, Tao Dong, and Xiaofeng Liao. 
Distributed optimal consensus al- gorithms in multi-agent systems. Neurocomputing, 339:26-35, 2019. Event-triggered consensus for multi-agent networks with switching topology under quantized communication. Xin Chen, Xiaofeng Liao, Lan Gao, Shasha Yang, Huiwei Wang, Huaqing Li, Neurocomputing. 230Xin Chen, Xiaofeng Liao, Lan Gao, Shasha Yang, Huiwei Wang, and Huaqing Li. Event-triggered consensus for multi-agent networks with switching topology under quantized communication. Neurocomputing, 230:294-301, 2017. Event-triggered communication and control of networked systems for multi-agent consensus. Cameron Nowzari, Eloy Garcia, Jorge Cortés, Automatica. 105Cameron Nowzari, Eloy Garcia, and Jorge Cortés. Event-triggered communica- tion and control of networked systems for multi-agent consensus. Automatica, 105:1-27, 2019. Distributed event-triggered scheme for a convex optimization problem in multi-agent systems. Zhongyuan Zhao, Gang Chen, Mingxiang Dai, Neurocomputing. 284Zhongyuan Zhao, Gang Chen, and Mingxiang Dai. Distributed event-triggered scheme for a convex optimization problem in multi-agent systems. Neurocomput- ing, 284:90-98, 2018. Distributed optimization of first-order discrete-time multi-agent systems with event-triggered communication. Qingguo Lü, Huaqing Li, Dawen Xia, Neurocomputing. 235Qingguo Lü, Huaqing Li, and Dawen Xia. Distributed optimization of first-order discrete-time multi-agent systems with event-triggered communication. Neuro- computing, 235:255-263, 2017. Distributed linear programming with event-triggered communication. Dean Richert, Jorge Cortes, SIAM Journal on Control and Optimization. 543Dean Richert and Jorge Cortes. Distributed linear programming with event-triggered communication. SIAM Journal on Control and Optimization, 54(3):1769-1797, 2016. Adaptive event-triggered communication scheme for networked control systems with randomly occurring nonlinearities and uncertainties. 
Jin Zhang, Chen Peng, Dajun Du, Min Zheng, Neurocomputing. 174Jin Zhang, Chen Peng, Dajun Du, and Min Zheng. Adaptive event-triggered communication scheme for networked control systems with randomly occurring nonlinearities and uncertainties. Neurocomputing, 174:475-482, 2016. Distributed deep learning with eventtriggered communication. Jemin George, Prudhvi Gurram, arXiv:1909.05020arXiv preprintJemin George and Prudhvi Gurram. Distributed deep learning with event- triggered communication. arXiv preprint arXiv:1909.05020, 2019. Fastest mixing markov chain on a graph. Stephen Boyd, Persi Diaconis, Lin Xiao, SIAM review. 464Stephen Boyd, Persi Diaconis, and Lin Xiao. Fastest mixing markov chain on a graph. SIAM review, 46(4):667-689, 2004. Event-based broadcasting for multi-agent average consensus. S Georg, Seyboth, V Dimos, Karl H Dimarogonas, Johansson, Automatica. 491Georg S Seyboth, Dimos V Dimarogonas, and Karl H Johansson. Event-based broadcasting for multi-agent average consensus. Automatica, 49(1):245-252, 2013. Using advanced MPI: Modern features of the message-passing interface. William Gropp, Torsten Hoefler, Rajeev Thakur, Ewing Lusk, MIT PressWilliam Gropp, Torsten Hoefler, Rajeev Thakur, and Ewing Lusk. Using ad- vanced MPI: Modern features of the message-passing interface. MIT Press, 2014. Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
[]
[ "Preferential orientation of NV defects in CVD diamond films grown on (113)-oriented substrates" ]
[ "M Lesik \nLaboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance\n", "T Plays \nLaboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance\n", "A Tallaire \nLaboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance\n", "J Achard \nLaboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance\n", "O Brinza \nLaboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance\n", "L William \nLaboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance\n", "M Chipaux \nThales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands\n", "L Toraille \nThales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands\n", "T Debuisschert \nThales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands\n", "A Gicquel \nLaboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance\n", "J F Roch \nLaboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance\n", "V Jacques \nLaboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance\n" ]
[ "Laboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance", "Laboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance", "Laboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance", "Laboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance", "Laboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance", "Laboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance", "Thales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands", "Thales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands", "Thales Research & Technology\n1 avenue Augustin Fresnel\n91767 Palaiseau Cedex\nFrance\n# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands", "Laboratoire des Sciences des Procédés et des Matériaux (LSPM)\nUniversité Paris 13\nSorbonne Paris Cité\nCNRS\n93430 Villetaneuse\nFrance", "Laboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance", "Laboratoire Aimé Cotton\nCNRS\nUniversité Paris-Sud and ENS Cachan\n91405 Orsay\nFrance" ]
[]
Thick CVD diamond layers were successfully grown on (113)-oriented substrates. They exhibited smooth surface morphologies and a crystalline quality comparable to (100) electronic-grade material, and much better than (111)-grown layers. High growth rates (15-50 µm/h) were obtained, while nitrogen doping could be achieved over a fairly wide range without seriously impairing crystalline quality. Electron spin resonance measurements were carried out to determine the orientation of the NV centers and showed that one specific orientation has an occurrence probability of 73 %, whereas (100)-grown layers show an equal distribution over the 4 possible directions. A spin coherence time of around 270 µs was measured, which is equivalent to that reported for material with similar isotopic purity. Although a higher degree of preferential orientation was achieved with (111)-grown layers (almost 100 %), the ease of growth and post-processing of the (113) orientation makes it a potentially useful material for magnetometry or other quantum mechanical applications.
10.1016/j.diamond.2015.05.003
[ "https://arxiv.org/pdf/1504.02011v2.pdf" ]
119,285,787
1504.02011
d5e71f7a2e1fcb2cad2f94c48f57cb223136ac11
Preferential orientation of NV defects in CVD diamond films grown on (113)-oriented substrates

M Lesik 1, T Plays 1, A Tallaire 2, J Achard 2, O Brinza 2, L William 2, M Chipaux 3, L Toraille 3, T Debuisschert 3, A Gicquel 2, J F Roch 1, V Jacques 1

1 Laboratoire Aimé Cotton, CNRS, Université Paris-Sud and ENS Cachan, 91405 Orsay, France
2 Laboratoire des Sciences des Procédés et des Matériaux (LSPM), Université Paris 13, Sorbonne Paris Cité, CNRS, 93430 Villetaneuse, France
3 Thales Research & Technology, 1 avenue Augustin Fresnel, 91767 Palaiseau Cedex, France
# Currently at Department of Biomedical Engineering, University of Groningen / University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, The Netherlands

Keywords: NV centers; single crystal diamond; nitrogen doping; crystal growth; plasma-assisted chemical vapor deposition; crystal orientation

Thick CVD diamond layers were successfully grown on (113)-oriented substrates. They exhibited smooth surface morphologies and a crystalline quality comparable to (100) electronic-grade material, and much better than (111)-grown layers. High growth rates (15-50 µm/h) were obtained, while nitrogen doping could be achieved over a fairly wide range without seriously impairing crystalline quality. Electron spin resonance measurements were carried out to determine the orientation of the NV centers and showed that one specific orientation has an occurrence probability of 73 %, whereas (100)-grown layers show an equal distribution over the 4 possible directions. A spin coherence time of around 270 µs was measured, which is equivalent to that reported for material with similar isotopic purity. Although a higher degree of preferential orientation was achieved with (111)-grown layers (almost 100 %), the ease of growth and post-processing of the (113) orientation makes it a potentially useful material for magnetometry or other quantum mechanical applications.

Introduction

The negatively charged nitrogen-vacancy center (NV) in diamond has attracted a lot of attention in the past few years due to a number of emerging applications in quantum information [1] and magnetic sensing [2], for which it is believed to bring a substantial advantage over incumbent technologies [3]. The electronic spin state associated with this point-like defect can be optically detected [4] and coherently manipulated using microwave fields with long coherence times, even at room temperature [5]. Under an external magnetic field, a splitting of the NV center's spin states occurs, which can be detected by applying an appropriate resonant microwave field.
Based on this principle, promising magnetic sensors achieving a high spatial resolution [6] and an extremely high sensitivity in the sub-picotesla range [7] have been reported. For this application, the environment of the NV center in the diamond matrix needs to be carefully controlled, since the close proximity of other defects can dramatically shorten coherence times [8]. Ultra-pure chemical vapor deposited (CVD) single crystal diamond films in which nitrogen atoms are implanted and annealed are commonly used, but this process suffers from a relatively low yield and from co-implanted defects that cannot be completely annealed out [9,10]. On the other hand, grown-in NV centers produced by adding nitrogen during CVD diamond synthesis [11] have led to the longest coherence times so far [5]. Although spatially localizing NV centers is particularly challenging [12], an additional advantage of this approach comes from the ability to control the NV defect orientation in the diamond matrix. Due to its C3v symmetry, the NV center can be oriented along one of the 4 different <111> crystallographic axes. In most diamond samples, these orientations are occupied with equal probabilities, leading to significant limitations for quantum information and sensing applications. Partial preferential orientation (50 %) of grown-in centers has been obtained at the inclined step facets of a (100) crystal grown under step-flow mode, or by using a (110) substrate for growth [13,14]. More recently, an almost perfect alignment of NV centers (97-100 %) has also been achieved for (111)-grown CVD layers [15,16,17]. In this latter case, the orientation of the center perpendicular to the surface is ideal for coupling to photonic structures and achieving enhanced light collection efficiency, potentially resulting in more robust and sensitive sensors [18].
However, the synthesis of nitrogen-doped CVD diamond layers on (111) substrates is plagued by the formation of twins and extended defects [19,20]. The extremely narrow range of optimal growth conditions required [21,22] leads to low growth rates (a few µm/h at most) and makes it difficult to tune the NV density in the layers. Moreover, high residual stress can be responsible for crack formation and strong background blue luminescence. Post-growth processing to prepare freestanding plates or membranes is delicate to achieve due to the difficulty in cutting and polishing on this orientation [23]. Finally, (111) substrates sliced from larger crystals made by HPHT [24] or CVD are poorly available, and only in small sizes of a few mm². The (113) crystal orientation, which is stable under certain CVD growth conditions [25], could be a potential substitute for (111) substrates. This stability is likely related to the fact that the (113) plane undergoes a surface reconstruction in the presence of a hydrogen plasma, in a similar fashion to what has been reported for silicon, thus decreasing the surface energy on this orientation [26,27]. The (113) plane can be obtained by polishing a standard (100) surface with a 25.2° angle towards a <111> direction. It therefore lies in between the (100) and (111) planes. Silva et al. [28] have considered this orientation in a geometrical model aiming at describing the evolution of crystal shape and have shown that it could allow obtaining enlarged crystals. In this work, we present the successful growth of thick CVD layers on (113)-oriented substrates with various amounts of doped nitrogen. The orientation and coherence time of grown-in NV centers were measured in order to assess their potential use in magnetometry or other quantum mechanical applications.

Experimental details

Diamond substrates made of cylindrical (113) plates were prepared from standard 1.5 mm thick (100) HPHT Ib diamond crystals [28] following the procedure presented in Fig. 1.
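The 25.2° value quoted above follows from cubic-crystal geometry: the angle between two lattice planes equals the angle between their normal directions [h k l]. A minimal Python check (illustrative, not from the paper):

```python
import math

def interplanar_angle_deg(hkl1, hkl2):
    # Cubic crystal: the normal to plane (hkl) is the direction [hkl],
    # so the angle between planes is the angle between their normals.
    dot = sum(a * b for a, b in zip(hkl1, hkl2))
    n1 = math.sqrt(sum(a * a for a in hkl1))
    n2 = math.sqrt(sum(a * a for a in hkl2))
    return math.degrees(math.acos(dot / (n1 * n2)))

print(round(interplanar_angle_deg((0, 0, 1), (1, 1, 3)), 1))  # 25.2 -> off-cut from (100)
print(round(interplanar_angle_deg((1, 1, 1), (1, 1, 3)), 1))  # 29.5 -> tilt towards (111)
```

The two results confirm that the (113) plane lies between (100) (25.2° away) and (111) (29.5° away), as stated in the text.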
First, two parallel planes were laser-cut at an angle of 25.2° with respect to the top face, in the direction of one of the corners of the cubic-shaped crystal (Fig. 1a). A cylinder with a diameter of about 2.4 mm was then laser-cut from the resulting piece (Fig. 1b). Finally, both (113) facets of the plate were polished (Fig. 1c). Plasma-assisted chemical vapor deposition (PACVD) was then carried out in a home-made reactor using deposition conditions optimized for (100) growth [29]. These include a high power density (3.5 kW, 250 mbar), a temperature of about 900 °C and a H2/CH4 gas mixture (96/4). High-purity hydrogen (9N) and methane (6N) were used. A low N2 amount (from 0 to 50 ppm) was intentionally added in order to tune the NV density. Background contamination by nitrogen impurities is however believed to occur even when no N2 is added. Growth was sustained for several hours, resulting in layers with thicknesses from 460 to 1450 µm (samples A to D in Table 1). For comparison purposes, additional CVD layers were grown on (113) and (111) substrates prepared from thick optical-grade Element 6 diamond material grown in the standard <100> direction and laser-cut (samples E and F in Table 1). High-quality smooth (111)-grown layers were obtained without twinning by using a low methane concentration and a high growth temperature, as explained in Ref. [22].

Table 1. Growth conditions used for the CVD layers studied in this work, and the thicknesses obtained.

The samples were observed by Scanning Electron Microscopy (SEM) in a ZEISS EVO MA15 system and compared with a 3D model reported elsewhere [30] in order to identify the different faces formed during growth. Laser microscope images were acquired using a Keyence VK 9700 instrument.
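A rough consistency check on the deposition durations can be made with time = thickness / growth rate, using the 15-50 µm/h rates reported for these runs later in the text; the pairings below are hypothetical, since the per-sample entries of Table 1 are not reproduced here:

```python
def growth_hours(thickness_um, rate_um_per_h):
    # Deposition duration implied by final layer thickness and mean growth rate.
    return thickness_um / rate_um_per_h

print(round(growth_hours(460, 15)))   # thinnest reported layer at the undoped rate
print(round(growth_hours(1450, 50)))  # thickest reported layer at the 50 ppm rate
```

Both extremes come out near 30 h, indicating day-scale growth runs.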
Large-scale photoluminescence (PL) images were recorded with a dedicated The properties of individual NV defects hosted in (113)-oriented CVD diamond samples were analyzed with a home-made scanning confocal microscope operating under ambient conditions, as described in details elsewhere [31]. Electron spin resonance (ESR) spectroscopy was performed by applying a microwave excitation through a copper microwire directly spanned on the diamond surface. Orientation of the grown-in NV centers was assessed by recording orientation-dependent Zeeman shift of the ESR while applying a static magnetic field [16]. Finally, the spin coherence time of the NV defect electronic spin was measured by applying a spin echo sequence [π/2 -τ -πτπ/2], which consists of resonant microwave π/2 and π pulses separated by a variable free evolution duration τ [32]. CVD diamond growth on (113) substrates A general observation is that (113) Another important point that needs to be taken into account when growing thick layers on any orientation, is the appearance of additional crystalline faces that can lead to a reduction in the top surface area, or induce stress in the adjacent faces [35]. In the first row of Fig. 2a, the different crystalline faces are identified. As expected (111) faces are twinned while (100) faces are relatively smooth except when N 2 is added due to the well-known step-bunching phenomenon. The large (100) face that develops in the bottom part of the image tends to completely overgrow the (113) face especially when the growth thickness is high and when N 2 is added. Therefore a reduction in the useful top surface area is only observed for very large thicknesses (> 1 mm) or for high N 2 additions (> 10 ppm) indicating that this orientation is suitable to obtain large and thick high-quality layers. To better comparatively assess crystalline quality and purity, CL analysis was carried out on samples with various crystalline orientations and doping ( Fig. 3a and 3b). 
The CL spectrum obtained for the undoped (113) CVD layer is dominated by emission in the Free-Exciton (FE) region with only a very weak fluorescence in the visible region which is comparable to an electronic-grade (100)-grown diamond crystal (Fig. 3a). When N 2 is intentionally added, the contribution from NV 0 centers becomes obvious at 575 nm and is accompanied by several large bands in the UV-visible range which could be related to extended defects. These bands are the strongest for the smooth undoped (111)-grown layer. This indicates that despite optimized growth conditions, this orientation leads to incorporation of a larger amount of defects and stress, as previously reported [12]. In all spectra, a slight SiVluminescence is also detected at around 737 nm which probably arises from contamination by quartz material in the reactor chamber. Nevertheless, the relatively high crystalline quality of the layers is confirmed by the presence of intense free exciton recombination assisted by both Transverse Optic (TO) and Transverse Acoustic (TA) phonons (Fig. 3b). For the (111)-grown layer, an additional peak which relates to the boron Bound Exciton recombination is observed at around 238 nm. Incorporation of residual impurities such as boron is indeed enhanced as compared to both (100) and (113) orientation. From the BE TO /FE TO emission ratio [36], we estimate that the residual boron concentration in the film is around 8×10 15 cm -3 . In summary, (113)-grown CVD layers compare favorably to both (100) and (111). High quality layers can be obtained at high growth rates under standard high-power density growth conditions with a limited incorporation of defects and impurities. No background luminescence or stress was evidenced which is comparable to the best (100) electronic-grade material available. The surface remained smooth without step bunching while nitrogen doping could be easily tuned in the layer by adding N 2 in the gas phase. 
Degradation of crystalline quality was only observed when the added nitrogen was higher than several tens of ppm. Therefore it is believed that this material could be a useful platform for applications making use of grown-in NV color centers such as magnetometry. The ability to introduce a large amount of NV centers while preserving a good crystalline quality is particularly important when ensemble of NVs are needed since the magnetic sensitivity roughly scales with the square root of their density [2]. Orientation and spin properties of NV defects in (113) layers We now evaluate the properties of NV centers in our (113)-grown material. To this end, individual NV defects were optically addressed in sample A using a confocal microscope under green laser excitation. A typical PL scan is shown in Fig. 4a, which indicates well-isolated spots corresponding to single NV defects. The unicity of the emitter was verified by recording the second-order autocorrelation function g (2) () of the PL intensity using a Hanbury Brown and Twiss interferometer. A sub-poissonian photon statistics such that g (2) ()<0.5 is the signature of a single emitter (Fig. 4b) [37]. The four possible NV defect orientations are shown in Fig. 5a and the angles that they make with the <113> direction is indicated. The two orientations <11 1> and <1 1 1> make an equal angle  with the <113> direction. The <1 11> orientation is almost in the (113) plane (=100°) while the <111> direction points out with an angle 29.5°.The NV defect orientation was then experimentally measured by recording ESR spectra while applying a static magnetic field B= 18 G perpendicular to the (113) surface plane (i.e. along the <113> direction). In this limit of weak magnetic fields [2], the ESR frequencies are given by , where D=2.87 GHz is the zero-field splitting of the NV defect, and is the magnetic field projection along the NV defect quantization axis. 
For each NV defect, measurement of the ESR spectrum therefore enables to discriminate between NV defect with <111> orientation (=29.5°), {<11 1> ; <1 1 1>} orientations (=58.5°) and <1 11> orientation (=100°), as illustrated in Fig. 5b. The probability of occurrence of each NV defect orientations was estimated by recording ESR spectra over a set of about 200 single NV defects. The resulting statistical distribution is shown in Fig. 5d. We observe a significant preferential orientation of NV defects along the <111> direction with a probability of about 73 %. It is striking that centers that are almost single NV 1 µm (a) (b) in the (113) plane (centers oriented along <1 11>) were not detected. This is consistent with previously reported results on NV orientation dependence on crystal growth direction, which usually showed that "in-plane" directions are very unfavorable for the creation of NVs [13][14][15][16][17]. On the other hand, directions closest to the growth direction are promoted, such as <111> in this case. Although the precise mechanism leading to preferential alignment of NV centers in (113) layers cannot be fully determined here, models taking into account the atomistic configuration and the movement of steps during growth could be proposed in the future, such as that reported for (111)grown layers [38]. We note that the collected PL intensity is significantly larger for NV defects oriented along the preferential <111> direction (Fig. 5c), which is an additional advantage. This is a direct consequence of the direction of the NV defect optical dipoles, which are lying in a plane perpendicular to the NV axis [16]. As a consequence from the small  angle for the <111> direction, light extraction efficiency from the (113) plane is thus enhanced. It is also relevant to evaluate the spin coherence properties of NV centers in our (113)-grown material and compare to those measured for centers incorporated during growth on a conventional (100) orientation. 
A typical spin-echo measurement recorded from a single NV defect in sample A is depicted in Fig. 6. It shows collapses and revivals of the electron spin coherence induced by the Larmor precession of 13C nuclear spin impurities [32]. The envelope of the spin-echo signal indicates a coherence time T2 of ≈270 µs. This is comparable to a single NV defect hosted in standard high-purity (100) CVD crystals with the same isotopic purity.

Luminescence imaging was carried out with the DiamondView™ equipment, which uses near-band-gap UV light (around 225 nm) to excite luminescence from the samples. Raman spectra were acquired at room temperature in a Jobin-Yvon HR800 system using the 488 nm excitation line of an Ar laser. Cathodoluminescence (CL) analysis was also performed at 110 K with a Jobin-Yvon CLUE system attached to the SEM on samples C, D and F, as well as on a reference electronic-grade (100) CVD plate. The electron gun was operated with an accelerating voltage of 10 kV for a measured current of around 1.1 nA. The light collected by the parabolic mirror in the UV-visible range was analyzed with a TRIAX 550 spectrometer equipped with 600 and 1800 grooves/mm diffraction gratings. The signal was averaged while scanning an area of about 80×100 µm² (magnification of around 1000×).

(113)-grown layers have a smooth surface without any non-epitaxial features or twins, even for thicknesses of several hundreds of micrometers (1st row of Fig. 2). Moreover, PL images recorded with the DiamondView™ equipment (2nd row of Fig. 2) show only limited blue fluorescence except for the highest doping level (sample D), which suggests that residual stress remains low. The addition of low N2 amounts in the gas phase led to an increase in growth rates from 15 µm/h at 0 ppm up to around 50 µm/h at 50 ppm, which is at least a factor of 2 higher than growth rates achieved on a conventional (100) orientation under similar conditions [33]. Moreover, the step bunching frequently observed in the presence of nitrogen for (100) growth [34] did not occur for (113) growth. As an indication, the surface roughness (Ra) measured for the 3 samples with the lowest doping (A-C) over a 50×50 µm² area was in the range 30-60 nm. However, the morphology substantially degraded at the highest doping level, as illustrated in Fig. 2d. The Raman/PL analysis of Fig. 2e shows intense and narrow diamond Raman peaks for samples A to C, with a Full Width at Half Maximum (FWHM) comparable to high-quality (100)-grown layers. Sample D, grown with the highest N2 level, shows a wider Raman peak that reaches 3.2 cm⁻¹ and is shifted towards lower wavenumbers (inset of Fig. 2e). This indicates both the presence of stress and a decrease in crystalline quality. NV0 and NV⁻ peaks are detected at room temperature for samples C and D, clearly indicating nitrogen incorporation into the crystals.

Figure 1. Procedure for preparing cylindrical (113) plates from a (100) HPHT diamond crystal. (a) Two parallel planes are laser-cut with a 25.2° angle, (b) a 2.4 mm-diameter cylinder is laser-cut, (c) the resulting (113) cylinder is polished on both sides.

Figure 2. (a)-(d) CVD diamond layers grown on (113)-oriented HPHT substrates: (a) sample A, no intentional N2 addition, (b) sample B, 0.5 ppm N2, (c) sample C, 10 ppm N2, (d) sample D, 50 ppm N2 in the gas phase. First row: SEM images where the different crystalline faces can be identified. Second row: PL images recorded with the DiamondView™ equipment under UV light excitation; green luminescence comes from the substrate and blue or red luminescence from stress- or nitrogen-related centers in the CVD layer. Third row: laser microscope mapping images of the top (113) surface, illustrating how the morphology is degraded at the highest doping level (d). (e) Raman/PL spectra acquired at room temperature with a 488 nm excitation line for samples A to D (from bottom to top). The spectra were normalized to the Raman peak and vertically shifted for clarity. The inset shows a zoom into the diamond Raman peak region, from which the FWHM of the peak can be extracted.

Figure 3. CL spectra acquired at 110 K for samples grown on different orientations and doping levels. (a) The full-range spectra were normalized to the free exciton (FE) peak at around 235 nm. (b) The UV excitonic-region spectra show contributions from free excitons assisted by Transverse Acoustic (TA) and Optic (TO) phonons. Boron Bound Excitons (BE_TO) are also detected for the (111)-grown layer.

Figure 4. (a) PL raster scan of sample A recorded with a green laser power of 100 µW. (b) Second-order autocorrelation function g(2)(τ) of the PL intensity recorded from the spot indicated with a white dashed circle in (a). A strong anti-correlation effect is observed around zero delay, g(2)(0) ≈ 0.1.

Figure 5. (a) Schematic drawing of the four possible orientations of NV defects with respect to the <113> direction. (b) Orientation-dependent ESR spectra recorded from single NV defects in sample A while applying a static magnetic field B = 18 G perpendicular to the (113) diamond surface plane. (c) Collected PL intensity as a function of the laser power for single NV defects with different orientations. (d) Statistical distribution of NV defect orientations extracted from ESR measurements for a set of about 200 single NV defects. The black dashed lines indicate the expected distribution for randomly oriented NV defects.

Figure 6. Spin-echo signal (π/2-π-π/2 sequence) recorded from a grown-in <111>-oriented single NV defect in a (113)-oriented CVD layer; the fitted coherence time is T2 = 271 ± 14 µs. A magnetic field B = 53 G is applied along the NV defect axis.

Conclusion

In this work, diamond layers were epitaxially grown by PACVD on (113) substrates prepared from thick HPHT and CVD crystals by laser cutting and polishing. Standard high-power conditions resulted in growth rates in the range 15-50 µm/h. Smooth morphologies without step bunching were observed for films several hundreds of µm thick. The crystalline quality of undoped layers was comparable to that of electronic-grade commercial (100) material and substantially superior to that of smooth (111)-grown layers. Tuning of the NV center density could be achieved over a wide range by intentionally adding nitrogen to the gas phase. Analysis of NV centers in (113)-grown layers has revealed a preferential orientation along the <111> direction with a probability of about 73%, the remaining 27% being shared between two other equivalent orientations, <-111> and <1-11>. The most in-plane direction (<11-1>) was not detected. This is consistent with previous reports highlighting that out-of-plane directions are more favorable for NV centers. Finally, NV centers in (113)-grown layers exhibited coherence times as high as those measured in conventional (100) crystals with a similar isotopic purity. Although the preferential orientation is only partial compared to the recently reported results for (111)-grown layers, the ease of growth and processing on (113) substrates opens the way to their use in the fabrication of useful structures for magnetometry or other quantum mechanical applications.

Acknowledgements: The authors would like to thank A. Edmonds and M. Markham from Element 6 (UK) for providing some of the (113) and (111) substrates.

References

V. Acosta, P. Hemmer, Nitrogen-vacancy centers: Physics and applications. MRS Bulletin, 38 (2013) 127-130.
A. Gruber, A. Dräbenstedt, C. Tietz, L. Fleury, J. Wrachtrup, C.v. Borczyskowski, Scanning Confocal Optical Microscopy and Magnetic Resonance on Single Defect Centers. Science, 276 (1997) 2012-2014.
G. Balasubramanian, P. Neumann, D. Twitchen, M. Markham, R. Kolesov, N. Mizuochi, J. Isoya, J. Achard, J. Beck, J. Tissler, V. Jacques, P.R. Hemmer, F. Jelezko, J. Wrachtrup, Ultralong spin coherence time in isotopically engineered diamond. Nat Mater, 8 (2009) 383-387.
J.M. Taylor, P. Cappellaro, L. Childress, L. Jiang, D. Budker, P.R. Hemmer, A. Yacoby, R. Walsworth, M.D. Lukin, High-sensitivity diamond magnetometer with nanoscale resolution. Nat Phys, 4 (2008) 810-816.
T. Wolf, P. Neumann, J. Isoya, J. Wrachtrup, A subpicotesla diamond magnetometer. (2014) arXiv:1411.6553.
N. Bar-Gill, L.M. Pham, C. Belthangady, D. Le Sage, P. Cappellaro, J.R. Maze, M.D. Lukin, A. Yacoby, R. Walsworth, Suppression of spin-bath dynamics for improved coherence of multi-spin-qubit systems. Nat Commun, 3 (2012) 858.
S. Pezzagna, D. Rogalla, D. Wildanger, J. Meijer, Z. Alexander, Creation and nature of optical centres in diamond for single-photon emission - overview and critical remarks. New Journal of Physics, 13 (2011) 035024.
T. Yamamoto, T. Umeda, K. Watanabe, S. Onoda, M.L. Markham, D.J. Twitchen, B. Naydenov, L.P. McGuinness, T. Teraji, S. Koizumi, F. Dolde, H. Fedder, J. Honert, J. Wrachtrup, T. Ohshima, F. Jelezko, J. Isoya, Extending spin coherence times of diamond qubits by high-temperature annealing. Physical Review B, 88 (2013) 075206.
A. Tallaire, A.T. Collins, D. Charles, J. Achard, R. Sussmann, A. Gicquel, M.E. Newton, A.M. Edmonds, R.J. Cruddace, Characterisation of high-quality thick single-crystal diamond grown by CVD with a low nitrogen addition. Diam. & Relat. Mat., 15 (2006) 1700-1707.
A. Tallaire, M. Lesik, V. Jacques, S. Pezzagna, V. Mille, O. Brinza, J. Meijer, B. Abel, J.F. Roch, A. Gicquel, J. Achard, Temperature dependent creation of nitrogen-vacancy centers in single crystal CVD diamond layers. Diam. & Relat. Mat., 51 (2015) 55-60.
A.M. Edmonds, U.F.S. D'Haenens-Johansson, R.J. Cruddace, M.E. Newton, K.M.C. Fu, C. Santori, R.G. Beausoleil, D.J. Twitchen, M.L. Markham, Production of oriented nitrogen-vacancy color centers in synthetic diamond. Physical Review B, 86 (2012) 035201.
L.M. Pham, N. Bar-Gill, D. Le Sage, C. Belthangady, A. Stacey, M. Markham, D.J. Twitchen, M.D. Lukin, R.L. Walsworth, Enhanced metrology using preferential orientation of nitrogen-vacancy centers in diamond. Physical Review B, 86 (2012) 121202.
J. Michl, T. Teraji, S. Zaiser, I. Jakobi, G. Waldherr, F. Dolde, P. Neumann, M.W. Doherty, N.B. Manson, J. Isoya, J. Wrachtrup, Perfect alignment and preferential orientation of nitrogen-vacancy centers during chemical vapor deposition diamond growth on (111) surfaces. Appl. Phys. Lett., 104 (2014) 102407.
M. Lesik, J.P. Tetienne, A. Tallaire, J. Achard, V. Mille, A. Gicquel, J.F. Roch, V. Jacques, Perfect preferential orientation of nitrogen-vacancy defects in a synthetic diamond sample. Appl. Phys. Lett., 104 (2014) 113107.
T. Fukui, Y. Doi, T. Miyazaki, Y. Miyamoto, H. Kato, T. Matsumoto, T. Makino, S. Yamasaki, R. Morimoto, N. Tokuda, M. Hatano, Y. Sakagawa, H. Morishita, T. Tashima, S. Miwa, Y. Suzuki, N. Mizuochi, Perfect selective alignment of nitrogen-vacancy centers in diamond. Applied Physics Express, 7 (2014) 055201.
E. Neu, P. Appel, M. Ganzhorn, J. Miguel-Sanchez, M. Lesik, V. Mille, V. Jacques, A. Tallaire, J. Achard, P. Maletinsky, Photonic nano-structures on (111)-oriented diamond. Applied Physics Letters, 104 (2014) 153108.
M. Kasu, T. Makimoto, W. Ebert, E. Kohn, Formation of stacking faults containing microtwins in (111) chemical-vapor-deposited diamond homoepitaxial layers. Applied Physics Letters, 83 (2003) 3465-3467.
J.E. Butler, I. Oleynik, A mechanism for crystal twinning in the growth of diamond by chemical vapour deposition. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366 (2008) 295-311.
N. Tokuda, M. Ogura, S. Yamasaki, T. Inokuma, Anisotropic lateral growth of homoepitaxial diamond (111) films by plasma-enhanced chemical vapor deposition. Japanese Journal of Applied Physics, 53 (2014) 04EH04.
A. Tallaire, J. Achard, A. Boussadi, O. Brinza, A. Gicquel, I.N. Kupriyanov, Y.N. Palyanov, G. Sakr, J. Barjon, High quality thick CVD diamond films homoepitaxially grown on (111)-oriented substrates. Diam. & Relat. Mat., 41 (2014) 34-40.
J. Hird, J. Field, Diamond polishing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 460 (2004) 3547-3568.
A.F. Khokhryakov, Y.N. Palyanov, I.N. Kupriyanov, Y.M. Borzdov, A.G. Sokol, J. Hartwig, F. Masiello, Crystal growth and perfection of large octahedral synthetic diamonds. Journal of Crystal Growth, 317 (2011) 32-38.
K.A. Snail, Z.P. Lu, R. Weimer, J. Heberlein, E. Pfender, L.M. Hanssen, Confirmation of {113} facets on diamond grown by chemical vapor deposition. Journal of Crystal Growth, 137 (1994) 676-679.
D.M. Bird, L.J. Clarke, R.D. King-Smith, M.C. Payne, I. Stich, A.P. Sutton, First principles calculation of the structure and energy of Si(113). Physical Review Letters, 69 (1992) 3785.
J. Dabrowski, H.J. Müssig, G. Wolff, Atomic Structure of Clean Si(113) Surfaces: Theory and Experiment. Physical Review Letters, 73 (1994) 1660.
F. Silva, J. Achard, X. Bonnin, O. Brinza, A. Michau, A. Secroun, K. De Corte, S. Felton, M. Newton, A. Gicquel, Single crystal CVD diamond growth strategy by the use of a 3D geometrical model: Growth on (113) oriented substrates. Diam. & Relat. Mat., 17 (2008) 1067-1075.
J. Achard, F. Silva, A. Tallaire, X. Bonnin, G. Lombardi, K. Hassouni, A. Gicquel, High quality MPACVD diamond single crystal growth: high microwave power density regime. J. Phys.
F. Silva, J. Achard, X. Bonnin, A. Michau, A. Tallaire, O. Brinza, A. Gicquel, 3D crystal growth model for understanding the role of plasma pre-treatment on CVD diamond crystal shape. physica status solidi (a), 203 (2006) 3049-3055.
A. Dréau, M. Lesik, L. Rondin, P. Spinicelli, O. Arcizet, J.F. Roch, V. Jacques, Avoiding power broadening in optically detected magnetic resonance of single NV defects for enhanced dc magnetic field sensitivity. Physical Review B, 84 (2011) 195204.
L. Childress, M.V. Gurudev Dutt, J.M. Taylor, A.S. Zibrov, F. Jelezko, J. Wrachtrup, P.R. Hemmer, M.D. Lukin, Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond. Science, 314 (2006) 281-285.
W. Muller-Sebert, E. Worner, F. Fuchs, C. Wild, P. Koidl, Nitrogen induced increase of growth rate in chemical vapor deposition of diamond. Applied Physics Letters, 68 (1996) 759-760.
F.K. de Theije, J.J. Schermer, W.J.P. van Enckevort, Effects of nitrogen impurities on the CVD growth of diamond: step bunching in theory and experiment. Diam. & Relat. Mat., 9 (2000) 1439-1449.
O. Brinza, J. Achard, F. Silva, X. Bonnin, P. Barroy, K. De Corte, A. Gicquel, Dependence of CVD diamond growth rate on substrate orientation as a function of process parameters in the high microwave power density regime. physica status solidi (a), 205 (2008) 2114-2120.
J. Barjon, T. Tillocher, N. Habka, O. Brinza, J. Achard, R. Issaoui, F. Silva, C. Mer, P. Bergonzo, Boron acceptor concentration in diamond from excitonic recombination intensities. Physical Review B, 83 (2011).
R. Brouri, A. Beveratos, J.-P. Poizat, P. Grangier, Photon antibunching in the fluorescence of individual color centers in diamond. Optics Letters, 25 (2000) 1294-1296.
T. Miyazaki, Y. Miyamoto, T. Makino, H. Kato, S. Yamasaki, T. Fukui, Y. Doi, N. Tokuda, M. Hatano, N. Mizuochi, Atomistic mechanism of perfect alignment of nitrogen-vacancy centers in diamond. Applied Physics Letters, 105 (2014) 261601.
[]
[ "Time-varying neutrino mass from a supercooled phase transition: current cosmological constraints and impact on the Ωm-σ8 plane" ]
[ "Christiane S Lorenz \nAstrophysics\nUniversity of Oxford\nDWB, Keble RoadOX1 3RHOxfordUK\n", "Lena Funcke \nMax-Planck-Institut für Physik (Werner-Heisenberg-Institut)\nFöhringer Ring 680805MünchenGermany\n\nArnold Sommerfeld Center\nLudwig-Maximilians-Universität\nTheresienstraße 3780333MünchenGermany\n\nPerimeter Institute for Theoretical Physics\n31 Caroline Street NorthN2L 2Y5WaterlooONCanada\n", "Erminia Calabrese \nSchool of Physics and Astronomy\nCardiff University\nThe ParadeCF24 3AACardiffUK\n", "Steen Hannestad \nDepartment of Physics and Astronomy\nAarhus University\nNy Munkegade 120DK8000Aarhus CDenmark\n" ]
[ "Astrophysics\nUniversity of Oxford\nDWB, Keble RoadOX1 3RHOxfordUK", "Max-Planck-Institut für Physik (Werner-Heisenberg-Institut)\nFöhringer Ring 680805MünchenGermany", "Arnold Sommerfeld Center\nLudwig-Maximilians-Universität\nTheresienstraße 3780333MünchenGermany", "Perimeter Institute for Theoretical Physics\n31 Caroline Street NorthN2L 2Y5WaterlooONCanada", "School of Physics and Astronomy\nCardiff University\nThe ParadeCF24 3AACardiffUK", "Department of Physics and Astronomy\nAarhus University\nNy Munkegade 120DK8000Aarhus CDenmark" ]
[]
In this paper we investigate a time-varying neutrino mass model, motivated by the mild tension between cosmic microwave background (CMB) measurements of the matter fluctuations and those obtained from low-redshift data. We modify the minimal case of the model proposed in Ref.[1] that predicts late neutrino mass generation in a post-recombination cosmic phase transition, by assuming that neutrino asymmetries allow for the presence of relic neutrinos in the late-time Universe. We show that, if the transition is supercooled, current cosmological data (including CMB temperature, polarization and lensing, baryon acoustic oscillations, and Type Ia supernovae) prefer the scale factor as of the phase transition to be very large, peaking at as ∼ 1, and therefore supporting a cosmological scenario in which neutrinos are almost massless until very recent times. We find that in this scenario the cosmological bound on the total sum of the neutrino masses today is significantly weakened compared to the standard case of constant-mass neutrinos, with mν < 4.8 eV at 95% confidence, and in agreement with the model predictions. The main reason for this weaker bound is a large correlation arising between the dark energy and neutrino components in the presence of false vacuum energy that converts into the non-zero neutrino masses after the transition. This result provides new targets for the coming KATRIN and PTOLEMY experiments. We also show that the time-varying neutrino mass model considered here does not provide a clear explanation to the existing cosmological Ωm-σ8 discrepancies.
10.1103/physrevd.99.023501
[ "https://arxiv.org/pdf/1811.01991v2.pdf" ]
119,344,201
1811.01991
d4860ede89a99064f4bfff6d8d26e6b162b08493
Time-varying neutrino mass from a supercooled phase transition: current cosmological constraints and impact on the Ωm-σ8 plane

Christiane S. Lorenz (Astrophysics, University of Oxford, DWB, Keble Road, OX1 3RH, Oxford, UK)
Lena Funcke (Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München, Germany; Arnold Sommerfeld Center, Ludwig-Maximilians-Universität, Theresienstraße 37, 80333 München, Germany; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, N2L 2Y5, Waterloo, ON, Canada)
Erminia Calabrese (School of Physics and Astronomy, Cardiff University, The Parade, CF24 3AA, Cardiff, UK)
Steen Hannestad (Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark)

(Dated: Received December 17, 2018; published -00, 0000)

In this paper we investigate a time-varying neutrino mass model, motivated by the mild tension between cosmic microwave background (CMB) measurements of the matter fluctuations and those obtained from low-redshift data. We modify the minimal case of the model proposed in Ref. [1] that predicts late neutrino mass generation in a post-recombination cosmic phase transition, by assuming that neutrino asymmetries allow for the presence of relic neutrinos in the late-time Universe. We show that, if the transition is supercooled, current cosmological data (including CMB temperature, polarization and lensing, baryon acoustic oscillations, and Type Ia supernovae) prefer the scale factor as of the phase transition to be very large, peaking at as ∼ 1, and therefore supporting a cosmological scenario in which neutrinos are almost massless until very recent times.
We find that in this scenario the cosmological bound on the total sum of the neutrino masses today is significantly weakened compared to the standard case of constant-mass neutrinos, with Σmν < 4.8 eV at 95% confidence, and in agreement with the model predictions. The main reason for this weaker bound is a large correlation arising between the dark energy and neutrino components in the presence of false vacuum energy that converts into the non-zero neutrino masses after the transition. This result provides new targets for the coming KATRIN and PTOLEMY experiments. We also show that the time-varying neutrino mass model considered here does not provide a clear explanation to the existing cosmological Ωm-σ8 discrepancies.

I. INTRODUCTION

The absolute value and the origin of the neutrino masses are two of the main open questions in particle physics and cosmology. The discovery of neutrino oscillations [2][3][4] implies that at least two of the three neutrino mass eigenstates must have a non-vanishing mass, and gives a lower limit of 59 meV (normal ordering) and 109 meV (inverted ordering) for the total sum of the neutrino masses, Σmν [5]. For the high-end tail of the mass distribution, the most stringent upper limit is set by cosmological data. Observations of the cosmic microwave background (CMB) from the Planck satellite, combined with baryon acoustic oscillations (BAO), give Σmν < 0.12 eV at 95% confidence [6]. The projected sensitivity of future CMB and BAO data is ∼ 0.03 eV [7,8]. This is an indirect measurement tracking the effect of the neutrino masses on the matter distribution in the Universe. Upper limits on the absolute electron neutrino mass have also been obtained from direct β-decay searches (see e.g. Refs. [9][10][11]), with mνe ≤ 2.2 eV at 95% confidence. The KATRIN β-decay experiment will improve these limits by measuring mνe down to 0.2 eV at 90% confidence [12]; this is about one order of magnitude higher than future cosmological sensitivity. Flavour eigenstates of the neutrino, such as the electron neutrino mentioned here, can be described as linear combinations of the neutrino mass eigenstates and are connected to those through the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix [5,13,14].
Cosmology and laboratory searches are sensitive to different linear combinations of the neutrino mass eigenstates and therefore confine the neutrino parameter space in a complementary way. The discovery of neutrino oscillations hints at fundamental new physics beyond the Standard Model (SM) of particle physics, since the SM particle content does not allow for any renormalizable neutrino mass term [15]. In fact, Dirac neutrino masses cannot be accommodated in the SM due to the absence of right-handed neutrino states, and Majorana masses for the left-handed neutrinos are not allowed as the SM Higgs sector only contains an SU(2)L doublet and no triplets. Therefore, it is widely believed that neutrino masses require the postulation of new elementary particles (see e.g. Refs. [5,16,17] for reviews). The most popular directions of model building beyond the SM usually focus on new physics at short distances corresponding to high energies (E ≳ TeV), thereby strongly affecting early-Universe cosmology. As an alternative direction, Ref. [1] proposed a low-energy solution to the neutrino mass problem at a new infrared gravitational scale (ΛG ∼ eV), which is numerically coincident with the scale of dark energy. As reviewed below, this model alters late-Universe cosmology after photon decoupling. In the Standard Model of cosmology, and/or in the presence of these relic neutrinos with time-varying mass, the neutrino mass affects the growth of cosmic structures in several ways (see e.g. Ref. [18] for a review and Ref. [19] for a summary of the effects relevant here). In particular, non-zero masses suppress the amplitude of matter fluctuations in the late-time Universe compared to those present at early times, i.e. at the time of CMB decoupling.
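The suppression of matter fluctuations mentioned here has a well-known linear-theory magnitude for standard constant-mass neutrinos, often summarized as ΔP/P ≈ -8 fν on scales below the neutrino free-streaming length. A rough sketch of that rule of thumb (this is a textbook estimate with illustrative input values, not a calculation from this paper; the conversion coefficient 93.14 eV is one common convention):

```python
# Rule-of-thumb impact of constant-mass neutrinos on small-scale clustering:
#   Delta P / P ~ -8 * f_nu, with f_nu = Omega_nu / Omega_m
#   and Omega_nu h^2 ~ sum(m_nu) / 93.14 eV.

def f_nu(sum_mnu_eV, omega_m, h):
    """Fraction of the matter density carried by massive neutrinos today."""
    omega_nu_h2 = sum_mnu_eV / 93.14
    return omega_nu_h2 / (omega_m * h**2)

# Illustrative, Planck-like cosmology chosen for this example only:
suppression = -8 * f_nu(0.12, 0.315, 0.674)
print(f"Delta P / P ~ {suppression:.1%}")
```

For a mass sum at the current cosmological bound (0.12 eV), the estimate gives a suppression of order 7%, which illustrates why clustering data constrain Σmν so tightly in the constant-mass case.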
Therefore, the total sum of the neutrino masses is strongly correlated with the inferred values of the matter density, Ω_m, and matter clustering, for example measured by the amplitude of matter fluctuations on 8 h⁻¹ Mpc scales, σ_8. These quantities can be constrained with the CMB, the CMB lensing signal (that is, the deflection of the CMB photon paths due to gravitational potential wells along their trajectories), and different probes of the matter distribution in the local Universe, e.g. the galaxy weak lensing signal, galaxy clustering, and the abundance of galaxy clusters. Over the past few years, measurements of Ω_m-σ_8 from early- and late-time surveys have shown some mild tensions. In particular, taking the parameter combination S_8 ≡ σ_8 √(Ω_m/0.3), the tension exists when comparing Planck CMB constraints [20] with galaxy weak lensing data from the Canada France Hawaii Lensing Survey (CFHTLenS) at the 1.7σ level [21] (see also Ref. [22]), from the Kilo Degree Survey (KiDS) at the 2.2σ level [23,24] (2.6σ in combination with 2dFLenS [25]), and from the first-year release of the Dark Energy Survey (DES) at the 1.7σ level [26]. Similar levels of inconsistency are found between Ω_m-σ_8 inferred from the abundance of galaxy clusters detected with the Sunyaev-Zel'dovich (SZ) effect and Planck CMB values [27,28]. This has generated a lot of interest in the cosmological community, with efforts split between the investigation of residual systematics in the data or analysis assumptions in KiDS, DES, and Planck (e.g. [29-34]), and the possibility of having seen signatures of new physics beyond the standard ΛCDM cosmological model (e.g. [24,35-43]). For example, Refs. [24,44,45] explored whether time-varying dark energy or neutrino masses might solve the tensions. Although the significance of the discrepancy changes slightly in more extended models, there is, at present, no clear preference for a beyond-ΛCDM cosmology.
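The tension statistic quoted above is the combination S_8 ≡ σ_8 √(Ω_m/0.3). A minimal worked example of this definition follows; the parameter values used below are illustrative round numbers, not the published best fits of any survey:

```python
import math

def s8(sigma8: float, omega_m: float) -> float:
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# Illustrative values only: a Planck-like point and a
# lensing-like point with lower clustering amplitude.
print(s8(0.81, 0.32))  # ≈ 0.837
print(s8(0.74, 0.25))  # ≈ 0.676
```

Because the two probes prefer different corners of the Ω_m-σ_8 plane, their S_8 values separate, which is the tension discussed in the text.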
However, a general trend of these results is that low-redshift data prefer a lower amplitude of matter fluctuations compared to early-time estimates, which, when allowing neutrino masses to vary, translates into higher preferred values of the neutrino mass compared to the constraints coming from the CMB alone. Motivated by this, and taking at face value the analysis assumptions and the likelihood packages of each experiment (i.e. assuming this is not driven by data/analysis systematics), we explore here a time-varying neutrino mass model, where the neutrino mass increases with time. Time-varying neutrino mass models were first introduced by Ref. [46] as a way to explain the similar energy scales of massive neutrinos and dark energy, and suggested that mass-varying neutrinos could be the cause of cosmic acceleration. However, Ref. [47] showed that these models would not be stable, and not distinguishable from a cosmological constant. Time-varying neutrino mass models and their cosmological implications were also studied in Refs. [48-53]. A new time-varying neutrino mass model was recently proposed in Ref. [1], where neutrino masses are generated through a gravitational θ-term in a late cosmic phase transition. This transition is expected to be of first order (see analogous discussions in Refs. [54,55]) and thus can either take place almost instantaneously at a temperature T ∼ m_ν or can be substantially supercooled and thus become apparent only at lower temperatures T ≪ m_ν. In both cases, the minimal case of the gravitational mass model predicts almost complete relic neutrino annihilation after the transition, so that all cosmological mass constraints are entirely evaded. However, a substantial relic neutrino density can survive in the non-minimal case of neutral lepton asymmetries, which was not considered in Ref. [1]. In this case, an impact on neutrino mass constraints from cosmological data would be expected.
For example, cosmological constraints on a simplified version of this non-minimal case of the model were presented in Ref. [56], finding that in some cases the cosmological neutrino mass bounds are significantly weakened compared to the standard constant-mass case, with Σm_ν ≲ 0.6-0.8 eV. In this paper, we extend the analysis of Ref. [56] in three ways. 1) We include false vacuum energy from the supercooled phase transition, which is especially important when generating relatively large neutrino masses at late times corresponding to low temperatures (see Section II); this was neglected in Ref. [56]. For simplicity, we neglect the neutrino self-interactions and (partial) annihilation, as predicted by the model in Ref. [1], which will be treated in future studies. 2) We add Planck polarization data. 3) We examine whether time-varying neutrino masses can ease the tensions between cosmological parameters inferred from high- and low-redshift data, looking at the constraints from different probes in the Ω_m-σ_8 plane. The analysis assumptions are reported in Section III and the results in Section IV B. We summarize our findings and discuss the implications of our analysis both for the KATRIN experiment and for relic neutrino detection experiments, such as PTOLEMY [57], in Section V.

II. TIME-VARYING NEUTRINO MASS MODEL

A. Theoretical Foundations

The gravitational neutrino mass model in Ref. [1] predicts the relic neutrinos to be massless until a late cosmic phase transition after photon decoupling. In the transition, a neutrino vacuum condensate forms and generates small effective neutrino masses m_ν ∼ Λ_G ∼ |⟨ν̄ν⟩| ≡ v, where Λ_G is the neutrino flavor symmetry breaking scale and v is the scale of the vacuum condensate¹. The massive relic neutrinos then rapidly decay into the lightest neutrino mass eigenstate, become strongly coupled, and (partially) bind up or annihilate into almost massless Goldstone bosons through the process ν + ν̄ → φ + φ.
Naively, one might expect this modification of the relic neutrino sector to be ruled out by cosmological observations; for example, the similar idea of a neutrinoless Universe [59] was ruled out by neutrino free-streaming in the early Universe [60,61], an induced phase shift in the CMB peaks [62], and precision measurements of the effective number of neutrino species from the CMB (more recently from Ref. [20]). This is not the case because, crucially, the temperature T_{Λ_G} of the neutrino phase transition is a free parameter of the model in Ref. [1], fixed to T_today ≪ T_{Λ_G} ≪ T_CMB by the above-mentioned cosmological constraints². Thus, Ref. [1] predicts neutrino self-interactions and (partial) annihilation only in the late Universe after photon decoupling, making the model predictions cosmologically viable. Additionally, an important point to stress here is that, although an almost complete relic neutrino annihilation is a key prediction of the minimal case in Ref. [1], it can be evaded in the presence of neutrino asymmetries. Big Bang Nucleosynthesis (BBN) and CMB data weakly constrain the muon- and tau-neutrino asymmetries [64], while BBN data strongly constrain the electron-neutrino asymmetry [65]. If standard neutrino oscillations in the early Universe mix the neutrino flavors, the strong BBN bounds would apply to all neutrino flavors [66]. However, the model in Ref. [1] predicts massless relic neutrinos in the early Universe, and all flavor-violating couplings only turn on abruptly when approaching the late-time phase transition (similar to, e.g., axion couplings [67]). We derive that the latest Planck CMB limit of ΔN_eff < 0.33 at 95% confidence provides a weak bound on the ν_μ,τ asymmetries,

    (n_{ν_μ,τ} − n_{ν̄_μ,τ}) / n_{ν_μ,τ} ≲ 0.16 × 11/3 ∼ 0.58 ,        (1)

and therefore that up to ∼58% of the ν_μ and ν_τ flavors could have survived the annihilation after the late phase transition. This corresponds to ∼39% of all relic neutrinos.
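The arithmetic quoted in Eq. (1) and the surviving fractions can be checked directly. A minimal sketch using only the numbers stated in the text (the 0.16 factor obtained from the ΔN_eff limit and the 11/3 factor relating the asymmetry to the number density; the variable names are ours):

```python
# Per-flavor asymmetry bound of Eq. (1): 0.16 * 11/3.
delta_factor = 0.16
asym_bound = delta_factor * 11 / 3   # ≈ 0.587, quoted as ~0.58 in Eq. (1)

# nu_mu and nu_tau are 2 of the 3 flavors, so the surviving
# fraction of all relic neutrinos is 2/3 of the per-flavor bound.
surviving_all = asym_bound * 2 / 3   # ≈ 0.39, i.e. ~39% of all relic neutrinos

print(asym_bound, surviving_all)
```

This reproduces the ∼58% per-flavor and ∼39% overall survival fractions quoted above.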
Such an asymmetry could only survive in the Dirac neutrino case [68], which implies that the Majorana case of Ref. [1] would always yield a neutrinoless Universe. Consequently, in this work we consider a modified version of the minimal case in Ref. [1], exclusively studying late neutrino mass generation and neglecting the self-interactions and (partial) annihilation, which we leave for future studies. Ref. [1] assumed the phase transition to take place instantaneously, i.e. at a temperature T_{Λ_G} ∼ Λ_G ∼ v ∼ m_ν. However, since the phase transition is expected to be of first order (see analogous discussions in Refs. [54,55]) and the possible presence of neutrino asymmetries would further strengthen the first-order transition (see analogous discussions in Refs. [69,70]), the transition can also be substantially delayed and thus become apparent only at lower temperatures. Such a supercooling mechanism is well known from inflationary and other cosmological scenarios (see e.g. Refs. [71-73]) and can drastically increase the energy density in an expanding Universe. In the model in Ref. [1], this mechanism could give rise to relatively large neutrino masses even at a low apparent transition temperature, T_{Λ_G} ≪ Λ_G ∼ v ∼ m_ν. The relevant factors characterizing the possible delay of the phase transition are the mentioned neutrino asymmetries and unknown order-one coefficients in the effective potential V(Φ, T) of the neutrino-bilinear order parameter Φ ≡ ν̄ν. In the case of a strongly supercooled transition, the false metastable vacuum can be stabilized at Φ = 0 over long cosmological times until tunnelling becomes significant at lower temperatures, which enables the false vacuum decay to the true minimum at Φ ≠ 0 [74]. This vacuum decay releases the positive potential energy density associated with the false vacuum and thus increases the energy density in the late relic neutrino sector relative to the other diluting energy densities in the Universe, e.g. that of the photons.
Consequently, the model in Ref. [1] implies that the energy density in today's neutrino sector can be significantly larger than expected from standard cosmology. Since a delayed neutrino phase transition would have a greater impact on cosmological observables than a non-delayed transition, the numerical analysis in this paper only focuses on the former case. In particular, Ref. [56] found that in the case of neutrino mass generation at T_{Λ_G} ∼ Λ_G ∼ v ∼ m_ν, the cosmological limits are very similar to the constant-mass neutrino case if relic neutrino annihilation [1] is neglected. The neutrino masses will only slowly rise in this case, while the local minimum of the free energy will slowly decrease, with less impact on cosmological observations. In the case of a supercooled phase transition, the neutrino masses and the transition temperature are two independent parameters. We note that generating relatively large masses at a low temperature seems to violate energy conservation at first sight. Therefore, differently from what was done in Ref. [56], we here take into account the false vacuum energy from the supercooled phase transition, which converts into neutrino masses at the low apparent transition temperature. Due to the unknown order-one coefficients in the effective potential mentioned above, the exact amount of false vacuum energy is an unpredictable free parameter of the theory. For simplicity, we assume that the false vacuum energy entirely converts into neutrino masses, and we neglect the additional conversion into excitations of the Φ field, i.e. dark radiation. We choose the same step-function parametrization for the late neutrino mass generation as Ref. [56]:

    m_ν(a) = 0                              if a ≤ a_s
    m_ν(a) = m_ν tanh[B_s (a/a_s − 1)]      if a > a_s        (2)

where m_ν is today's individual neutrino rest mass, a is the scale factor, a_s is the scale factor at the apparent phase transition time when the neutrino gains its mass, and B_s is a parameter that determines the speed of the mass generation.
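Equation (2) is simple to implement numerically. The sketch below is illustrative (the function name and the vectorized form are ours, not taken from the modified Boltzmann code used in the analysis):

```python
import numpy as np

def neutrino_mass(a, m_nu, a_s, B_s=1e10):
    """Time-varying neutrino mass of Eq. (2): zero before the
    transition at scale factor a_s, then rising to m_nu on a
    timescale controlled by B_s."""
    a = np.asarray(a, dtype=float)
    return np.where(a > a_s, m_nu * np.tanh(B_s * (a / a_s - 1.0)), 0.0)

# With B_s = 1e10 the transition is effectively a step function:
a_s = 1.0 / (1.0 + 10.0)  # transition at z_s = 10
m = neutrino_mass([0.05, 0.2, 1.0], m_nu=0.2, a_s=a_s)
# m ≈ [0.0, 0.2, 0.2] eV: massless before a_s, full mass after
```

For the very large B_s used in the paper, tanh saturates almost immediately after a_s, so the mass turns on essentially instantaneously on cosmological timescales.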
We can fix the parameter B_s to 10^10, since the timescale of neutrino mass generation is of order m_ν⁻¹, which corresponds to approximately femto/picoseconds. We note that here we assume a degenerate neutrino mass spectrum, i.e. m_ν_i ≡ m_ν. Degenerate neutrino masses are still allowed in the mass model of Ref. [1] because the standard cosmological mass limits are evaded, the bounds from β-decay experiments are relatively weak, and constraints from neutrinoless double-β experiments only apply to Majorana neutrinos. Moreover, current cosmological data constrain only the sum of the neutrino masses and cannot yet resolve whether the neutrino mass ordering is normal or inverted [75-81]. Therefore, we assume degenerate masses that are generated at almost equal times for each mass eigenstate, i.e. within timescales much smaller than the Hubble timescale. Since the relic neutrinos rapidly decay into the lightest neutrino mass eigenstate, ν_l, after the transition, the cosmologically constrained sum of the relic neutrino masses reduces to Σm_ν = 3 × m_l. To model the time evolution of the false vacuum energy density, we can use a similar parametrization as for the neutrino mass above:

    ρ_0(a) = V_0 [1 − tanh(B_s (a/a_s − 1))]    if a > a_s
    ρ_0(a) = V_0                                 if a ≤ a_s        (3)

where V_0 = (√(p_ν² + m_ν²) − p_ν) n_ν is the difference in energy density of massive and massless neutrinos at a = a_s, and n_ν is the neutrino number density at that time. We assume here that the equation-of-state parameter of the false vacuum energy is constant, w = −1, and that only the amplitude of the energy density rapidly changes within timescales of femto/picoseconds, as discussed above. Therefore, the false vacuum energy effectively behaves as an additional vacuum energy contribution on top of dark energy, until the vacuum decays into the true minimum.
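The false vacuum energy of Eq. (3) and its amplitude V_0 can be sketched in the same style. This is a schematic illustration, not the actual Boltzmann-code modification; the tanh factor is written so that ρ_0 decays to zero exactly as the neutrino mass of Eq. (2) switches on, keeping the energy exchange between the two components manifest:

```python
import numpy as np

def false_vacuum_energy(a, V0, a_s, B_s=1e10):
    """Eq. (3): constant V0 before the transition at a_s, decaying
    to zero as the neutrino mass of Eq. (2) turns on."""
    a = np.asarray(a, dtype=float)
    return np.where(a > a_s,
                    V0 * (1.0 - np.tanh(B_s * (a / a_s - 1.0))),
                    V0)

def V0_from_mass(p_nu, m_nu, n_nu):
    """V0 = (sqrt(p^2 + m^2) - p) * n: energy-density difference
    between massive and massless neutrinos at a = a_s."""
    return (np.sqrt(p_nu**2 + m_nu**2) - p_nu) * n_nu
```

For a very late transition the neutrinos are strongly non-relativistic (m_ν ≫ p_ν), so V_0 ≈ m_ν n_ν, i.e. essentially the full rest-mass energy must be stored in the false vacuum beforehand.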
Crucially, this scenario does not enhance the dark energy perturbations as in other mass-varying neutrino models [48], and hence is not affected by model instabilities. We note here that we assume a standard cosmological constant for dark energy in our study and do not attempt to link it to the false vacuum energy. We will briefly comment on this in Sec. V. The energy densities of massive neutrinos and the false vacuum energy component are shown in Fig. 1 for Σm_ν = 0.2 eV and a late phase transition at a redshift of z_s = 10 (or equivalently a_s ≈ 0.091; solid lines). In this case, the false vacuum energy dominates over the dark energy density until the phase transition and is then transferred into the energy required for the generation of the neutrino masses. With dashed lines we also show the case of a very late phase transition happening at z_s = 0.5; in this case the false vacuum energy is more subtle and always subdominant compared to dark energy.

B. Cosmological Observables

The impact of this model on cosmological observables is shown in Figs. 2 and 3: features in the CMB temperature (C_ℓ^TT) and polarization (C_ℓ^EE) power spectra, the CMB lensing convergence power spectrum (C_ℓ^φφ), and the matter power spectrum (P(k)) are shown for Σm_ν = 0.2 eV and three different values of z_s (Fig. 2), and for a late phase transition and high neutrino masses (Fig. 3), with the standard massive neutrino case as the reference model in both cases. When the phase transition happens late (small values of z_s and large values of a_s), the model becomes more similar to massless neutrinos. On the contrary, for large values of z_s, the model is very similar to the standard constant-mass neutrino case. Therefore, for z_s = 1000 (a_s = 0.001), the effect of the time-varying neutrino mass on all four power spectra is only marginal compared to the reference case. We start our explanation with the matter power spectrum in the top left corner of Fig. 2.
For standard massive neutrinos, the matter power spectrum is suppressed on small scales; in the case of Σm_ν = 0.2 eV this corresponds to k ≥ k_nr = 0.0027. This suppression is more or less pronounced in our case depending on the time of the phase transition. As mentioned above, for a small value of z_s the neutrinos are massless for most of their evolution, and as a result the matter power spectrum is less suppressed and more similar to the power spectrum of massless neutrinos. As described in Ref. [56], the turnover scale of the matter power spectrum is also affected, depending on the exact time at which the neutrinos gain their mass. As the time of the phase transition moves towards smaller redshifts, the enhancement of the matter power spectrum at large values of k translates into an overall enhancement of the lensing convergence power spectrum (top right panel of Fig. 2). The CMB temperature and polarization anisotropy power spectra (bottom left and right panels, respectively) are mostly affected at small and large multipoles, encoding the impact of extra vacuum energy and neutrino free-streaming in the case of a late phase transition. Anticipating the larger values of Σm_ν allowed by a supercooled phase transition, in Fig. 3 we compare cosmological observables in the case of mass-varying neutrinos with Σm_ν = 2 eV with respect to the standard massive neutrino case with Σm_ν = 0.2 eV. We notice that even in the case of these very different mass scenarios the impact on the observables is subtle. For this comparison, we have not renormalized the values of the different matter density components (i.e. we kept the amount of cold dark matter and baryons fixed) to reproduce the process where neutrinos exchange energy only with the false vacuum energy component (i.e. moving along the dark energy degeneracy line seen in Sec. IV A).
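The quoted value of k_nr can be checked against the standard free-streaming estimate k_nr ≈ 0.018 √Ω_m √(m/eV) h Mpc⁻¹ for the individual neutrino mass (the Lesgourgues-Pastor approximation). This is a rough sketch only; Ω_m ≈ 0.31 is assumed here since its exact value is not stated in this passage:

```python
import math

def k_nr(m_ev, omega_m):
    """Approximate wavenumber (in h/Mpc) at which a neutrino of
    individual mass m_ev becomes non-relativistic, using the
    standard estimate k_nr ~ 0.018 * sqrt(Omega_m * m/eV)."""
    return 0.018 * math.sqrt(omega_m) * math.sqrt(m_ev)

# Sum of 0.2 eV shared by three degenerate states:
print(k_nr(0.2 / 3, 0.31))  # ≈ 0.0026, close to the quoted 0.0027
```

Modes with k above this value entered the horizon while the neutrinos were still relativistic and are therefore suppressed by neutrino free-streaming.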
A higher impact is now seen on P(k), showing a suppression on all scales out to the horizon at the phase transition scale, caused by the substantial amount of false vacuum energy before the transition. The features in the CMB spectra are also enhanced due to the different energy budget of the Universe. We note that the differences between the models are only of the order of a few percent. We anticipate that this might be hard to uncover with current data but is within the reach of future CMB and galaxy surveys. The CMB SO [8] and Stage-4 [7] projects will have the sensitivity to distinguish the small-scale CMB features, while Euclid [82] and LSST [83] will provide better measurements of P(k).

III. ANALYSIS METHODOLOGY

To constrain the parameters of our model, we use modified versions of the publicly available Boltzmann solver CAMB [84] and the Monte Carlo Markov chain package CosmoMC [85]. We compare this model where neutrino masses are generated through a supercooled phase transition, named hereafter Supercool-ν, to the standard ΛCDM case with fixed neutrino masses, Σm_ν = 0.06 eV, and to the case in which the total mass is varied but constant in time (i.e. the standard massive neutrino case), ΛCDM+Σm_ν. When reporting ΛCDM results, we vary the standard six cosmological parameters (the baryon and cold dark matter densities, Ω_b and Ω_c, the scalar spectral index, n_s, the amplitude of primordial fluctuations, A_s, the Hubble constant, H_0, and the optical depth to reionization, τ) and fix the total sum of neutrino masses to Σm_ν = 0.06 eV, corresponding approximately to the lower limit obtained from neutrino oscillation experiments [5]. In the extended analyses, for i) ΛCDM+Σm_ν we additionally vary Σm_ν as a constant parameter; and for ii) the Supercool-ν model we additionally consider the full time evolution of the neutrino mass and vary the scale factor of the phase transition, a_s. The false vacuum energy amplitude is set by the value of Σm_ν and a_s via Eq.
(3). Unless otherwise stated (for example in Section IV B), we assume standard flat priors on the ΛCDM basic parameters (following Ref. [64]). We vary Σm_ν between 0.06 and 6.6 eV to incorporate current limits from laboratory searches (i.e. above the minimum threshold set by oscillation experiments, and converting m_νe < 2.2 eV into Σm_ν < 6.6 eV). We will extend this range in Sec. IV B to ease the comparison with other published results. The logarithm of the time of the phase transition, log(a_s), is varied between −5 and 0. This allows the exploration of neutrino mass generation across a large range of cosmic time. We fix the speed of the transition with B_s = 10^10, corresponding to an almost instantaneous phase transition. This parameter was very unconstrained in the analysis of Ref. [56], so we do not expect its exact value to affect our results. We separate our analysis into two parts: in Sec. IV A we report state-of-the-art constraints for the parameters of the time-varying neutrino mass model considered here; in Sec. IV B we study the constraints in the Ω_m-σ_8 plane from different cosmological probes.

IV. RESULTS

A. Cosmological Mass Limits

To obtain constraints from current data, we combine Planck CMB temperature, polarization, and lensing spectra from the 2015 release [86,87]³ with BAO distance ratios from BOSS DR12 (CMASS and LOWZ) [88], SDSS MGS [89] and 6DF [90], and the Type Ia supernovae redshift-magnitude diagram from the Joint Light-curve Analysis (JLA) compilation [91]. This is the baseline data combination of the Planck analyses that we follow here. The results are shown in Fig. 4 and reported in Table I. We find that much larger values of Σm_ν are allowed in the case of a supercooled phase transition compared to the case of standard constant-mass neutrinos, and that the data prefer a large value of the phase transition scale factor, i.e.
a late relic neutrino mass generation (peaking at today's scale factor):

    Σm_ν ≤ 4.8 eV ,    log(a_s) ≥ −3.6    at 95% confidence.        (4)

This is a significantly weakened limit for the neutrino mass, to be compared to Σm_ν ≤ 0.2 eV for standard massive neutrinos with the same data combination; we note, though, that the 68% limit, Σm_ν ≤ 1.6 eV, is much tighter due to the non-Gaussian distribution recovered in this fit. This is expected in this model, and the reason for it is illustrated in Fig. 1: the inclusion of the false vacuum energy generates a condition where the amplitude of the dark energy density and the combination of the neutrino and false vacuum energy components are very similar over most of the cosmic history. Especially in the case when the transition happens very late (z_s ≤ 10) and the sum of neutrino masses is large, the neutrino energy density will be of the same order of magnitude as the dark energy density until almost today. Therefore, a strong anti-correlation between Σm_ν and Ω_Λ arises (at the level of 98%). This can also be seen in Fig. 4. A similar degeneracy has also been observed for early dark energy (EDE) models [19,92]; however, in those models the degeneracy is caused by the time-varying evolution of the dark energy component. We also note that the correction we added to keep energy conserved in the model, i.e. the inclusion of the false vacuum energy, is the main reason why our constraints are broader than those reported in Ref. [56]. The preference for a late transition captures the trend that has emerged when fitting for neutrino masses with early- and late-time cosmological probes: we confirm that the data require lighter neutrinos at CMB decoupling, and more significant masses can be generated only in the late Universe. The goodness of the fit obtained with this time-varying neutrino mass model is only marginally better than that obtained in the standard massive neutrino case, with a difference in best-fit likelihoods of only 1.57 (Δχ² = 3.14).
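For one extra parameter, the quoted Δχ² converts into a significance via the survival function of a χ² distribution with one degree of freedom, which reduces to p = erfc(√(Δχ²/2)). A minimal check of this conversion:

```python
import math

def p_value_1dof(delta_chi2):
    """Survival probability of a chi-squared variable with one
    degree of freedom: p = erfc(sqrt(delta_chi2 / 2))."""
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

print(round(p_value_1dof(3.14), 2))  # 0.08, the quoted p-value
```

A p-value of 0.08 corresponds to a mild, not statistically significant, preference for the extended model.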
Therefore, the Supercool-ν model is slightly but not significantly favoured, yielding a p-value of 0.08 for one additional degree of freedom compared to ΛCDM+Σm_ν.

B. The Ω_m-σ_8 plane

We now compare cosmological constraints in the Ω_m-σ_8 plane. We use Planck CMB temperature and polarization data, Planck lensing and Planck SZ cluster counts data [28], and galaxy weak lensing data from KiDS [23]⁴. We take each dataset individually, except for the SZ case where, following the Planck analysis, we further add BBN constraints on Ω_b h² to break parameter degeneracies. This choice is made to explore the impact of time-varying neutrino masses on the existing tensions, and whether the inclusion of massive neutrinos generated late in the Universe might ease the discrepancies. Planck CMB lensing is also included in our analysis as a dataset on its own, not because of tension with other data but rather to look at the effect of this model at intermediate-to-low redshifts. To easily compare with the KiDS weak lensing, Planck lensing, and Planck SZ cluster results, we now use the same flat priors for the unconstrained parameters assumed by the individual experiments. For KiDS we use the priors assumed in Ref. [24]; for Planck CMB lensing we use the priors for n_s and Ω_b as stated in Ref. [87]; and for the Planck SZ cluster counts we use the priors for n_s and Ω_b reported in Ref. [28]. For these latter data we further assume a Gaussian prior on the bias parameter, picking the CCCP baseline case [93] used as the reference cluster mass calibration in the Planck analyses. For the galaxy weak lensing, Planck lensing and SZ cases, τ is not varied. We note that the neutrino mass parameter is now varied between 0.06 and 10 eV, consistently with other published analyses and therefore allowing a simpler comparison. The results for the three models compared here are shown in Fig. 5 (from top to bottom) and discussed below.

ΛCDM: The top panel of Fig. 5 shows the constraints on Ω_m and σ_8 in the case of Σm_ν fixed to 0.06 eV for the four different datasets considered here. These results reproduce the published KiDS [24], Planck CMB and CMB lensing [64,87], and Planck SZ+BBN [28]⁵ results, and are shown here only for reference⁶.

⁴ We work with KiDS weak lensing data because this is the most discrepant dataset and because it was the only publicly available likelihood at the time this work started.
⁵ We have cross-checked our Planck SZ+BBN results by additionally including BAO and comparing with the Planck SZ+BBN+BAO constraints in Ref. [28].

ΛCDM+Σm_ν: When varying Σm_ν as a parameter, correlations in the matter components generate a broadening of the constraints. In particular, the middle panel of Fig. 5 shows the impact of the standard massive-neutrino-driven suppression of density fluctuations below their free-streaming length. Larger allowed values of the neutrino mass enlarge the Planck CMB primary and lensing constraints towards lower values of σ_8 and higher values of Ω_m. Similar effects are seen for the KiDS and SZ analyses. This has been extensively demonstrated in the literature (e.g. [24,44,45,94]).

Supercool-ν: The bottom panel in Fig. 5 shows our results for the supercooled phase transition. The largest impact compared to the other two cases is seen on the Planck CMB contours: they now extend to much lower values of σ_8 and higher values of Ω_m. CMB lensing contours are slightly affected, while the KiDS and SZ cluster results are almost unchanged. This is explained by the data preferring a late-time mass generation, so that the Supercool-ν case only differs significantly from the ΛCDM or ΛCDM+Σm_ν cases at CMB and CMB lensing epochs. The contours however broaden along the degeneracy line, bringing the data into slightly better agreement but with no substantial model preference (when considering the broadening due to the extra parameters present in the model).
We also note that the derived value of the Hubble constant in this model is not significantly different from the one obtained in the ΛCDM+Σm_ν case.

V. SUMMARY AND CONCLUDING REMARKS

In this paper, we have presented state-of-the-art constraints from cosmology on a time-varying neutrino mass model motivated by Ref. [1]. We assume that relic neutrino masses are generated from a form of false vacuum energy in a supercooled neutrino phase transition and neglect neutrino annihilation in the late Universe. This is a modified version of the minimal model in Ref. [1], which predicts almost complete neutrino annihilation after the supercooled phase transition and thus implies that all cosmological mass constraints are entirely evaded. We find that current data prefer a phase transition very late in time (peaking at today) and that the constraint on the total mass of neutrinos is significantly weakened compared to the standard massive neutrino case, with Σm_ν ≤ 4.8 eV at 95% confidence (≤ 1.6 eV at 68% confidence). This larger bound is mostly due to large correlations with the dark energy component, affected by the presence of the false vacuum energy term. To summarize, we find that the standard constant-mass neutrino case with low masses and the Supercool-ν model studied here with high masses are both successful with current data. The recently proposed PTOLEMY experiment [57] aims to achieve the sensitivity required to detect relic neutrinos. However, such a detection would only be feasible in the case of degenerate or quasi-degenerate neutrino masses, due to the proposed energy resolution of ∼0.15 eV per neutrino [95]. While such large masses are ruled out by conventional cosmological neutrino mass bounds, the results found here still allow for a detection by PTOLEMY in the presence of a strongly asymmetric neutrino background, as we will further discuss below.
The KATRIN β-decay experiment [12] also has the potential to discover a relatively large absolute neutrino mass scale soon. Since the model considered here allows for larger neutrino masses, a detection of an unexpectedly large absolute neutrino mass scale at KATRIN could provide a strong hint towards this model, at least if the standard cosmological ΛCDM model is valid in other respects. We note that the KATRIN measurement would not be affected by possible modifications of the measured electron energy spectrum due to neutrino self-interactions, since the β-decay process happens on much shorter timescales than these interactions. The weakened neutrino mass bounds gain even further importance in the hypothetical presence of sterile neutrinos motivated by experimental short-baseline anomalies [96]. Light sterile neutrinos usually stand in conflict with cosmological bounds on neutrino masses and the primordial radiation density [97], but these conflicts vanish in the model of Ref. [1], since the relic (active) neutrinos are massless in the early Universe and thus have vanishing couplings to their sterile partners. We further looked at the possibility of solving current early- and late-time tensions in the measurements of matter fluctuations with this model. The larger values allowed for the neutrino mass also weaken constraints on the matter density and clustering. These, however, broaden along the degeneracy direction already present in the standard constant-mass case and do not provide a convincing explanation of the tensions. We made several simplifications to the original neutrino mass model in Ref. [1].

• The model predicts that the relic neutrinos rapidly decay into the lightest neutrino mass eigenstate after the late cosmic transition. Therefore, any cosmological neutrino mass bound derived with this model only applies to the smallest neutrino mass and not to the sum of all masses.
Considering that, at present, we do not have further information on the ordering and relative weights of the neutrino mass eigenstates, we argue that making this simplification does not impact our conclusions. Moreover, the decay becomes less relevant for larger masses, since then the neutrino mass eigenstates have similar masses and are cosmologically indistinguishable. We also note that the relic neutrino decay into the lightest mass eigenstate results in an enhanced (suppressed) relic neutrino detection rate at PTOLEMY for a normal (inverted) neutrino mass ordering, because the lightest mass eigenstate contains a large (small) fraction of the electron neutrino flavor eigenstate.

• The model in Ref. [1] also predicts that the relic neutrinos become strongly coupled after the phase transition and substantially annihilate into almost massless Goldstone bosons, i.e. dark radiation. In the case of almost complete annihilation, this would not be tracked by neutrino masses from cosmological data. We relax this prediction of Ref. [1] for two reasons: i) First, evidence of time-varying neutrino masses from cosmology could still inform model building in general. Our study confirms the general trend that low-redshift data prefer heavier neutrinos, and shows that large masses can be generated only in the late Universe. We note here that the latter result is expected to also hold true in the case of complete neutrino annihilation, due to the larger amount of false vacuum energy required for an earlier phase transition. ii) An almost complete annihilation could in fact be evaded in the presence of large neutrino asymmetries and could be falsified by a cosmological neutrino mass detection. We showed that non-complete annihilation is still a viable possibility considering the current bounds on these asymmetries.

• Another aspect we neglected in our study is the formation and evolution of topological defects, as well as out-of-equilibrium effects like bubble nucleation and collision.
Related cosmological studies of the resulting inhomogeneities in supercooled late-time phase transitions have been presented in Ref. [98], which finds that kinetic-SZ data constrain bubble nucleation from false vacuum decay to happen very recently. We defer the studies of such inhomogeneities as well as the cosmological effects of neutrino self-interactions, (partial) annihilation, and dark radiation to future investigations.

• Finally, we note that for simplicity we fixed the false vacuum energy density V_0 to the energy density required to generate the relic neutrino masses. However, a substantial amount of the false vacuum energy could also convert into dark radiation. In general, V_0 is a free parameter of the model [1], which opens up the possibility that V_0 could be identified with the observed dark energy density (see footnote 7). In such a "decaying dark energy" scenario, our Universe recently became dark-radiation dominated, will soon enter a matter-dominated era, and will continue to expand at a decelerating rate (see e.g. Refs. [98-105] for similar considerations). The redshift of dark energy decay is constrained by Type IA supernovae data to z_s ≲ 0.1 at the 2σ level [102]. The dark radiation bosons would not yield directly observable cosmological effects, despite their huge abundance, due to strongly suppressed interactions with Standard Model particles. However, they might yield observable signatures in non-cosmological contexts (see Refs. [1, 58]).

FIG. 1. Energy densities for different components present in our analysis: neutrinos with a time-varying mass generated at z_s = 10 (z_s = 0.5) and corresponding to m_ν = 0.2 eV today with solid (dashed) curves, false vacuum energy, standard dark energy, matter (baryons and cold dark matter), and radiation.

FIG. 2. Effect on cosmological observables from the time-varying neutrino mass model considered here, shown for m_ν = 0.2 eV and for three different values of the phase transition redshift (z_s = 1000 or a_s = 0.001, z_s = 100 or a_s = 0.01, and z_s = 10 or a_s = 0.09), compared to the standard massive neutrinos case with m_ν = 0.2 eV used as a reference. Different panels report the matter power spectrum (top left), the CMB lensing convergence power spectrum (top right), and CMB temperature and polarization anisotropy power spectra (bottom left and right, respectively). The effects of this model are subtle, with percent-level features, but within the reach of future experiments.

FIG. 3. Same as Fig. 2 in the case of a late phase transition (z_s = 0.1 or a_s = 0.91) and a large neutrino mass with m_ν = 2 eV, compared to the standard massive neutrinos case with m_ν = 0.2 eV.

FIG. 4. Constraints for m_ν, a_s and Ω_Λ (with contours at 68% and 95% confidence) in the case of standard massive neutrinos (ΛCDM + m_ν, dark blue) or for relic neutrinos with mass generated in a supercooled phase transition (Supercoolν, light blue). The results are obtained using Planck TT-TEEE+lensing, BAO and SN.

Ref. [58] showed that this scenario could also solve the strong CP problem if the condensate generates the up-quark mass as well.

We note that the upper bound on T_{Λ_G} still applies if neutrinos get small masses through other mechanisms beyond gravity, making this constraint model-independent. A generic lower bound on the scale Λ_G stems from experimental tests of Newtonian gravity down to ∼ meV^{-1} distances [63], which is similar to the model-dependent lower bound on v from the observed neutrino mass splitting [1].

The final 2018 Planck release occurred during the final stages of this work. We, however, note that the 2018 likelihood software needed to analyse the data is not yet public, and we anticipate that our results will not change with the new data products.
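The scaling behind the energy densities of Fig. 1 can be illustrated with a toy calculation. This sketch is not taken from the paper: all constants below (the relic number density, the relic temperature, the 3.15 T mean-energy prefactor) are schematic order-of-magnitude assumptions, not the full Fermi-Dirac integrals a Boltzmann code evaluates. Before the transition at a_s the relic neutrinos are massless and their energy density redshifts like radiation; afterwards the rest-mass term m_ν n_ν ∝ a^{-3} takes over, and the false vacuum energy V_0 is fixed to the energy needed to generate the masses at a_s.

```python
# Toy sketch of the energy budget in the time-varying neutrino mass model.
# All prefactors are schematic order-of-magnitude values (assumptions for
# illustration), not the full Fermi-Dirac integrals of a Boltzmann code.

N_NU0 = 336.0e6   # relic neutrino number density today, all species [1/m^3]
T_NU0 = 1.68e-4   # relic neutrino temperature today [eV] (~1.95 K)

def rho_nu(a, m_nu_eV, a_s):
    """Neutrino energy density [eV/m^3] at scale factor a (a = 1 today).

    Massless (radiation-like) before the transition at a_s; afterwards the
    rest-mass term dominates once the neutrinos are non-relativistic."""
    n = N_NU0 / a**3                 # number density dilutes as a^-3
    mean_kinetic = 3.15 * T_NU0 / a  # <E> ~ 3.15 T for a thermal fermion gas
    if a < a_s:
        return mean_kinetic * n      # massless phase: redshifts like radiation
    return (m_nu_eV + mean_kinetic) * n

def rho_V0(a, m_nu_eV, a_s):
    """False vacuum energy [eV/m^3]: fixed to the energy density required to
    generate the neutrino masses at a_s, constant before the transition and
    released (set to zero here) at a_s."""
    return m_nu_eV * N_NU0 / a_s**3 if a < a_s else 0.0
```

Evaluating `rho_V0` for a smaller `a_s` gives a larger value, matching the statement in the text that an earlier phase transition requires a larger amount of false vacuum energy.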
We note that these contours will shift if using different τ values compared to the Planck 2015 one used here. However, we do not expect this to change significantly any conclusion drawn in this paper. We decided to keep the 2015 value to compare more easily with other published results.

Ref. [1] already noticed a potential connection between the neutrino vacuum condensate and dark energy, due to the surprising numerical coincidence of the dark energy and neutrino mass scales, and because the neutrino condensate is inherently connected to a new low-energy gravitational scale, Λ_G. However, the model does not solve the cosmological constant problem since it cannot explain why other Standard Model vacuum contributions, such as the Higgs condensate, do not contribute to the cosmological constant.

ACKNOWLEDGEMENTS

We thank Shahab Joudaki

[1] G. Dvali and L. Funcke, Phys. Rev. D93, 113002 (2016), arXiv:1602.03191 [hep-ph].
Y. Fukuda et al. (Super-Kamiokande Collaboration), Phys. Rev. Lett. 81, 1562 (1998).
Q. R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. 87, 071301 (2001).
Q. R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. 89, 011301 (2002).
M. Tanabashi et al. (Particle Data Group), Phys. Rev. D 98, 030001 (2018).
N. Aghanim et al. (Planck), arXiv:1807.06209 [astro-ph.CO] (2018).
K. N. Abazajian et al. (CMB-S4), arXiv:1610.02743 [astro-ph.CO] (2016).
J. Aguirre et al. (Simons Observatory), arXiv:1808.07445 [astro-ph.CO] (2018).
V. Lobashev, Nuclear Physics A 719, C153 (2003).
C. Kraus et al., The European Physical Journal C - Particles and Fields 40, 447 (2005).
V. N. Aseev, A. I. Belesev, A. I. Berlev, E. V. Geraskin, A. A. Golubev, N. A. Likhovid, V. M. Lobashev, A. A. Nozik, V. S. Pantuev, V. I. Parfenov, A. K. Skasyrskaya, F. V. Tkachov, and S. V. Zadorozhny, Phys. Rev. D 84, 112003 (2011).
G. Drexlin, V. Hannen, S. Mertens, and C. Weinheimer, Adv. High Energy Phys. 2013, 293986 (2013), arXiv:1307.0101 [physics.ins-det].
Z. Maki, M. Nakagawa, and S. Sakata, Progress of Theoretical Physics 28, 870 (1962).
B. Pontecorvo, Sov. Phys. JETP 26, 984 (1968) [Zh. Eksp. Teor. Fiz. 53, 1717 (1967)].
M. D. Schwartz, Quantum Field Theory and the Standard Model (Cambridge University Press, Cambridge, UK, 2014).
A. De Gouvêa, Annu. Rev. Nucl. Part. Sci. 66, 197 (2016).
P. Hernandez, in Proceedings, 8th CERN-Latin-American School of High-Energy Physics (CLASHEP2015): Ibarra, Ecuador, March 05-17, 2015 (2016), pp. 85-142, arXiv:1708.01046 [hep-ph].
J. Lesgourgues, G. Mangano, G. Miele, and S. Pastor, Neutrino Cosmology (Cambridge University Press, 2013).
C. S. Lorenz, E. Calabrese, and D. Alonso, Phys. Rev. D96, 043510 (2017), arXiv:1706.00730 [astro-ph.CO].
N. Aghanim et al. (Planck), arXiv:1807.06209 [astro-ph.CO] (2018).
N. MacCrann, J. Zuntz, S. Bridle, B. Jain, and M. R. Becker, Mon. Not. Roy. Astron. Soc. 451, 2877 (2015), arXiv:1408.4742 [astro-ph.CO].
S. Joudaki et al., Mon. Not. Roy. Astron. Soc. 465, 2033 (2017), arXiv:1601.05786 [astro-ph.CO].
H. Hildebrandt et al., Mon. Not. Roy. Astron. Soc. 465, 1454 (2017), arXiv:1606.05338 [astro-ph.CO].
S. Joudaki et al., Mon. Not. Roy. Astron. Soc. 471, 1259 (2017), arXiv:1610.04606 [astro-ph.CO].
S. Joudaki et al., Mon. Not. Roy. Astron. Soc. 474, 4894 (2018), arXiv:1707.06627 [astro-ph.CO].
M. A. Troxel et al. (DES), arXiv:1708.01538 [astro-ph.CO] (2017).
P. A. R. Ade et al. (Planck), Astron. Astrophys. 571, A20 (2014), arXiv:1303.5080 [astro-ph.CO].
P. A. R. Ade et al. (Planck), Astron. Astrophys. 594, A24 (2016), arXiv:1502.01597 [astro-ph.CO].
G. E. Addison, Y. Huang, D. J. Watts, C. L. Bennett, M. Halpern, G. Hinshaw, and J. L. Weiland, Astrophys. J. 818, 132 (2016), arXiv:1511.00055 [astro-ph.CO].
G. Efstathiou and P. Lemos, doi:10.1093/mnras/sty099 (2017), arXiv:1707.00483 [astro-ph.CO].
E. Sellentin, C. Heymans, and J. Harnois-Déraps, doi:10.1093/mnras/sty988 (2017), arXiv:1712.04923 [astro-ph.CO].
M. A. Troxel et al. (DES), arXiv:1804.10663 [astro-ph.CO] (2018).
G. Obied, C. Dvorkin, C. Heinrich, W. Hu, and V. Miranda, Phys. Rev. D96, 083526 (2017), arXiv:1706.09412 [astro-ph.CO].
M. Asgari et al., arXiv:1810.02353 [astro-ph.CO] (2018).
B. J. Barros, L. Amendola, T. Barreiro, and N. J. Nunes, arXiv:1802.09216 [astro-ph.CO] (2018).
I. G. McCarthy, S. Bird, J. Schaye, J. Harnois-Deraps, A. S. Font, and L. Van Waerbeke, arXiv:1712.02411 [astro-ph.CO] (2017).
V. Poulin, K. K. Boddy, S. Bird, and M. Kamionkowski, Phys. Rev. D97, 123504 (2018), arXiv:1803.02474 [astro-ph.CO].
S. Peirone, M. Martinelli, M. Raveri, and A. Silvestri, Phys. Rev. D96, 063524 (2017), arXiv:1702.06526 [astro-ph.CO].
J. Lesgourgues, G. Marques-Tavares, and M. Schmaltz, JCAP 1602, 037 (2016), arXiv:1507.04351 [astro-ph.CO].
V. Poulin, P. D. Serpico, and J. Lesgourgues, JCAP 1608, 036 (2016), arXiv:1606.02073 [astro-ph.CO].
M. A. Buen-Abad, M. Schmaltz, J. Lesgourgues, and T. Brinckmann, JCAP 1801, 008 (2018), arXiv:1708.09406 [astro-ph.CO].
E. Di Valentino, A. Melchiorri, and J. Silk, Phys. Rev. D93, 023513 (2016), arXiv:1509.07501 [astro-ph.CO].
A. Gomez-Valent and J. Sola, EPL 120, 39001 (2017), arXiv:1711.00692 [astro-ph.CO].
R. A. Battye and A. Moss, Phys. Rev. Lett. 112, 051303 (2014), arXiv:1308.5870 [astro-ph.CO].
M. Wyman, D. H. Rudd, R. A. Vanderveld, and W. Hu, Phys. Rev. Lett. 112, 051302 (2014), arXiv:1307.7715 [astro-ph.CO].
R. Fardon, A. E. Nelson, and N. Weiner, JCAP 0410, 005 (2004), arXiv:astro-ph/0309800.
N. Afshordi, M. Zaldarriaga, and K. Kohri, Phys. Rev. D72, 065024 (2005), arXiv:astro-ph/0506663.
U. Franca, M. Lattanzi, J. Lesgourgues, and S. Pastor, Phys. Rev. D80, 083506 (2009), arXiv:0908.0534 [astro-ph.CO].
G. La Vacca and D. F. Mota, A&A 560, A53 (2013), arXiv:1205.6059 [astro-ph.CO].
A. W. Brookfield, C. van de Bruck, D. F. Mota, and D. Tocchini-Valentini, Phys. Rev. Lett. 96, 061301 (2006), arXiv:astro-ph/0503349.
A. W. Brookfield, C. van de Bruck, D. F. Mota, and D. Tocchini-Valentini, Phys. Rev. D73, 083515 (2006) [Erratum: Phys. Rev. D76, 049901 (2007)], arXiv:astro-ph/0512367.
O. E. Bjaelde, A. W. Brookfield, C. van de Bruck, S. Hannestad, D. F. Mota, L. Schrempp, and D. Tocchini-Valentini, JCAP 0801, 026 (2008), arXiv:0705.2018 [astro-ph].
C.-Q. Geng, C.-C. Lee, R. Myrzakulov, M. Sami, and E. N. Saridakis, JCAP 1601, 049 (2016), arXiv:1504.08141 [astro-ph.CO].
R. D. Pisarski and F. Wilczek, Phys. Rev. D29, 338 (1984).
D. Roder, J. Ruppert, and D. H. Rischke, Phys. Rev. D68, 016003 (2003), arXiv:nucl-th/0301085.
S. M. Koksbang and S. Hannestad, JCAP 1709, 014 (2017), arXiv:1707.02579 [astro-ph.CO].
S. Betts et al., in Proceedings, 2013 Community Summer Study on the Future of U.S. Particle Physics: Snowmass on the Mississippi (CSS2013): Minneapolis, MN, USA, July 29-August 6, 2013 (2013), arXiv:1307.4738 [astro-ph.IM].
G. Dvali and L. Funcke, arXiv:1608.08969 [hep-ph] (2016).
J. F. Beacom, N. F. Bell, and S. Dodelson, Phys. Rev. Lett. 93, 121302 (2004), arXiv:astro-ph/0404585.
S. Hannestad, J. Cosmol. Astropart. Phys. 0502, 011 (2005), arXiv:astro-ph/0411475.
L. Lancaster, F.-Y. Cyr-Racine, L. Knox, and Z. Pan, JCAP 1707, 033 (2017), arXiv:1704.06657 [astro-ph.CO].
B. Follin, L. Knox, M. Millea, and Z. Pan, Phys. Rev. Lett. 115, 091301 (2015), arXiv:1503.07863 [astro-ph.CO].
D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle, and H. E. Swanson, Phys. Rev. Lett. 98, 021101 (2007), arXiv:hep-ph/0611184.
P. A. R. Ade et al. (Planck Collaboration), A&A 594, A13 (2016), arXiv:1502.01589 [astro-ph.CO].
G. Mangano, G. Miele, S. Pastor, O. Pisanti, and S. Sarikas, Phys. Lett. B 708, 1 (2012), arXiv:1110.4335 [hep-ph].
E. Castorina, U. Franca, M. Lattanzi, J. Lesgourgues, G. Mangano, A. Melchiorri, and S. Pastor, Phys. Rev. D 86, 023517 (2012), arXiv:1204.2510 [astro-ph.CO].
P. Sikivie, in Axions: Theory, cosmology, and experimental searches. Proceedings, 1st Joint ILIAS-CERN-CAST axion training, Geneva, Switzerland, November 30-December 2, 2005, Lect. Notes Phys. 741, 19 (2008), arXiv:astro-ph/0610440.
P. Langacker, G. Segre, and S. Soni, Phys. Rev. D 26, 3425 (1982).
M. A. Stephanov, K. Rajagopal, and E. V. Shuryak, Phys. Rev. Lett. 81, 4816 (1998), arXiv:hep-ph/9806219.
T. Boeckel and J. Schaffner-Bielich, Phys. Rev. Lett. 105, 041301 (2010), arXiv:0906.4520 [astro-ph.CO].
E. Witten, Nucl. Phys. B 177, 477 (1981).
A. H. Guth, Phys. Rev. D 23, 347 (1981).
D. Yueker, I. Mishustin, and M. Bleicher, J. Phys. G41, 125005 (2014).
V. F. Mukhanov, Physical Foundations of Cosmology (Cambridge University Press, 2005).
M. Lattanzi and M. Gerbino, Front. in Phys. 5, 70 (2018), arXiv:1712.07109 [astro-ph.CO].
R. Jimenez, T. Kitching, C. Pena-Garay, and L. Verde, JCAP 1005, 035 (2010), arXiv:1003.5918 [astro-ph.CO].
M. Gerbino, M. Lattanzi, O. Mena, and K. Freese, Phys. Lett. B775, 239 (2017), arXiv:1611.07847 [astro-ph.CO].
S. Vagnozzi, E. Giusarma, O. Mena, K. Freese, M. Gerbino, S. Ho, and M. Lattanzi, Phys. Rev. D96, 123503 (2017), arXiv:1701.08172 [astro-ph.CO].
S. Hannestad and T. Schwetz, JCAP 1611, 035 (2016), arXiv:1606.04691 [astro-ph.CO].
F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri, and A. Palazzo, Phys. Rev. D95, 096014 (2017), arXiv:1703.04471 [hep-ph].
S. Gariazzo, M. Archidiacono, P. F. de Salas, O. Mena, C. A. Ternes, and M. Tórtola, JCAP 1803, 011 (2018), arXiv:1801.04946 [hep-ph].
R. Laureijs, J. Amiaux, S. Arduini, J. Auguères, J. Brinchmann, R. Cole, M. Cropper, C. Dabin, L. Duvet, A. Ealet, et al., arXiv:1110.3193 [astro-ph.CO] (2011).
LSST Science Collaboration: P. A. Abell, J. Allison, S. F. Anderson, J. R. Andrew, J. R. P. Angel, L. Armus, D. Arnett, S. J. Asztalos, T. S. Axelrod, et al., arXiv:0912.0201 [astro-ph.IM] (2009).
A. Lewis, A. Challinor, and A. Lasenby, Astrophys. J. 538, 473 (2000), arXiv:astro-ph/9911177.
A. Lewis and S. Bridle, Phys. Rev. D66, 103511 (2002), arXiv:astro-ph/0205436.
N. Aghanim et al. (Planck), Astron. Astrophys. 594, A11 (2016), arXiv:1507.02704 [astro-ph.CO].
P. A. R. Ade et al. (Planck), Astron. Astrophys. 594, A15 (2016), arXiv:1502.01591 [astro-ph.CO].
S. Alam, F. D. Albareti, C. Allende Prieto, F. Anders, S. F. Anderson, T. Anderton, B. H. Andrews, E. Armengaud, É. Aubourg, S. Bailey, et al., ApJS 219, 12 (2015), arXiv:1501.00963 [astro-ph.IM].
A. J. Ross, L. Samushia, C. Howlett, W. J. Percival, A. Burden, and M. Manera, Mon. Not. Roy. Astron. Soc. 449, 835 (2015), arXiv:1409.3242 [astro-ph.CO].
F. Beutler, C. Blake, M. Colless, D. H. Jones, L. Staveley-Smith, L. Campbell, Q. Parker, W. Saunders, and F. Watson, Mon. Not. Roy. Astron. Soc. 416, 3017 (2011), arXiv:1106.3366.
M. Betoule, J. Marriner, N. Regnault, J.-C. Cuillandre, P. Astier, J. Guy, C. Balland, P. El Hage, D. Hardin, R. Kessler, L. Le Guillou, J. Mosher, R. Pain, P.-F. Rocci, M. Sako, and K. Schahmaneche, A&A 552, A124 (2013).
E. Calabrese, D. Huterer, E. V. Linder, A. Melchiorri, and L. Pagano, Phys. Rev. D 83, 123504 (2011), arXiv:1103.4132 [astro-ph.CO].
H. Hoekstra, R. Herbonnet, A. Muzzin, A. Babul, A. Mahdavi, M. Viola, and M. Cacciato, Mon. Not. Roy. Astron. Soc. 449, 685 (2015), arXiv:1502.01883 [astro-ph.CO].
L. Salvati, M. Douspis, and N. Aghanim, doi:10.1051/0004-6361/201731990 (2017), arXiv:1708.00697 [astro-ph.CO].
A. J. Long, C. Lunardini, and E. Sabancilar, JCAP 1408, 038 (2014), arXiv:1405.7654 [hep-ph].
S. Gariazzo, C. Giunti, M. Laveder, Y. F. Li, and E. M. Zavanin, J. Phys. G43, 033001 (2016), arXiv:1507.08204 [hep-ph].
M. Drewes et al., J. Cosmol. Astropart. Phys. 1701, 025 (2017), arXiv:1602.04816 [hep-ph].
U.-L. Pen and P. Zhang, Phys. Rev. D89, 063009 (2014), arXiv:1202.0107 [astro-ph.CO].
H. Goldberg, Phys. Lett. B492, 153 (2000), arXiv:hep-ph/0003197.
Z. Chacko, L. J. Hall, and Y. Nomura, JCAP 0410, 011 (2004), arXiv:astro-ph/0405596.
A. de la Macorra, Astropart. Phys. 28, 196 (2007), arXiv:astro-ph/0702239.
S. Dutta, S. D. H. Hsu, D. Reeb, and R. J. Scherrer, Phys. Rev. D79, 103504 (2009), arXiv:0902.4699 [astro-ph.CO].
E. Abdalla, L. L. Graef, and B. Wang, Phys. Lett. B726, 786 (2013), arXiv:1202.0499 [gr-qc].
L. M. Krauss and A. J. Long, Phys. Rev. D89, 085023 (2014), arXiv:1310.5361 [hep-ph].
R. G. Landim and E. Abdalla, Phys. Lett. B764, 271 (2017), arXiv:1611.00428 [hep-ph].
[]
[ "THE WELL-ORDERING OF DUAL BRAID MONOIDS", "THE WELL-ORDERING OF DUAL BRAID MONOIDS" ]
[ "Jean Fromentin " ]
[]
[]
We describe the restriction of the Dehornoy ordering of braids to the dual braid monoids introduced by Birman, Ko and Lee: we give an inductive characterization of the ordering of the dual braid monoids and compute the corresponding ordinal type. The proof consists in introducing a new ordering on the dual braid monoid using the rotating normal form of arXiv:math.GR/0811.3902, and then proving that this new ordering coincides with the standard ordering of braids.
10.1016/j.crma.2008.05.001
[ "https://arxiv.org/pdf/0712.3836v2.pdf" ]
115,166,104
0712.3836
31c5641872bebb3d9ee88c325bc3dcf74f1edcde
THE WELL-ORDERING OF DUAL BRAID MONOIDS

Jean Fromentin

10 Dec 2008, arXiv:0712.3836v2 [math.GR]

Abstract. We describe the restriction of the Dehornoy ordering of braids to the dual braid monoids introduced by Birman, Ko and Lee: we give an inductive characterization of the ordering of the dual braid monoids and compute the corresponding ordinal type. The proof consists in introducing a new ordering on the dual braid monoid using the rotating normal form of arXiv:math.GR/0811.3902, and then proving that this new ordering coincides with the standard ordering of braids.

It is known since [7] and [15] that the braid group B_n is left-orderable, by an ordering whose restriction to the positive braid monoid is a well-order. Initially introduced by complicated methods of self-distributive algebra, the standard braid ordering has since received a lot of alternative constructions originating from different approaches; see [10]. However, this ordering remains a complicated object, and many questions involving it remain open.

Dual braid monoids were introduced by Birman, Ko, and Lee in [2]. The dual braid monoid B^{+*}_n is a certain submonoid of the n-strand braid group B_n. It is known that the monoid B^{+*}_n admits a Garside structure, where simple elements correspond to non-crossing partitions of n; see [1]. In particular, there exists a standard normal form associated with this Garside structure, namely the so-called greedy normal form.

The rotating normal form is another normal form on B^{+*}_n that was introduced in [12]. It relies on the existence of a natural embedding of B^{+*}_{n-1} in B^{+*}_n and on the easy observation that each element of B^{+*}_n admits a maximal right divisor that belongs to B^{+*}_{n-1}. The main ingredient in the construction of the rotating normal form is the result that each braid β in B^{+*}_n admits a unique decomposition β = φ_n^{b-1}(β_b) · ... · φ_n^2(β_3) · φ_n(β_2) · β_1 with β_b, ...
, β_1 in B^{+*}_{n-1} such that β_b ≠ 1 and such that, for each k ≥ 1, the braid β_k is the maximal right-divisor of φ_n^{b-k}(β_b) · ... · β_k that lies in B^{+*}_{n-1}. The sequence (β_b, ... , β_1) is then called the φ_n-splitting of β.

The main goal of this paper is to establish the following simple connection between the order on B^{+*}_n and the order on B^{+*}_{n-1} through the notion of φ_n-splitting.

Theorem 1. For all braids β, γ in B^{+*}_n with n ≥ 2, the relation β < γ is true if and only if the φ_n-splitting (β_b, ... , β_1) of β is smaller than the φ_n-splitting (γ_c, ... , γ_1) of γ with respect to the ShortLex-extension of the ordering of B^{+*}_{n-1}, i.e., we have either b < c, or b = c and there exists t such that β_t < γ_t holds and β_k = γ_k holds for b ≥ k > t.

A direct application of Theorem 1 is:

Corollary. For n ≥ 2, the restriction of the braid ordering to B^{+*}_n is a well-ordering of ordinal type ω^{ω^{n-2}}.

This refines a former result by Laver stating that the restriction of the braid ordering to B^{+*}_n is a well-ordering without determining its exact type. Another application of Theorem 1 or, more exactly, of its proof, is a new proof of the existence of the braid ordering. What we precisely obtain is a new proof of the result that every nontrivial braid can be represented by a so-called σ-positive or σ-negative word ("Property C").

The connection between the restrictions of the braid order to B^{+*}_n and B^{+*}_{n-1} via the φ_n-splitting is formally similar to the connection between the restrictions of the braid order to the Garside monoids B^+_n and B^+_{n-1} via the so-called Φ_n-splitting established in [9] as an application of Burckel's approach of [3, 4, 5]. However, there is an important difference, namely that, contrary to Burckel's approach, our construction requires no transfinite induction: although intricate in the general case of 5 strands and above, our proof remains elementary.
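The ShortLex comparison of splittings used in Theorem 1 can be sketched in a few lines. In this sketch (an illustration, not from the paper), splittings are plain sequences written most-significant entry first, i.e. (β_b, ..., β_1), and `less` stands in for the (recursively given) strict order on B^{+*}_{n-1}; the braid entries themselves are not modeled.

```python
def shortlex_less(s, t, less):
    """ShortLex comparison of sequences s and t under a strict order `less`
    on their entries: a shorter sequence is smaller than a longer one, and
    equal-length sequences are compared entrywise starting from the most
    significant (leftmost) entry."""
    if len(s) != len(t):           # the b < c case of Theorem 1
        return len(s) < len(t)
    for a, b in zip(s, t):         # b = c: the leftmost difference decides
        if a != b:
            return less(a, b)
    return False                   # equal sequences are not strictly smaller
```

With integers standing in for braid entries, `shortlex_less([1], [1, 2], lambda a, b: a < b)` holds because the first sequence is shorter, mirroring the b < c clause.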
This elementary character is an essential advantage of using the Birman-Ko-Lee generators rather than the Artin generators. The paper is organized as follows. In Section 1, we briefly recall the definition of the Dehornoy ordering of braids and the definition of the dual braid monoid. In Section 2, we use the φ n -splitting to construct a new linear ordering of B + * n , called the rotating ordering. In Section 3, we deduce from the results about φ n -splittings established in [12] the result that certain specific braids are σ-positive or trivial. Finally, Theorem 1 is proved in Section 4.

1. The general framework

Artin's braid group B n is defined for n ≥ 2 by the presentation σ 1 , ... , σ n−1 ; σ i σ j = σ j σ i for |i − j| ≥ 2, σ i σ j σ i = σ j σ i σ j for |i − j| = 1 . (1.1) The submonoid of B n generated by {σ 1 , ... , σ n−1 } is denoted by B + n .

1.1. The standard braid ordering. We recall the construction of the Dehornoy ordering of braids. By a braid word we mean any word on the letters σ ±1 i .

Definition 1.1. -A braid word w is called σ i -positive (resp. σ i -negative) if w contains at least one σ i (resp. at least one σ −1 i ), no σ −1 i (resp. no σ i ), and no letter σ ±1 j with j > i. -A braid β is said to be σ i -positive (resp. σ i -negative) if, among the braid words representing β, at least one is σ i -positive (resp. σ i -negative). -A braid β is said to be σ-positive (resp. σ-negative) if it is σ i -positive (resp. σ i -negative) for some i. -For β, γ braids, we declare that β < γ is true if the braid β −1 γ is σ-positive.

By definition of the relation <, every σ-positive braid β satisfies 1 < β. Then, every braid of B + n except 1 is larger than 1.

Example 1.2. Put β = σ 2 and γ = σ 1 σ 2 . Let us show that β is <-smaller than γ. The quotient β −1 γ is represented by the word σ −1 2 σ 1 σ 2 .
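The conditions of Definition 1.1 are purely syntactic and easy to test on a single word. In the sketch below (my own illustration, not part of the paper), a braid word is a list of nonzero integers, with k standing for σ k and −k for σ −1 k ; note that the test applies to words, whereas σ i -positivity of a braid only asks for at least one representative word passing the test:

```python
def is_sigma_i_positive(word, i):
    """Check whether a braid word is sigma_i-positive (Definition 1.1):
    at least one sigma_i, no sigma_i^{-1}, and no letter sigma_j^{+-1} with j > i."""
    if any(abs(x) > i for x in word):   # a letter with index above i is forbidden
        return False
    if -i in word:                       # sigma_i^{-1} is forbidden
        return False
    return i in word                     # at least one sigma_i is required


# The word sigma_2^{-1} sigma_1 sigma_2 of Example 1.2 fails the test for every i,
# but the equivalent word sigma_1 sigma_2 sigma_1^{-1} is sigma_2-positive:
print(is_sigma_i_positive([-2, 1, 2], 2))   # False
print(is_sigma_i_positive([1, 2, -1], 2))   # True
```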
Unfortunately, the word σ −1 2 σ 1 σ 2 is neither σ 2 -positive (since it contains σ −1 2 ), nor σ 2 -negative (since it contains σ 2 ), nor σ 1 -positive nor σ 1 -negative (since it contains a letter σ ±1 2 ). However, the word σ −1 2 σ 1 σ 2 is equivalent to σ 1 σ 2 σ −1 1 , which is σ 2 -positive. Then, the braid β −1 γ is σ 2 -positive and the relation β < γ holds.

Theorem 1.3. [7] For each n ≥ 2, the relation < is a linear ordering on B n that is invariant under left multiplication.

Remark 1.4. In this paper, we use the flipped version of the braid ordering [10], in which one takes into account the generator σ i with greatest index, and not the original version, in which one considers the generator with lowest index. This choice is necessary here, as we need that B + * n−1 is an initial segment of B + * n . That would not be true if we were considering the lower version of the ordering. We recall that the flip automorphism Φ n that maps σ i to σ n−i for each i exchanges the two versions, and, therefore, the properties of both orderings are identical.

Following [7], we recall that Theorem 1.3 relies on two results:

Property A. Every σ i -positive braid is nontrivial.

Property C. Every braid is either trivial or σ-positive or σ-negative.

In the sequel, as we shall prove that Property C is a consequence of Theorem 1, we never use Theorem 1.3, i.e., we never use the fact that the relation < of Definition 1.1 is a linear ordering. The only properties of < we shall use are the following trivial facts-plus, but exclusively for the corollaries, Property A.

Lemma 1.5. The relation < is transitive, and invariant under left-multiplication.

The ordering < on B n admits lots of properties but it is not a well-ordering. For instance, σ −1 n−1 , σ −2 n−1 , ... , σ −k n−1 , ... is an infinite descending sequence. However, R. Laver proved the following result:

Theorem 1.6.
[15] Assume that M is a submonoid of B ∞ generated by a finite number of braids, each of which is a conjugate of some braid σ i . Then the restriction of < to M is a well-ordering.

The positive braid monoid B + n satisfies the hypotheses of Theorem 1.6. Therefore, the restriction of the braid ordering to B + n is a well-ordering. However, Laver's proof of Theorem 1.6 leaves the determination of the isomorphism type of (M, <) open. In the case of the monoid B + n , the question was solved by S. Burckel:

Proposition 1.7. [4] For each n ≥ 2, the order type of (B + n , <) is the ordinal ω^(ω^(n-2)).

1.2. Dual braid monoids. In this paper, we consider another monoid of which B n is a group of fractions, namely the Birman-Ko-Lee monoid, also called the dual braid monoid.

Definition 1.8. For 1 ≤ p < q, put a p,q = σ p ...σ q−2 σ q−1 σ −1 q−2 ...σ −1 p . (1.2) Then the dual braid monoid B + * n is the submonoid of B n generated by the braids a p,q with 1 ≤ p < q ≤ n; the braids a p,q are called the Birman-Ko-Lee generators.

Remark 1.9. In [2], the braid a p,q is defined to be σ q−1 ...σ p+1 σ p σ −1 p+1 ...σ −1 q−1 . Both options lead to isomorphic monoids, but our choice is the only one that naturally leads to the suitable embedding of B + * n−1 into B + * n .

By definition, we have σ p = a p,p+1 , so the dual braid monoid B + * n includes the positive braid monoid B + n . The inclusion is proper for n ≥ 3: the braid a 1,3 , which belongs to B + * n by definition, does not belong to B + n . The following notational convention will be useful in the sequel. For p ≤ q, we write [p, q] for the interval {p, ... , q} of N. We say that [p, q] is nested in [r, s] if we have r < p < q < s. The following results were proved by Birman, Ko, and Lee.

Proposition 1.10. [2] (i) In terms of the generators a p,q , the monoid B + * n is presented by the relations a p,q a r,s = a r,s a p,q for [p, q] and [r, s] disjoint or nested, (1.3) a p,q a q,r = a q,r a p,r = a p,r a p,q for 1 ≤ p < q < r ≤ n.
(1.4) (ii) The monoid B + * n is a Garside monoid with Garside element δ n defined by δ n = a 1,2 a 2,3 ... a n−1,n (= σ 1 σ 2 ... σ n−1 ). (1.5)

Garside monoids are defined for instance in [11] or [8]. In every Garside monoid, conjugating by the Garside element defines an automorphism. In the case of B + * n , the automorphism φ n defined by φ n (β) = δ n β δ −1 n has order n, and its action on the generators a p,q is as follows.

Lemma 1.11. For all p, q with 1 ≤ p < q ≤ n, we have φ n (a p,q ) = a p+1,q+1 for q ≤ n−1, a 1,p+1 for q = n. (1.6)

The relations (1.6) show that the action of φ n is similar to a rotation. Note that the relation φ n (a p,q ) = a p+1,q+1 always holds provided indices are taken mod n and possibly switched, so that, for instance, a p+1,n+1 means a 1,p+1 . For every braid β in B + * n and every k, the definition of φ n implies the relations δ n φ k n (β) = φ k+1 n (β) δ n , δ −1 n φ k n (β) = φ k−1 n (β) δ −1 n . (1.7)

By definition, the braid a p,q is the conjugate of σ q−1 by the braid σ p ... σ q−2 . Braids of the latter type play an important role in the sequel, and we give them a name.

Definition 1.12. For p ≤ q, we put δ p,q = a p,p+1 a p+1,p+2 ... a q−1,q ( = σ p σ p+1 ... σ q−1 ). (1.8)

Note that the Garside element δ n of B + * n is equal to δ 1,n and that the braid δ p,p is the trivial one, i.e., is the braid 1. With this notation, we easily obtain a p,q = δ p,q δ −1 p,q−1 = δ p,q−1 σ q−1 δ −1 p,q−1 for p < q, (1.9) a p,q = δ −1 p+1,q δ p,q for p < q, (1.10) δ p,r = δ p,q δ q,r for p ≤ q ≤ r, (1.11) δ p,q δ r,s = δ r,s δ p,q for p ≤ q < r ≤ s. (1.12)

Relations (1.9), (1.11), and (1.12) are direct consequences of the definition of δ p,q . Relation (1.10) is obtained by using an induction on p and the equality δ −1 p+1,q δ p,q = σ p δ −1 p+2,q δ p+1,q σ −1 p , which holds for p < q.

1.3. The restriction of the braid ordering to the dual braid monoid.
The aim of this paper is to describe the restriction of the braid ordering of Definition 1.1 to the dual braid monoid B + * n . The initial observation is:

Proposition 1.13. [15] For each n ≥ 2, the restriction of the braid ordering to the monoid B + * n is a well-ordering.

Proof. By definition, the braid a p,q is a conjugate of the braid σ q−1 . So B + * n is generated by finitely many braids, each of which is a conjugate of some σ i . By Laver's Theorem (Theorem 1.6), the restriction of < to B + * n is a well-ordering.

As in the case of B + n , Laver's Theorem, which is a non-effective result based on the so-called Higman's Lemma [13], leaves the determination of the isomorphism type of < ↾ B + * n open. This is the question we shall address in the sequel. Before introducing our specific methods, let us begin with some easy observations.

Lemma 1.14. Every braid β in B + * n except 1 satisfies β > 1.

Proof. By definition, the braid a p,q is σ-positive-actually it is σ q−1 -positive in the sense of Definition 1.1.

Lemma 1.15. For each n ≥ 2, we have 1 < a 1,2 < a 2,3 < a 1,3 < ... < a 1,n−1 < a n−1,n < a n−2,n < ... < a 1,n . (1.13)

Proof. We claim that a p,q < a r,s holds if and only if we have either q < s, or q = s and p > r. Assume first q < s. Then, the braid a −1 p,q is σ q−1 -negative while a r,s is σ s−1 -positive with q < s, hence the quotient a −1 p,q a r,s is σ s−1 -positive, which implies a p,q < a r,s . Assume now q = s and p > r. Then, by relation (1.9), the quotient a −1 p,q a r,s is equal to δ p,s−1 δ −1 p,s δ r,s δ −1 r,s−1 . Applying Relation (1.11) on δ r,s , we obtain a −1 p,q a r,s = δ p,s−1 δ −1 p,s δ r,p−1 δ p−1,s δ −1 r,s−1 . Then, by applying Relation (1.12) on δ −1 p,s δ r,p−1 , we obtain a −1 p,q a r,s = δ p,s−1 δ r,p−1 δ −1 p,s δ p−1,s δ −1 r,s−1 . Finally, Relation (1.10) on δ −1 p,s δ p−1,s implies a −1 p,q a r,s = δ p,s−1 δ r,p−1 a p−1,s δ −1 r,s−1 .
The braid a p−1,s is σ s−1 -positive, while the braids δ p,s−1 , δ r,p−1 and δ −1 r,s−1 are σ t -positive or σ t -negative for t < s−1. Hence the quotient a −1 p,q a r,s is σ s−1 -positive, which implies a p,q < a r,s . As < is a linear ordering, this is enough to conclude, i.e., the implications we proved above are equivalences.

1.4. The rotating normal form. In this section we briefly recall the construction of the rotating normal form of [12].

Proposition 1.17. [12] Assume n ≥ 3 and that β is a nontrivial braid in B + * n . Then there exists a unique sequence (β b , ... , β 1 ) in B + * n−1 satisfying β b ≠ 1 and -β = φ b−1 n (β b ) · ... · φ n (β 2 ) · β 1 , (1.17.i) -for each k ≥ 1, the B + * n−1 -tail of φ b−k n (β b ) · ... · φ n (β k+1 ) is trivial. (1.17.ii)

Definition 1.18. The unique sequence (β b , ... , β 1 ) of braids introduced in Proposition 1.17 is called the φ n -splitting of β. Its length, i.e., the parameter b, is called the n-breadth of β.

The idea of the φ n -splitting is very simple: starting with a braid β of B + * n , we extract the maximal right-divisor that lies in B + * n−1 , i.e., that leaves the nth strand unbraided, then we extract the maximal right-divisor of the remainder that leaves the first strand unbraided, and so on rotating by 2π/n at each step-see Figure 1.

Figure 1. The φ 6 -splitting of a braid of B + * 6 . Starting from the right, we extract the maximal right-divisor that keeps the sixth strand unbraided, then rotate by 2π/6 and extract the maximal right-divisor that keeps the first strand unbraided, etc.

As the notion of a φ n -splitting is fundamental in this paper, we give examples.

Example 1.19. Let us determine the φ n -splitting of the generators of B + * n , i.e., of a p,q with 1 ≤ p < q ≤ n. For q ≤ n−1, the generator a p,q belongs to B + * n−1 , then its φ n -splitting is (a p,q ). Next, as a p,n does not lie in B + * n−1 , the rightmost entry in its φ n -splitting is trivial.
As we have φ −1 n (a p,n ) = a p−1,n−1 for p ≥ 2, the φ n -splitting of a p,n with p ≥ 2 is (a p−1,n−1 , 1). Finally, the braids a 1,n and φ −1 n (a 1,n ) = a n−1,n do not lie in B + * n−1 , but φ −2 n (a 1,n ) = a n−2,n−1 does. So the φ n -splitting of a 1,n is (a n−2,n−1 , 1, 1). To summarize, the φ n -splitting of a p,q is (a p,q ) for p < q ≤ n−1, (a p−1,n−1 , 1) for 2 ≤ p and q = n, (a n−2,n−1 , 1, 1) for p = 1 and q = n. (1.15)

By Relation (1.6), the map φ n sends each braid a p,q to another similar braid a r,s . Using this remark, we can consider the alphabetical homomorphism, still denoted φ n , that maps the letter a p,q to the corresponding letter a r,s , and extends to words on the letters a p,q . Note that, in this way, if the word w on the letters a p,q represents the braid β, then φ n (w) represents φ n (β). We can now recursively define a distinguished expression for each braid of B + * n in terms of the generators a p,q .

Definition 1.20. -For β in B + * 2 , the φ 2 -normal form of β is defined to be the unique word a k 1,2 that represents β. -For n ≥ 3 and β in B + * n , the φ n -normal form of β is defined to be the word φ b−1 n (w b ) ... w 1 where, for each k, the word w k is the φ n−1 -normal form of β k and where (β b , ... , β 1 ) is the φ n -splitting of β.

As the φ n -splitting of a braid β lying in B + * n−1 is the length 1 sequence (β), the φ n -normal form and φ n−1 -normal form of β coincide. Therefore, we can drop the subscript in the φ n -normal form. From now on, we call rotating normal form, or simply normal form, the expression so obtained. As each braid is represented by a unique normal word, we can unambiguously use the syntactical properties of its normal form. We conclude this introductory section with some syntactic constraints involving φ n -splittings and normal words. These results are borrowed from [12].

Definition 1.21.
For β in B + * n , the last letter of β, denoted β # , is defined to be the last letter in the normal form of β.

Lemma 1.22. [12] Assume that (β b , ... , β 1 ) is a φ n -splitting. (i) For k ≥ 2, the letter β # k has the form a ..,n−1 , unless β k is trivial; (ii) For k ≥ 3, the braid β k is different from 1; (iii) For k ≥ 2, if the normal form of β k is w ′ a n−2,n−1 with w ′ ≠ ε (the empty word), then the last letter of w ′ has the form a ..,n−1 .

2. The rotating ordering

As explained above, we aim at proving results about the restriction of the braid ordering < to the dual braid monoid B + * n . We shall do it indirectly, by first introducing an auxiliary ordering < * , and eventually proving that the latter coincides with the original braid ordering.

2.1. Another ordering on B + * n . Using the φ n -splitting of Definition 1.18, every braid of B + * n comes associated with a distinguished finite sequence of braids belonging to B + * n−1 . In this way, every ordering on B + * n−1 can be extended to an ordering on B + * n using a lexicographic extension. Iterating the process, we can start from the standard ordering on B + * 2 , i.e., on natural numbers, and recursively define a linear ordering on B + * n . We recall that, if (A, ≺) is an ordered set, a finite sequence s in A is called ShortLex-smaller than another finite sequence s ′ if the length of s is smaller than that of s ′ , or if both lengths are equal and s is lexicographically ≺-smaller than s ′ , i.e., when both sequences are read starting from the left, the first entry in s that does not coincide with its counterpart in s ′ is ≺-smaller.

Definition 2.1. For n ≥ 2, we recursively define a relation < * n on B + * n as follows: -For β, γ in B + * 2 , we declare that β < * 2 γ is true for β = a b 1,2 and γ = a c 1,2 with b < c; -For β, γ in B + * n with n ≥ 3, we declare that β < * n γ is true if the φ n -splitting of β is smaller than the φ n -splitting of γ for the ShortLex-extension of < * n−1 .

Example 2.2.
As was seen in Example 1.19, the n-breadth of a p,q with q ≤ n−1 is 1, while the n-breadth of a p,n is 2 for p ≠ 1 and 3 for p = 1. An easy induction on n gives a p,q < * n a r,s whenever q < s ≤ n holds. Then, one establishes 1 < * n a 1,2 < * n a 2,3 < * n a 1,3 < * n a 3,4 < * n a 2,4 < * n a 1,4 < * n ... < * n a n−1,n < * n ... < * n a 1,n . We observe that, according to Lemma 1.15 and Example 2.2, the relations < and < * n agree on the generators of B + * n .

Proposition 2.3. For n ≥ 2, the relation < * n is a well-ordering on B + * n . For each braid β, the immediate < * n -successor of β is β a 1,2 , i.e., β σ 1 .

Proof. The ordered monoid (B + * 2 , < * 2 ) is isomorphic to N with the usual ordering, which is a well-ordering. As the ShortLex-extension of a well-ordering is itself a well-ordering-see [16]-we inductively deduce that < * n is a well-ordering. The result about successors immediately follows from the fact that, if the φ n -splitting of β is (β p , ... , β 1 ), then the φ n -splitting of βa 1,2 is (β p , ... , β 1 a 1,2 ).

The connection between the ordering < * n−1 and the restriction of < * n to B + * n−1 is simple: B + * n−1 is an initial segment of B + * n .

Proposition 2.4. For n ≥ 3, the monoid B + * n−1 is the initial segment of (B + * n , < * n ) determined by a n−1,n , i.e., we have B + * n−1 = {β ∈ B + * n | β < * n a n−1,n }. Moreover the braid a n−1,n is the smallest braid of n-breadth 2.

Proof. First, by construction, every braid β of B + * n−1 has n-breadth 1, whereas, by (1.15), the n-breadth of a n−1,n is 2. So, by definition, β < * n a n−1,n holds. Conversely, assume that β is a braid of B + * n that satisfies β < * n a n−1,n . As the n-breadth of a n−1,n is 2, the hypothesis β < * n a n−1,n implies that the n-breadth of β is at most 2. We shall prove, using induction on n, that β has n-breadth at most 1, which, by construction, implies that β belongs to B + * n−1 . Assume first n = 3.
By definition, every φ 3 -splitting of length 2 has the form (a b 1,2 , a c 1,2 ) with b ≠ 0. The ShortLex-least such sequence is (a 1,2 , 1), which turns out to be the φ 3 -splitting of a 2,3 . Hence a 2,3 is the < * 3 -smallest element of B + * 3 with 3-breadth equal to 2, and β < * 3 a 2,3 implies β ∈ B + * 2 . Assume now n > 3. Assume for a contradiction that the n-breadth of β is 2. Let (β 2 , β 1 ) be the φ n -splitting of β. As the φ n -splitting of a n−1,n is (a n−2,n−1 , 1), and β 1 < * n−1 1 is impossible, the hypothesis β < * n a n−1,n implies β 2 < * n−1 a n−2,n−1 . By induction hypothesis, this implies that β 2 lies in B + * n−2 , hence φ n (β 2 ) lies in B + * n−1 . This contradicts Condition (1.17.ii): a sequence (β 2 , β 1 ) with β 2 < * n−1 a n−2,n−1 cannot be the φ n -splitting of a braid of B + * n . So the hypothesis that β has n-breadth 2 is contradictory, and β necessarily lies in B + * n−1 .

Building on the compatibility result of Proposition 2.4, we hereafter drop the subscript in < * n and simply write < * . Note that < * is actually a linear order (and even a well-ordering) on B + * ∞ , the inductive limit of the monoids B + * n with respect to the canonical embedding of B + * n−1 into B + * n .

2.2. Separators. By definition of < * , for b < c, every braid in B + * n that has n-breadth b is < * -smaller than every braid that has n-breadth c. As the ordering < * is a well-ordering, there must exist, for each b, a < * -smallest braid with n-breadth b. These braids, which play the role of separators for < * , are easily identified. They will play an important role in the sequel. Proposition 2.4 says that the least upper bound of the braids with n-breadth 1 is a n−1,n . From n-breadth 2, a periodic pattern appears.

Definition 2.5. For n ≥ 3 and b ≥ 1, we put δ n,b = φ b+1 n (a n−2,n−1 ) · ... · φ 2 n (a n−2,n−1 ).
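Definition 2.5 only involves the rotation φ n acting on letters, so the word δ n,b can be computed mechanically. The following Python sketch (my own, not from the paper) encodes a generator a p,q as the pair (p, q) and implements φ n per Lemma 1.11, i.e., both indices are incremented mod n and the pair is re-sorted:

```python
def phi(n, gen):
    """Rotation phi_n on a generator a_{p,q}, following (1.6):
    phi_n(a_{p,q}) = a_{p+1,q+1} for q <= n-1, and a_{1,p+1} for q = n."""
    p, q = gen
    p2, q2 = p % n + 1, q % n + 1     # shift both indices cyclically in 1..n
    return (min(p2, q2), max(p2, q2))  # re-sort, e.g. (6,1) becomes (1,6)

def phi_pow(n, gen, k):
    """k-fold application of phi_n."""
    for _ in range(k):
        gen = phi(n, gen)
    return gen

def delta_word(n, b):
    """delta_{n,b} = phi^{b+1}(a_{n-2,n-1}) ... phi^2(a_{n-2,n-1})
    (Definition 2.5), returned as the list of its letters."""
    return [phi_pow(n, (n - 2, n - 1), k) for k in range(b + 1, 1, -1)]

print(delta_word(6, 4))  # [(3, 4), (2, 3), (1, 2), (1, 6)], i.e. a_{3,4} a_{2,3} a_{1,2} a_{1,6}
print(delta_word(5, 3))  # [(2, 3), (1, 2), (1, 5)], i.e. a_{2,3} a_{1,2} a_{1,5}
```

The two printed words match the instances δ 6,4 and δ 5,3 computed in the text.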
For instance, we find δ 6,4 = φ 5 6 (a 4,5 ) · φ 4 6 (a 4,5 ) · φ 3 6 (a 4,5 ) · φ 2 6 (a 4,5 ), whence δ 6,4 = a 3,4 a 2,3 a 1,2 a 1,6 , and, similarly, δ 5,3 = a 2,3 a 1,2 a 1,5 .

Proposition 2.6. For all n ≥ 3 and b ≥ 1, (i) the φ n -splitting of δ n,b is the length b+2 sequence (a n−2,n−1 , ..., a n−2,n−1 , 1, 1); (ii) we have δ n,b = δ b n δ −b n−1 ; (iii) the braid δ n,b is the < * -smallest braid in B + * n that has n-breadth b+2-hence it is the least upper bound of all braids of n-breadth b+1.

Proof. (i) First, we observe that there exists no relation a n−1,n a n−2,n−1 = ... in the presentation of the monoid B + * n . Then the word a n−1,n a n−2,n−1 , which is equal to φ n (a n−2,n−1 ) a n−2,n−1 , is alone in its equivalence class under the relations of B + * n . Similarly, the word φ b−1 n (a n−2,n−1 ) · ... · a n−2,n−1 , called w, is alone in its equivalence class, as no relation of the presentation may be applied to any length 2 subword of this word. As φ n is an isomorphism, the same result holds for the word φ 2 n (w), which represents the braid δ n,b . As the braid δ n,b is represented by a normal word, we deduce that φ 2 n (w) is the normal word representing δ n,b , i.e., it is its normal form.

(ii) We use an induction on b. Relation (1.9) implies a 1,n = δ n δ −1 n−1 . Using (1.6), we deduce δ n,1 = φ 2 n (a n−2,n−1 ) = a 1,n = δ n δ −1 n−1 . Assume now b ≥ 2. By definition, we have δ n,b = φ b+1 n (a n−2,n−1 ) δ n,b−1 . Then, using the induction hypothesis, we have δ n,b = φ b+1 n (a n−2,n−1 ) δ b−1 n δ −b+1 n−1 . Pushing δ b−1 n to the left using (1.7), we find δ n,b = δ b−1 n φ 2 n (a n−2,n−1 ) δ −b+1 n−1 . Relation (1.6) implies φ 2 n (a n−2,n−1 ) = a 1,n . So, using (1.9), we finally obtain δ n,b = δ b−1 n δ n δ −1 n−1 δ −b+1 n−1 = δ b n δ −b n−1 .

(iii) Let (β b+2 , ..., β 1 ) be the φ n -splitting of a braid β that lies in B + * n and satisfies β ≤ * δ n,b . By definition of < * , we have β b+2 ≤ * a n−2,n−1 . By Lemma 1.22(i) and (ii), the B + * n−2 -tail of β b+2 is trivial, hence its n−1-breadth is at least 2. Then Proposition 2.4 implies β b+2 = a n−2,n−1 . By definition of < * again, we have β b+1 ≤ * a n−2,n−1 . Using the previous argument repeatedly, we obtain β k = a n−2,n−1 for k ≥ 3. Using the argument once more gives β 2 ≤ * 1, which implies β 2 = 1. Finally, by definition of < * , we have β 1 = 1. We conclude by (i) and the uniqueness of the φ n -splitting.

Owing to Proposition 2.6, it is coherent to extend Definition 2.5 by δ n,0 = a n−1,n . In this way, the result of Proposition 2.6(iii) extends to the case b = 0. We now observe that the orderings < and < * agree on the separators δ n,b .

Lemma 2.7. Assume n ≥ 3. Then 0 ≤ b < c implies δ n,b < δ n,c .

Proof. Assume 0 < b < c. By Proposition 2.6(ii), we have δ −1 n,b · δ n,c = δ b n−1 δ −b n · δ c n δ −c n−1 = δ b n−1 δ c−b n δ −c n−1 . The hypothesis c − b > 0 implies that δ −1 n,b δ n,c is a σ n−1 -positive braid, since the braid δ k is σ k−1 -positive. Hence we have δ n,b < δ n,c . It remains to establish the result for b = 0. The previous case implies δ n,1 ≤ δ n,c . As the relation < is transitive (Lemma 1.5), it is enough to prove δ n,0 < δ n,1 . Using Proposition 2.6(ii) and inserting δ n δ −1 n on the left, we obtain δ −1 n,0 δ n,1 = a −1 n−1,n δ n δ −1 n−1 = δ n δ −1 n a −1 n−1,n δ n δ −1 n−1 . Relation (1.6) implies δ −1 n a −1 n−1,n δ n = φ −1 n (a −1 n−1,n ) = a −1 n−2,n−1 . We deduce δ −1 n,0 δ n,1 = δ n a −1 n−2,n−1 δ −1 n−1 , and the latter decomposition is explicitly σ n−1 -positive.

2.3. The main result. At this point, we have two a priori unrelated linear orderings of the monoid B + * n , namely the standard braid ordering <, and the rotating ordering < * of Definition 2.1. The main technical result of this paper is:

Theorem 2.8. For all braids β, β ′ in B + * n , the relation β < * β ′ implies β < β ′ .

Before starting the proof of this result, we list a few consequences. First we obtain a new proof of Property C.
Corollary 2.9 (Property C). Every non-trivial braid is σ-positive or σ-negative.

Proof. Assume that β is a non-trivial braid of B n . First, as B n is a group of fractions for B + * n , there exist β ′ , β ′′ in B + * n satisfying β = β ′−1 β ′′ . As β is assumed to be nontrivial, we have β ′ ≠ β ′′ . As < * is a strict linear ordering, one of β ′ < * β ′′ or β ′′ < * β ′ holds. In the first case, Theorem 2.8 implies that β ′−1 β ′′ , i.e., β, is σ-positive. In the second case, Theorem 2.8 implies that β ′′−1 β ′ is σ-positive, hence β is σ-negative.

Corollary 2.10. The relation < * coincides with the restriction of < to B + * n .

Proof. Let β, γ belong to B + * n . By Theorem 2.8, β < * γ implies β < γ. Conversely, assume that β < * γ fails. As < * is a linear ordering, we have either γ < * β, hence γ < β, or β = γ. In both cases, Property A implies that β < γ fails.

Corollary 2.10 directly implies Theorem 1 stated in the introduction. Indeed, the characterization of the braid ordering given in Theorem 1 is nothing but the recursive definition of the ordering < * . Finally, we obtain a new proof of Laver's result, together with a determination of the order type.

Corollary 2.11. The restriction of the Dehornoy ordering to the dual braid monoid B + * n is a well-ordering, and its order type is the ordinal ω^(ω^(n-2)).

Proof. It is standard that, if (X, <) is a well-ordering of ordinal type λ, then the ShortLex-extension of < to the set of all finite sequences of elements of X is a well-ordering of ordinal type λ^ω-see [16]. The ordinal type of < * on B + * 2 is ω, the order type of the standard ordering of natural numbers. So, an immediate induction shows that, for each n ≥ 2, the ordinal type of < * on B + * n is at most ω^(ω^(n-2)). A priori, this is only an upper bound, because it is not true that every sequence of braids in B + * n−1 is the φ n -splitting of a braid of B + * n .
However, by construction, the monoid B + * n includes the positive braid monoid B + n , and it was shown in [4]-or, alternatively, in [6]-that the order type of the restriction of the braid ordering to B + n is ω^(ω^(n-2)). Hence the ordinal type of its restriction to B + * n is at least that ordinal, and, finally, we have equality. (Alternatively, we could also directly construct a type ω^(ω^(n-2)) increasing sequence in B + * n .)

Remark 2.12. By construction, the ordering < is invariant under left-multiplication. Another consequence of Corollary 2.10 is that the ordering < * is invariant under left-multiplication as well. Note that the latter result is not obvious at all from the direct definition of that relation.

3. A Key Lemma

So, our goal is to prove that the rotating ordering of Definition 2.1 and the standard braid ordering coincide. The result will follow from the fine properties of the rotating normal form and of the φ n -splitting. The aim of this section is to establish these properties. Most of them are improvements of properties established in [12], and we shall heavily use the notions introduced in that paper.

3.1. Sigma-positive braids of type a p,n . We shall prove Theorem 2.8 by using an induction on the number of strands n. Actually, in order to maintain an induction hypothesis, we shall prove a stronger implication: instead of merely proving that, if β is < * -smaller than γ, then the quotient braid β −1 γ is σ-positive, we shall prove the more precise conclusion that β −1 γ is σ-positive of type a p,n for some p related to the last letter of γ.

Definition 3.1. -For p ≤ n−1, a braid is called a p,n -dangerous if it can be expressed as δ −1 f (d),n−1 δ −1 f (d−1),n−1 ... δ −1 f (1),n−1 , with f (d) ≥ f (d−1) ≥ ... ≥ f (1) = p. -A braid is called σ i -nonnegative if it is σ i -positive or it belongs to B i . -For p ≤ n−2, a braid β is called σ-positive of type a p,n if it can be expressed as β + · δ p,n · β − , where β + is σ n−1 -nonnegative and β − is a p,n -dangerous.
-A braid β is called σ-positive of type a n−1,n if it is 1, or equal to β ′ · a n−1,n , where β ′ is a σ-positive braid of type a 1,n .

Note that an a p,n -dangerous braid with p ≠ n−1 is not the trivial one, i.e., is different from 1, as it contains δ p,n−1 , which is non trivial. On the other hand, the only a n−1,n -dangerous braid is 1. We observe that the definition of σ-positive of type a n−1,n is different from the definition of σ-positive of type a p,n for p < n−1 (technical reasons make such a distinction necessary). Saying that a braid is σ-positive of type a p,n is motivated by the fact that a p,n is the simplest σ-positive braid of its type.

Lemma 3.2. Assume that β is a σ-positive braid of type a p,n . Then (i) β is σ n−1 -positive, (ii) φ n+1 (β) is σ-positive of type a p+1,n+1 , (iii) if p = 1 then β δ −t n−1 is σ-positive of type a 1,n for all t ≥ 0, (iv) if β ≠ a n−1,n holds, then γ β is σ-positive of type a p,n for every σ n−1 -nonnegative braid γ.

Proof. (i) An a p,n -dangerous braid is σ n−1 -nonnegative (actually it is σ n−2 -negative), and the braid δ p,n is σ n−1 -positive. Therefore β is σ n−1 -positive. (ii) With the notation of Definition 3.1, let δ −1 f (d),n−1 ... δ −1 f (1),n−1 be the decomposition of β − , with f (1) = p. Then we have φ n+1 (β − ) = δ −1 f (d)+1,n ... δ −1 f (1)+1,n , an a p+1,n+1 -dangerous word. By definition, the braid β + can be represented by a word on the alphabet σ ±1 i with i ≤ n−2. As, for i ≤ n−2, the image of σ i by φ n+1 is σ i+1 , the braid φ n+1 (β + ) is σ n -nonnegative. So the relation φ n+1 (β) = φ n+1 (β + ) · δ p+1,n+1 · φ n+1 (β − ) witnesses that φ n+1 (β) is σ-positive of type a p+1,n+1 . Point (iii) directly follows from the fact that, if γ − is an a 1,n -dangerous braid, then, for each t ≥ 0, the braid γ − δ −t n−1 is also a 1,n -dangerous. (iv) Assume p ≤ n−2. Then, by definition, we have β = β + · δ p,n · β − , where β + is σ n−1 -nonnegative and β − is a p,n -dangerous. Hence, we get γ β = γ β + · δ p,n · β − .
As the product of σ n−1 -nonnegative braids is σ n−1 -nonnegative, the braid γ β is σ-positive of type a p,n . Assume now p = n−1. As, by hypothesis, β is different from a n−1,n , we have β = β ′ · a n−1,n , where β ′ is σ-positive of type a 1,n . The case p ≤ n−2 implies that the braid γ β ′ is σ-positive of type a 1,n . Hence the braid γ β, which is equal to γ β ′ · a n−1,n , is σ-positive of type a n−1,n .

Remark 3.3. For t ≥ 1, the braid δ n,t is σ-positive of type a 1,n . Indeed, by Proposition 2.6(ii), we have δ n,t = δ t−1 n · δ n · δ −t n−1 , the right-hand side being an explicit σ-positive braid of type a 1,n .

3.2. Properties of σ-positive braids of type a p,n . We now show that the entries in a φ n -splitting give rise to σ-positive braids of type a p,n , for some p that can be effectively controlled.

Lemma 3.4. For n ≥ 3, every braid with last letter a p,n is σ-positive of type a p,n .

Proof. Let β be a braid of B + * n with last letter a p,n (Definition 1.21). Put β = β ′ · a p,n . Assume first p ≤ n−2. Then, by (1.9), we have β = β ′ · δ p,n · δ −1 p,n−1 , an explicit σ-positive braid of type a p,n , since the braid β ′ is positive, hence σ n−1 -nonnegative, and δ −1 p,n−1 is a p,n -dangerous. Assume now p = n−1. The case β ′ = 1 is clear. For β ′ ≠ 1, by Lemma 1.22(iii), there is a positive braid β ′′ satisfying β ′ = β ′′ · a q,n for some q. The relation a 1,q a q,n = a q,n a 1,n implies a q,n = a −1 1,q a q,n a 1,n . Using (1.9) for a 1,n gives β ′ = β ′′ a −1 1,q a q,n · δ 1,n · δ −1 1,n−1 , an explicit σ-positive braid of type a 1,n . Therefore, β is σ-positive of type a n−1,n .

We shall see now that the normal form of every braid β of B + * n−1 such that the B + * n−1 -tail of φ n (a p,n β) is trivial contains a sequence of overlapping letters a r,s . Words containing such sequences are what we shall call ladders.

Definition 3.5. For n ≥ 3, we say that a normal word w is an a p,n -ladder lent on a q−1,n−1 , if there exists a decomposition w = w 0 x 1 w 1 ...
w h−1 x h w h , (3.1) and a sequence p = f (0) < f (1) < ... < f (h) = n−1 such that (i) for each k ≤ h, the letter x k is of the form a e(k),f (k) with e(k) < f (k−1) < f (k), (ii) for each k < h, the word w k contains no letter a p,q with p < f (k) < q, (iii) the last letter of w is a q−1,n−1 . By convention, an a n−1,n -ladder lent on a q−1,n−1 is a word on the letters a p,q whose last letter is a q−1,n−1 .

The concept of a ladder is easily illustrated by representing the generators a p,q as a vertical line from the pth line to the qth line on an n-line stave. Then, for every k ≥ 0, the letter x k looks like a bar of a ladder-see Figure 3.

Proposition 3.6. [12] Assume that (β b , ... , β 1 ) is the φ n -splitting of some braid of B + * n . Then, for each k in {b−1, ... , 3}, the normal form of β k is a φ n (β # k+1 )-ladder lent on β # k . The same results hold for k = 2 whenever β 2 is not 1.

A direct consequence of Proposition 5.7 of [12] is

Lemma 3.7. Assume n ≥ 3, let β be a braid represented by an a p,n -ladder lent on a q−1,n−1 with q ≠ n−1, and let γ − be an a p,n -dangerous braid. Then γ − β is a σ-positive braid of type a q−1,n−1 .

We are now ready to prove that the non-terminal entries of a φ n -splitting (β b , ... , β 1 ) have the expected property, namely that the braid β k provides a protection against a φ n (β # k+1 )-dangerous braid, in the sense that if γ − k+1 is a β # k+1 -dangerous braid, then the braid φ n (γ − k+1 ) β k is σ-positive of type β # k .

Proposition 3.8. Assume that (β b , ... , β 1 ) is a φ n -splitting. Then, for each k in {b−1, ... , 3} and every β # k+1 -dangerous braid γ − k+1 , the braid φ n (γ − k+1 ) β k is σ-positive of type β # k . Moreover φ n (γ − k+1 ) β k is different from a n−2,n−1 , except if β k is itself a n−2,n−1 . The same result holds for k = 2, unless β 2 is the trivial braid 1.

Proof. Take k in {b−1, ... , 3}. By definition of a dangerous braid, the braid φ n (γ − k+1 ) is φ n (β # k+1 )-dangerous. Assume β # k ≠ a n−2,n−1 .
By Proposition 3.6, the normal form of β k is a φ n (β # k+1 )-ladder lent on β # k . Then, by Lemma 3.7, the braid φ n (γ − k+1 ) β k is σ-positive of type β # k . Assume now β # k = a n−2,n−1 with β k ≠ a n−2,n−1 . By Proposition 3.6 again, the normal form of β k is a φ n (β # k+1 )-ladder lent on a n−2,n−1 . Let w ′ a n−2,n−1 be the normal form of β k . By definition of a ladder, as the letter a n−2,n−1 does not satisfy the condition (i) of Definition 3.5, the word w ′ is an a p,n -ladder lent on a p,n−1 for some p (see Lemma 1.22(iii)). We denote by β ′ k the braid represented by w ′ . Then, by Lemma 3.7, the braid φ n (γ − k+1 ) β ′ k is σ-positive of type a p,n−1 . Then it is the product β ′+ k · δ p,n−1 · β ′− k . The relation δ 1,p δ p,n−1 = δ 1,n−1 implies that the braid φ n (γ − k+1 ) β ′ k is equal to β ′+ k δ −1 1,p · δ 1,n−1 · β ′− k , where β ′+ k δ −1 1,p is σ n−2 -nonnegative and β ′− k is a 1,n−1 -dangerous. Then φ n (γ − k+1 ) β ′ k is σ-positive of type a 1,n−1 . Hence φ n (γ − k+1 ) β k is σ-positive of type a n−2,n−1 . Assume finally β k = a n−2,n−1 . As the only a n−2,n−1 -dangerous braid is trivial, the braid φ n (γ − k+1 ) β k is equal to a n−2,n−1 , a σ-positive braid of type a n−2,n−1 . The same arguments establish the case k = 2 with β 2 ≠ 1.

3.3. The Key Lemma. We arrive at our main technical result. It mainly says that, if a braid β of B + * n has n-breadth b, then the braid δ −1 n,b−2 · β is either σ-positive or trivial. Actually, the result is stronger: the additional information is first that we can control the type of the quotient above, and second that a similar result holds when we replace the leftmost entry of the φ n -splitting of β with another braid of B + * n−1 that resembles it enough. This stronger result, which unfortunately makes the statement more complicated, will be needed in Section 4 for the final induction on the braid index n.

Proposition 3.9. Assume n ≥ 3 and that (β b , ...
, β 1 ) is the φ n -splitting of a braid β in B + * n with b ≥ 3. Let a q,n be the last letter of β β −1 1 . Whenever γ b is a σ-positive braid of type β # b , the braid

δ −1 n,b−2 · φ b−1 n (γ b ) · φ b−2 n (β b−1 ) · ... · φ n (β 2 ), (3.2)

is trivial or σ-positive of type a q,n , the first case occurring only for q = 1.

Proof. Put β * = δ −1 n,b−2 · φ b−1 n (γ b ) · φ b−2 n (β b−1 ) · ... · φ n (β 2 ) and a p−1,n−1 = β # b (Lemma 1.22). First, we decompose the left fragment δ −1 n,b−2 · φ b−1 n (γ b ) of β * as a product of a σ n−1 -nonnegative braid and a dangerous braid. By definition of a σ-positive braid of type β # b , we have γ b = γ + b δ p−1,n−1 γ − b , where γ − b is a β # b -dangerous braid and where γ + b is σ n−2 -nonnegative. Using Proposition 2.6(ii), we obtain

δ −1 n,b−2 · φ b−1 n (γ b ) = δ b−2 n−1 δ −b+2 n φ b−1 n (γ + b δ p−1,n−1 ) φ b−1 n (γ − b ). (3.3)

By (1.7), we have δ −b+2 n φ b−1 n (γ + b δ p−1,n−1 ) = φ n (γ + b δ p−1,n−1 ) δ −b+2 n . Using the relation δ p,n δ −1 n = δ −1 p , an easy consequence of (1.11), we obtain

δ −b+2 n φ b−1 n (γ + b δ p−1,n−1 ) = φ n (γ + b ) δ −1 p δ −b+3 n . (3.4)

Substituting (3.4) in (3.3), we find

δ −1 n,b−2 · φ b−1 n (γ b ) = δ b−2 n−1 φ n (γ + b ) δ −1 p δ −b+3 n φ b−1 n (γ − b ). (3.5)

From there, we deduce that β * is equal to

δ b−2 n−1 φ n (γ + b ) δ −1 p · δ −b+3 n φ b−1 n (γ − b ) φ b−2 n (β b−1 ) ... φ n (β 2 ). (3.6)

Write β * * = δ −b+3 n φ b−1 n (γ − b ) φ b−2 n (β b−1 ) ... φ n (β 2 ). Note that the left factor of (3.6), which is δ b−2 n−1 φ n (γ + b ) δ −1 p , is σ n−1 -nonnegative. At this point, four cases may occur.

Case 1: β 2 ∉ {1, a n−2,n−1 }. By Lemma 5.9 of [12], the braid β * * is equal to β ′′ φ 2 n (γ − 3 ) φ n (β 2 ) β 1 , where β ′′ is a σ n−1 -nonnegative braid and where γ − 3 is a β # 3 -dangerous braid. Put β ′ 2 = φ n (γ − 3 ) β 2 . By Proposition 3.8, β ′ 2 is σ-positive of type β # 2 and different from a n−2,n−1 .
We deduce that β * is equal to

δ b−2 n−1 φ n (γ + b ) δ −1 p β ′′ · φ n (β ′ 2 ). (3.7)

The left factor of (3.7) is σ n−1 -nonnegative, while the right factor, namely φ n (β ′ 2 ), is different from a n−1,n and σ-positive of type φ n (β # 2 ) by Lemma 3.2(ii). As, in this case, the last letter of β β −1 1 is φ n (β # 2 ), we conclude using Lemma 3.2(iv).

Case 2: β 2 ∈ {1, a n−2,n−1 }, β 3 = ... = β k−1 = a n−2,n−1 and β k ≠ a n−2,n−1 for some k ≤ b−1. If β 2 is trivial, then the last letter of β β −1 1 is a 1,n ; otherwise the last letter of β β −1 1 is φ n (a n−2,n−1 ), i.e., a n−1,n (a direct consequence of Lemma 1.22). As the product of a σ-positive braid of type a 1,n with a n−1,n is a σ-positive braid of type a n−1,n , it is enough to prove that the braid β * is the product of a σ-positive braid of type a 1,n with φ n (β 2 ). By Lemma 5.9 of [12], the braid β * * is equal to

β ′′ δ −k+2 n φ k n (γ − k+1 ) φ k−1 n (β k ) φ k−2 n (a n−2,n−1 ) ... φ 2 n (a n−2,n−1 ) φ n (β 2 ), (3.8)

with β ′′ a σ n−1 -nonnegative braid and γ − k+1 a β # k+1 -dangerous braid. Proposition 3.8 implies that the braid φ n (γ − k+1 ) β k is σ-positive of type β # k . By Corollary 3.11 of [12], the last letter of β k is a n−2,n−1 . Then, by Lemma 3.2(ii) the braid φ 2 n (γ − k+1 ) φ n (β k ) is σ-positive of type a n−1,n . Hence, by definition of a σ-positive braid of type a n−1,n , we have the relation

φ 2 n (γ − k+1 ) φ n (β k ) = β ′ k φ n (a n−2,n−1 ), (3.9)

where β ′ k is a σ-positive braid of type a 1,n . Substituting (3.9) in (3.8) gives that β * * is equal to

β ′′ δ −k+2 n φ k−2 n (β ′ k ) φ k−1 n (a n−2,n−1 ) φ k−2 n (a n−2,n−1 ) ... φ 2 n (a n−2,n−1 ) φ n (β 2 ).

Using φ n (a n−2,n−1 ) δ −1 n = δ −1 n−1 and (1.7), we obtain that the right factor of (3.6) is β ′′ β ′ k δ −k+2 n−1 φ n (β 2 ). As β ′ k is a σ-positive braid of type a 1,n , Lemma 3.2(iii) implies that β ′ k δ −k+2 n−1 is σ-positive of type a 1,n , and so is β ′′ β ′ k δ −k+2 n−1 by Lemma 3.2(iv).
Hence, by (3.6), the braid β * is the product of a σ-positive braid of type a 1,n with φ n (β 2 ).

Case 3: β 2 ∈ {1, a n−2,n−1 }, β 3 = ... = β b−1 = a n−2,n−1 and γ b ≠ a n−2,n−1 . As in Case 2, it is enough to prove that the braid β * is the product of a σ-positive braid of type a 1,n with φ n (β 2 ). Using Proposition 2.6(ii) and (1.7) in its definition, the braid β * is equal to

δ b−2 n−1 · φ n (γ b ) δ −1 n · φ n (a n−2,n−1 ) δ −1 n · ... · φ n (a n−2,n−1 ) δ −1 n φ n (β 2 ).

By Corollary 3.11 of [12], the last letter of β b is a n−2,n−1 , so γ b is σ-positive of type a n−2,n−1 . Hence φ n (γ b ) is σ-positive of type a n−1,n and is different from a n−1,n . Then, by definition of a σ-positive braid of type a n−1,n , there exists a σ-positive braid β ′ b of type a 1,n satisfying φ n (γ b ) = β ′ b a n−1,n . Using φ n (a n−2,n−1 ) δ −1 n = δ −1 n−1 , we deduce that the braid β * is equal to

δ b−2 n−1 · β ′ b δ −b+2 n−1 · φ n (β 2 ). (3.10)

By Lemma 3.2(iii), the middle factor of (3.10), namely β ′ b δ −b+2 n−1 , is σ-positive of type a 1,n . Then β * is the product of a σ-positive braid of type a 1,n with φ n (β 2 ).

Case 4: β 2 ∈ {1, a n−2,n−1 }, β 3 = ... = β b−1 = a n−2,n−1 and γ b = a n−2,n−1 . By definition, we have

β * = δ b−2 n−1 · φ n (a n−2,n−1 ) δ −1 n ... φ n (a n−2,n−1 ) δ −1 n φ n (β 2 ). (3.11)

Using φ n (a n−2,n−1 ) δ −1 n = δ −1 n−1 once again, we deduce β * = φ n (β 2 ). The braid φ n (β 2 ) is either trivial or equal to a n−1,n , a σ-positive word of type a n−1,n , as expected. So the proof of the Key Lemma is complete.

4. Proof of the main result

We are now ready to prove Theorem 2.8, which states that if β, γ are braids of B + * n , then

β < * γ implies β < γ, (4.1)

where < * refers to the ordering of Definition 2.1 and < refers to the Dehornoy ordering, i.e., β < γ means that the quotient-braid β −1 γ is σ-positive. We shall split the argument into three steps.
The first step consists in replacing the initial problem that involves two arbitrary braids β, γ with two problems, each of which only involves one braid. To do that, we use the separators δ n,t of Definition 2.5, and address the problem of comparing one arbitrary braid with the special braids δ n,t . We shall prove that

β < * δ n,t implies β < δ n,t , (4.2)
δ n,t ≤ * β implies δ n,t ≤ β . (4.3)

So, essentially, we have three things to do: proving (4.2), proving (4.3), and showing how to deduce the general implication (4.1).

4.1. Proofs of (4.2) and (4.3). We begin with the implication (4.2). Actually, we shall prove a stronger result, needed to maintain an inductive argument in the proof of Theorem 2.8.

Proposition 4.1. For n ≥ 3, the implication (4.2) is true. Moreover for t ≥ 1, the relation β < * δ n,t implies that β −1 δ n,t is σ-positive of type a 1,n .

Proof. Take β in B + * n and assume β < * δ n,t for some t ≥ 0. Let (β b , ... , β 1 ) be the φ n -splitting of β. By Proposition 2.6(iii), we necessarily have b ≤ t+1. If t = 0 holds, then the braid β lies in B + * n−1 and the quotient β −1 δ n,0 , which is β −1 a n−1,n , is σ-positive. If t ≥ 1 and b ≤ 1 hold, then the braid β −1 is σ n−1 -nonnegative, and as δ n,t is σ-positive of type a 1,n , Lemma 3.2 implies that the quotient β −1 δ n,t is σ-positive of type a 1,n . Assume now t ≥ 1 and b ≥ 2. Then, by Proposition 2.6(ii), we find

β −1 δ n,t = β −1 δ t n δ −t n−1 = β −1 1 · φ n (β −1 2 ) ... φ b−1 n (β −1 b ) · δ t n · δ −t n−1 .

Using Relation (1.7), we push b−1 factors δ n to the left and dispatch them between the factors β −1 k :

β −1 δ n,t = β −1 1 φ n (β −1 2 ) ... φ b−2 n (β −1 b−1 ) φ b−1 n (β −1 b ) δ b−1 n δ t−b+1 n δ −t n−1
= β −1 1 φ n (β −1 2 ) ... φ b−2 n (β −1 b−1 ) δ b−2 n δ n β −1 b δ t−b+1 n δ −t n−1
...
= β −1 1 δ n β −1 2 ... δ n β −1 b−1 δ n β −1 b δ t−b+1 n δ −t n−1 .

As B + * n−1 is a Garside monoid, there exists an integer k such that β −1 b δ k n−1 belongs to B + * n−1 . Call the latter braid β ′ b .
Thus, we have δ n β −1 b = δ n β ′ b δ −k n−1 . Relation (1.7) implies δ n β ′ b δ −k n−1 = φ n (β ′ b ) δ n δ −k n−1 . Then the braid β −1 δ n,t is equal to

β −1 1 δ n β −1 2 ... δ n β −1 b−1 φ n (β ′ b ) · δ n δ −k n−1 δ t−b+1 n δ −t n−1 . (4.4)

Each braid β k belongs to B + * n−1 , and so its inverse β −1 k does not involve the nth strand. Hence the left factor of (4.4), namely β −1 1 δ n β −1 2 ... δ n β −1 b−1 φ n (β ′ b ), is σ n−1 -nonnegative. If b = t+1 holds, the right factor of (4.4), namely δ n δ −k n−1 δ t−b+1 n δ −t n−1 , is equal to the braid δ n · δ −t−k n−1 , which shows it is a σ-positive braid of type a 1,n . If b ≤ t holds, (4.4) ends with δ n δ −t n−1 , which is a σ-positive braid of type a 1,n , and the factor δ n δ −k n−1 δ t−b n is σ n−1 -nonnegative. In each case, we conclude using Lemma 3.2(iii) that β −1 δ n,t is σ-positive of type a 1,n .

Using the Key Lemma of Section 3, i.e., Proposition 3.9, we now establish the implication (4.3).

Proposition 4.2. For n ≥ 3, the implication (4.3) is true.

Proof. Take β in B + * n and assume δ n,t ≤ * β. Let (β b , ... , β 1 ) be the φ n -splitting of β. By definition of < * , the relation δ n,t ≤ * β implies t ≤ b−2. Then δ −1 n,t β is equal to

δ −1 n,t δ n,b−2 · δ −1 n,b−2 β. (4.5)

By Lemma 2.7, the factor δ −1 n,t δ n,b−2 of (4.5) is σ-positive or trivial. By Lemma 3.4, the braid β b is σ-positive of type β # b . Then, Proposition 3.9 guarantees that the right factor of (4.5), namely δ −1 n,b−2 β, is σ-positive or trivial.

Iterating the splitting operation, one can associate with every braid β of B + * n a tree T (β) with branches of length n−2 and natural numbers labeling the leaves (see [9] for an analogous construction). Then Theorem 1 immediately implies that, for β, γ in B + * n , the relation β < γ holds if and only if the tree T (β) is ShortLex-smaller than the tree T (γ).

Remark 4.4. Whether the tools developed in [12] and in the current paper may be adapted to the case of B + n is an open question. The starting point of our approach is very similar to that of [9] and [4].
However, it seems that the machinery of ladders and dangerous braids involved in the technical results of Section 3 is really specific to the case of the dual monoids, and heavily depends on the highly redundant character of the relations connecting the Birman-Ko-Lee generators. By contrast, a much more promising approach would be to investigate the restriction of the finite Thurston-type braid orderings of [17] to the monoids B + * n along the lines developed in [14]. In particular, it should be possible to determine the isomorphism type explicitly.

Definition 1.16. For n ≥ 3 and β a braid of B + * n , the maximal braid β 1 lying in B + * n−1 that right-divides the braid β is called the B + * n−1 -tail of β.

Figure 2. The braid δ n,r as a separator in (B + * n , < * ), hence in (B + * n , <) as well once Theorem 2.8 is proved.

Definition 3.1. Assume n ≥ 3. A braid is called a p,n -dangerous if it admits one decomposition of the form

Figure 3. The bars of the ladder are represented by black thick vertical lines. An a 2,5 -ladder lent on a 3,5 (the last letter). The gray line starts at position 2 and goes up to position 5 using the bars of the ladder. The empty spaces between bars in the ladder are represented by a framed box. In such boxes the vertical line representing the letter a p,q does not cross the gray line.

Proof of Proposition 4.3. We use induction on n. For n = 2, everything is obvious, as both < and < * coincide with the standard ordering of natural numbers. Assume n ≥ 3, and β < * γ where β, γ belong to B + * n and β ≠ 1 holds. Then γ ≠ 1 holds as well. Let (β b , ... , β 1 ) and (γ c , ... , γ 1 ) be the φ n -splittings of β and γ. As β < * γ holds, we have b ≤ c. Write β c = ... = β b+1 = 1. Let t be the maximal integer in {1, ... , c} satisfying β t < * γ t . By definition of < * , such a t exists. Write γ ′ t = β −1 t γ t .
By induction hypothesis, the braid γ ′ t is σ-positive. Moreover, if t ≥ 2 holds, then the braid γ ′ t is σ-positive of type γ # t . Assume t = 1. Then the braid β −1 γ is equal to γ ′ t . Hence, it is σ-positive. As the B + * n−1 -tail of γ 1 is non-trivial, we have nothing more to prove. Assume now t ≥ 2. Let a q,n be the last letter of φ t−1 n (γ t ) · ... · φ n (β 2 ). The sequence (β t−1 , ... , β 1 ) is a φ n -splitting of a braid of n-breadth t−1. Then Proposition 4.1 implies that the braid β ′ , defined so that β −1 γ = β ′ · γ ′ · γ 1 holds (see below), is σ-positive of type a 1,n . As γ ′ t is σ-positive of type γ # t , Proposition 3.9 implies that the braid γ ′ is σ-positive of type a q,n or trivial (the latter occurs only for q = 1). Then, in any case, the braid β ′ γ ′ is σ-positive of type a q,n . As, by construction, we have β −1 γ = β ′ · γ ′ · γ 1 , the braid β −1 γ is σ-positive. Moreover, assume that the B + * n−1 -tail of γ is trivial. Then γ 1 is trivial and the braid γ ends with a q,n . In this case we have β −1 γ = β ′ · γ ′ , a σ-positive braid of type a q,n . Therefore β −1 γ is a σ-positive braid of type γ # .

Our proof of Proposition 4.3 is therefore complete, and so is the proof of Theorem 2.8, which is strictly included in the latter.

So, we now have a complete description of the restriction of the Dehornoy ordering of braids to the Birman-Ko-Lee monoid B + * n . The characterization of Theorem 1 is inductive, connecting the ordering on B + * n to the ordering on B + * n−1 . Actually, it is very easy to obtain a non-inductive formulation. Indeed, we can define the iterated splitting T (β) of a braid β of B + * n to be the tree obtained by substituting the φ n−1 -splittings of the entries in the φ n -splitting, and iterating with the φ n−2 -splittings, and so on until we reach B + * 2 , i.e., the natural numbers.
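As an aside, the ShortLex comparison of iterated splittings used here is easy to state in code. The following sketch is our own illustration (trees are encoded as nested tuples with natural numbers at the leaves; this encoding is not taken from the text):

```python
def shortlex_less(s, t):
    """Return True if s precedes t in the ShortLex order:
    shorter tuples come first, and tuples of equal length are
    compared entry by entry from the left (recursively, since
    entries may themselves be subtrees)."""
    if len(s) != len(t):
        return len(s) < len(t)
    for u, v in zip(s, t):
        if u != v:
            if isinstance(u, tuple) and isinstance(v, tuple):
                return shortlex_less(u, v)
            return u < v  # leaves are natural numbers
    return False  # the tuples are equal

# toy trees standing in for iterated splittings
T_beta = ((1, 2), (0, 1))
T_gamma = ((1, 2), (0, 2))
print(shortlex_less(T_beta, T_gamma))  # True: they differ in the last leaf
```

For the iterated splittings of Theorem 1 all branches have the same length, so the length comparison only matters at the top level; the sketch handles the general case anyway.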
So, there remains to treat the "Lex"-case, i.e., the case when β and γ have the same n-breadth, and this is what we do now. Actually, as was already mentioned, in order to maintain an induction hypothesis, we shall prove a stronger implication: this is why we shall consider the "Short"- and the "Lex"-cases simultaneously.

References

[1] D. Bessis, The dual braid monoid, Ann. Sci. École Norm. Sup. 36 (2003), no. 5, 647-683.
[2] J. Birman, K.H. Ko, and S.J. Lee, A new approach to the word and conjugacy problems in the braid groups, Adv. Math. 139 (1998), no. 2, 322-353.
[3] S. Burckel, L'ordre total sur les tresses positives, Ph.D. thesis, Université de Caen, 1994.
[4] S. Burckel, The wellordering on positive braids, J. Pure Appl. Algebra 120 (1997), no. 1, 1-17.
[5] S. Burckel, Computation of the ordinal of braids, Order (1999), 291-304.
[6] L. Carlucci, P. Dehornoy, and A. Weiermann, Unprovability statements involving braids, arXiv:math.LO/0711.3785.
[7] P. Dehornoy, Braid groups and left distributive operations, Trans. Amer. Math. Soc. 345 (1994), no. 2, 293-304.
[8] P. Dehornoy, Groupes de Garside, Ann. Sci. École Norm. Sup. 35 (2002), 267-306.
[9] P. Dehornoy, Alternating normal forms for braids and locally Garside monoids, J. Pure Appl. Algebra 212 (2008), no. 11, 2413-2439.
[10] P. Dehornoy, I. Dynnikov, D. Rolfsen, and B. Wiest, Ordering braids, Mathematical Surveys and Monographs, Amer. Math. Soc., in press.
[11] P. Dehornoy and L. Paris, Gaussian groups and Garside groups, two generalizations of Artin groups, Proc. London Math. Soc. 79 (1999), no. 3, 569-604.
[12] J. Fromentin, Every braid admits a short σ-definite representative, arXiv:math.GR/0811.3902.
[13] G. Higman, Ordering by divisibility in abstract algebras, Proc. London Math. Soc. 2 (1952), 326-336.
[14] T. Ito, On finite Thurston type orderings of braid groups, arXiv:math.GR/0810.4074.
[15] R. Laver, Braid group actions on left distributive structures and well-orderings in the braid group, J. Pure Appl. Algebra 108 (1996), no. 1, 81-98.
[16] A. Levy, Basic set theory, Springer Verlag, 1979.
[17] H. Short and B. Wiest, Orderings of mapping class groups after Thurston, Enseign. Math. 46 (2000), 279-312.
On some aspects of approximation of ridge functions

Anton Kolleck, Jan Vybíral
Mathematical Institute, Technical University Berlin, Strasse des 17. Juni 136, D-10623 Berlin, Germany

Abstract. We present effective algorithms for uniform approximation of multivariate functions satisfying some prescribed inner structure. We extend in several directions the analysis of recovery of ridge functions f(x) = g(⟨a, x⟩) as performed earlier by one of the authors and his coauthors. We consider ridge functions defined on the unit cube [−1, 1]^d as well as recovery of ridge functions defined on the unit ball from noisy measurements. We conclude with the study of functions of the type f(x) = g(‖a − x‖²_{ℓ_2^d}).

DOI: 10.1016/j.jat.2015.01.003 · arXiv:1406.1747
On some aspects of approximation of ridge functions (6 Jun 2014)

Keywords: Ridge functions; High-dimensional function approximation; Noisy measurements; Compressed Sensing; Dantzig selector. 2010 MSC: 65D15, 41A25.

1. Introduction

Functions depending on a large number of variables play nowadays a crucial role in many areas, including parametric and stochastic PDE's, bioinformatics, financial mathematics, data analysis and learning theory. Together with an extensive computational power being used in these applications, results on basic numerical aspects of these functions became crucial. Unfortunately, multivariate problems suffer often from the curse of dimension, i.e. the minimal number of operations necessary to achieve (an approximation of) a solution grows exponentially with the underlying dimension of the problem. Although this effect was observed many times in the literature, we refer to [24] for probably the most impressive result of this kind, namely that even the uniform approximation of infinitely-differentiable functions is intractable in high dimensions.
In the area of Information Based Complexity it was possible to achieve a number of positive results on tractability of multivariate problems by posing an additional (structural) assumption on the functions under study. The best studied concepts in this area include tensor product constructions and different concepts of anisotropy and weights. We refer to the series of monographs [25,26,27] for an extensive treatment of these and related problems. We pursue the direction initiated by Cohen, Daubechies, DeVore, Kerkyacharian and Picard in [11] and further developed in a series of recent papers [16,19,23]. This line of study is devoted to ridge functions, which are multivariate functions f taking the form f(x) = g(⟨a, x⟩) for some univariate function g and a non-zero vector a ∈ R^d. We refer also to [14,30,31] for a related approach. Functions of this type are by no means new in mathematics. They appear for example very often in statistics in the frame of the so-called single index models. They play also an important role in approximation theory, where their simple structure motivated the question whether a general function could be well approximated by sums of ridge functions. The pioneering work in this field is [22], where the term "ridge function" was first introduced, and also [20], where the fundamentality of ridge functions was investigated. Ridge functions appeared also in mathematical analysis of neural networks [4,29] and as the basic building blocks of ridgelets of Candès and Donoho [6]. A survey on approximation by (sums of) ridge functions was given in [28]. The biggest difference between our setting and the usual approach of statistical learning and data analysis is that we suppose that the sampling points of f can be freely chosen, and are not given in advance. This happens, for instance, if sampling of the unknown function at a point is realized by a (costly) PDE solver.
Most of the techniques applied so far in recovery of ridge functions are based on the simple formula

∇f(x) = g′(⟨a, x⟩) · a. (1.1)

One way how to use (1.1) is to approximate the gradient of f at a point with non-vanishing g′(⟨a, x⟩). By (1.1), it is then co-linear with a. Once a is recovered, one can use any one-dimensional sampling method to approximate g. Another way to approximate a is inspired by the technique of compressed sensing [7,15]. Taking directional derivatives of f at x results into

∂f(x)/∂ϕ = ⟨∇f(x), ϕ⟩ = g′(⟨a, x⟩) ⟨a, ϕ⟩,

i.e. it gives an access to the scalar product of a with a chosen vector ϕ. If we assume that most of the coordinates of a are zero (or at least very small) and choose the directions ϕ 1 , ..., ϕ m at random, one can recover a effectively by the algorithms of sparse recovery. Our aim is to fill some gaps left so far in the analysis done in [16]. Although the possibility of extending the analysis also to functions defined on other domains than the unit ball was mentioned already in [16], no steps in this direction were done there. We study in detail ridge functions defined on the unit cube [−1, 1]^d. The crucial component of our analysis is the use of the sign of a vector sign(x), which is defined componentwise. Although the mapping x → sign(x) is obviously not continuous, the mapping (for a ∈ R^d fixed) x → ⟨a, sign(x)⟩ is continuous at a (and takes the value ‖a‖_{ℓ_1^d} there). This observation allows to imitate the approach of [16] and to adapt it to this setting. Let us remark that all our approximation schemes recover first an approximation of the vector a ∈ R^d. Afterwards, the problem becomes essentially one-dimensional and a good approximation of f by a limited number of sampling points can then be recovered by many classical methods, i.e. by spline approximation. We will therefore concentrate on an effective recovery of an approximation of a and the approximation of f will be given only implicitly.
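As a toy illustration of (1.1) (our own sketch, not the algorithm analyzed in the paper): approximating the gradient of a ridge function by first-order differences yields a vector collinear with a, which can be normalized to recover the direction of a up to sign.

```python
import numpy as np

def estimate_ridge_direction(f, x0, h=1e-6):
    """Approximate grad f(x0) by forward differences and normalize it.
    For f(x) = g(<a, x>) with g'(<a, x0>) != 0 this is, up to sign
    and the discretization error, the unit vector a / ||a||_2."""
    d = len(x0)
    grad = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        grad[i] = (f(x0 + e) - f(x0)) / h
    return grad / np.linalg.norm(grad)

# hypothetical example: g(t) = sin(t) and a = (3/5, 4/5)
a = np.array([3.0, 4.0]) / 5.0
f = lambda x: np.sin(a @ x)
a_hat = estimate_ridge_direction(f, np.zeros(2))
```

Note that the quality of the estimate depends on the step size h, which is exactly the numerical-stability issue discussed below for noisy measurements.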
Another topic only briefly discussed in [16] was the recovery of ridge functions from noisy measurements, which is an important step for every possible application of the methods so far. Furthermore, our analysis as well as the approach of [16] or even the classical results of [3] are based on approximation of first (or higher) order derivatives by differences, which poses naturally the question on numerical stability of the presented algorithms. We present an algorithm based on the Dantzig selector of [8], which allows for recovery of a ridge function also in this setting. It turns out that in the case of a small step size h > 0, the first order differences can not be evaluated with high enough precision. On the other hand, for a large step size h the first order differences do not approximate the first order derivatives well enough. Typically, there is therefore an h > 0 for which an optimal degree of approximation is achieved. Next thing we discuss is the robustness of the methods developed. We show that (without much additional effort) it can be applied also for uniform recovery of translated radial functions f(x) = g(‖a − x‖²_{ℓ_2^d}), which are constant along co-centered spheres instead of parallel hyperplanes. Similarly to the model of ridge functions, both the center a ∈ R^d and the univariate function g are unknown. Finally, we close the paper with a few numerical simulations of the algorithms presented. They highlight the surprising fact that their accuracy improves with increasing dimension. This is essentially based on the use of concentration of measure phenomenon in the underlying theory and goes in line with similar observations made in the area of compressed sensing. The paper is structured as follows. Section 2 collects some necessary notation and certain basic facts on sparse recovery from the area of compressed sensing. Section 3 extends the analysis of [16] to the setting of ridge functions defined on the unit cube.
Section 4 treats the recovery of ridge functions defined on the unit ball from noisy measurements. Section 5 studies the translated radial functions f(x) = g(‖a − x‖²_{ℓ_2^d}) and Section 6 closes with numerical examples.

2. Preliminaries

In this section we collect some notation and give an overview of results from the area of compressed sensing, which we shall need later on.

2.1. Notation

For a given vector x ∈ R^d and 0 ≤ p ≤ ∞ we define

‖x‖_{ℓ_p^d} := (Σ_{i=1}^d |x_i|^p)^{1/p}   if 0 < p < ∞,
              #{i : x_i ≠ 0}              if p = 0,
              max_{i=1,...,d} |x_i|       if p = ∞,

where #A denotes the cardinality of the set A. This notation is further complemented by putting for 0 < p < ∞

‖x‖_{ℓ_{p,∞}^d} := max_{k=1,...,d} k^{1/p} x_{(k)},

where x_{(k)}, k = 1, ..., d denotes the non-increasing rearrangement of the absolute entries of x, i.e. x_{(1)} ≥ x_{(2)} ≥ ... ≥ x_{(d)} ≥ 0 and x_{(j)} = |x_{σ(j)}| for some permutation σ : {1, ..., d} → {1, ..., d} and all j = 1, ..., d. It is a very well known fact that ‖·‖_{ℓ_p^d} is a norm for 1 ≤ p ≤ ∞ and a quasi-norm if 0 < p ≤ 1. Also ‖·‖_{ℓ_{p,∞}^d} is a quasi-norm for every 0 < p < ∞. If p = 2, the space ℓ_2^d is a Hilbert space with the usual inner product given by ⟨x, y⟩ = x^T y = Σ_{i=1}^d x_i y_i for x, y ∈ R^d.

If 1 ≤ s ≤ d is a natural number, then a vector x ∈ R^d is called s-sparse if it contains at most s nonzero entries, i.e. ‖x‖_{ℓ_0^d} ≤ s. The set of all s-sparse vectors is denoted by Σ_s^d := {x ∈ R^d : ‖x‖_{ℓ_0^d} ≤ s}. Finally, the best s-term approximation of a vector x describes how well x can be approximated by s-sparse vectors.

Definition 2.1. The best s-term approximation of a given vector x ∈ R^d with respect to the ℓ_1^d-norm is given by

σ_s(x)_1 := min_{z ∈ Σ_s^d} ‖x − z‖_{ℓ_1^d}.

2.2. Results from compressed sensing

Next we recall some basic concepts and results from compressed sensing which we will use later. Compressed sensing emerged in [7,9,15] as a method of recovery of sparse vectors x from a small set of linear measurements y = Φx.
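For illustration (our own sketch, not part of the paper): the quantity σ_s(x)_1 from Definition 2.1 is attained by keeping the s largest entries of x in absolute value and measuring what remains.

```python
import numpy as np

def best_s_term_error_l1(x, s):
    """sigma_s(x)_1: the l1-norm of what remains after keeping the
    s largest entries of x in absolute value (the minimizer over
    all s-sparse vectors z of ||x - z||_1)."""
    idx = np.argsort(np.abs(x))  # ascending order of |x_i|
    return np.abs(x)[idx[: len(x) - s]].sum()

x = np.array([5.0, -0.1, 3.0, 0.2])
# keeping the entries 5 and 3 leaves |-0.1| + |0.2| = 0.3
err = best_s_term_error_l1(x, 2)
```

In particular σ_s(x)_1 = 0 exactly for the s-sparse vectors of Σ_s^d, which is the fact used after Theorem 2.4 below.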
Since then, a vast literature on the subject appeared, concentrating on various aspects of the theory, and its applications. As it is not our aim to develop the theory of compressed sensing, but rather to use it in approximation theory, we shall restrict ourselves to the most important facts needed later on. We refer to [2,12,17,18] for recent overviews of the field and more references. We focus on the recovery of vectors from noisy measurements, i.e. we want to recover the vectors x ∈ R^d from m < d linear measurements of the form

y = Φx + e + z, (2.1)

where Φ ∈ R^{m×d} is the measurement matrix and the noise is a composition of two factors, namely of the deterministic noise e ∈ R^m and the random noise z ∈ R^m. Typically, we will assume that e is small (with respect to some ℓ_p^m norm) and that the components of z are generated independently according to a Gaussian distribution with small variance. Obviously, some conditions have to be posed on Φ, so that the recovery of x from the measurements y given by (2.1) is possible. The most usual one in the theory of compressed sensing is that the matrix Φ satisfies the restricted isometry property.

Definition 2.2. The matrix Φ ∈ R^{m×d} satisfies the restricted isometry property (RIP) of order s ≤ d if there exists a constant 0 < δ < 1 such that

(1 − δ) ‖x‖²_{ℓ_2^d} ≤ ‖Φx‖²_{ℓ_2^m} ≤ (1 + δ) ‖x‖²_{ℓ_2^d}

holds for all s-sparse vectors x ∈ Σ_s^d. The smallest constant δ for which this inequality holds is called the restricted isometry constant and we will denote it by δ_s.

In general it is very hard to decide whether a given matrix satisfies the RIP or not. This is in particular the main reason why we will use random matrices, since it turns out that those matrices satisfy the RIP with overwhelmingly high probability. We present a version of such a statement, which comes from [1].

Theorem 2.3.
For every $0 < \delta < 1$ there exist constants $C_1, C_2 > 0$ depending on $\delta$ such that the random matrix $\Phi \in \mathbb{R}^{m \times d}$ with entries generated independently as
$$\varphi_{ij} = \frac{1}{\sqrt{m}} \begin{cases} +1 & \text{with probability } 1/2,\\ -1 & \text{with probability } 1/2 \end{cases} \qquad (2.2)$$
satisfies the RIP of order $s$ for each $s \le (C_2 m)/\log(d/m)$ with RIP constant $\delta_s \le \delta$ with probability at least $1 - 2e^{-C_1 m}$.

A matrix $\Phi$ generated by (2.2) is called a normalized Bernoulli matrix. For the sake of simplicity we work with Bernoulli sensing matrices, but note that most of the statements presented below remain true for other classes of random matrices, cf. [13, Section 5].

Next we present several recovery results for our starting problem (2.1). The first result of this kind deals with the case of exact measurements (i.e. $e = z = 0$) and uses the so-called $\ell_1^d$-minimizer
$$\Delta_{\ell_1^d}(y) := \operatorname*{argmin}_{z \in \mathbb{R}^d : \Phi z = y} \|z\|_{\ell_1^d}. \qquad (2.3)$$

Theorem 2.4. Let $\Phi \in \mathbb{R}^{m \times d}$ satisfy the RIP of order $2s$ with constant $\delta_{2s} \le \delta < 1/3$. Let $x \in \mathbb{R}^d$ and let us denote $y = \Phi x$. Finally, let $\Delta_{\ell_1^d}$ be given by (2.3). Then it holds
$$\|x - \Delta_{\ell_1^d}(y)\|_{\ell_1^d} = \|x - \Delta_{\ell_1^d}(\Phi x)\|_{\ell_1^d} \le C_0\, \sigma_s^d(x)_1$$
with a constant $C_0$ depending only on $\delta$.

This theorem implies that $s$-sparse vectors are recovered exactly by the $\ell_1^d$-minimizer (2.3) in the noise-free setting, since $\sigma_s^d(x)_1 = 0$ holds for every $x \in \Sigma_s^d$. To deal with the deterministic noise $e$, we shall need some more information about the geometrical properties of Bernoulli matrices. In particular, we will make use of Theorem 3.5 and Theorem 4.1 of [13], cf. also [21].

Theorem 2.5. Let $\Phi \in \mathbb{R}^{m \times d}$ be a normalized Bernoulli matrix and let $d \ge (\log 6)^2 m$. Let $U_J = \{y \in \mathbb{R}^m : \|y\|_J \le 1\}$, where
$$\|y\|_J = \max\Bigl\{ \sqrt{m}\, \|y\|_{\ell_\infty^m} ;\ \sqrt{\frac{m}{\log(d/m)}}\, \|y\|_{\ell_2^m} \Bigr\}.$$
(i) Then there exists an absolute constant $C_3 > 0$ such that, with probability at least $1 - e^{-\sqrt{dm}}$, for every $y \in U_J$ there is an $x \in \mathbb{R}^d$ such that $\Phi x = y$ and $\|x\|_{\ell_1^d} \le C_3$.
(ii) Let $\delta > 0$ and let $C_1$ and $C_2$ be the constants from Theorem 2.3. Then there exist an absolute constant $C_3$ and a constant $C_4$ depending on $\delta$ such that, with probability at least $1 - 2e^{-C_1 m} - e^{-\sqrt{md}}$, for each $y \in U_J$ there exists a vector $x \in \mathbb{R}^d$ with $\Phi x = y$, $\|x\|_{\ell_1^d} \le C_3$ and $\|x\|_{\ell_2^d} \le C_4 \sqrt{\log(d/m)/m}$.
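The Bernoulli matrices of Theorem 2.3 are easy to generate, and the near-isometry on sparse vectors is easy to observe empirically. The sketch below (assuming NumPy; the parameters are our illustrative choice, not from the text) draws a normalized Bernoulli matrix as in (2.2) and records $\|\Phi x\|_{\ell_2^m}^2$ for random $s$-sparse unit vectors; for an RIP matrix of order $s$ these values lie in $[1-\delta, 1+\delta]$:

```python
import numpy as np

rng = np.random.default_rng(0)

m, d, s = 128, 512, 4
# Normalized Bernoulli matrix as in (2.2): entries +-1/sqrt(m), each with prob. 1/2.
Phi = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)

ratios = []
for _ in range(20):
    # Random s-sparse vector, normalized in l_2, so ||Phi x||^2 should be near 1.
    x = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(Phi @ x) ** 2)

ratios = np.array(ratios)
```

Over the trials the squared norms concentrate around 1, illustrating the restricted isometry that Theorem 2.3 guarantees with high probability.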
We will use those two theorems to handle the deterministic noise e. Further we need a similar result to handle the random noise z, therefore we recall the Dantzig selector from [8]. Definition 2.6 (Dantzig selector). For a matrix Φ ∈ R m×d and constants λ d , σ > 0 the Dantzig selector ∆ DS (y) ∈ R d of an input vector y ∈ R m is defined as the solution of the minimization problem min w∈R d w l d 1 subject to Φ T (y − Φw) l d ∞ ≤ λ d σ. (2.4) Remark 2.7. In what follows we shall use several parameters as the description of the typical frame of compressed sensing. First, we take m ≤ d to be natural numbers and denote by Φ ∈ R m×d the normalized Bernoulli matrix (2.2). We put δ := 1/6 and denote by C 1 and C 2 the constants appearing in Theorem 2.3. Next, we assume that the natural numbers s ≤ m ≤ d satisfy d ≥ (log 6) 2 m and 3s ≤ (C 2 m)/ log(d/m). (2.5) Hence, by Theorem 2.3, Φ has (with high probability) the RIP of order 3s with a constant at most 1/6. Now we can use Theorem 1.3 of [8] to handle the random noise z. Theorem 2.8. Let s, m, d be natural numbers with (2.5) and let Φ ∈ R m×d be a normalized Bernoulli matrix. Let y = Φx + z for x ∈ R d with x p,∞ ≤ R, 0 < p ≤ 1, and z ∈ R m with independent entries z i ∼ N (0, σ 2 ) . Then there exists a constant C 5 such that the Dantzig Selector (with λ d = √ 2 log d) satisfies ∆ DS (y) − x 2 l d 2 ≤ min 1≤s * ≤s 2C 5 log d s * σ 2 + R 2 s −2(1/p−1/2) * with high probability. Combining Theorem 2.5 and Theorem 2.8 we get the following new result. Theorem 2.9. Let s, m, d be natural numbers with (2.5) and let Φ ∈ R m×d be a normalized Bernoulli matrix. For x ∈ R d and e, z ∈ R m with x l d 1,∞ ≤ R, e l d 2 ≤ c ε log(d/m), e l d ∞ ≤ c ε and z i ∼ N (0, σ 2 ) for some constants R, σ, ε, c > 0 let y = Φx + e + z. 
Then there exist constants C 5 , C 6 , C 7 such that the Dantzig selector ∆ DS (with λ d = √ 2 log d) applied to y satisfies the estimate ∆ DS (y) − x l d 2 ≤ min 1≤s * ≤s 2C 5 log d s * σ 2 +R 2 s −1 * 1 2 + C 7 ε √ m √ s with high probability, whereR = 2(R + 2C 6 ε √ m). Proof. It follows from the assumptions that e J ≤ c ε √ m. Then we use Theorem 2.5 (ii) to find a vector u ∈ R d , such that Φu = e, u l d 1 ≤ C 3 e J ≤ C 3 c ε √ m, u l d 2 ≤ C 4 log(d/m)/m e J ≤ C 4 c ε log(d/m). Further we apply the triangle inequality for the · 1,∞ quasinorm (see, for instance, Lemma 2.7 in [18]) to get x + u l d 1,∞ ≤ 2 x l d 1,∞ + u l d 1,∞ ≤ 2 x l d 1,∞ + u l d 1 ≤ 2(R + C 6 ε √ m) =:R. Finally, applying Theorem 2.8 (with p = 1) we get ∆ DS (y) − x l d 2 = ∆ DS (Φx + e + z) − x l d 2 ≤ ∆ DS (Φ(x + u) + z) − (x + u) l d 2 + u l d 2 ≤ min 1≤s * ≤s 2C 5 log d s * σ 2 +R 2 s −1 * 1 2 + C 7 ε √ m √ s which finishes the proof. Approximation of ridge functions defined on cubes In this section we consider uniform approximation of ridge functions of the form f : [−1, 1] d → R, x → g( a, x ). (3.1) We assume that both the ridge vector a ∈ R d and the univariate function g (also called ridge profile) are unknown. First, we note that the problem is invariant with respect to scaling. Suppose that f is a ridge function with representation f (x) = g( a, x ). Then for any scalar λ ∈ R\{0} we putg(x) := g( 1 λ x) andã := λa to get another representation of f in the form of (3.1), namelỹ g( ã, x ) =g( λa, x ) = g 1 λ λa, x = g( a, x ) = f (x). Thus we can pose a scaling condition on a without any loss of generality. Furthermore, if g ′ (0) = 0, we can switch from a to −a, and obtain a ridge representation of f with g ′ (0) > 0. In [16], the scaling condition a l d 2 = 1 was assumed. This fitted together with both the scalar product structure used in the definition of f , as well as with the geometry of the domain of f used in [16], namely the Euclidean unit ball. 
It is easy to observe, that it will be more convenient for us to work with the ℓ d 1 -norm of a. Indeed, let us consider that the ridge profile g(t) = t is known, i.e. that we have f (x) = a, x for some (unknown) a ∈ R d , and let us assume, that we have an l d 1 -approximationâ of a with a −â l d 1 ≤ ε. Then Hölder's inequality gives us f − f ∞ = sup x∈[−1,1] d | a −â, x | ≤ sup x∈[−1,1] d a −â l d 1 x l d ∞ ≤ ε. In what follows we shall therefore assume that a l d 1 = 1 (3.2) and that g is a univariate function defined on I = [−1, 1]. We further assume that g and g ′ are Lipschitz continuous with constants c 0 , c 1 > 0, i.e. |g(t 1 ) − g(t 2 )| ≤ c 0 |t 1 − t 2 |, (3.3) |g ′ (t 1 ) − g ′ (t 2 )| ≤ c 1 |t 1 − t 2 | (3.4) holds for all t 1 , t 2 ∈ I = [−1, 1]. Finally, we assume that g ′ (0) > 0 (3.5) as it is known, cf. [23], that approximation of ridge functions may be intractable if this condition is left out. Approximation scheme without sparsity In this part we evolve an approximation scheme for ridge functions with an arbitrary ridge vector a ∈ R d , merely assuming the right normalization (3.2). After this we consider the same problem with an additional sparsity condition on a, where we will use results from compressed sensing to reduce the number of samples. Motivated by the formula (1.1) for x = 0 ∇f (0) = g ′ (0)a,(3.6) we set for a small constant h > 0 and i ∈ {1, . . . , d} a i := f (he i ) − f (0) h ,(3.7) where e 1 , . . . , e d are the usual canonical basis vectors of R d . As expected, it turns out thatã i is a good approximation of g ′ (0)a i as the mean value theorem gives a i = f (he i ) − f (0) h = g(h a, e i ) − g(0) h = g ′ (ξ h,i )a i for some ξ h,i ∈ (−|ha i |, |ha i |). And for the ℓ d 1 -approximation we obtain ã − g ′ (0)a l d 1 = d i=1 |ã i − g ′ (0)a i | = d i=1 |g ′ (ξ h,i ) − g ′ (0)||a i | ≤ d i=1 c 1 |ξ h,i ||a i | ≤ d i=1 c 1 |ha i ||a i | = c 1 h d i=1 |a i | 2 (3.8) ≤ c 1 h. 
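The estimate (3.8) can be checked numerically. In the sketch below (assuming NumPy; the profile and parameters are our choice) we take $g(t) = \tanh(t-1)$, whose derivative is Lipschitz with constant $c_1 = 4/(3\sqrt{3}) < 0.77$, form the first-order differences $\tilde a$ as in (3.7) and compare $\|\tilde a - g'(0)a\|_{\ell_1^d}$ with $c_1 h$:

```python
import numpy as np

rng = np.random.default_rng(1)

d, h = 50, 0.1
g = lambda t: np.tanh(t - 1.0)
g_prime_0 = 1.0 / np.cosh(1.0) ** 2   # g'(0) = sech^2(-1) = sech^2(1)
c1 = 0.77                             # upper bound for the Lipschitz constant of g'

# l_1-normalized ridge vector as in (3.2).
a = rng.standard_normal(d)
a /= np.sum(np.abs(a))
f = lambda x: g(a @ x)

# First-order differences (3.7): a_tilde_i = (f(h e_i) - f(0)) / h.
E = np.eye(d)
a_tilde = np.array([(f(h * E[i]) - f(np.zeros(d))) / h for i in range(d)])

# The left-hand side of (3.8).
err = np.sum(np.abs(a_tilde - g_prime_0 * a))
```

In practice the error is far below the worst-case bound $c_1 h$, since the derivation of (3.8) actually yields the sharper bound $c_1 h \sum_i |a_i|^2$.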
Thusã is a good approximation to g ′ (0)a and since we want an approximation to a and we know that a is l d 1 -normalized we set a :=ã ã l d 1 . Now we have to estimate the difference between a andâ, therefore we will use a variant of Lemma 3.4 of [16]. Lemma 3.1. Let x ∈ R d with x l d 1 = 1,x ∈ R d \{0} and λ ∈ R. Then it holds sign(λ)x x l d 1 − x l d 1 ≤ 2 x − λx l d 1 x l d 1 . Proof. This lemma is a direct consequence of the triangle inequality. First we obtain x l d 1 − |λ| = x l d 1 − λx l d 1 ≤ x − λx l d 1 and therefore sign(λ)x x l d 1 − x l d 1 ≤ sign(λ)x − sign(λ)λx x l d 1 l d 1 + sign(λ)λx − x x l d 1 x l d 1 l d 1 = x − λx l d 1 x l d 1 + (|λ| − x l d 1 )x x l d 1 l d 1 ≤ 2 x − λx l d 1 x l d 1 , which proves the claim. Remark 3.2. We only used the triangle inequality to prove the previous lemma. Thus the lemma remains true for any norm on R d . Applying this lemma to our case it holds with (3.8) and the assumption (3.5) sign(g ′ (0))â − a l d 1 = â − a l d 1 ≤ 2c 1 h ã l d 1 . (3.9) Although we now know thatâ is a good approximation of a, it is still not clear how to define the uniform approximationf of f . The naive approach (used with success in [16] for ridge functions defined on the Euclidean unit ball) is to sample f alongâ, i.e. to putĝ(t) := f (tâ), and then definê f (x) :=ĝ( â, x ). But when trying to estimate f −f ∞ , we would need to ensure that â, a is close to 1. This was indeed the case in [16], where an estimate on â − a ℓ d 2 was obtained, but it is not true any more in our setting of ℓ d 1 approximation. On the other hand, because of the normalization of a, we have a, sign(a) = d i=1 a i · sign(a i ) = d i=1 |a i | = a l d 1 = 1, where we defined the sign of a vector x ∈ R d entrywise, i.e. sign(x) := (sign(x i )) i ∈ R d . Note that this function is discontinuous, hence sign(a) and sign(â) can be far from each other, even if the difference a −â l d 1 is small. 
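This point is easy to see numerically: a small perturbation may flip the signs of many small entries, yet the pairing with $a$ barely changes. A quick sketch (assuming NumPy; the perturbation model is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

d = 100
a = rng.standard_normal(d)
a /= np.sum(np.abs(a))          # l_1-normalization (3.2)

# A small perturbation of a, renormalized; entries of a near zero may flip sign.
a_hat = a + 1e-3 * rng.standard_normal(d)
a_hat /= np.sum(np.abs(a_hat))

sign_flips = int(np.sum(np.sign(a) != np.sign(a_hat)))

# Hoelder: |<a, sign(a) - sign(a_hat)>| <= ||a - a_hat||_1 * ||sign(.)||_inf = ||a - a_hat||_1.
lhs = abs(a @ (np.sign(a) - np.sign(a_hat)))
rhs = np.sum(np.abs(a - a_hat))
```

Here $\langle a, \mathrm{sign}(a)\rangle = \|a\|_{\ell_1^d} = 1$ always holds, and the inner-product bound holds no matter how many signs flipped, which is exactly the estimate the construction of $\hat g$ exploits next.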
Nevertheless their scalar product with a is nearly the same as Hölder's inequality gives | a, sign(a) − sign(â) | = | a, sign(a) − â, sign(â) − a −â, sign(â) | ≤ a −â l d 1 sign(â) l d ∞ = a −â l d 1 . (3.10) Thus we defineĝ : [−1, 1] → R, t → f t · sign(â) (3.11) andf (x) =ĝ( â, x ). (3.12) Let us summarize our approximation algorithm as follows. Algorithm A • Input: Ridge function f (x) = g( a, x ) with (3.2)-(3.5) and h > 0 small • Putã i := f (he i ) − f (0) h , i = 1, . . . , d • Putâ :=ã ã l d 1 • Putĝ(t) = f (t · sign(â)) andf (x) =ĝ( â, x ) • Output:f We formulate the approximation properties of Algorithm A as the following theorem. Theorem 3.3. Let f : [−1, 1] d → R be a ridge function with f (x) = g( a, x ) for some a ∈ R d with (3.2) and a differentiable function g : [−1, 1] → R with (3.3)-(3.5). For h > 0 we construct the functionf as described in Algorithm A. Then f −f ∞ ≤ 2c 0 â − a l d 1 ≤ 4c 0 c 1 h g ′ (0) − c 1 h , (3.13) where the last inequality only holds if g ′ (0) − c 1 h is positive. Proof. First, we show that â − a l d 1 ≤ 2hc 1 ã l d 1 ≤ 2hc 1 g ′ (0) − c 1 h ,(3.14) where the last inequality only holds if g ′ (0) − c 1 h is positive. Due to (3.9), we only have to show the last inequality of (3.14). Since g ′ is Lipschitz continuous with Lipschitz constant c 1 we have for any y ∈ [−h, h] g ′ (0) − |g ′ (y)| ≤ |g ′ (0) − g ′ (y)| ≤ c 1 |0 − y| ≤ c 1 h and therefore |g ′ (y)| ≥ g ′ (0) − c 1 h. Withã i = g ′ (ξ h,i )a i for some ξ h,i ∈ (−|ha i |, |ha i |) ⊂ [−h, h] , it follows by the triangle inequality and (3.8) (3.14) and the second inequality in (3.13). ã l d 1 ≥ g ′ (0)a l d 1 − ã − g ′ (0)a l d 1 ≥ g ′ (0) − c 1 h which proves To prove the first inequality in (3.13), we use (3.3) and (3.10) to show thatĝ is a good uniform approximation of g on [−1, 1]. We obtain |g(t) −ĝ(t)| = |g(t) − g( a, t · sign(â) )| ≤ c 0 |t − t a, sign(â) | = c 0 |t| | a, sign(a) − sign(â) | ≤ c 0 a −â l d 1 (3.15) for each t ∈ [−1, 1]. 
Finally, we combine this estimate with the definition of f as given in (3.12) and arrive at |f (x) − f (x)| = |ĝ( â, x ) − g( a, x )| ≤ |ĝ( â, x ) − g( â, x )| + |g( â, x ) − g( a, x )| (3.16) ≤ c 0 a −â l d 1 + c 0 | a −â, x | ≤ 2c 0 a −â l d 1 . Remark 3.4. (i) The estimate (3.13) depends heavily on the value of g ′ (0). Especially, the approximation becomes difficult, when this value gets smaller and (3.13) becomes void if g ′ (0) = 0. This is a very well known aspect of approximation of ridge functions, which was studied in a great detail in [23]. We refer also to a slightly weaker condition used in [16]. (ii) If a l d 2 is small, the following improvement of (3.13) becomes of interest. First, we observe that (3.8) can be improved to ã − g ′ (0)a l d 1 ≤ c 1 h a 2 l d 2 , which results into â − a l d 1 ≤ 2c 1 h a 2 l d 2 ã l d 1 . Finally, this allows to improve (3.13) to f −f ∞ ≤ 4c 0 c 1 h a 2 l d 2 g ′ (0) − c 1 h a 2 l d 2 . Approximation with sparsity In this subsection we assume that the ridge vector a ∈ R d is not only ℓ d 1normalized, but satisfies also some sparsity condition, i.e. most of the entries of a are zero or at least very small. We will use techniques of compressed sensing to address the approximation of the ridge vector a, afterwards we obtain an approximation of f in the same way as before. Let Φ ∈ R m×d be a normalized Bernoulli matrix and let ϕ 1 , . . . , ϕ m be its rows. Taking their scalar product with the quantities in (3.6), we obtain ∂f ∂ϕ j (0) = ∇f (0), ϕ j = g ′ (0) a, ϕ j . (3.17) We use again first order differences as an approximation of the directional derivatives in (3.17), i.e. we set b j := f (hϕ j ) − f (0) h . As in the previous section the mean value theorem gives the existence of some ξ h,j with |ξ h,j | ≤ |h| · | a, ϕ j | such that b j = g ′ (ξ h,j ) a, ϕ j . In this sense, we expectb j to be a good approximation of g ′ (0) a, ϕ j and b to be a good approximation of g ′ (0)Φa. 
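Before adding sparsity, the basic scheme of Algorithm A is short enough to implement directly. The sketch below (assuming NumPy; profile, dimension and step size are our choices) recovers $\hat a$ from $d+1$ samples and compares $\hat f$ with $f$ on random test points; by (3.13) the uniform error is at most $4c_0c_1h/(g'(0) - c_1h)$:

```python
import numpy as np

rng = np.random.default_rng(3)

d, h = 200, 0.05
g = lambda t: np.tanh(t - 1.0)
a = rng.standard_normal(d)
a /= np.sum(np.abs(a))                      # l_1-normalization (3.2)
f = lambda x: g(a @ x)

# Step 1: first-order differences (3.7).
E = np.eye(d)
a_tilde = np.array([(f(h * E[i]) - f(np.zeros(d))) / h for i in range(d)])

# Step 2: l_1-normalization; g'(0) = sech^2(1) > 0, so no sign flip is needed.
a_hat = a_tilde / np.sum(np.abs(a_tilde))

# Step 3: sample the profile along sign(a_hat), cf. (3.11)-(3.12).
g_hat = lambda t: f(t * np.sign(a_hat))
f_hat = lambda x: g_hat(a_hat @ x)

# Uniform error on random test points in [-1, 1]^d.
X = rng.uniform(-1.0, 1.0, size=(50, d))
max_err = max(abs(f(x) - f_hat(x)) for x in X)
```

Note that $|\langle \hat a, x\rangle| \le \|\hat a\|_{\ell_1^d} \|x\|_{\ell_\infty^d} \le 1$, so $\hat g$ is only ever evaluated where $t \cdot \mathrm{sign}(\hat a)$ stays inside the cube.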
Hence, we recoverã through ℓ 1minimization. From this point on, we may continue as before. Let us summarize this procedure as the Algorithm B. • Putĝ(t) = f (t · sign(â)) andf (x) =ĝ( â, x ) • Output:f = g ′ (0)Φa ∈ R m we get b − b l d 1 = m j=1 |g ′ (ξ h,j ) a, ϕ j − g ′ (0) a, ϕ j | = m j=1 |g ′ (ξ h,j ) − g ′ (0)|| a, ϕ j | ≤ m j=1 c 1 h| a, ϕ j | 2 ≤ c 1 h m j=1 a 2 l d 1 ϕ j 2 l d ∞ = c 1 h. Therefore we obtainb = b + e = g ′ (0)Φa + e (3.19) for e ∈ R m with e l m 1 ≤ c 1 h and, similarly, e l m ∞ ≤ c 1 h/m and e l m 2 ≤ c 1 h/ √ m. Hence, by using Theorem 2.5 there exists some vector u ∈ R d with Φu = e and u l d 1 ≤ max √ m e l m ∞ ; m log(d/m) e l m 2 ≤ max c 1 h √ m ; c 1 h log(d/m) ≤ √ 2c 1 h, where we used m ≥ log(d) and d ≥ (log 6) 2 m for the last inequality. Take some 1/3 > δ > 0 fixed, e.g. δ = 1/6, and apply Theorem 2.4 to g ′ (0)a + u. This gives us ∆ l d 1 (b) − g ′ (0)a l d 1 = ∆ l d 1 Φ(g ′ (0)a + u) − g ′ (0)a l d 1 ≤ ∆ l d 1 Φ(g ′ (0)a + u) − g ′ (0)a − u l d 1 + u l d 1 ≤ C 0 · σ d s g ′ (0)a + u 1 + u l d 1 ≤ C 0 g ′ (0) · σ d s (a) 1 + (1 + C 0 ) u l d 1 ≤ (1 + C 0 ) g ′ (0) · σ d s (a) 1 + u l d 1 . Finally, by settingã : = ∆ l d 1 (b) andâ :=ã/ ã l d 1 , Lemma 3.1 provides a −â l d 1 ≤ 2(1 + C 0 ) · g ′ (0) · σ d s (a) 1 + u l d 1 ã l d 1 ≤ 2 √ 2(1 + C 0 ) · g ′ (0) · σ d s (a) 1 + c 1 h ã l d 1 . From this point on we can proceed as in the proof of Theorem 3.3. We can again estimate the l d 1 -norm ofã from below. We get ã l d 1 = ∆ l d 1 (Φ(g ′ (0)a + u)) l d 1 ≥ g ′ (0)a + u l d 1 − ∆ l d 1 (Φ(g ′ (0)a + u)) − g ′ (0)a − u l d 1 ≥ g ′ (0) a l d 1 − u l d 1 − σ d s (g ′ (0)a + u) 1 ≥ g ′ (0) − 2 u l d 1 − g ′ (0)σ d s (a) 1 ≥ g ′ (0) − 2 √ 2c 1 h − g ′ (0)σ d s (a) 1 . 
Thus we can replace the norm ã l d 1 with this expression (if it is positive) to get f −f ∞ ≤ 2c 0 C ′ 0 · g ′ (0) · σ d s (a) 1 + c 1 h ã l d 1 ≤ C · g ′ (0) · σ d s (a) 1 + h g ′ (0)(1 − σ d s (a) 1 ) − c 1 h for some constant C depending only on c 0 , c 1 , C 0 , C ′ 0 . Remark 3.6. (i) In particular, if a is s-sparse, we get σ d s (a) 1 = 0 and, therefore, f −f ∞ ≤ C h g ′ (0) − c 1 h . (ii) The constant C ′ 0 can be chosen to be C ′ 0 = 2 √ 2(1 + C 0 ) with C 0 being the constant from Theorem 2.4. (iii) If the sparsity level of a is s ∈ N, the condition 2s ≤ (C 2 m)/ log(d/m) implies m ≥ 2s log(d)/C 2 . Thus, in this case we need m = O(s log d) measurements to reconstruct the vector a. Approximation of ridge functions with noisy measurements In this section we study another aspect of recovery of ridge functions, which was hardly discussed up to now in the literature. We consider ridge functions defined on the unit ball as in [16] but we assume that the measurements are affected by random noise. In addition, we suppose that the vector a satisfies a compressibility condition. To be more precise, we consider ridge functions f : B d = {x ∈ R d | x l d 2 < 1} → R, x → f (x) = g( a, x ). We assume, that the ridge vector a ∈ R d is l d 2 -normalized a l d 2 = 1 (4.1) and compressible in the following sense, a l d 1 ≤ R, R > 0. (4.2) Furthermore, we assume that the ridge profile is a differentiable function g : [−1, 1] → R with (3.3)-(3.5). We shall use again the setting of Remark 2.7. Let d ≥ (log 6) 2 m and let Φ ∈ R m×d be a normalized Bernoulli matrix (2.2) with rows ϕ 1 , . . . , ϕ m . By Theorem 2.3 it is ensured that Φ satisfies the RIP of order 2s with 0 < δ 2s ≤ δ := 1/6 with high probability for every positive integer s with 3s ≤ (C 2 m)/ log(d/m), where the constant C 2 is the constant from Theorem 2.3. But in contrary to (3.7), we now assume that the evaluation of f is perturbed by noise. 
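The recovery step in this noisy setting will be the Dantzig selector (2.4), which is itself a linear program and can be solved with off-the-shelf LP software. A sketch (assuming NumPy and SciPy's `linprog` are available; the splitting into $w$ and an auxiliary bound $t \ge |w|$ is the standard LP reformulation, our choice):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(Phi, y, lam_sigma):
    """min ||w||_1  s.t.  ||Phi^T (y - Phi w)||_inf <= lam_sigma, posed as an LP.

    Variables are (w, t) with |w_i| <= t_i; the objective is sum(t)."""
    m, d = Phi.shape
    G = Phi.T @ Phi
    c = np.concatenate([np.zeros(d), np.ones(d)])
    I, Z = np.eye(d), np.zeros((d, d))
    # w - t <= 0,  -w - t <= 0,  G w <= lam + Phi^T y,  -G w <= lam - Phi^T y.
    A_ub = np.block([[I, -I], [-I, -I], [G, Z], [-G, Z]])
    b_ub = np.concatenate([np.zeros(2 * d),
                           lam_sigma + Phi.T @ y,
                           lam_sigma - Phi.T @ y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
    return res.x[:d], res

# Small noise-free example: an s-sparse vector from m < d Bernoulli measurements.
rng = np.random.default_rng(4)
m, d, s = 15, 20, 2
Phi = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)
x = np.zeros(d)
x[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x
w, res = dantzig_selector(Phi, y, lam_sigma=1e-4)
```

Since $x$ itself is feasible in the noise-free case, the minimizer satisfies $\|w\|_{\ell_1^d} \le \|x\|_{\ell_1^d}$, and for a well-conditioned sparse instance it coincides with $x$ up to solver tolerance.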
To make the presentation technically simpler, we shall assume that the value f (0) is given precisely (i.e. without noise). This can be achieved (with high precision) by resampling the value f (0) several times, and applying Hoeffding's inequality. Hence, we set for j = 1, . . . , m and a small constant h > 0 b j := (f (hϕ j ) +z j ) − f (0) h = f (hϕ j ) − f (0) h +z j h . We assume that the random noisez = (z 1 , . . . ,z m ) T ∈ R m has independent componentsz j ∼ N (0, σ 2 ). Sincez j are independent, it is well known that z j :=z j h ∼ N 0, σ 2 h 2 (4.3) are also independent. As in the case with exact measurements the mean value theorem gives us f (hϕ j ) − f (0) h = g( a, hϕ j ) − g(0) h = g ′ (ξ h,j ) a, ϕ j for some real ξ h,j with |ξ h,j | ≤ | a, hϕ j |, hence b j = g ′ (ξ h,j ) a, ϕ j + z j . To recover the vector a from these measurements let us first define the deterministic noise e ∈ R m by e j = a, ϕ j (g ′ (ξ h,j ) − g ′ (0)), j = 1, . . . , m, (4.4) i.e. b = g ′ (0)Φa + e + z. We then recoverâ with the help of Dantzig selector (2.4) instead of l 1minimization. Finally, for the construction ofĝ andf , we can use the direct approach of [16], which is given bŷ g : R → R, t → f (tâ) andf : B d → R, x →ĝ( â, x ). Let us summarize this procedure as the following algorithm. . , ϕ m ∈ R d • Put b j = (f (hϕ j ) +z j ) − f (0) h , j = 1, . . . , m • Putâ = ∆ DS (b) ∆ DS (b) l d 2 for λ d = √ 2 log d • Putĝ : R → R, t → f (tâ) • Putf : B d → R, x →ĝ( â, x ) • Output:f Letz j ∼ N (0, σ 2 ) be independent. Then there is a constant C 2 > 0, such that the functionf defined by Algorithm C satisfies with high probability err(a,â) , f −f ∞ ≤ 2c 0 a −â l d 2 ≤ 4c 0 err(a,â) g ′ (0) −(4.6) where err(a,â) := min 1≤s * ≤s 2C 5 log d s * σ 2 h 2 +R 2 s −1 * 1 2 + C 7 h √ s , (4.7) R := 2(R + 2C 6 h) for some constants C 5 , C 6 , C 7 . The second inequality in (4.6) only holds if the denominator is positive. Proof. To prove this theorem, we apply Theorem 2.9 to (4.5). 
Therefore we need to estimate the norm of e ∈ R m , defined by (4.4). We obtain e 2 l m 2 = m j=1 a, ϕ j (g ′ (ξ h,j ) − g ′ (0)) 2 ≤ m j=1 c 1 h a, ϕ j 2 2 ≤ c 2 1 h 2 m j=1 a l d 1 ϕ j l d ∞ 4 ≤ c 2 1 h 2 R 4 m and similarly we can show e l m ∞ ≤ c 1 hR 2 m . We can now apply Theorem 2.9 with ε = hR 2 / √ m to get ∆ DS (b) − g ′ (0)a l d 2 ≤ min 1≤s * ≤s 2C 5 log d s * 2σ 2 h 2 +R 2 s −1 * 1 2 + C 7 h √ s =: err(a,â) withR = 2(R + 2C 6 h) and some constants C 5 , C 6 , C 7 . And since we know that a is l d 2 -normalized we setâ := ∆ DS (b) ∆ DS (b) l d 2 . Applying Lemma 3.1 we get a −â l d 2 ≤ 2err(a,â) ∆ DS (b) l d 2 ≤ 2err(a,â) g ′ (0) a l m 2 − err(a,â) = 2err(a,â) g ′ (0) − err(a,â) , where the last inequality only holds if the denominator is positive. This proves the second inequality in (4.6). The proof of the first part of (4.6) proceeds as in [16]. First we define an approximationĝ to gĝ : R → R, t → f (tâ). (4.8) This is indeed a good approximation to g as for any t ∈ [−1, 1] we get |ĝ(t) − g(t)| = |g( a, t ·â ) − g(t)| ≤ c 0 |t (1 − a,â )| = c 0 |t| · | a, a −â | ≤ c 0 · a −â l d 2 . (4.9) With this approximationĝ to g we define the functionf bŷ f : B d → R, x →ĝ( â, x ). It remains to show thatf is a good approximation to f . With the help of (4.8) and (4.9) we obtain |f (x) −f (x)| = |g( a, x ) −ĝ( â, x )| ≤ |g( â, x ) −ĝ( â, x )| + |g( a, x ) − g( â, x )| ≤ c 0 · a −â l d 2 + c 0 | a −â, x | ≤ 2c 0 a −â l d 2 for all x ∈ B d . Approximation of translated radial functions The methods we presented so far, as well as the methods of [16], were developed in the (quite restrictive) frame of ridge functions. As an example of a possible extension of these algorithms, we consider the class of translated radial functions, i.e. functions of the form f : B d → R, x → f (x) = g( a − x 2 l d 2 ) for some fixed l d 2 -normalized vector a ∈ R d a l d 2 = 1 (5.1) and a function g : [0, 4] → R. 
Hence, f is constant on the spheres centered in a or, equivalently, it is a radial function translated by a. Typically, we shall again assume that g and g ′ are Lipschitz continuous with constants c 0 and c 1 , respectively. The idea to recover those functions is similar to the case of ridge functions. First we recover the center a and then we define approximationsĝ to g andf to f . For a small constant h > 0 and fixed vectors x i ∈ R d , i = 1, . . . , d we set a i := f (he i + x i ) − f (x i ) h , where e 1 , . . . , e d are again the canonical basis vectors of R d . With help of the mean value theorem we can express this as a i = f (he i + x i ) − f (x i ) h = g( a − he i − x i 2 l d 2 ) − g( a − x i 2 l d 2 ) h = g ′ (ξ h,i ) a − he i − x i 2 l d 2 − a − x i 2 l d 2 h for some real ξ h,i between a − he i − x i 2 l d 2 and a − x i 2 l d 2 . The nominator can be simplified by a − he i − x i 2 l d 2 − a − x i 2 l d 2 = a − he i − x i , a − he i − x i − a − x i , a − x i = −2h a, e i + h 2 e i , e i + 2h e i , x i = −2ha i + h 2 + 2hx i,i . Let us choose x i = −(h/2)e i to get a i = f ((h/2)e i ) − f (−(h/2)e i ) h = −2g ′ (ξ h,i )a i for some ξ h,i between a − (h/2)e i 2 l d 2 and a + (h/2)e i 2 l d 2 . Next let us note that ξ h,i is very close to 1 = a 2 l d 2 : |ξ h,i − 1| ≤ max 1 − a − (h/2)e i 2 l d 2 , 1 − a + (h/2)e i 2 l d 2 = max −ha i − h 2 /4 , ha i − h 2 /4 (5.2) ≤ h + h 2 /4. Finally we obtain thatâ is a good approximation to −2g ′ ( a 2 l d 2 )a = −2g ′ (1)a, since ã + 2g ′ (1)a 2 l d 2 = d i=1 −2g ′ (ξ h,i )a i + 2g ′ (1)a i 2 = 4 d i=1 g ′ (ξ h,i ) − g ′ (1) a i 2 ≤ 4 d i=1 (c 1 |ξ h,i − 1| a i ) 2 ≤ 4c 2 1 d i=1 h + h 2 /4 a i 2 = 4c 2 1 h + h 2 /4 2 d i=1 a 2 i = 4c 2 1 h + h 2 /4 2 . Thusã is almost a multiple of a. Again, we need to assume that the derivative of g ′ is non-trivial in some sense. Due to the construction, we replace (3.5) by the condition g ′ (1) = 0. 
Then the l d 2 -normalized vectorâ :=ã ã l d 2 approximates a, possibly up to a sign. Choosing any vectorâ ⊥ ∈ R d orthogonal toâ, we can identify the sign by sampling alongâ ⊥ . Afterwards, the correct sign might be assigned toâ. We will therefore restrict ourselves to the case g ′ (1) > 0. (5.3) Once an approximation of a was recovered, it is again easy to define an approximation of g, and finally of f . We summarize this procedure as the following algorithm. • Putã i := f (he i /2)−f (−he i /2) h , i = 1, . . . , d • Putâ :=ã ã l d 2 • Putĝ : [0, 4] → R, t → f (â(1 − √ t)) • Putf : B d → R, x →ĝ( â − x 2 l d 2 ) • Output:f The performance of Algorithm D is estimated by the following Theorem. f −f ∞ ≤ c 0 2 â − a l d 2 + â − a 2 l d 2 (5.4) and â − a l d 2 ≤ 2c 1 h + h 2 /4 g ′ (1) − c 1 (h + h 2 /4) (5.5) if g ′ (1) − c 1 (h + h 2 /4) is positive. Proof. First, we estimate the difference between a andâ. By (5.2) and (3.4) g ′ (1) − |g ′ (ξ h,i )| ≤ |g ′ (1) − g ′ (ξ h,i )| ≤ c 1 |1 − ξ h,i | ≤ c 1 (h + h 2 /4), hence |g ′ (ξ h,i )| ≥ g ′ (1) − c 1 (h + h 2 /4). (5.6) Therefore, if the right hand side of (5.6) is positive, we get ã 2 l d 2 = d i=1 |ã i | 2 = 4 d i=1 |g ′ (ξ h,i )a i | 2 ≥ 4 d i=1 g ′ (1) − c 1 (h + h 2 /4) 2 a 2 i = 4 g ′ (1) − c 1 (h + h 2 /4) 2 . Now we apply Lemma 3.1 to obtain â − a l d 2 ≤ 4c 1 (h + h 2 /4) ã l d 2 ≤ 2c 1 (h + h 2 /4) g ′ (1) − c 1 (h + h 2 /4) . (5.7) Given the approximationâ to a we define an approximationĝ to g bŷ g : [0, 4] → R, t → f (â(1 − √ t)) . (i) We assumed in Theorem 5.1, that the function g and its derivative g ′ are both Lipschitz. If we assume this property only on the interval (1 − (h + h 2 /4), 1 + (h + h 2 /4)), we can still recover at least (5.7). This applies even to the case, when g (and also its derivative) are unbounded near the origin. In that case, we can still approximate the position of the singularity, although uniform approximation of f is out of reach. 
(ii) As in the approximation scheme for ridge functions we can use techniques from compressed sensing to recover f if a is compressible. To be more precise, if a satisfies a l d 1 ≤ R and Φ ∈ R m×d is a normalized Bernoulli matrix with rows ϕ 1 , . . . , ϕ m ∈ R d we definẽ b j := f ((h/2)ϕ j ) − f (−(h/2)ϕ j ) h . As f is defined only on the unit ball B d and ϕ j l d j = f h ϕ j ϕ j l d 2 − f −h ϕ j ϕ j l d 2 h . (5.9) By defining the deterministic noise e ∈ R m b = −2g ′ (1)Φa + e (5.10) we can show with similar calculations as before that e l m 2 ≤ η := 2c 1 R 2Rh √ d +h 2 . (5.11) Using the (P 1,η ) minimizer of [5] we put a = arg min z∈R d z l d 1 s.t. Φz −b l m 2 ≤ η with η given by the right hand side of (5.11). We then get the estimate, cf. [18,Theorem 4.22] or [2, Theorem 1.6], ã − 2g ′ (1)a l d 2 ≤ ̺ := Cσ s (2g ′ (1)a) 1 √ s + Dη with two universal constants C, D > 0. Here again s ≤ C 2 m/ log(d/m). Lemma 3.1 (with l d 2 instead of l d 1 ) gives forâ =ã/ ã l d 2 â − a l d 2 ≤ 2̺/ ã l d 2 . Finally, using ã l d 2 ≥ 2g ′ (1) a l d 2 − ã − 2g ′ (1)a l d 2 ≥ 2g ′ (1) − ̺, we get â − a l d 2 ≤ 2̺ 2g ′ (1) − ̺ if 2g ′ (1) > ̺. This gives a replacement of (5.7), the rest of the proof of Theorem 5.1 then applies without further modifications. (iii) Once we have this approximation scheme using techniques from compressed sensing, we can easily extend it to an approximation scheme with noisy measurements. We assume again thatb from (5.9) is corrupted by noise z/h, where the components of z = (z 1 , . . . , z m ) T are again independent N (0, σ 2 ) distributed random variables. Formula (5.10) is then replaced byb = −2g ′ (1)Φa + e + z/h and Dantzig selector can be applied. Numerical results In this section we investigate the performance of the algorithms presented so far in several model situations. The results shed a new light on some of the aspects, which we did not discuss in detail, especially on the size of the constants used in previous theorems. 
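Before looking at the plots, the central-difference scheme of Algorithm D is easy to reproduce. The sketch below (assuming NumPy; the profile $g(t) = -1/t$ from the experiments and all parameters are our choices) recovers the pole $a$; since $\tilde a \approx -2g'(1)a$ and $g'(1) = 1 > 0$ for this profile, the sign of $\hat a$ is fixed by negating the normalized difference vector, in line with Lemma 3.1:

```python
import numpy as np

rng = np.random.default_rng(5)

d, h = 10, 0.01
g = lambda t: -1.0 / t
a = rng.standard_normal(d)
a /= np.linalg.norm(a)                       # l_2-normalization (5.1)
f = lambda x: g(np.sum((a - x) ** 2))        # translated radial function

# Central differences: a_tilde_i = (f(h e_i / 2) - f(-h e_i / 2)) / h ~ -2 g'(1) a_i.
E = np.eye(d)
a_tilde = np.array([(f(0.5 * h * E[i]) - f(-0.5 * h * E[i])) / h for i in range(d)])

# Normalize and fix the sign: g'(1) = 1 > 0, so a_tilde is a negative multiple of a.
a_hat = -a_tilde / np.linalg.norm(a_tilde)

err = np.linalg.norm(a_hat - a)
```

The symmetric differences make the derivative approximation accurate to second order in $h$, so the observed error is far below the worst-case bound (5.5).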
All the approximation schemes started by looking for a good approximation $\hat a$ of the unknown direction $a$ and, consequently, the quality of the uniform approximation of $f$ by $\hat f$ was then bounded by the corresponding distance between $\hat a$ and $a$. In what follows, we will therefore discuss only the approximation error between $a$ and $\hat a$.

Ridge functions on cubes

We start with Algorithm A, i.e. with the approximation of a ridge function $f(x) = g(\langle a, x\rangle)$ defined on the cube $[-1,1]^d$ with $\|a\|_{\ell_1^d} = 1$. We considered different dimensions, $d \in \{10, 100, 1000, 10{,}000\}$. As Algorithm A does not make any use of sparsity of $a$, it is reasonable to assume that all its coordinates are equally likely to be non-zero. The entries of $a$ were therefore always independently normally distributed (i.e. $a_i \sim \mathcal{N}(0,1)$); afterwards $a$ was $\ell_1^d$-normalized according to (3.2). Figure 1 shows the (average) approximation error $\|a - \hat a\|_{\ell_1^d}$ in dependence on the step size $h > 0$ for two different profiles, $g(t) = \tanh(t)$ and $g(t) = \tanh(t-1)$; note that the y-axis scales logarithmically. Let us give some remarks on Figure 1.

• The approximation improves rapidly with growing dimension. This is caused by considering non-sparse ridge vectors $a$ and by the concentration of measure phenomenon, as described also in Remark 3.4.

• A smaller step size $h$ implies a better quality of approximation, but already reasonable sizes of $h$ (e.g. $h = 0.2$) yield relatively small errors.

• Finally, the second derivative of the first profile vanishes at zero, whereas it is non-zero for the second profile. Therefore, the first-order differences approximate the first-order derivative less accurately in that case, leading to larger (but still surprisingly small) approximation errors.

The left part of Figure 2 shows the dependence of the number of sampling points $m$ on the dimension $d$ and sparsity $s$, cf. (2.5), when using Algorithm B.
We fixed the ridge profile $g(t) = \tanh(t-1)$, the sparsity $s = 5$ and the step size $h = 0.1$, and constructed an $s$-sparse random vector $a$ by the Matlab command sprandn, followed by the $\ell_1^d$-normalization. For each integer $d$ between 50 and 1000 and for each integer $m$ between 1 and 55, we then ran Algorithm B 120 times; the average approximation error $\|a - \hat a\|_{\ell_1^d}$ corresponds afterwards to the shade of grey of the point with coordinates $d$ and $m$. In accordance with the theory of compressed sensing (and with Remark 3.6), we observe that the number of measurements needs to grow only logarithmically in the dimension $d$ to guarantee good approximation with high probability. The right part of Figure 2 then shows the average value of $\|a - \hat a\|_{\ell_1^d}$ for the same profile and sparsity for three different pairs of $(d, m)$. We observe that, especially for large dimensions, even an extremely small number of measurements already guarantees reasonable approximation errors.

Figure 3 studies the performance of the recovery of the ridge vector $a$ from noisy measurements, as described in Algorithm C. We fixed the parameters $d = 1000$, $m = 400$ and $s = 5$, the ridge profile $g(t) = \tanh(t-1)$ and four different noise levels $\sigma \in \{0.03, 0.01, 0.003, 0.001\}$. We have used the ℓ1-MAGIC implementation of the Dantzig selector, available at the web page of Justin Romberg at http://users.ece.gatech.edu/~justin/l1magic/. As the noise level gets amplified by the factor $1/h$ when taking the first-order differences, cf. (4.3), it is not surprising that the recovery fails completely for small values of $h$. On the other hand, for large values of $h$, the correspondence between first-order differences and first-order derivatives gets weaker and the quality of approximation deteriorates as well. This effect is clearly visible from (4.7) and, numerically, in the left part of Figure 3, where there is an optimal $h$ for the recovery of $a$. Strictly speaking, the functions considered in Section 4 were defined only on the unit ball $B^d$, so that the value of $h$ in Figure 3 should be limited to be smaller than $\sqrt{m/d}$.
We have decided to include also larger values of $h$ to exhibit the optimal $h$, although for our profile and our parameters it lies outside of this interval.

Noisy measurements

Although not discussed before, it is quite straightforward to modify the non-probabilistic Algorithm A also to the case of noisy measurements. Essentially, the gradient $\nabla f(0)$ is then approximated by the first-order differences, this time corrupted by noise. We applied this approach to the profile $g(t) = \tanh(t-1)$ and the parameters just described, with the results plotted in the right part of Figure 3. We observe that the approximation errors get much larger, demonstrating once again the success of the Dantzig selector.

Shifted radial functions

In Figure 4 we considered the approximation of the pole $a$ of a shifted radial function $f$ with $f(x) = g(\|a - x\|_{\ell_2^d}^2)$ and $g(t) = -1/t$. On the left plot, we fixed the sparsity $s = 5$ and considered three values of the dimension, $d = 100$, $d = 1000$ and $d = 10{,}000$. The number of measurements was then $m = 40$, $m = 60$, or $m = 80$, respectively. Finally, we ran the modification of Algorithm D described in Remark 5.2 and plot the average approximation error $\|a - \hat a\|_{\ell_2^d}$ against the step size $h$. The right-hand plot of Figure 4 shows the noise-aware modification of Algorithm D, described also in Remark 5.2.

Algorithm B
• Input: ridge function $f(x) = g(\langle a, x\rangle)$ with (3.2)-(3.5), $h > 0$ small and $m \le d/(\log 6)^2$
• Take $\Phi \in \mathbb{R}^{m \times d}$ a normalized Bernoulli matrix, cf. (2.2)
• Put $\tilde b_j := \frac{f(h\varphi_j) - f(0)}{h}$, $j = 1, \dots, m$
• Put $\tilde a := \Delta_{\ell_1^d}(\tilde b)$

Theorem 3.5. Let $f : [-1,1]^d \to \mathbb{R}$ be a ridge function with $f(x) = g(\langle a, x\rangle)$ for some vector $a \in \mathbb{R}^d$ with (3.2) and some differentiable function $g : [-1,1] \to \mathbb{R}$ with (3.3)-(3.5). Let $d \ge (\log 6)^2 m$ and $h > 0$ be fixed.
Then there exist constants C′_0, C_1, C_2 > 0 such that, for every positive integer s with 2s ≤ (C_2 m)/log(d/m), the function f̂ constructed in Algorithm B satisfies

‖f − f̂‖_∞ ≤ 2c_0 ‖â − a‖_{ℓ_1^d} ≤ 2c_0 err(a, â),   (3.18)

where

err(a, â) := C′_0 · (|g′(0)| σ_s^d(a)_1 + h) / (|g′(0)| (1 − σ_s^d(a)_1) − c_1 h),

with probability at least 1 − e^{−√(md)} − e^{−C_1 m}, provided the denominator in the expression for err(a, â) is positive.

Proof. The first inequality in (3.18) follows again by (3.15) combined with (3.16). Setting b̂ := (b̂_1, …, b̂_m)^T ∈ R^m and b := [...]

Input: Ridge function f(x) = g(⟨a, x⟩) with (4.1), (4.2), (3.3)-(3.5), h, σ > 0 and m ≤ d/(log 6). Construct the m × d normalized Bernoulli matrix Φ, cf. (2.2), with rows denoted by ϕ_1, … [...]

Theorem 4.1. Let f : B^d → R be a ridge function f(x) = g(⟨a, x⟩) with (4.1), (4.2), (3.3)-(3.5). Furthermore, let h, σ > 0 and let s ≤ m ≤ d be positive integers with (2.5). [...]

Input: Translated radial function f : B^d → R with f(x) = g(‖a − x‖²_{ℓ_2^d}) [...]

Theorem 5.1. Let f : B^d → R, g : [0, 4] → R and a ∈ R^d be such that f(x) = g(‖a − x‖²_{ℓ_2^d}) and a and g satisfy (5.1), (3.3), (3.4) and (5.3). Then [...]

Figure 1: Approximation of a according to Algorithm A with g(t) = tanh(t) (left) and g(t) = tanh(t − 1) (right).

Figure 1 shows the (average) approximation error ‖a − â‖_{ℓ_1^d} [...], which corresponds afterwards to the shade of grey of the point with coordinates d and m. In accordance with the theory of compressed sensing (and with Remark 3.6), we observe that the number of measurements [...]

Figure 2: Face transition for the approximation of a and the average error ‖a − â‖_{ℓ_1^d} according to Algorithm B.

Figure 3: Approximation of a with noisy measurements according to Algorithm C (left) and a modification of Algorithm A (right). Note that only the y-axis of the left plot is logarithmic.

Figure 4: Approximation of a according to Algorithm D with sparsity (left) and with noisy measurements (right).
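The linearization step shared by the algorithms above (measure f at 0 and at the points hϕ_j, then divide the differences by h) can be reproduced in a few lines. The sketch below is our own illustration, in plain Python rather than the Matlab setup used in the experiments, and assumes a normalized Bernoulli matrix with entries ±1/√m as in (2.2); it checks that b̂_j = (f(hϕ_j) − f(0))/h approximates g′(0)⟨a, ϕ_j⟩, the noiseless input to the sparse-recovery step.

```python
import math
import random

random.seed(0)
d, m, s, h = 200, 40, 3, 1e-3

def g(t):                                   # ridge profile used in the experiments
    return math.tanh(t - 1.0)

gprime0 = 1.0 - math.tanh(-1.0) ** 2        # g'(0) = sech^2(1)

# s-sparse ridge vector a with ||a||_1 = 1 (stand-in for sprandn + normalization)
a = [0.0] * d
for i in random.sample(range(d), s):
    a[i] = random.gauss(0.0, 1.0)
norm1 = sum(abs(x) for x in a)
a = [x / norm1 for x in a]

# normalized Bernoulli matrix: entries +-1/sqrt(m)
Phi = [[random.choice((-1.0, 1.0)) / math.sqrt(m) for _ in range(d)]
       for _ in range(m)]

inner = [sum(ai * pi for ai, pi in zip(a, row)) for row in Phi]   # <a, phi_j>
bhat = [(g(h * x) - g(0.0)) / h for x in inner]                   # first-order differences

err = max(abs(b - gprime0 * x) for b, x in zip(bhat, inner))
print("max |bhat_j - g'(0)<a,phi_j>| =", err)
```

Feeding b̂ (instead of the exact g′(0)Φa) to an ℓ1-minimization or greedy solver then recovers a up to the O(h) error quantified in Theorem 3.5.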
Essentially, ĝ is the restriction of f onto {λâ : λ ∈ R} ∩ B^d. Using (3.3) we obtain the estimate [...] for all t ∈ [0, 4]. Next we define [...]. With (5.7) and (5.8) we get the final estimate [...].

Remark 5.2 (Extensions of Theorem 5.1). [...]
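The 1/h noise amplification discussed around Figure 3 can also be seen in isolation. In this small illustration of ours, a fixed measurement error σ enters the difference quotient as σ/h, while the linearization bias grows with h, so the error blows up for small h and an optimal intermediate h appears:

```python
import math

def g(t):                                   # ridge profile from the experiments
    return math.tanh(t - 1.0)

gprime0 = 1.0 - math.tanh(-1.0) ** 2        # g'(0)
x, sigma = 0.1, 1e-3                        # fixed <a, phi_j> and measurement error

def diff_error(h):
    # noisy difference quotient vs. the exact linear term g'(0) * x
    noisy = (g(h * x) + sigma - g(0.0)) / h
    return abs(noisy - gprime0 * x)

errors = {h: diff_error(h) for h in (1e-4, 1e-2, 1e-1, 1.0)}
for h, e in sorted(errors.items()):
    print(f"h = {h:7.0e}   error = {e:.3e}")
```

The σ/h term dominates for tiny h, while for large h the nonlinearity of g takes over, in line with the trade-off visible in the left part of Figure 3.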
Coherent Smith-Purcell γ-Ray Emission

Kamran Akbari (ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain), Simone Gargiulo (Institute of Physics, Laboratory for Ultrafast Microscopy and Electron Scattering (LUMES), École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland), Fabrizio Carbone (LUMES, EPFL, CH-1015 Lausanne, Switzerland), and F. Javier García de Abajo (ICFO-Institut de Ciencies Fotoniques; ICREA-Institució Catalana de Recerca i Estudis Avançats, Passeig Lluís Companys 23, 08010 Barcelona, Spain)

arXiv:2203.05990, https://arxiv.org/pdf/2203.05990v1.pdf

We investigate the Smith-Purcell emission produced by electron- or ion-beam-driven coherent excitation of nuclei arranged in periodic crystal lattices. The excitation and subsequent radiative decay of the nuclei can leave the target in the initial ground state after γ-ray emission, thus generating a coherent superposition of the far-field photon amplitude emanating from different nuclei that results in sharp angular patterns at spectrally narrow nuclear transition energies. We focus on Fe-57 as an example of a two-level nuclear lossy system giving rise to Smith-Purcell emission at 14.4 keV with a characteristic delay of 142 ns relative to the excitation time. These properties enable a clean separation from faster and spectrally broader emission mechanisms, such as bremsstrahlung. Besides its fundamental interest, our study holds potential for the design of high-energy, narrowband, highly-directive photon sources, as well as a means to store energy in the form of nuclear excitations.
I. INTRODUCTION

The interaction of electron beams with periodic gratings enables the generation of coherent radiation over a wide spectral range in synchrotrons and free-electron lasers [1,2]. Indeed, far-field radiative components are produced in general upon scattering of the evanescent electromagnetic field that accompanies electrons in motion when they cross or pass near a material structure, giving rise to cathodoluminescence emission [3]. If the structure is periodically patterned with spatial period d along the electron velocity vector v, Smith-Purcell (SP) radiation [4] is emitted over a broad range of wavelengths λ and angles θ_n relative to v satisfying the condition of far-field constructive interference

c/v − cos θ_n = nλ/d,   (1)

where the integer n labels different coherence orders. The SP effect has been extensively studied using macroscopic structures such as gratings and particle arrays [5-14], as well as atomic layers in stacked van der Waals materials [15]. In addition, electrons act in unison if they are bunched within small beam regions compared to the emission wavelength in the moving frame, giving rise to superradiant light generation ranging from the terahertz [16] to the x-ray [17,18] spectral domains. Incidentally, the inverse process (electron energy gain near an illuminated grating) has also been demonstrated [19] and explored as a route to realize laser-based table-top particle accelerators [20]. Resonances in the periodic structure can enhance the emission, as demonstrated using plasmonic gratings [21]. Likewise, atoms in a crystal structure can be resonantly excited by each passing electron, so that subsequent radiative decay to their initial ground states leaves the target unchanged and thus gives rise to a coherent superposition of amplitudes contributed by different atoms in the far field.

* Corresponding Author: [email protected]
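Equation (1) is easy to evaluate numerically. The helper below is our own illustration; the parameter values are those quoted later in the text for the Fe-57 example (λ ≈ 0.86 Å, d ≈ 2.86 Å, v = 0.94c). It lists all allowed coherence orders, i.e., those n for which cos θ_n = c/v − nλ/d lies in [−1, 1].

```python
import math

def smith_purcell_angles(beta, lam, d):
    """Return [(n, theta_n in degrees)] solving c/v - cos(theta_n) = n*lam/d."""
    out = []
    n = 1
    while True:
        c_theta = 1.0 / beta - n * lam / d
        if c_theta < -1.0:          # no further orders are kinematically allowed
            break
        if c_theta <= 1.0:          # skip orders with cos(theta) > 1
            out.append((n, math.degrees(math.acos(c_theta))))
        n += 1
    return out

# lengths in angstrom; beta = v/c
angles = smith_purcell_angles(beta=0.94, lam=0.86, d=2.86)
for n, theta in angles:
    print(f"n = {n}:  theta_{n} = {theta:6.2f} deg")
```

For these parameters, six coherence orders survive, with the first-order peak emitted at roughly 40 degrees from the beam direction.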
The resulting wavelengths and directions of light emission are equally described by Eq. (1). Nuclear transitions can also be excited by electromagnetic interaction with free electrons [22] and ions [23], and an analogous SP effect responding to the same kinematic condition as with electronic transitions is expected to take place. Supporting this possibility, coherent photon scattering from arrays of nuclei is inherent in the Mössbauer effect [24], which has prompted further studies of nuclear-based coherent optical response [25,26]. Given the large variety of nuclear excitation energies extending from ∼ 10 eV to megaelectronvolts, as well as the lifetimes of such excitations ranging from subnanoseconds to many years [27], electron- or ion-driven excitation of nuclei in a solid crystal offers a unique platform for exploring new physics with potential application in extreme light sources. A particularly fascinating opportunity is opened by the coherent superposition of the emission from different nuclei when their radiative decay occurs with a long characteristic delay time after the passage of the charged projectiles, so that the angular distribution of the generated γ rays is controlled by the direction and velocity of the exciting beam. In this work, we investigate coherent γ-ray emission associated with the nuclear excitations produced when relativistic electrons or ions traverse a periodic crystal target. Specifically, we focus on nuclear excitations in Fe-57 as an example of a two-level lossy system in which the emission probability is relatively intense. The resulting SP emission takes place over a temporal scale dictated by the lifetime of these excitations (142 ns), and therefore, it can be neatly separated from faster and more intense coherent emission mechanisms, such as bremsstrahlung (BR) [28], which occurs during the projectile crossing of the target (e.g., within < 1 fs for a 100 nm-thick crystal).
Besides its fundamental interest, our work emphasizes the potential of nuclear SP emission for application in extreme high-energy photon sources, capable of producing strongly directional angular patterns over the relatively long times associated with the slow decay of the selected nuclear excitations.

FIG. 1 (caption, in part). (c) The resulting radiative emission is coherent if the nuclei return to their original ground state, thus producing γ rays along angles θ relative to the projectile velocity v satisfying the Smith-Purcell condition of far-field constructive interference [Eq. (1)] at integer orders n (see profile calculated for v = 0.94c and 10 nuclei in a linear array of period d = 2.86 Å). (d) Radiative transitions between different excited- and ground-state sublevels in Fe-57 have relative strengths (see labels) that depend on the azimuthal angular number m of the emitted photon (color-matched arrows).

II. RESULTS AND DISCUSSION

A fast electron or ion passing parallel to a periodic array of nuclei [Fig. 1(a)], such as those arranged in a solid crystal, induces nuclear excitations that can subsequently decay via radiative and nonradiative processes [Fig. 1(b)]. If the nuclei return to their initial ground states, the coherent superposition of the associated radiative emission amplitudes gives rise to a sharp angular pattern, as prescribed by Eq. (1) [Fig. 1(c)]. Here, we consider the ω_0 = 14.4 keV nuclear excitation in Fe-57 (see Appendix for an analysis of the 43.8 keV excitation in Dy-161), in which both ground and excited states are degenerate, as illustrated in Fig. 1(d), where sublevels are labeled by their respective angular momentum numbers μ_g and μ_e with |μ_g| ≤ j_g = 1/2 and |μ_e| ≤ j_e = 3/2. Coherent emission is produced if the excited nucleus decays back to its original ground state, whereas incoherent photons are generated otherwise.
Transitions are dominated by the magnetic dipole channel, assisted by the absorption or emission of photons with an angular momentum number m = μ_e − μ_g. Averaging over the two possible μ_g sublevels, the combined process of excitation and subsequent de-excitation is coherent in a fraction f = 2/3 of the interaction events. In addition, internal conversion, whereby the excited state donates its energy to an electron in the system, produces nonradiative decay with a probability that is α_IC = 8.544 times higher than for radiative decay [29,30]. This leads to a radiative lifetime κ_r⁻¹ = κ⁻¹(1 + α_IC)/f = 2.03 µs, where κ⁻¹ = 142 ns is the measured lifetime of the excited state [27]. Combining these elements, we can describe the SP emission under consideration by assigning a frequency-space induced magnetic moment m_j(ω) = α_M(ω) H_ext(r_j, ω) to each nucleus j in the structure (see details in the Appendix), where

α_M(ω) ≈ (3/(4k³)) κ_r / (ω_0 − ω − iκ/2)   (2)

is an isotropic μ_g-averaged magnetic polarizability, H_ext(r, ω) = (2eZω/(vcγ)) K_1(ωb/(vγ)) e^{iωz/v} φ̂ is the magnetic field generated at the nuclear position r_j by a passing projectile of charge eZ (e.g., Z = −1 for electrons) and velocity v [3], k = ω/c is the light wave vector, and γ = 1/√(1 − v²/c²) is the relativistic Lorentz factor. Considering the electric far field E_ind(r, ω) = k² m_j(ω) × r̂ e^{ik|r − r_j|}/|r − r_j| induced by each magnetic dipole m_j(ω), and summing over the contributions from all nuclei, we find a probability of coherently emitting one γ photon to be given by (see Appendix) Γ_coh = ∫ d²Ω Γ_coh(Ω), where

Γ_coh(Ω) = [9Z²α / (8π(v/c)²γ²)] [κ_r² / (ω_0 κ)] |r̂ × g(Ω)|²

is the angle-resolved probability, Ω = (θ, ϕ) indicates the direction of r̂, and α ≈ 1/137 is the fine-structure constant.
Here, we have introduced the dimensionless far-field amplitude

g(Ω) = Σ_j K_1(ω_0 |R_j − R_p| / (vγ)) e^{iω_0 z_j/v} e^{−i k_0·r_j} φ̂_jp,   (3)

where k_0 = (ω_0/c) r̂, R_j denotes the components of r_j in a plane perpendicular to the velocity vector v, R_p defines the point of crossing of the probe in that plane, and φ̂_jp is an in-plane unit vector perpendicular to R_j − R_p. It is useful to examine the photon yield produced by interaction with a single nucleus j. Collecting the above expressions, we find the expression given in Eq. (4) below, which we use to calculate the results shown in Fig. 2(a,b). We observe a smooth increase in the photon yield with increasing velocity, as well as a sharp drop as the beam moves away from the nucleus, in accordance with the small-distance (R_jp ≪ vγ/ω_0) behavior of Eq. (B6), Γ_j^coh ∝ 1/R_jp², independent of v. We note that BR emission can exceed the photon yield of the nuclear-excitation mechanism by several orders of magnitude [Fig. 2(a,b), dashed curves, see Appendix], although the strength of the BR mechanism scales as M⁻² with the mass of the probe M and can therefore be neglected for massive ions. However, despite the higher number of photons generated via BR, the narrowness of the nuclear resonance renders a larger emission density within its spectral width [Fig. 2(c)]. In addition, BR emission takes place while the projectiles are traversing the material, and therefore, it can be neatly separated from γ-ray emission, which extends over a much longer period determined by the lifetime of the nuclear resonance [Fig. 2(d)], in the range of nanoseconds for the configuration under study. We therefore think of a bunched beam of electrons or ions, with a bunch duration in the sub-nanosecond range, as a way to excite the nuclei for subsequent reading through γ-ray emission over a longer time of several nanoseconds, using a fast detector to discriminate the arrival time of the γ rays with respect to the exciting bunch.
Γ_j^coh = [3Z²α / ((v/c)²γ²)] [κ_r² / (ω_0 κ)] K_1²(ω_0 R_jp / (vγ)),   (4)

Incoherent γ-ray emission can also take place as noted above, with an angular profile that should be equivalent to that generated by magnetic dipoles that are randomly oriented on a plane perpendicular to the local magnetic field (see Appendix). Considering this, as well as the probability of such incoherent processes relative to coherent emission, we find the probability of the former to have the angular distribution Γ_incoh(Ω) = (3/16π)(1/f − 1) Σ_j [1 + sin²θ sin²(ϕ − ϕ_jp)] Γ_j^coh, with Γ_j^coh given by Eq. (B6), which is therefore broad and featureless, so we dismiss it and concentrate instead on the sharp angular peaks associated with the nuclear SP coherent emission.

When the beam traverses a crystal lattice, the sum over nuclei can be analytically performed by separately computing it for each atomic plane. This is conveniently done by recasting the amplitude g(Ω) [Eq. (3)] in reciprocal space, so that we are left with a sum over the two-dimensional reciprocal lattice vectors G, while the sum over layers produces the sharp angular profiles described by Eq. (1). Following the procedure detailed in the Appendix for a film consisting of N atomic planes with (100) surface orientation and body-centered cubic crystal symmetry, as appropriate for solid iron, we find an angle-resolved γ-ray SP emission probability Γ_coh(Ω) = Σ_n Γ_n^coh(ϕ) δ(cos θ − cos θ_n), emerging along polar directions θ determined by Eq. (1) and with an azimuthal distribution given by

Γ_n^coh(ϕ) ≈ N [18π² Z²α κ_r² / (a c (ω_0 a/c)⁴ κ)] Σ_G [|k_∥ + G|² cos²θ_n + ((k_∥ + G)·r̂)²] / [|k_∥ + G|² + (ω_0/vγ)²]².   (5)
(C4), we have averaged the emission probability over the impact parameter R p under the assumption that the electron or ion beam extends over many crystal periods in the transverse direction. The resulting G sum exhibits a logarithmic divergence arising from large G contributions and behaving as ∼ G<Gmax d 2 G/G 2 ∼ log G max when restricting the sum by a cutoff G max . In physical terms, this divergence is associated with close encounters between the probe and the nuclei, which are in fact avoided due to their Coulomb interaction. We thus express the cutoff G max ∼ 1/R min in terms of the minimum impact parameter R min with respect to the nuclear positions. Without entering into the details of the transverse modulation of the beam as it propagates along the crystal, we provide results obtained by computing Eq. (C4) for different values of R min , keeping in mind that this parameter is limited by the transverse beam energy, which is in turn E ⊥ ∼ θ 2 inc E 0 , where E 0 is the longitudinal kinetic energy and we consider a small incidence angle θ inc relative to the rows of nuclei. For our Fe-57 target (atomic number Z Fe = 26), the relevant transverse Coulomb energy is E Coul = (2Z Fe e 2 /a) log(a/2R min ), corresponding to the difference in the potential created by a (100) atomic row at distances R min and a/2 from it (i.e., the minimum and maximum separations from any such row inside the crystal) after averaging over positions along the row direction. In addition, we expect an angular broadening of the emission profile ∼ θ inc , so taking θ inc = 2 • and E 0 = 1 MeV as reasonable parameters, the condition E ⊥ ∼ E Coul leads to R min ∼ 1 pm, which should be considered as a plausible minimum distance, while even smaller values should be reachable with more energetic probes and similarly small beam inclinations or divergences. 
FIG. 3 (caption). Photon yield produced from a Fe-57 crystal film oriented perpendicularly to the (100) direction when it is traversed by normally impinging electrons or ions, as a function of probe velocity. The yield is divided by Z² and normalized per incident particle and per atomic layer in the film. We show results for three different values of the minimum beam-nucleus distance R_min, summed over coherence orders n = 1-12, as well as the decomposition of the R_min = 1 pm yield into the contributions of different n's (lower curves, see labels, integrated over azimuthal angles).

The photon yield calculated from the angular integral of Eq. (C4) shows the expected logarithmic increase with decreasing R_min, as well as an overall increase with particle velocity v, as we show in Fig. 3, where we plot the yield divided by Z² for particles of charge eZ and normalized to the number of (100) atomic layers N in the Fe-57 film. Interestingly, a rich structure of emission is found as a function of v, and in particular, different coherence orders n produce commensurate contributions to the emission up to a large order (Fig. 3, lower curves), each of them associated with different emission angles according to Eq. (1). The sum over n's (black curves) exhibits sudden jumps as a function of v that reflect the above-mentioned change in the parity of the reciprocal lattice vectors [see condition on G in Eq. (C4)] that contribute at consecutive orders. Incidentally, for the Fe-57 film under consideration, the lattice constant a ≈ 2.86 Å is substantially larger than the photon wavelength λ ≈ 0.86 Å and the cutoff parameter (e.g., vγ/ω_0 ≈ 0.28 Å for v = 0.9c), so we anticipate a weak azimuthal dependence in the angular distribution of the emission from Eq. (C4), which we corroborate numerically. An azimuthal dependence is however encountered for ultrarelativistic particles (v ≈ c, not shown).
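The numbers quoted above are easy to reproduce with elementary tools. The sketch below is our own; to keep it self-contained, K₁ is computed from its integral representation K₁(x) = ∫₀^∞ e^{−x cosh t} cosh t dt rather than from a special-functions library. It checks the cutoff parameter vγ/ω₀ ≈ 0.28 Å at v = 0.9c and evaluates the single-nucleus yield of Eq. (4), which drops sharply with the beam-nucleus distance R.

```python
import math

def k1(x, tmax=12.0, steps=24000):
    """Modified Bessel K1 via K1(x) = int_0^inf exp(-x cosh t) cosh t dt (trapezoid)."""
    ht = tmax / steps
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)) * math.cosh(tmax))
    for i in range(1, steps):
        t = i * ht
        total += math.exp(-x * math.cosh(t)) * math.cosh(t)
    return total * ht

# constants and Fe-57 numbers quoted in the text
alpha = 1.0 / 137.036
hbar, eV = 1.054571817e-34, 1.602176634e-19
c = 2.99792458e8
omega0 = 14.4129e3 * eV / hbar                     # transition frequency [rad/s]
kappa = 1.0 / 142e-9                               # total decay rate [1/s]
kappa_r = kappa * (2.0 / 3.0) / (1.0 + 8.544)      # coherent radiative rate [1/s]

def yield_single_nucleus(Z, beta, R):
    """Gamma_j^coh of Eq. (4) for charge eZ, velocity beta*c, distance R [m]."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    x = omega0 * R / (beta * c * gamma)
    return (3.0 * Z**2 * alpha / (beta**2 * gamma**2)
            * kappa_r**2 / (omega0 * kappa) * k1(x) ** 2)

beta = 0.9
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
cutoff = beta * c * gamma / omega0                 # v*gamma/omega0 at v = 0.9c
print(f"v*gamma/omega0 at v = 0.9c: {cutoff * 1e10:.3f} angstrom")
for R in (1e-12, 1e-11, 1e-10):
    print(f"R = {R*1e12:6.1f} pm -> yield/Z^2 = {yield_single_nucleus(1, beta, R):.3e}")
```

The printed cutoff matches the ∼0.28 Å quoted in the text, and the yield decreases monotonically with R through the decay of K₁.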
Under conditions previously explored in experiment [31], we consider relativistic ions with v close to c and Z ∼ 10, which can traverse a film thickness of several microns (N ∼ 10⁴) and reach small impact parameters R_min, so the photon yield per ion obtained by multiplying the result in Fig. 3 by NZ² is ∼ 10⁻¹¹, which should be detectable considering the noted delay between the time of ion bunch bombardment and the emission of γ photons.

III. CONCLUSIONS

In conclusion, a periodic array of nuclei in a solid crystal film can be coherently excited by electrons or ions traversing the material, giving rise to the emission of γ rays along well-defined directions in analogy to the SP effect. Although we explore this effect here for Fe-57, it should also be observable in other nuclei, which configure a vast range of excitation energies and lifetimes [27]. The yield of nuclear SP emission is predicted to increase with the mass of the probe, which should enable closer collisions with the nuclei in the structure. Further improvement of the yield could be obtained by operating under channeling conditions similar to what happens in the Okorokov effect [31-33]. Besides the fundamental interest lying in the demonstration of coherence between different nuclei in a sample by means of the observation of sharp angular emission profiles, our results hold potential as a characterization technique that should shed light on the spatial distribution of different isotopes in a material. The fact that γ rays are released from the sample over an interval controlled by the lifetime of the nuclear excitation, extending well beyond the time of passage of the beam, suggests a strategy for storing energy in the form of nuclear excitations, which can be later liberated as γ-ray emission along narrowly peaked angular directions.
APPENDIX

In this appendix, we present detailed derivations of the coherent magnetic dipolar polarizability describing light scattering by Fe-57 and Dy-161 nuclei, the Smith-Purcell γ-ray emission from solid crystal targets containing these types of atoms, and the incoherent photon emission probability produced by electron or ion bombardment.

Appendix A: Coherent and incoherent light scattering by Fe-57 and Dy-161 nuclei

The ground state of an Fe-57 nucleus has a total angular momentum number j_g = 1/2. We are interested in excitations to the state of energy ω_0 = 14.4129 keV in that nucleus, which has a total angular momentum j_e = 3/2 and a measured lifetime 1/κ = 142 ns [27] (for 1/e decay). Radiative coupling between the ground and excited states is mediated by magnetic-dipole (M1) and electric-quadrupole (E2) terms, with a ratio of their respective strengths given by E2/M1 ≈ 0.002 [27], so we ignore the E2 channel and assume pure magnetic-dipole transitions. In addition, the excited state can decay nonradiatively by transferring energy to electrons in the system (the so-called internal conversion mechanism) with a ratio of nonradiative-to-radiative rates α_IC = 8.544 for the M1 channel [29,30]. By analogy to electronic levels in atoms, we represent nuclear sublevels through a specific realization such as [34]

|ljμ⟩ = Σ_{s=−1/2}^{1/2} C_{ljμs} Y_{l,μ−s} |s⟩,

describing the coupling between spin states |s⟩ = |±1/2⟩ and orbital states with the symmetry of the spherical harmonics Y_{lm}, characterized by angular momentum numbers (l, m) (see below on the choice of l). Here, j is the total angular momentum number of the nuclear level and μ = −j, …, j labels the sublevels. These mixed states involve the nonvanishing half-integer Clebsch-Gordan coefficients

C_{ljμs} =
  √[(j + μ + 1)/(2(j + 1))],   l = j + 1/2, s = −1/2,
  −√[(j − μ + 1)/(2(j + 1))],  l = j + 1/2, s = 1/2,
  √[(j − μ)/(2j)],             l = j − 1/2, s = −1/2,
  √[(j + μ)/(2j)],             l = j − 1/2, s = 1/2.
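The coefficients above can be verified to define properly normalized states. The check below is our own, using exact rational arithmetic on the squared coefficients; it confirms Σ_s C²_{ljμs} = 1 for both realizations l = j ± 1/2 and all sublevels of the two Fe-57 levels.

```python
from fractions import Fraction

def squared_coeffs(j2, mu2, l_choice):
    """Squared Clebsch-Gordan coefficients C_{lj mu s}^2 for s = -1/2, +1/2.

    j and mu are passed as (half-integer) numerators over 2;
    l_choice is 'j+1/2' or 'j-1/2'.
    """
    j, mu = Fraction(j2, 2), Fraction(mu2, 2)
    if l_choice == 'j+1/2':
        return [(j + mu + 1) / (2 * (j + 1)), (j - mu + 1) / (2 * (j + 1))]
    return [(j - mu) / (2 * j), (j + mu) / (2 * j)]

# normalization for the excited (j = 3/2) and ground (j = 1/2) levels, all mu
for j2 in (3, 1):
    for mu2 in range(-j2, j2 + 1, 2):
        for l_choice in ('j+1/2', 'j-1/2'):
            assert sum(squared_coeffs(j2, mu2, l_choice)) == 1
print("sum_s C_{lj mu s}^2 = 1 for all sublevels of j = 1/2 and j = 3/2")
```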
Dipolar coupling to a time-varying external magnetic field is realized through matrix elements sharing a common radial part and with angular components given by ⟨l′j′µ′|r̂|ljµ⟩. Expressing the radial unit vector as

r̂ = √(4π/3) { (1/√2) [Y_{1,−1}(Ω) − Y_{11}(Ω)] x̂ + (i/√2) [Y_{1,−1}(Ω) + Y_{11}(Ω)] ŷ + Y_{10}(Ω) ẑ },

where Ω denotes the direction of r̂, we can readily obtain the matrix elements in terms of

⟨l′j′µ′|Y_{1m}|ljµ⟩ = Σ_s C_{l′j′µ′s} C_{ljµs} ∫ dΩ Y*_{l′,µ′−s}(Ω) Y_{1m}(Ω) Y_{l,µ−s}(Ω),

which vanishes unless l′ − j′ = l − j. The result is independent of the actual realization (i.e., the choice of l = j ± 1/2), and in particular, for the Fe-57 system (j′ = j_e = 3/2 and j = j_g = 1/2), using the notation |e_µ⟩ = |1, 3/2, µ⟩ and |g_µ⟩ = |0, 1/2, µ⟩ for the excited and ground sublevels, respectively, we find

⟨e_µ′|Y_{1m}|g_µ⟩ = δ_{µ′,µ+m} (1/√(4π)) × { 1 for µ′ = ±3/2 (m = ±1); √(2/3) for µ′ = ±1/2, m = 0; √(1/3) for µ′ = ±1/2, m = ±1; 0 elsewhere }.

Squaring these matrix elements, the relative strengths of the transitions connecting different sublevels are found to be as indicated by the orange numbers in the diagram of Fig. 1(d) of the main text. Blue, red, and green double arrows correspond to transitions in which m = µ′ − µ = −1, 0, and 1, respectively. The sum of strengths for downward transitions from any given excited state to the accessible ground states is 1, whereas the sum of upward arrows from any of the two ground states is 2. These numbers are in the expected ratio (2j_e + 1)/(2j_g + 1) = 2, reflecting the fact that more final states are available for excitation than for deexcitation. In accordance with the discussion presented above, we describe the coherent scattering of electromagnetic fields by the nucleus in terms of a magnetic dipolar polarizability.
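The sum rules quoted above can be checked directly from the listed strengths. A minimal sketch (transition strengths taken from the text, exact arithmetic via fractions; variable names are illustrative):

```python
from fractions import Fraction as F

# Relative transition strengths |<e_mu'|Y_1m|g_mu>|^2 for Fe-57
# (the orange numbers of Fig. 1(d)), listed as (mu_g, mu_e, m, strength).
half = F(1, 2)
transitions = [
    (-half, -3 * half, -1, F(1)),
    (-half, -half,      0, F(2, 3)),
    (-half,  half,     +1, F(1, 3)),
    ( half, -half,     -1, F(1, 3)),
    ( half,  half,      0, F(2, 3)),
    ( half,  3 * half, +1, F(1)),
]

# Sum of upward strengths from each ground sublevel and of downward
# strengths from each excited sublevel.
up = {g: sum(s for gg, e, m, s in transitions if gg == g) for g in (-half, half)}
down = {e: sum(s for g, ee, m, s in transitions if ee == e)
        for e in (-3 * half, -half, half, 3 * half)}

# m-averaged strength, which equals the coherent fraction f quoted in the text.
f_coh = sum(s for *_, s in transitions) / len(transitions)
print(up, down, f_coh)  # up sums: 2 = (2*je+1)/(2*jg+1); down sums: 1; f = 2/3
```

Each ground sublevel feeds three excited sublevels with total strength 2, each excited sublevel decays with total strength 1, consistent with the ratio (2j_e + 1)/(2j_g + 1) = 2.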
Then, a given Fe-57 nucleus prepared in either of the two ground-state sublevels (|g_{−1/2}⟩ or |g_{1/2}⟩) is characterized by an anisotropic polarizability in which each m component takes a different value proportional to the corresponding transition strength in the above diagram. However, by averaging over the two possible values of µ = ±1/2 in the ground-state sublevels, the magnetic polarizability becomes isotropic and given by

α_M(ω) ≈ (3/4k³) κ_r / (ω_0 − ω − iκ/2),   (A1)

where κ_r is the coherent radiative decay rate and k = ω_0/c. The internal conversion mechanism reduces the radiative transition component of the decay rate by a factor 1 + α_IC relative to κ (i.e., nonradiative decay channels contribute to broaden the resonance with a rate κ deduced from the measured decay time, but only a fraction 1/(1 + α_IC) of all decay channels is associated with the emission of radiation). In addition, just an average fraction f = 2/3 of those radiative decay channels [the m-independent average of the strengths (orange numbers) associated with each m in the above diagram] is coherent in the sense that they leave the system in the original initial state, whereas the remaining 1 − f fraction corresponds to incoherent emission of photons accompanied by a change in µ. Combining these factors, we set κ_r = κ × f/(1 + α_IC) ≈ 1/(2.03 µs) for Fe-57. The parameters entering or affecting Eq. (A1) are summarized in Table I. As a consequence of the above diagram, we find that, upon excitation by an optical magnetic field directed along a given direction n̂, the incoherent radiative decay channels lead to photon emission with an angular profile corresponding to the average of the emission from magnetic dipoles oriented perpendicularly to n̂.
For example, when exciting with m = 0 (i.e., n̂ = ẑ) from an initial sublevel |±1/2⟩, incoherent radiative decay to the other sublevel |∓1/2⟩ involves emission of photons with m = ±1, whose average is also obtained as the one over the emission of dipoles oriented along x̂ and ŷ (i.e., in the plane perpendicular to the exciting magnetic field). Because the averaged nuclear response is isotropic, this argument applies to any orientation n̂ of the applied field. We use this result below to calculate the incoherent emission probability from the coherent one in a straightforward fashion. For Dy-161 nuclei, the excited state of energy ω_0 = 43.8201 keV is also dominated by magnetic dipole coupling, with a ratio E2/M1 ∼ 10^−9 and a higher degree of degeneracy in the ground and excited states (j_g = 5/2 and j_e = 7/2). An analysis similar to the one carried out above for iron leads to the following diagram for Dy-161:

[Level diagram for Dy-161: ground sublevels |−5/2⟩, …, |5/2⟩ (j_g = 5/2) and excited sublevels |−7/2⟩, …, |7/2⟩ (j_e = 7/2), connected by m = −1, 0, 1 transitions.]

From these transition strengths, we find a coherent fraction f = 4/9, which combined with tabulated values of κ and α_IC taken from Refs. 27, 29, 30, leads to the parameter list given in Table I, although in this case κ_r needs to be reduced by a factor of 2.25 to account for decay mediated by transitions to an intermediate state that do not contribute to the studied coherent radiative channel. Likewise, we can describe coherent light scattering by Dy-161 nuclei through the magnetic polarizability in Eq. (A1).

Appendix B: Coherent γ-ray cathodoluminescence

We work in the nonrecoil approximation (i.e., the electron or ion velocity v is assumed to remain constant during the interaction with the target), which is valid when the excitation energies are small compared with the kinetic energy of the probe, so that it is not strongly deflected by close collisions with the nuclei.
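As a quick consistency check, the radiative times 1/κ_r in Table I follow from κ_r = κ f/(1 + α_IC), with the extra reduction factor of 2.25 for Dy-161 mentioned above. A sketch of the arithmetic (the helper name `tau_r` is illustrative; inputs are the quoted values):

```python
def tau_r(tau, alpha_ic, f, extra=1.0):
    """Radiative lifetime 1/kappa_r from the measured lifetime tau = 1/kappa:
    kappa_r = kappa * f / (1 + alpha_ic) / extra."""
    return tau * (1 + alpha_ic) * extra / f

tau_r_fe = tau_r(142e-9, 8.544, 2 / 3)          # Fe-57  -> ~2.03e-6 s
tau_r_dy = tau_r(1.20e-9, 4.213, 4 / 9, 2.25)   # Dy-161 -> ~3.17e-8 s
print(tau_r_fe, tau_r_dy)
```

Both values reproduce the 1/κ_r entries of Table I to the quoted precision.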
In addition, the probability that a given electron or ion emits one γ-ray photon is much smaller than unity, so we can describe the interaction within first-order perturbation theory. Moreover, we consider monochromatic, collimated beams, moving along a well-defined direction. Under these conditions, a rigorous quantum-mechanical treatment of the probe-target interaction problem [35] shows that the excitation probability is identical with the one calculated by assimilating the electron or ion to the classical evanescent electromagnetic field that accompanies a moving point charge eZ (e.g., Z = −1 for electrons). For beams of finite extension along the transverse directions (⊥ v), this probability needs to be averaged over the beam density profile. In the present work, where the studied nuclei respond through their magnetic polarizability, we just need to plug in the external magnetic field. Considering a trajectory r = R_p + vt, where R_p ⊥ v acts as an impact parameter, the probe produces an evanescent magnetic field H_ext(r, t) = (2π)^−1 ∫ dω e^{−iωt} H_ext(r, ω) with spectral components [3]

H_ext(r, ω) = (2eZω/vcγ) K_1(ω|R − R_p|/vγ) e^{iωz/v} φ̂,   (B1)

where we use cylindrical coordinates r = (R, ϕ, z) and transverse coordinates R = (x, y), and γ = 1/√(1 − v²/c²) is the relativistic Lorentz factor. For a collection of nuclei distributed at positions r_j, coherent γ-ray emission is mediated by the frequency-dependent magnetic dipoles m_j(ω) = α_M(ω) H_ext(r_j, ω) created in response to the field in Eq. (B1) via the polarizability α_M(ω) given in Eq. (A1). The dipole induced at a nucleus j produces an electric field −k²(r̂ × m_j) e^{ik|r−r_j|}/|r − r_j|. In the far-field limit (kr ≫ 1 and r ≫ r_j), this expression can be approximated as −k²(r̂ × m_j) e^{−ik·r_j} e^{ikr}/r with k = k r̂.
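The transverse extent of the field in Eq. (B1) is set by the argument of K_1, i.e., by the length vγ/ω_0. A small numerical sketch (stdlib only; in practice one would call scipy.special.k1), evaluating K_1 from its integral representation K_1(x) = ∫_0^∞ e^{−x cosh t} cosh t dt and the decay length for an assumed probe velocity v/c = 0.9:

```python
import math

def bessel_k1(x, tmax=30.0, n=30000):
    # Modified Bessel function K1 via the integral representation
    # K1(x) = int_0^inf exp(-x cosh t) cosh t dt (trapezoidal rule).
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)) * math.cosh(tmax))
    for i in range(1, n):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * h

# Transverse scale of the evanescent field in Eq. (B1): R ~ v*gamma/omega0.
hbar_c = 1973.27          # eV * Angstrom
omega0 = 14412.9          # eV, Fe-57 transition energy
beta = 0.9                # assumed probe velocity v/c
gamma = 1.0 / math.sqrt(1.0 - beta**2)
decay_length = hbar_c * beta * gamma / omega0  # in Angstrom, ~0.28 A
print(decay_length, bessel_k1(1.0))            # K1(1) ~ 0.6019
```

The sub-Angstrom decay length illustrates why close encounters (small R_jp) dominate the excitation probability.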
Summing over all dipoles, the resulting induced (emitted) electric field reduces to E_ind(r, ω) = f(ω) e^{ikr}/r, where

f(Ω, ω) = −α_M(ω) (2eZk³/vγ) Σ_j K_1(ωR_jp/vγ) e^{iωz_j/v} e^{−ik·r_j} (r̂ × φ̂_jp)   (B2)

is the emission amplitude. Here, Ω denotes the direction of r̂ and we have defined the relative transverse coordinates R_jp = R_j − R_p and the azimuthal vectors φ̂_jp ⊥ R_jp, ẑ. Following the procedure described in Ref. 3, the coherent photon emission probability Γ_coh can be obtained by integrating the radial Poynting vector over time and directions of emission Ω, and then dividing the result by the photon energy ω. We find Γ_coh = ∫ dΩ Γ_coh(Ω) with

Γ_coh(Ω) = (c/4π²) ∫_0^∞ (dω/ω) |f(Ω, ω)|².   (B3)

Inserting Eq. (B2) into Eq. (B3), we can carry out the ω integral by approximating ω ≈ ω_0 outside α_M(ω) because the emission concentrates around a narrow spectral range of width κ ≪ ω_0 around that frequency. This leads to

Γ_coh(Ω) = [9Z²α / 8π(v/c)²γ²] (κ_r²/ω_0κ) |r̂ × g(Ω)|²,   (B4)

where α ≈ 1/137 is the fine structure constant and

g(Ω) = Σ_j K_1(ω_0R_jp/vγ) e^{iω_0z_j/v} e^{−ik_0·r_j} φ̂_jp   (B5)

depends on the emission direction Ω through k_0 = (ω_0/c) r̂. For a single nucleus j, the angle-integrated probability reduces to

Γ_coh^j = [3Z²α / (v/c)²γ²] (κ_r²/ω_0κ) K_1²(ω_0R_jp/vγ),   (B6)

which decays exponentially at large nucleus-beam distances R_jp but diverges as ∼ 1/R_jp for close encounters.

Appendix C: Smith-Purcell γ-ray emission from a solid crystal target

The coherent emission from nuclei arranged in a crystal lattice gives rise to γ-ray emission along directions determined by the condition of constructive far-field interference, similar to the Smith-Purcell effect for grating structures [4,5]. We consider a homogeneous crystalline film consisting of N atomic planes with one emitting nucleus in each two-dimensional (2D) unit cell of each of those planes. For simplicity, we take the film to be normal to the z axis (i.e., we consider normally impinging beams).
The crystal structure can then be defined in terms of the 2D reciprocal lattice and the three-dimensional displacement vector b = b_∥ + b_z ẑ separating the reference nuclei in two consecutive atomic planes. At this point, we need to split the sum over nuclei sites r_j in Eq. (B5) into two sums: one over 2D real-lattice sites R_j and the other over atomic planes z = z_l, keeping in mind that we replace r_j by R_j + lb. In addition, the 2D lattice sum can be transformed into a sum over 2D reciprocal lattice vectors G by consecutively applying the identities

K_1(ΔR) φ̂ = (1/Δ)(x̂ ∂_y − ŷ ∂_x) K_0(ΔR),
K_0(ΔR) = (1/2π) ∫ d²Q e^{iQ·R}/(Q² + Δ²),
Σ_{R_j} e^{iQ·R_j} = [(2π)²/A] Σ_G δ(Q − G),

where A is the unit cell area and Δ = ω_0/vγ. We find

g(Ω) = −(2πivγ/Aω_0) Σ_G e^{−i(k_∥+G)·R_p} [|k_∥ + G| / (|k_∥ + G|² + (ω_0/vγ)²)] φ̂_{k_∥+G} S_G,   (C1)

where k_∥ is the in-plane component of k_0 = (ω_0/c) r̂, we define the azimuthal vector φ̂_Q = (−Q_y x̂ + Q_x ŷ)/Q, and the coefficient

S_G = Σ_l exp{il[(ω_0b_z/c)(c/v − cos θ) + G·b_∥]}   (C2)

encapsulates the sum over atomic planes and depends on the polar angle of emission θ relative to ẑ. We now insert Eq. (C1) into Eq. (B4) and take the average of the result over the beam impact parameter R_p (i.e., we average over lateral film positions). This allows us to simplify the double sum over reciprocal lattice vectors emerging when squaring g(Ω) in Eq. (B4) by applying

(1/A) ∫_A d²R_p |Σ_G e^{−iG·R_p} a_G|² = Σ_G |a_G|²,

which is a consequence of the identity (1/A) ∫_A d²R_p e^{i(G−G′)·R_p} = δ_{G,G′}, where the integrals extend over a 2D unit cell. In addition, the geometric sum over atomic planes in Eq. (C2) can also be carried out analytically to yield |S_G|² = sin²(Nξ_G/2)/sin²(ξ_G/2), where ξ_G = (ω_0b_z/c)(c/v − cos θ) + G·b_∥ and N is the number of atomic layers. For N ≫ 1, we have |S_G|² ≈ 2πN Σ_m δ(ξ_G − 2πm), where m runs over integer numbers.
Putting these elements together, we obtain

Γ_coh(Ω) ≈ N [9π²Z²αc²κ_r² / A²ω_0³κ] Σ_G [|k_∥ + G|² / (|k_∥ + G|² + (ω_0/vγ)²)²] |r̂ × φ̂_{k_∥+G}|² Σ_m δ(ξ_G − 2πm).

The δ-function in each m term determines a polar cone of emission through the condition ξ_G = 2πm, where the dependence on G can be partially or completely eliminated by redefining m for different types of crystal lattices. In particular, for a (100) film made of a simple cubic crystal with lattice constant a, we can take b_∥ = 0 and b_z = a, so the G dependence disappears from ξ_G and we recover the familiar Smith-Purcell condition cos θ_n = c/v − nλ/a in terms of the light wavelength λ and the interatomic plane spacing a. Iron forms a bcc lattice, so considering a (100) film orientation, we can take b_∥ = (a/2, a/2), b_z = a/2, A = a², and G = (2π/a)(i, j), where a = 2.856 Å is the lattice constant, while i and j run over integer numbers. For such an Fe(100) film, we have G·b_∥ = (i + j)π, so we can redefine n = 2m − i − j and separate the emission as a sum over components directed along the cones determined by

cos θ_n = c/v − nλ/d   (C3)

according to Γ_coh(Ω) = Σ_n Γ_coh^n(ϕ) δ(cos θ − cos θ_n), where d = a gives the periodicity along the out-of-plane direction, λ = 2πc/ω_0 is the emission wavelength, and

Γ_coh^n(ϕ) ≈ N [18π²Z²ακ_r² / dc(ω_0a/c)⁴κ] Σ′_G [|k_∥ + G|² / (|k_∥ + G|² + (ω_0/vγ)²)²] |r̂ × φ̂_{k_∥+G}|²   (C4)

is the probability density as a function of the azimuthal angle of emission ϕ. An implicit dependence on θ_n arises in Eq. (C4) through k_∥ = k sin θ_n (cos ϕ, sin ϕ). Here, the prime in Σ′_G indicates that the sum is restricted to G = (2π/a)(i, j) vectors such that i + j + n (= 2m) is an even integer. For Dy-161, we consider a (100) film of DyN, which forms an fcc crystal structure with a Dy-Dy nearest-neighbor distance a′ = 3.6 Å [36], so we can readily apply the results in Eqs. (C3) and (C4) by defining 2D atomic planes with a square lattice of period a = a′ and vertical spacing b_z = a′/√2 (i.e., d = a′√2).
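Equation (C3) is easy to evaluate: for an Fe(100) film only a handful of SP orders n give |cos θ_n| ≤ 1. A sketch for an assumed probe velocity v/c = 0.9 (an illustrative value):

```python
import math

hbar_c = 1973.27     # eV * Angstrom
omega0 = 14412.9     # eV, Fe-57 transition
lam = 2 * math.pi * hbar_c / omega0  # emission wavelength, ~0.86 Angstrom
d = 2.856            # Angstrom, Fe lattice constant = out-of-plane period for Fe(100)
beta = 0.9           # assumed probe velocity v/c

# Smith-Purcell cones from Eq. (C3): cos(theta_n) = c/v - n*lam/d.
angles = {}
n = 1
while True:
    cosn = 1.0 / beta - n * lam / d
    if cosn < -1.0:
        break
    if cosn <= 1.0:
        angles[n] = math.degrees(math.acos(cosn))
    n += 1
print(angles)  # for beta = 0.9: orders n = 1..7, from ~36 deg to ~176 deg
```

Lowering β removes the highest orders, while the λ/d ratio (here ≈ 0.3) sets the angular spacing between consecutive cones.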
Appendix D: Incoherent γ-ray emission

Each beam-excited nucleus can decay to a state that differs from the original ground state (i.e., to a different sublevel |µ′⟩) by emitting one γ-ray photon. The probability Γ_incoh associated with this incoherent process is the sum of the probabilities contributed by all nuclei in the target. Then, as we argue in Sec. A, each nucleus emits incoherently with an angular profile given by the average of magnetic dipoles oriented perpendicularly to the external magnetic field. For a given nucleus, since the magnetic field of the probe is parallel to the azimuthal direction φ̂_jp [see Eqs. (B1) and (B2)], we have to average the field intensities produced by two magnetic dipoles of equal magnitude oriented along R_jp and ẑ. This results in an angular profile ∝ |r̂ × R̂_jp|² + |r̂ × ẑ|² = 1 + sin²θ sin²(ϕ − ϕ_jp), where the angles Ω = (θ, ϕ) define the emission direction. Making use of the fact that the respective fractions of coherent and incoherent emission are f and 1 − f (see Sec. A and Table I), we can readily write the angle-resolved incoherent emission probability as

Γ_incoh(Ω) = [9Z²α / 16π(v/c)²γ²] (κ_r²/ω_0κ) (1/f − 1) Σ_j K_1²(ω_0R_jp/vγ) [1 + sin²θ sin²(ϕ − ϕ_jp)].

To obtain this result, we have added the coherent emission probability coming from each nucleus j [Eq. (B6)], previously multiplied by both (1 − f)/f and the normalized angular profile.

Appendix E: Bremsstrahlung emission

Under the conditions investigated here, particles in the beam can be deflected by the Coulomb potential of the target nuclei, leading to Bremsstrahlung (BR) emission. The radiated energy can be expressed as ∫ dΩ ∫_0^∞ dω ω Γ_BR(Ω, ω), where Γ_BR(Ω, ω) gives the spectral and angular distribution of the emission probability. For an arbitrary time-dependent velocity vector v(t) and trajectory r_p(t) of the particle, this probability admits the expression [1]

Γ_BR(Ω, ω) = (αZ²/4π²ω) | ∫_{−∞}^{∞} dt e^{iω[t − r̂·r_p(t)/c]} (d/dt){ [r̂ × v(t)/c] / [1 − r̂·v(t)/c] } |².   (E1)
Assuming that the interaction with a target nucleus only produces a small perturbation in the particle velocity relative to its initial value v(−∞) = v_0 ẑ, we can retain corrections to first order in the Coulomb potential and write the equation of motion

v̇(t) ≈ (ZZ_ne²/Mγ) [R + ẑ v_0t/γ²] / (R² + v_0²t²)^{3/2},

where R is the beam-nucleus separation vector, eZ and eZ_n are the probe and nucleus charges, M is the mass of the projectile, the Lorentz factor γ is evaluated from v_0, and we neglect any effect arising from electrons surrounding the nucleus in the actual material. We now expand Eq. (E1) to first order in v̇ and use the above equation of motion to work out the time integral, which yields the result

Γ_BR(Ω, ω) = (2α³Z⁴Z_n²ω / π²M²γ²v_0⁴) |(1 − β cos θ) r̂ × F + β (r̂ × ẑ)(r̂ · F)|²,   (E2)

where F = K_1(ζ) R̂ + (i/γ²) K_0(ζ) ẑ, ζ = (1 − β cos θ) ωR/v_0, and β = v_0/c. We use Eq. (E2) to obtain the BR curves in Fig. 2 of the main text, as it gives reasonable results for the velocities under consideration (e.g., as compared to a virtual-quanta analysis [1]).

FIG. 1: Smith-Purcell (SP) γ-ray emission. (a) We consider an electron or ion of charge eZ moving close and parallel to a linear periodic array of nuclei. (b) The moving charge can excite the nuclei, giving rise to radiative and nonradiative decay, here illustrated for Fe-57 (excitation energy ω_0 = 14.4 keV, total and radiative lifetimes κ^−1 = 142 ns and κ_r^−1 = 2.03 µs). (c)

FIG. 2: Photon emission from an individual nucleus. (a,b) Probability of γ-ray emission (solid curves) compared with BR emission (broken curves, integrated over 1 eV around ω_0) for electrons and protons passing at a distance R from an individual Fe-57 nucleus as a function of either probe velocity v for R = 0.01 Å (a) or distance for v/c = 0.9 (b). (c) Spectral profile of the emission associated with γ-ray and BR processes. (d) Temporal dependence of the normalized γ-ray and BR emission probabilities.

FIG.
3: Coherent SP γ-ray emission.

TABLE I: Parameters entering the magnetic dipolar polarizability of Fe-57 and Dy-161 in Eq. (A1). Here, ω_0, κ, α_IC, j_g, and j_e are taken from Refs. 27, 29, 30, while f and κ_r are derived as explained in this section.

          ω_0 (keV)   1/κ (s)         1/κ_r (s)       α_IC    f     j_g   j_e
Fe-57     14.4129     1.42 × 10^−7    2.03 × 10^−6    8.544   2/3   1/2   3/2
Dy-161    43.8201     1.20 × 10^−9    3.17 × 10^−8    4.213   4/9   5/2   7/2

ACKNOWLEDGMENTS

This work has been supported in part by the European Commission (Horizon 2020 Grant No. 964591-SMARTelectron), the European Research Council (Advanced Grant 789104-eNANO), the Spanish MICINN (PID2020-112625GB-I00 and SEV2015-0522), the Catalan CERCA Program, and Fundacions Cellex and Mir-Puig. S.G. acknowledges support from Google Inc.

J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1999).
E. Saldin, E. Schneidmiller, and M. Yurkov, The Physics of Free Electron Lasers (Springer-Verlag Berlin Heidelberg, Berlin, 2000).
F. J. García de Abajo, Rev. Mod. Phys. 82, 209 (2010).
S. J. Smith and E. M. Purcell, Phys. Rev. 92, 1069 (1953).
P. M. van den Berg, J. Opt. Soc. Am. 63, 689 (1973).
M. J. Moran, Phys. Rev. Lett. 69, 2523 (1992).
H. Ishizuka, Y. Kawamura, K. Yokoo, H. Shimawaki, and A. Hosono, Nucl. Instrum. Methods Phys. Res. A 445, 276 (2000).
F. J. García de Abajo, Phys. Rev. E 61, 5743 (2000).
S. Yamaguti, J. Inoue, O. Haeberlé, and K. Ohtaka, Phys. Rev. B 66, 195202 (2002).
S. E. Korbly, A. S. Kesar, J. R. Sirigiri, and R. J. Temkin, Phys. Rev. Lett. 94, 1 (2005).
T. Ochiai and K. Ohtaka, Opt. Express 13, 7683 (2005).
V. Blackmore, G. Doucas, C. Perry, and M. F. Kimmitt, Nucl. Instrum. Methods Phys. Res. B 266, 3803 (2008).
R. Remez, N. Shapira, C. Roques-Carmes, R. Tirole, Y. Yang, Y. Lereah, M. Soljačić, I. Kaminer, and A. Arie, Phys. Rev. A 96, 061801(R) (2017).
R. Remez, A. Karnieli, S. Trajtenberg-Mills, N. Shapira, I. Kaminer, Y. Lereah, and A. Arie, Phys. Rev. Lett. 123, 060401 (2019).
M. Shentcis, A. K. Budniak, X. Shi, R. Dahan, Y. Kurman, M. Kalina, H. H. Sheinfux, M. Blei, M. K. Svendsen, Y. Amouyal, et al., Nat. Photon. 14, 686 (2020).
J. Urata, M. Goldstein, M. F. Kimmitt, A. Naumov, C. Platt, and J. E. Walsh, Phys. Rev. Lett. 80, 516 (1998).
L. Schachter and A. Ron, Phys. Rev. A 40, 876 (1989).
H. L. Andrews and C. A. Brau, Phys. Rev. Spec. Top. Accel. Beams 7, 070701 (2004).
K. Mizuno, J. Pae, T. Nozokido, and K. Furuya, Nature 328, 45 (1987).
W. D. Kimura, G. H. Kim, R. D. Romea, L. C. Steinhauer, I. V. Pogorelsky, K. P. Kusche, R. C. Fernow, X. Wang, and Y. Liu, Phys. Rev. Lett. 74, 546 (1995).
N. Yamamoto, F. J. García de Abajo, and V. Myroshnychenko, Phys. Rev. B 91, 125144 (2015).
A. Winther and K. Alder, Nucl. Phys. A 319, 518 (1979).
K. Alder, A. Bohr, T. Huus, B. Mottelson, and A. Winther, Rev. Mod. Phys. 28, 432 (1956).
G. Vandegrift and B. Fultz, Am. J. Phys. 57, 3913 (1998).
J. P. Hannon and G. T. Trammell, Hyperfine Interact. 123, 127 (1999).
G. V. Smirnov, Hyperfine Interact. 123, 31 (1999).
A. W. Sáenz and H. Überall, Theory of Coherent Bremsstrahlung (Springer Berlin Heidelberg, Berlin, 1985), pp. 5-31.
T. Kibedi, T. Burrows, M. B. Trzhaskovskaya, P. M. Davidson, and C. W. Nestor, Nucl. Instrum. Methods Phys. Res. A 589, 202 (2008).
BrIcc Conversion Coefficient Calculator, URL https://bricc.anu.edu.au/.
H. F. Krause, S. Datz, P. F. Dittner, N. L. Jones, and C. R. Vane, Phys. Rev. Lett. 71, 348 (1993).
V. V. Okorokov, D. L. Tolchenkov, I. S. Khizhnyakov, Y. N. Cheblukov, Y. Y. Lapitski, G. A. Iferov, and Y. N. Zhukova, Phys. Lett. A 43, 485 (1973).
F. J. García de Abajo and P. M. Echenique, Phys. Rev. Lett. 76, 1856 (1996).
A. Messiah, Quantum Mechanics (North-Holland, New York, 1966).
F. J. García de Abajo and V. Di Giulio, ACS Photonics 8, 945 (2020).
K. P. Shinde, S. H. Jang, J. W. Kim, D. S. Kim, M. Ranot, and K. C. Chung, Dalton Trans. 44, 20386 (2015).
Optimal insider control of stochastic partial differential equations

Olfa Draouil ([email protected]), Department of Mathematics, University of Tunis El Manar, Tunis, Tunisia
Bernt Øksendal ([email protected]), Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway

arXiv:1607.00197 (https://arxiv.org/pdf/1607.00197v2.pdf); DOI: 10.1142/s0219493718500144

MSC(2010): 60H10, 91A15, 91A23, 91B38, 91B55, 91B70, 93E20

Keywords: stochastic partial differential equation (SPDE); optimal control; inside information; Donsker delta functional; stochastic maximum principle; optimal insider control with noisy observations; nonlinear filtering

Abstract. We study the problem of optimal inside control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e. access to information about a future state of the system; (ii) the integro-differential operator of the SPDE might depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem, in two cases: the control is allowed to depend both on time t and on the space variable x; the control is not allowed to depend on x. In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the inside controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy observation SDE inside control problem into a full observation SPDE insider control problem. The results are illustrated by explicit examples.
Optimal insider control of stochastic partial differential equations. Olfa Draouil (Department of Mathematics, University of Tunis El Manar, Tunis, Tunisia) and Bernt Øksendal (Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway), 30 Aug 2016.
1 Introduction

In this paper we consider an optimal control problem for a stochastic process Y(t, x) = Y^{u,Z}(t, x) = Y(t, x, Z) = Y(t, x, z)|_{z=Z} defined as the solution of a stochastic partial differential equation (SPDE) given by

dY(t, x) = [A_{u(t,x,Z)} Y(t, x) + a(t, x, Y(t, x), u(t, x, Z), Z)]dt + b(t, x, Y(t, x), u(t, x, Z), Z)dB(t)
  + ∫_R c(t, x, Y(t, x), u(t, x, Z), Z, ζ) Ñ(dt, dζ); (t, x) ∈ (0, T) × D.   (1.1)

The boundary conditions are

Y(0, x) = ξ(x), x ∈ D,   (1.2)
Y(t, x) = θ(t, x); (t, x) ∈ [0, T] × ∂D.   (1.3)

Here B(t) and Ñ(dt, dζ) are a Brownian motion and an independent compensated Poisson random measure, respectively, jointly defined on a filtered probability space (Ω, F = {F_t}_{t≥0}, P) satisfying the usual conditions. T > 0 is a given constant, D ⊂ R is a given open set, and ∂D denotes the boundary of D. The process u(t, x) = u(t, x, z)|_{z=Z} is our insider control process, where Z is a given F_{T_0}-measurable random variable for some T_0 > 0 (constant), representing the inside information available to the controller. The operator A_u is a linear integro-differential operator acting on x, with parameter u, and the expression A_{u(t,x,Z)} Y(t, x) means A_u Y(t, x, Z)|_{u=u(t,x,Z)}.

We interpret the equation (1.1) for Y in the weak sense. By this we mean that Y(t, ·) satisfies the equation

(Y(t, ·), φ)_{L²(D)} = (ξ, φ)_{L²(D)} + ∫_0^t (Y(s, ·), A*_u φ)_{L²(D)} ds + ∫_0^t (a(s, ·, Y(s, ·), u(s, ·, Z), Z), φ)_{L²(D)} ds
  + ∫_0^t (b(s, ·, Y(s, ·), u(s, ·, Z), Z), φ)_{L²(D)} dB(s) + ∫_0^t ∫_R (c(s, ·, Y(s, ·), u(s, ·, Z), Z, ζ), φ)_{L²(D)} Ñ(ds, dζ)

for all smooth functions φ with compact support in D, where (·, ·)_{L²(D)} is the L² inner product on D and A*_u is the adjoint of the operator A_u, in the sense that

(A_u ψ, φ)_{L²(D)} = (ψ, A*_u φ)_{L²(D)}   (1.6)

for all smooth L² functions ψ, φ with compact support in D. It can be proved that the Itô formula can be applied to such SPDEs. See [Par], [PR].

We assume that the inside information is of initial enlargement type. Specifically, we assume that the inside filtration H has the form H = {H_t}_{0≤t≤T}, where

H_t = F_t ∨ σ(Z)   (1.7)

for all t, where Z is a given F_{T_0}-measurable random variable, for some T_0 > 0 (constant).
Here and in the following we use the right-continuous version of H, i.e. we put H_t = H_{t+} = ∩_{s>t} H_s. We also assume that the Donsker delta functional of Z exists (see below). This assumption implies that the Jacod condition holds, and hence that B(·) and Ñ(·, ·) are semimartingales with respect to H. See e.g. [DØ2] for details.

We assume that the value at time t of our insider control process u(t, x) is allowed to depend on both Z and F_t. In other words, u(·, x) is assumed to be H-adapted. Therefore it has the form

u(t, x, ω) = u_1(t, x, Z, ω)   (1.8)

for some function u_1 : [0, T] × D × R × Ω → R such that u_1(·, x, z) is F-adapted for each (x, z) ∈ D × R. For simplicity (albeit with some abuse of notation) we will in the following write u instead of u_1. Let U denote the set of admissible control values. We assume that the functions

a(t, x, y, u, z) = a(t, x, y, u, z, ω) : [0, T] × D × R × U × R × Ω → R,
b(t, x, y, u, z) = b(t, x, y, u, z, ω) : [0, T] × D × R × U × R × Ω → R,
c(t, x, y, u, z, ζ) = c(t, x, y, u, z, ζ, ω) : [0, T] × D × R × U × R × R × Ω → R   (1.9)

are given bounded C¹ functions with respect to y and u and adapted processes in (t, ω) for each given x, y, u, z, ζ. Let A be a given family of admissible H-adapted controls u. The performance functional J(u) of a control process u ∈ A is defined by

J(u) = E[∫_0^T (∫_D h(t, x, Y(t, x), u(t, x, Z), Z) dx) dt + ∫_D k(x, Y(T, x), Z) dx],   (1.10)

where

h(t, x, y, u, z) : [0, T] × D × R × U × R → R,
k(x, y, z) : D × R × R → R   (1.11)

are given bounded functions, C¹ with respect to y and u. The functions h and k are called the profit rate density and terminal payoff density, respectively. For completeness of the presentation we allow these functions to depend explicitly on the future value Z also, although this would not be the typical case in applications.
But it could be that h and k are influenced by the future value Z directly through the action of an insider, in addition to being influenced indirectly through the control process u and the corresponding state process Y.

Problem 1.1 Find u* ∈ A such that

sup_{u∈A} J(u) = J(u*).   (1.12)

2 The Donsker delta functional

To study this problem we adapt the technique of the paper [DØ1] to the SPDE situation and we combine this with the method for optimal control of SPDEs developed in [Ø1], [ØPZ] and [ØS1]. We first recall briefly the definition and basic properties of the Donsker delta functional:

Definition 2.1 Let Z : Ω → R be a random variable which also belongs to (S)*. Then a continuous functional

δ_Z(·) : R → (S)*   (2.1)

is called a Donsker delta functional of Z if it has the property that

∫_R g(z) δ_Z(z) dz = g(Z) a.s.   (2.2)

for all (measurable) g : R → R such that the integral converges.

For example, consider the special case when Z is a first order chaos random variable of the form

Z = Z(T_0), where Z(t) = ∫_0^t β(s) dB(s) + ∫_0^t ∫_R ψ(s, ζ) Ñ(ds, dζ), for t ∈ [0, T_0],   (2.3)

for some deterministic functions β ≠ 0, ψ such that

∫_0^{T_0} {β²(t) + ∫_R ψ²(t, ζ) ν(dζ)} dt < ∞ a.s.   (2.4)

and for every ε > 0 there exists ρ > 0 such that

∫_{R∖(−ε,ε)} e^{ρζ} ν(dζ) < ∞.

This condition implies that the polynomials are dense in L²(µ), where dµ(ζ) = ζ² dν(ζ). It also guarantees that the measure ν integrates all polynomials of degree ≥ 2. In this case it is well known (see e.g. [MØP], [DiØ1], Theorem 3.5, and [DØP], [DiØ2]) that the Donsker delta functional exists in (S)* and is given by

δ_Z(z) = (1/2π) ∫_R exp^⋄ [ ∫_0^{T_0} ∫_R (e^{ixψ(s,ζ)} − 1) Ñ(ds, dζ) + ∫_0^{T_0} ixβ(s) dB(s)
  + ∫_0^{T_0} { ∫_R (e^{ixψ(s,ζ)} − 1 − ixψ(s, ζ)) ν(dζ) − (1/2)x²β²(s) } ds − ixz ] dx,   (2.5)

where exp^⋄ denotes the Wick exponential.
Moreover, we have for t < T 0 E[δ Z (z)|F t ] = 1 2π R exp t 0 R ixψ(s, ζ)Ñ(ds, dζ) + t 0 ixβ(s)dB(s) (2.6) + T 0 t R (e ixψ(s,ζ) − 1 − ixψ(s, ζ))ν(dζ)ds − T 0 t 1 2 x 2 β 2 (s)ds − ixz dx. (2.7) If D t and D t,ζ denotes the Hida-Malliavin derivative at t and t, ζ with respect to B and N , respectively, we have E[D t δ Z (z)|F t ] = 1 2π R exp t 0 R ixψ(s, ζ)Ñ(ds, dζ) + t 0 ixβ(s)dB(s) + T 0 t R (e ixψ(s,ζ) − 1 − ixψ(s, ζ))ν(dζ)ds − T 0 t 1 2 x 2 β 2 (s)ds − ixz ixβ(t)dx (2.8) and E[D t,z δ Z (z)|F t ] = 1 2π R exp t 0 R ixψ(s, ζ)Ñ(ds, dζ) + t 0 ixβ(s)dB(s) + T 0 t R (e ixψ(s,ζ) − 1 − ixψ(s, ζ))ν(dζ)ds − T 0 t 1 2 x 2 β 2 (s)ds − ixz (e ixψ(t,z) − 1)dx. (2.9) For more information about the Donsker delta functional, Hida-Malliavin calculus and their properties, see [DØ1]. From now on we assume that Z is a given random variable which also belongs to (S) * , with a Donsker delta functional δ Z (z) ∈ (S) * satisfying E[δ Z (z)|F T ] ∈ L 2 (F T , P ) (2.10) and E[ T 0 (E[D t δ Z (z)|F t ]) 2 dt] < ∞, for all z. (2.11) 3 Transforming the insider control problem to a related parametrized non-insider problem Since Y (t, x) is H-adapted, we get by using the definition of the Donsker delta functional δ Z (z) of Z that Y (t, x) = Y (t, x, Z) = Y (t, x, z) z=Z = R Y (t, x, z)δ Z (z)dz (3.1) {eq1.6} for some z-parametrized process Y (t, x, z) which is F-adapted for each x, z. 
Then, again by the definition of the Donsker delta functional we can write, with A u = A u(s,x,Z) = A u(s,x,z) z=Z , Y (t, x) = ξ(x, Z) + t 0 [A u Y (s, x) + a(s, x, Y (s, x), u(s, x, Z), Z)]ds + t 0 b(s, x, Y (s, x), u(s, x, Z), Z)dB(s) + t 0 R c(s, x, Y (s, x), u(s, x, Z), Z, ζ)Ñ(ds, dζ) = ξ(x, z) z=Z + t 0 [A u Y (s, x, z) + a(s, x, Y (s, x, z), u(s, x, z), z)] z=Z ds + t 0 b(s, x, Y (s, x, z), u(s, x, z), z) z=Z dB(s) + t 0 R c(s, x, Y (s, x, z), u(s, x, z), z, ζ) z=ZÑ (ds, dζ) = R ξ(x, z)δ Z (z)dz + t 0 R [A u Y (s, x, z) + a(s, x, Y (s, x, z), u(s, x, z), z)]δ Z (z)dzds + t 0 R b(s, x, Y (s, x, z), u(s, x, z), z)δ Z (z)dzdB(s) + t 0 R R c(s, x, Y (s, x, z), u(s, x, z), z, ζ)δ Z (z)dzÑ (ds, dζ) = R {ξ(x, z) + t 0 [A u Y (s, x, z) + a(s, x, Y (s, x, z), u(s, x, z), z)]ds + t 0 b(s, x, Y (s, x, z), u(s, x, z), z)dB(s) + t 0 R c(s, x, Y (s, x, z), u(s, x, z), z, ζ)Ñ(ds, dζ)}δ Z (z)dz. (3.2) {eq1.7} Comparing (3.1) and (3.2) we see that (3.1) holds if we for each z choose Y (t, x, z) as the solution of the classical (but parametrized) SPDE          dY (t, x, z) = [A u Y (t, x, z) + a(t, x, Y (t, x, z), u(t, x, z), z)]dt + b(t, x, Y (t, x, z), u(t, x, z), z)dB(t) + R c(t, x, Y (t, x, z), u(t, x, z), z, ζ)Ñ(dt, dζ); (t, x) ∈ (0, T ) × D Y (0, x, z) = ξ(x, z); x ∈ D R Y (t, x, z)δ Z (z)dz = θ(t, x); (t, x) ∈ [0, T ] × ∂D. (3.3) {eq3.3} As before let A be the given family of admissible H−adapted controls u. Then in terms of Y (t, x, z) the performance functional J(u) of a control process u ∈ A defined in (1.10) gets the form J(u) = E[ T 0 ( D h(t, x, Y (t, x, Z), u(t, x, Z), Z)dx)dt + D k(x, Y (T, x, Z), Z)dx] = E[ R T 0 ( D h(t, Y (t, x, z), u(t, x, z), z)E[δ Z (z)|F t ]dx)dt + D k(x, Y (T, x, z), z)E[δ Z (z)|F T ]dx dz] = R j(u)(z)dz, (3.4) {eq0.13} where j(u)(z) := E[ T 0 ( D h(t, Y (t, x, z), u(t, x, z), z)E[δ Z (z)|F t ]dx)dt + D k(x, Y (T, x, z), z)E[δ Z (z)|F T ]dx. 
(3.5)

Thus we see that to maximize $J(u)$ it suffices to maximize $j(u)(z)$ for each value of the parameter $z \in \mathbb{R}$. Therefore Problem 1.1 is transformed into the following problem:

Problem 3.1 For each given $z \in \mathbb{R}$ find $u^\star = u^\star(t,x,z) \in \mathcal{A}$ such that
$$\sup_{u \in \mathcal{A}} j(u)(z) = j(u^\star)(z). \tag{3.6}$$

4 A sufficient-type maximum principle

In this section we will establish a sufficient maximum principle for Problem 3.1. We first recall some basic concepts and results from Banach space theory.

Let $X$ be a Banach space with norm $\|\cdot\|$ and let $F : X \to \mathbb{R}$.

(i) We say that $F$ has a directional derivative (or Gâteaux derivative) at $v \in X$ in the direction $w \in X$ if
$$D_w F(v) := \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big(F(v + \varepsilon w) - F(v)\big)$$
exists.

(ii) We say that $F$ is Fréchet differentiable at $v \in X$ if there exists a continuous linear map $A : X \to \mathbb{R}$ such that
$$\lim_{\substack{h \to 0\\ h \in X}} \frac{1}{\|h\|}\,\big|F(v+h) - F(v) - A(h)\big| = 0.$$
In this case we call $A$ the gradient (or Fréchet derivative) of $F$ at $v$ and we write $A = \nabla_v F$.

(iii) If $F$ is Fréchet differentiable, then $F$ has a directional derivative in all directions $w \in X$ and
$$D_w F(v) = \nabla_v F(w) =: \langle \nabla_v F, w\rangle.$$
In particular, note that if $F$ is a linear operator, then $\nabla_v F = F$ for all $v$.

Problem 3.1 is a stochastic control problem with a standard (albeit parametrized) stochastic partial differential equation (3.3) for the state process $Y(t,x,z)$, but with a non-standard performance functional given by (3.5). We can solve this problem by a modified maximum principle approach, as follows:

Define the Hamiltonian
$$H : [0,T] \times D \times \mathbb{R} \times \mathcal{D} \times U \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathcal{R} \times \Omega \to \mathbb{R}$$
by
$$\begin{aligned}
H(t,x,y,\varphi,u,z,p,q,r) &= H(t,x,y,\varphi,u,z,p,q,r,\omega)\\
&= E[\delta_Z(z)\,|\,\mathcal{F}_t]\,h(t,x,y,u,z) + [A_u(\varphi) + a(t,x,y,u,z)]\,p\\
&\quad + b(t,x,y,u,z)\,q + \int_{\mathbb{R}} c(t,x,y,u,z,\zeta)\,r(\zeta)\,\nu(d\zeta). \tag{4.1}
\end{aligned}$$
Here $\mathcal{D}$ denotes the domain of definition of the operator $A_u$, while $\mathcal{R}$ denotes the set of all functions $r(\cdot) : \mathbb{R} \to \mathbb{R}$ such that the last integral above converges.
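For a concrete feel for these derivatives, take $X = L^2(0,1)$ and $F(v) = \int_0^1 v(x)^2\,dx$, whose Fréchet derivative acts on a direction $w$ as $\langle \nabla_v F, w\rangle = 2\int_0^1 v(x)\,w(x)\,dx$. The small numerical sketch below (the grid size and the choice of $v$, $w$ are arbitrary) compares the difference quotient from (i) with this gradient:

```python
import numpy as np

# F(v) = int_0^1 v(x)^2 dx on a uniform grid; its Frechet derivative acts
# on a direction w as <grad F(v), w> = 2 * int_0^1 v(x) w(x) dx.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def F(v):
    return np.sum(v ** 2) * dx

v = np.sin(np.pi * x)            # arbitrary point in the space
w = np.cos(2 * np.pi * x)        # arbitrary direction

eps = 1e-6
gateaux = (F(v + eps * w) - F(v)) / eps     # difference quotient from (i)
frechet = 2.0 * np.sum(v * w) * dx          # gradient applied to w, as in (iii)
```

Here $\int_0^1 \sin(\pi x)\cos(2\pi x)\,dx = -\tfrac{2}{3\pi}$, so both quantities should be close to $-\tfrac{4}{3\pi} \approx -0.4244$, illustrating that for this $F$ the Gâteaux and Fréchet derivatives agree, as stated in (iii).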
We assume that D is a Banach space.The quantities p, q, r(·) are called the adjoint variables. The adjoint processes p(t, x, z), q(t, x, z), r(t, x, z, ζ) are defined as the solution of the z-parametrized backward stochastic partial differential equation (BSPDE)        dp(t, x, z) = −[A * u(t,x,z) p(t, x, z) + ∂H ∂y (t, x, z)]dt + q(t, x, z)dB(t) + R r(t, x, z, ζ)Ñ(dt, dζ); (t, x, z) ∈ (0, T ) × D × R p(T, x, z) = ∂k ∂y (x, Y (T, x, z), z)E[δ Z (z)|F T ]; (x, z) ∈ D × R p(t, x, z) = 0; (t, x, z) ∈ [0, T ] × ∂D × R, (4.2) {eq4.2a} where ∂H ∂y (t, x, z) = ∂H ∂y (t, x, y, Y (t, ., z), u(t, x, z), z, p(t, x, z), q(t, x, z), r(t, x, z, .))| y=Y (t,x,z) . (4.3) For fixed t, u, z, p, q, r we can regard ϕ → ℓ(ϕ)(x) := H(t, x, ϕ(x), ϕ, u, z, p, q, r) (4.4) {eq4.5a} as a map from D into R. The Fréchet derivative at ϕ of this map is the linear operator ∇ ϕ ℓ on D given by ∇ ϕ ℓ, ψ = ∇ ϕ ℓ(ψ) = A u (ψ)(x)p + ψ(x) ∂H ∂y (t, x, y, ϕ, u, z, p, q, r)| y=ϕ(x) ; ψ ∈ D. (4.5) For simplicity of notation, if there is no risk of confusion, we will denote ℓ by H from now on. We can now state the first maximum principle for our problem (3.6): Theorem 4.1 [Sufficient-type maximum principle] Letû ∈ A, and denote the associated solution of (3.3) and (4.2) byŶ (t, x, z) and (p(t, x, z),q(t, x, z),r(t, x, z, ζ)), respectively. Assume that the following hold: 1. y → k(x, y, z) is concave for all x, z 2. (ϕ, u) → H(t, x, ϕ(x), ϕ, u, z, p(t, x, z), q(t, x, z),r(t, x, z, ζ)) is concave for all t, x, z, ζ 3. sup w∈U H t, x, Y (t, x, z), Y (t, ·, z)(x), w, p(t, x, z), q(t, x, z),r(t, x, z, ζ) = H t, x, Y (t, x, z), Y (t, ·, z)(x), u(t, x, z), p(t, x, z), q(t, x, z),r(t, x, z, ζ) for all t, x, z, ζ. Then u(·, ·, z) is an optimal insider control for Problem 3.1. Proof. By considering an increasing sequence of stopping times τ n converging to T , we may assume that all local integrals appearing in the computations below are martingales and hence have expectation 0. See [ØS2]. 
We omit the details. Choose arbitrary u(., ., z) ∈ A, and let the corresponding solution of (3.3) and (4.2) be Y (t, x, z), p(t, x, z), q(t, x, z), r(t, x, z, ζ). For simplicity of notation we write h = h(t, x, Y (t, x, z), u(t, x, z)), h = h(t, x, Y (t, x, z), u(t, x, z)) and similarly with a, a, b, b and so on. Moreover put H(t, x) = H(t, x, Y (t, x, z), Y (t, ·, z)(x), u(t, x, z), p(t, x, z), q(t, x, z), r(t, x, z, .)) (4.6) and H(t, x) = H(t, x, Y (t, x, z), Y (t, ·, z)(x), u(t, x, z), p(t, x, z), q(t, x, z), r(t, x, z, .)) (4.7) In the following we write h = h − h, a = a − a, Y = Y − Y . Consider j(u(., ., z)) − j( u(., ., z)) = I 1 + I 2 , where I 1 = E[ T 0 ( D {h(t, x) − h(t, x)}E[δ Z (z)|F t ]dx)dt], I 2 = E[ D {k(x) −k(x)}E[δ Z (z)|F T ]dx]. (4.8) {eq4.7} By the definition of H we have I 1 = E[ T 0 D {H(t, x) − H(t, x) − p(t, x)[A u Y (t, x) − Aû Y (t, x) + a(t, x)] − q(t, x) b(t, x) − Rr (t, x, ζ)c(t, x, ζ)ν(dζ)}dxdt]. (4.9) {eq4.8} Since k is concave with respect to y we have (k(x, Y (T, x, z), z) − k(x,Ŷ (T, x, z), z))E[δ Z (z)|F T ] ≤ ∂k ∂y (x,Ŷ (T, x, z), z)E[δ Z (z)|F T ](Y (T, x, z) −Ŷ (T, x, z)),(4.10) and hence I 2 ≤ E[ D ∂k ∂y (x, Y (T, x, z))E[δ Z (z)|F T ]Ỹ (T, x, z)dx] = E[ D p(T, y) Y (T, x, z)dx] (4.11) {eq4.11} = E[ D ( T 0 p(t, x, z)d Y (t, x, z) + T 0 Y (t, x, z)d p(t, x, z) + T 0 d[p,Ỹ ] t )dx] = E[ D T 0 { p(t, x, z)[A u Y (t, x, z) − Aû Y (t, x) + a(t, x)] − Y (t, x, z)[A * u p(t, x, z) + ∂ H(t, x) ∂y ] + b(t, x) q(t, x) + Rc (t, x, z, ζ)r(t, x, z, ζ)ν(dζ)}dtdx]. where ∂ H(t, x) ∂y = ∂H ∂y (t, x,Ŷ (t, x, z), Y (t, ·, z)(x),û(t, x, z),p(t, x, z),q(t, x, z),r(t, x, z, .)). (4.12) By a slight extension of (1.6) we get D Y (t, x, z)A * u p(t, x, z)dx = D p(t, x, z)Aû Y (t, x, z)dx. 
(4.13) {eq4.15a} Therefore, adding (4.9) -(4.11) and using (4.13) we get, j(u(., z)) − j( u(., z)) ≤ E[ D T 0 {H(t, x) −Ĥ(t, x) − [p(t, x, z)Aû(Ỹ )(t, x, z) +Ỹ (t, x, z) ∂Ĥ(t, x) ∂y ]}dt dx (4.14) {eq4.10aa} Hence j(u(., z)) − j( u(., z)) ≤ E[ D T 0 {H(t, x) −Ĥ(t, x) − ∇Ŷ H(Ỹ )(t, x, z)}dt dx] (4.15) {eq4.10} where ∇Ŷ H(Ỹ ) = ∇ ϕ H(Ỹ )| ϕ=Ŷ (4.16) By the concavity assumption of H in (ϕ, u) we have: H(t, x) −Ĥ(t, x) ≤ ∇Ŷ H(Y −Ŷ )(t, x, z) + ∂ H ∂u (t, x)(u(t, x) −û(t, x)),(4.17) and the maximum condition implies that ∂ H ∂u (t, x)(u(t, x) −û(t, x)) ≤ 0. (4.18) Hence by (4.15) we get j(u) ≤ j(û). Since u ∈ A was arbitrary, this shows thatû is optimal. A necessary-type maximum principle We proceed to establish a corresponding necessary maximum principle. For this, we do not need concavity conditions, but instead we need the following assumptions about the set of admissible control processes: • A 1 . For all t 0 ∈ [0, T ] and all bounded H t 0 -measurable random variables α(x, z, ω), the control θ(t, x, z, ω) := 1 [t 0 ,T ] (t)α(x, z, ω) belongs to A. • A 2 . For all u, β 0 ∈ A with β 0 (t, x, z) ≤ K < ∞ for all t, x, z define δ(t, x, z) = 1 2K dist(u(t, x, z), ∂U) ∧ 1 > 0 (5.1) {delta} and put β(t, x, z) = δ(t, x, z)β 0 (t, x, z). (5.2) {eq3.2} Then the control u(t, x, z) = u(t, x, z) + aβ(t, x, z); t ∈ [0, T ] belongs to A for all a ∈ (−1, 1). • A3. For all β as in (5.2) the derivative process χ(t, x, z) := d da Y u+aβ (t, x, z)| a=0 (5.3) {eq5.3a} exists, and belong to L 2 (λ × P) and                  dχ(t, x, z) = [ dA du (Y )(t, x, z)β(t, x, z) + A u χ(t, x, z) + ∂a ∂y (t, x, z)χ(t, x, z) + ∂a ∂u (t, x, z)β(t, x, z)]dt +[ ∂b ∂y (t, x, z)χ(t, x, z) + ∂b ∂u (t, x, z)β(t, x, z)]dB(t) + R [ ∂c ∂y (t, x, z, ζ)χ(t, x, z) + ∂c ∂u (t, x, z, ζ)β(t, x, z)]Ñ(dt, dζ); (t, x) ∈ [0, T ] × D, χ(0, x, z) = d da Y u+aβ (0, x, z)| a=0 = 0, χ(t, x, z) = 0; (t, x) ∈ [0, T ] × ∂D. (5.4) {d chi} Theorem 5.1 [Necessary-type maximum principle] Letû ∈ A and z ∈ R. 
Then the following are equivalent: 1. d da j(û + aβ)(z)| a=0 = 0 for all bounded β ∈ A of the form (5.2). 2. ∂H ∂u (t, x, z) u=û = 0 for all (t, x) ∈ [0, T ] × D. Proof. For simplicity of notation we write u instead ofû in the following. By considering an increasing sequence of stopping times τ n converging to T , we may assume that all local integrals appearing in the computations below are martingales and have expectation 0. See [ØS2]. We omit the details. We can write d da j((u + aβ)(z))| a=0 = I 1 + I 2 where I 1 = d da E[ D T 0 h(t, x, Y u+aβ (t, x, z), u(t, x, z) + aβ(t, x, z), z)E[δ Z (z)|F t ]dtdx]| a=0 and I 2 = d da E[ D k(x, Y u+aβ (T, x, z), z)E[δ Z (z)|F T ]dx]| a=0 . By our assumptions on h and k and by (5.3) we have I 1 = E[ D T 0 { ∂h ∂y (t, x, z)χ(t, x, z) + ∂h ∂u (t, x, z)β(t, x, z)}E[δ Z (z)|F t ]dtdx], (5.5) {iii1} I 2 = E[ D ∂k ∂y (x, Y (T, x, z), z)χ(T, x, z)E[δ Z (z)|F T ]dx] = E[ D p(T, x, z)χ(T, x, z)dx]. (5.6) {iii2} By the Itô formula I 2 = E[ D p(T, x, z)χ(T, x, z)dx] = E[ D T 0 p(t, x, z)dχ(t, x, z)dx + D T 0 χ(t, x, z)dp(t, x, z)dx + D T 0 d[χ, p](t, x, z)dx] (5.7) {eq5.9a} = E[ D T 0 p(t, x, z){ dA du (Y )(t, x, z)β(t, x, z) + A u χ(t, x, z) + ∂a ∂y (t, x, z)χ(t, x, z) + ∂a ∂u (t, x, z)β(t, x, z)}dtdx (5.8) + D T 0 p(t, x, z){ ∂b ∂y (t, x, z)χ(t, x, z) + ∂b ∂u (t, x, z)β(t, x, z)}dB(t) + D T 0 R p(t, x, z){ ∂c ∂y (t, x, z, ζ)χ(t, x, z) + ∂c ∂u (t, x, z, ζ)β(t, x, z)}Ñ(dt, dζ)dx − D T 0 χ(t, x, z)[A * u p(t, x, z) + ∂H ∂y (t, x, z)]dtdx + D T 0 χ(t, x, z)q(t, x, z)dB(t)dx + D T 0 R χ(t, x, z)r(t, x, z, ζ)Ñ(dt, dζ)dx (5.9) + D T 0 q(t, x, z){ ∂b ∂y (t, x, z)χ(t, x, z) + ∂b ∂u (t, x, z)β(t, x, z)}dtdx + D T 0 R { ∂c ∂y (t, x, z, ζ)χ(t, x, z) + ∂c ∂u (t, x, z, ζ)β(t, x, z)}r(t, x, z, ζ)ν(ζ)dtdx] = E D T 0 {p(t, x, z)( dA du (Y )(t, x, z)β(t, x, z) + A u χ(t, x, z))}dt + T 0 χ(t, x, z){p(t, x, z) ∂a ∂y (t, x, z) + q(t, x, z) ∂b ∂y (t, x, z) − A * u p(t, x, z) − ∂H ∂y (t, x, z) + R ∂c ∂y (t, x, z, ζ)r(t, x, z, ζ)ν(dζ)}dt (5.10) + T 0 β(t, 
x, z){p(t, x, z) ∂a ∂u (t, x, z) + q(t, x, z) ∂b ∂u (t, x, z) + R ∂c ∂u (t, x, z, ζ)r(t, x, z, ζ)ν(dζ)}dt dx = E D [ T 0 −χ(t, x, z) ∂h ∂y E[δ Z (z)|F t ]}dt + T 0 { ∂H ∂u (t, x, z) − ∂h ∂u (t, x, z)E[δ Z (z)|F t ]}β(t, x, z)dt]dx (5.11) = −I 1 + E[ D T 0 ∂H ∂u (t, x, z)β(t, x, z)dtdx]. Summing (5.5) and (5.7) we get d da j((u + aβ)(., x, y))| a=0 = I 1 + I 2 = E[ D T 0 ∂H ∂u (t, x, z)β(t, x, z)dtdx]. We conclude that d da j(u + aβ)(z))| a=0 = 0 if and only if E[ D T 0 ∂H ∂u (t, x, z)β(t, x, z)dtdx] = 0,(5.12) for all bounded β ∈ A of the form (5.2). In particular, applying this to β(t, x, z) = θ(t, x, z) as in A1, we get that this is again equivalent to ∂H ∂u (t, x, z) = 0 for all (t, x) ∈ [0, T ] × D. (5.13) Controls which do not depend on x In some situations it is of interest to study controls u(t, x) = u(t) which have the same value throughout the space D, i.e., only depends on time t. See e.g. Section 8.2. In this case we define the set A 0 of admissible controls by A 0 = {u ∈ A; u(t, x) = u(t) does not depend on x}. (6.1) Defining the performance functional J(u) = R j(u)(z)dz as in Problem 3.1, the problem now becomes: Problem 6.1 For each z ∈ R find u * 0 ∈ A 0 such that sup u∈A 0 j(u)(z) = j(u * 0 )(z). (6.2) {problem4} 6.1 Sufficient-type maximum principle for controls which do not depend on x We now state and prove an analog of Theorem 4.1 for this case: Theorem 6.2 (Sufficient-type maximum principle for controls which do not depend on x). Supposeû ∈ A 0 with corresponding solutionsŶ (t, x, z) of (3.3) andp(t, x, z),q(t, x, z),r(t, x, z, ζ) of (4.2) respectively. Assume that the following hold: 1. y → k(x, y, z) is concave for all x, z 2. (ϕ, u) → H(t, x, ϕ(x), ϕ, u, z, p(t, x, z), q(t, x, z),r(t, x, z, ·)) is concave for all t, x, z 3. sup w∈U D H t, x, Y (t, x, z), Y (t, ·, z), w, p(t, x, z), q(t, x, z),r(t, x, z, ·) dx = D H t, x, Y (t, x, z), Y (t, ·, z), u(t, z), p(t, x, z), q(t, x, z),r(t, x, z, ·) dx for all t, z. 
Thenû(t, z) is an optimal control for the Problem 6.1. Proof. We proceed as in the proof of Theorem 4.1. Let u ∈ A 0 with corresponding solution Y (t, x, z) of (3.3). Withû ∈ A 0 , consider j(u) − j(û) = E[ T 0 D {h −ĥ}dxdt + D {k −k}dx], (6.3) {eq4.7} wherê h = h(t, x,Ŷ (t, x, z),û(t, z)), h = h(t, x, Y (t, x, z), u(t, z)) k = k(x,Ŷ (T, x, z)) and k = k(x, Y (T, x, z)). Using a similar shorthand notation forâ, a,b, b andĉ, c, and settinĝ H = H(t, x,Ŷ (t, x, z),û(t, z),p(t, x, z),q(t, x, z),r(t, x, z, ·)) (6.4) and H = H(t, x, Y (t, x, z), u(t, z),p(t, x),q(t, x),r(t, x, z, ·)), (6.5) we see that (6.3) can be written j(u) − j(û) = I 1 + I 2 , (6.6) where I 1 = E[ T 0 ( D {h(t, x) − h(t, x)}E[δ Z (z)|F t ]dx)dt], I 2 = E[ D {k(x) −k(x)}E[δ Z (z)|F T ]dx]. (6.7) {I_1I_2} By the definition of H we have I 1 = E[ T 0 D {H(t, x) − H(t, x) − p(t, x)(A u Y (t, x) − AûŶ (t, x) + a(t, x)) − q(t, x) b(t, x) − Rr (t, x, ζ)c(t, x, ζ)ν(dζ)}dxdt]. (6.8) {II1} Since k is concave with respect to y we have x, z)). (6.9) Therefore, as in the proof of Theorem 4.1, (k(x, Y (T, x, z), z) − k(x,Ŷ (T, x, z), z))E[δ Z (z)|F T ] ≤ ∂k ∂y (x,Ŷ (T, x, z), z)E[δ Z (z)|F T ](Y (T, x, z) −Ŷ (T,I 2 ≤ E[ D T 0 { p(t, x, z)[A u Y (t, x, z) − AũỸ (t, x, z) + a(t, x)] − Y (t, x, z)[A * up (t, x, z) + ∂ H(t, x) ∂y (t, x)] + b(t, x) q(t, x) + Rc (t, x, z, ζ)r(t, x, z, ζ)ν(dζ)}dtdx]. (6.10) {II_2} where ∂ H(t, x) ∂y = ∂H ∂y (t, x,Ŷ (t, x, z),Ŷ (t, ., z)(x),û(t, z),p(t, x, z),q(t, x, z),r(t, x, z, .)) (6.11) Adding (6.8) -(6.10) we get as in equation (4.15), j(u) − j(û) ≤ E T 0 D {H(t, x) −Ĥ(t, x) − ∇ŶĤ(Ỹ )(t, x, z)}dx dt . (6.12) {eq4.17} By the concavity assumption of H in (y, u) we have: (6.13) and the maximum condition implies that D ∂ H ∂u (t, x)(u(t) −û(t))dx ≤ 0. (6.14) H(t, x) −Ĥ(t, x) ≤ ∇ŶĤ(Y −Ŷ )(t, x, z) + ∂ H ∂u (t, x)(u(t) −û(t)),Hence D {H(t, x) −Ĥ(t, x) − ∇ŶĤ(Y −Ŷ )(t, x, z)}dx ≤ 0, (6.15) and therefore we conclude by (6.12) that j(u) ≤ j(û). 
Since u ∈ A 0 was arbitrary, this shows thatû is optimal. 6.2 Necessary-type maximum principle for controls which do not depend on x We proceed as in Theorem 5.1 to establish a corresponding necessary maximum principle for controls which do not depend on x. As in Section 5 we assume the following: • A 1 . For all t 0 ∈ [0, T ] and all bounded H t 0 -measurable random variables α(z, ω), the control θ(t, z, ω) := 1 [t 0 ,T ] (t)α(z, ω) belongs to A 0 . • A 2 . For all u, β 0 ∈ A 0 with β 0 (t, z) ≤ K < ∞ for all t, z define δ(t, z) = 1 2K dist ((u(t, z), ∂U) ∧ 1 > 0 (6.16) {delta} and put β(t, z) = δ(t, z)β 0 (t, z). (6.17) {eq4.22} Then the control u(t, z) = u(t, z) + aβ(t, z); t ∈ [0, T ] belongs to A 0 for all a ∈ (−1, 1). • A3. For all β as in (6.17) the derivative process χ(t, x, z) := d da Y u+aβ (t, x, z)| a=0 exists, and belong to L 2 (λ × P) and                  dχ(t, x, z) = [ dL du (Y )(t, x, z)β(t, z) + A u χ(t, x, z) + ∂a ∂y (t, x, z)χ(t, x, z) + ∂a ∂u (t, x, z)β(t, z)]dt +[ ∂b ∂y (t, x, z)χ(t, x, z) + ∂b ∂u (t, x, z)β(t, z)]dB(t) + R [ ∂c ∂y (t, x, z, ζ)χ(t, x, z) + ∂c ∂u (t, x, z, ζ)β(t, z)]Ñ(dt, dζ); (t, x) ∈ [0, T ] × D, χ(0, x, z) = d da Y u+aβ (0, x, z)| a=0 = 0; x ∈ D χ(t, x, z) = 0; (t, x) ∈ [0, T ] × ∂D. (6.18) {d chi} Then we have the following result: Theorem 6.3 [Necessary-type maximum principle for controls which do not depend on x] Letû ∈ A 0 and z ∈ R. Then the following are equivalent: 1. d da j(û + aβ)(z)| a=0 = 0 for all bounded β ∈ A 0 of the form (6.17). 2. [ D ∂H ∂u (t, x, z)dx] u=û(t) = 0 for all t ∈ [0, T ]. Proof. The proof is analogous to the proof of Theorem 5.1 and is omitted. Application to noisy observation optimal control For simplicity we consider only the one-dimensional case in the following. 
Suppose the signal process X(t) = X (u) (t, Z) and its corresponding observation process R(t) are given respectively by the following system of stochastic differential equations • (Signal process) dX(t) = α(X(t), R(t), u(t, Z))dt + β(X(t), R(t), u(t, Z))dv(t) + R γ(X(t), R(t), u(t, Z), ζ)Ñ(dt, dζ); t ∈ [0, T ], (7.1) {eq7.1} X(0) has density F (·), i.e. E[φ(X(0))] = R φ(x)F (x)dx; φ ∈ C 0 (R). As before T > 0 is a fixed constant. • (Observation process) The processes v(t) = v(t, ω) and w(t) = w(t, ω) are independent Brownian motions, and N (dt, dζ) is a compensated Poisson random measure, independent of both v and w. We let F v := {F v t } 0≤t≤T and F w := {F w t } 0≤t≤T denote the filtrations generated by (v,Ñ) and w, respectively. We assume that Z is a given F v T 0 -measurable random variable, representing the inside information of the controller, where T 0 > 0 is a constant. Note that Z is independent of F w . dR(t) = h(X(t))dt + dw(t); t ∈ [0, T ], R(0) = 0. (7.2) {eq7.2} Here α : R × R × U → R, β : R × R × U → R, γ : R × R × U × R → R The process u(t) = u(t, Z, ω) is our control process, assumed to have values in a given closed set U ⊆ R . We require that u(t) be adapted to the filtration H := {H t } 0≤t≤T , where H t = R t ∨ σ(Z), (7.4) where R t is the sigma-algebra generated by the observations R(s), s ≤ t. This means that for all t our control process u is of the form u = u(t, Z), where u(t, z) is R t -measurable for each constant z ∈ R. Similarly the signal process can be written X = X(t, Z), where X(t, z) is the solution of (7.1) with the random variable Z replaced by the parameter z ∈ R. We call u(t) admissible if, in addition, (7.1) and (7.2) has a unique strong solution (X(t), R(t)) such that E[ T 0 |f (X(t), u(t))|dt + |g(X(T ))|] < ∞, (7.5) where f : R × U → R and g : R → R are given functions, called the profit rate and the bequest function, respectively. The set of all admissible controls is denoted by A H . 
For u ∈ A H we define the performance functional J(u) = E[ T 0 f (X(t), u(t))dt + g(X(T ))]. (7.6) We consider the following problem: Problem 7.1 (The noisy observation insider stochastic control problem) Find u * ∈ A H such that sup u∈A H J(u) = J(u * ). (7.7) {eq7.6} We now proceed to show that this noisy observation SDE insider control problem can be transformed into a full observation SPDE insider control problem of the type discussed in the previous sections: To this end, define the probability measureP by dP (ω) = M t (ω)dP (ω) on F v t ∨ F w t ∨ σ(Z), (7.8) where M t (ω) = M t (ω, Z) = exp − t 0 h(X(s, Z))dw(s) − 1 2 t 0 h 2 (X(s, Z))ds . (7.9) It follows by (7.3) and the Girsanov theorem that the observation process R(t) defined by (7.2) is a Brownian motion with respect toP . Moreover, we have dP (ω) = K t (ω)dP (ω), (7.10) where K t = M −1 t = exp t 0 h(X(s, Z))dw(s) + 1 2 t 0 h 2 (X(s, Z))ds = exp t 0 h(X(s, Z))dR(s) − 1 2 t 0 h 2 (X(s, Z))ds . (7.11) For ϕ ∈ C 2 0 (R) and fixed r ∈ R, m ∈ U define the integro-differential operator L = L r,m by (7.12) and let L * be the adjoint of L, in the sense that (Lϕ, ψ) L 2 (R) = (ϕ, L * ψ) L 2 (R) (7.13) for all ϕ, ψ ∈ C 2 0 (R). L r,m ϕ(x) = α(x, r, m) ∂ϕ ∂x (x) + 1 2 β 2 (x, r, m) ∂ 2 ϕ ∂x 2 + R {ϕ(x + γ(x, r, m, ζ)) − ϕ(x) − ∇ϕ(x)γ(x, r, m, ζ)}ν(dζ), Suppose that for all z ∈ R there exists a stochastic process y(t, x) = y(t, x, z) such that EP [ϕ(X(t, z))K t (z)|R t ] = R ϕ(x)y(t, x, z)dx (7.14) {eq5.8} for all bounded measurable functions ϕ. Then y(t, x) is called the unnormalized conditional density of X(t.z) given the observation filtration R t . Note that by the Bayes rule we have E[ϕ(X(t))|R t ] = EP [ϕ(X(t))K t |R t ] EP [K t |R t ] . 
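The Bayes (Kallianpur-Striebel) formula (7.15) suggests a direct particle approximation: simulate signal particles under $\tilde P$, where $R$ is a Brownian motion independent of the signal, and weight each particle by its likelihood $K_t$. The sketch below is an illustration of this weighting, not the authors' construction; the coefficients (mean-reverting drift $-0.5x$, noise level $0.3$, $X(0) = 0.5$) are hypothetical.

```python
import numpy as np

def bayes_particle_estimate(h, phi, seed=0, n_particles=2000, T=1.0, n_steps=100):
    """Approximate E[phi(X(t))|R_t] via the Bayes formula (7.15):
    E~[phi(X(t)) K_t | R_t] / E~[K_t | R_t], with particles under P~.
    Signal SDE: dX = -0.5 X dt + 0.3 dv (hypothetical coefficients)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_particles, 0.5)              # X(0) = 0.5
    logK = np.zeros(n_particles)
    for _ in range(n_steps):
        hx = h(X)                              # Ito integrand at left endpoint
        dR = rng.normal(0.0, np.sqrt(dt))      # observation increment (BM under P~)
        logK += hx * dR - 0.5 * hx ** 2 * dt   # d(log K) = h dR - (1/2) h^2 dt
        X = X - 0.5 * X * dt + 0.3 * rng.normal(0.0, np.sqrt(dt), n_particles)
    K = np.exp(logK)
    estimate = np.sum(phi(X) * K) / np.sum(K)
    return estimate, X, K
```

With $h \equiv 0$ the weights $K_t$ are identically $1$ and the formula collapses to the plain particle mean, which gives a quick consistency check of the implementation.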
(7.15) {eq5.8a} It is known that under certain conditions the process y(t, x) = y(t, x, z) exists and satisfies the following integro-SPDE, called the Duncan-Mortensen-Zakai equation: dy(t, x, z) = L * R(t),u(t) y(t, x, z)dt + h(x)y(t, x, z)dR(t); t ≥ 0 y(0, x, z) = F (x, z). (7.16) {eq7.15} See for example Theorem 7.17 in [BC]. If (7.14) holds, we get This transforms the insider partial observation SDE control problem 7.1 into an insider full observation SPDE control problem of the type we have discussed in the previous sections. J(u) = E[ T 0 f (X(t, Z), u(t, Z))dt + g(X(T, Z))] = EP [ T 0 f (X(t, Z), u(t, Z))K t (Z)dt + g(X(T, Z))K T (Z)] = EP T 0 EP [f (X(t, Z), u(t, Z))K t (Z)|H t ]dt + EP [g(X(T, Z)K T (Z)|H T ] = EP T 0 EP [f (X(t, Z), u(t, Z))K t (Z)|R t ∨ σ(Z)]dt + EP [g(X(T, Z)K T (Z)|R T ∨ σ(Z)] = EP T 0 EP [f (X(t, z), v(z))K t (z)|R t ] z=Z,v(z)=u(t,z) dt + EP [g(X(T, z)K T (z)|R T ] z=Z = EP [ T 0 R f (x, We summarise what we have proved as follows: Theorem 7.2 (From noisy obs. SDE control to full info. SPDE control) Assume that (7.14) and (7.16) hold. Then the solution u * (t, Z) of the noisy observation insider SDE control problem 7.1, consisting of (7.1),(7.2),(7.7), coincides with the solution u * of the following (full information) insider SPDE control problem: (7.19) and y(t, x, Z) solves the SPDE 19} 8 Examples 8.1 Example: Optimal control of a second order SPDE, with control not depending on x. Problem 7.3 Find u * ∈ A such that sup u∈A JP (u) = JP (u * ), (7.18) {eq7.6a} where JP (u) = EP [ T 0 R f (x, u(t, Z))y(t, x, Z)dxdt + R g(x, Z)y(T, x, Z)dx ,dy(t, x, Z) = L * R(t),u(t,Z) y(t, x, Z)dt + h(x)y(t, x, Z)dR(t); t ≥ 0 y(0, x, Z) = F (x, Z). (7.20) {eq7. 
Consider the following controlled stochastic reaction-diffusion equation: dY (t, x, z) = dY π (t, x, z) = [ 1 2 ∂ 2 ∂x 2 Y (t, x, z) + π(t, z)Y (t, x, z)a 0 (t, z)]dt + π(t, z)Y (t, x, z)b 0 (t, z)dB(t); t Y (0, x, z) = α(x) > 0; x ∈ D, (8.1) {Wealth} with performance functional given by and U(x, y, z) = U(x, y, z, ω) : D × (0, ∞) × R × Ω → R is a given utility function, assumed to be concave and C 1 with respect to y and F T -measurable for each x, y, z. Let A H be the set of H-adapted controls π(t) not depending on x and such that J(π) := E[ D U(x, Y π (T, x, Z), Z)dx] = R j(π)dz; (8.2) where j(π) = j(π, z) = E[ D U(x, Y (T, x, z), z)dxE[δ Z (z)|F T ]],(8.E[ T 0 π(t) 2 dt] < ∞. Then it is well-known that the corresponding solution Y π (t, x) of (8.1) is positive for all t, x. See e.g. [Be]. We study the following problem: Problem 8.1 Findπ ∈ A H such that sup π∈A H j(π) = j(π). (8.4) {eq8.3} This is a problem of the type investigated in the previous sections, in the special case with no jumps and with controls π(t, z) not depending on x, and we can apply the results there to solve it. The Hamiltonian (4.1) gets the form, with u = π, H(t, x, y, ϕ, π, p, q, z) = [ 1 2 ∂ 2 ∂x 2 ϕ(x) + πya 0 (t, z)]p + πyb 0 (t, z)q, (8.5) {eq19} while the BSDE (4.2) for the adjoint processes becomes, keeping in mind that ( ∂ 2 ∂x 2 ) * = ∂ 2 ∂x 2 ,      dp(t, x, z) = −[ 1 2 ∂ 2 ∂x 2 p(t, x, z) + π(t, z){a 0 (t, z)p(t, x, z) + b 0 (t, z)q(t, x, z)}]dt + q(t, x, z)dB(t); t ∈ [0, T p(T, x, z) = ∂U ∂y (x, Y (T, x, z), z)E[δ Z (z)|F T ] p(t, x, z) = 0; ∀(t, x, z) ∈ [0, T ] × ∂D × R. (8.6) {eq20} The map π → D H(t, x, Y (t, x, z), Y (t, ., z), π, p(t, x, z), q(t, x, z))dx (8.7) is maximal when D Y (t, x, z)[a 0 (t, z)p(t, x, z) + b 0 (t, z)q(t, x, z)]dx = 0. (8.8) From this we get D Y (t, x, z)q(t, x, z)dx = − a 0 (t, z) b 0 (t, z) D Y (t, x, z)p(t, x, z)dx. (8.9) Letp (t, z) = D Y (t, x, z)p(t, x, z)dx; t ∈ [0, T ]. 
(8.10) Applying the Itô formula to Y (t, x, z)p(t, x, z) we get: d(Y (t, x, z)p(t, x, z)) = p(t, x, z)[( 1 2 ∂ 2 Y ∂x 2 (t, x, z) + π(t, z)Y (t, x, z)a 0 (t, z))dt + π(t, z)Y (t, x, z)b 0 (t, z)dB(t)] + Y (t, x, z)[(− 1 2 ∂ 2 p ∂x 2 (t, x, z) − π(t, z)(a 0 (t, z)p(t, x, z) + b 0 (t, z)q(t, x, z)))dt + q(t, x, z)dB(t)] + q(t, x, z)π(t, z)Y (t, x, z)b 0 (t, z)dt = [ 1 2 ∂ 2 Y ∂x 2 (t, x, z)p(t, x, z) − 1 2 ∂ 2 p ∂x 2 (t, x, z)Y (t, x, z)]dt + [π(t, z)Y (t, x, z)p(t, x, z)b 0 (t, z) + Y (t, x, z)q(t, x, z)]dB(t) (8.11) Then get that the dynamics ofp(t, z) is given by dp(t, z) =p(t, z)[π(t, z)b 0 (t, z) − a 0 (t,z) b 0 (t,z) ]dB(t) p(T, z) = D By Theorem 7.2, the problem to maximize J(π) over all π ∈ A H is equivalent to the following problem: Problem 8.4 Findπ ∈ A such that sup π∈A JP (π) = JP (π), (8.30) {eq8.23} where JP (π) = EP [ R + U(x, Z)y(T, x, Z)dx],(8.31) and y(t, x, Z) = y π (t, x, Z) is the solution of the SPDE dy(t, x, Z) = (L * π(t,z) y)(t, x, Z)dt + xy(t, x, Z)dR(t); 0 ≤ t ≤ T, y(0, x, Z) = F (x), (8.32) {eq8.24} where (L * π(t) y)(t, x, z) = −π(t, z)α(t)y ′ (t, x, z) + 1 2 π 2 (t, z)β 2 (t)y ′′ (t, x, z), with y ′ (t, x, z) = ∂y(t,x,z) ∂x , y ′′ (t, x, z) = ∂ 2 y(t,x,z) ∂x 2 . Define the space H 1 (R + ) = {y ∈ L 2 (R + ), ∂y ∂x ∈ L 2 (R + )} (8.33) The H 1 norm is given by: y(t, z) 2 H 1 (R + ) = y(t, z) 2 L 2 (R + ) + y ′ (t, z) 2 L 2 (R + ) (8.34) We have H 1 (R + ) ⊂ L 2 (R + ) ⊂ H −1 (R + ) (8.35) We verify the coercivity condition of the operator −L * π(t) : 2 −L * π(t) y, y = 2π(t, z)α(t) y ′ (t, x, z), y(t, x, z) − π 2 (t, z)β 2 (t) y ′′ (t, x, z)y(t, x, z) = 2π(t, z)α(t) R + y ′ (t, x, z)y(t, x, z)dx − π 2 (t, z)β 2 (t) R + y ′′ (t, x, z)y(t, x, z)dx = π(t, z)α(t)[y 2 (t, x, z)] ∂R + − π 2 (t, z)β 2 (t)[y(t, x, z)y ′ (t, x, z)] ∂R + + π 2 (t, z)β 2 (t) R + (y ′ (t, x, z)) 2 dx. (8.36) Suppose that y(t, x, z) = 0 for x = 0. Then we get 2 −L * π(t) y, y = π 2 (t, z)β 2 (t) y ′ (t, z) 2 L 2 (R + ) . 
(8.37) Let H 1 0 (R + ) = {y ∈ H 1 , y = 0 on ∂R + }. (8.38) We have |y(t, z)| 1,R + = y ′ (t, z) L 2 (R + ) is a norm in H 1 0 (R + ), which is equivalent to the H 1 (R + ) norm; i.e. there exist a, b > 0 such that a y(t, z) 1,R + ≤ |y(t, z)| 1,R + = y ′ (t, z) L 2 (R + ) ≤ b y(t, z) 1,R + (8.39) We conclude that the following coercivity condition is satisfied: 2 −L * π(t) y, y ≥ a 2 π 2 (t, z)β 2 (t) y(t, z) 2 1,R + . (8.40) Using Theorem 1.1 and Theorem 2.1 in Pardoux [Par], we obtain that (8.32) has a unique solution y(., ., z) ∈ L 2 (Ω, C(0, T, L 2 (R + ))) i.e. y(., ., z) satisfies 1. E[y 2 (t, x, z)] < ∞ for all t, x, z. 2. The map t → y(t, ., z) is continuous as a map from [0, T ] into L 2 (R + ), for all z. Moreover, the first and second partial derivatives with respect to x, denoted by y ′ (t, x, z) and y ′′ (t, x, z) respectively, exist and belong to L 2 (R). The problem (8.30) is of the type discussed in Section 6 and we now apply the methods developed there to study it: The Hamiltonian given in (4.1) now gets the form H(t, x, y, ϕ, π, p, q) = (L * π ϕ)p + xyq, (8.41) and the adjoint BSDE (4.2) becomes dp(t, x, z) = −[A π(t,z) p(t, x, z) + xq(t, x, z)]dt + q(t, x, z)dR(t); 0 ≤ t ≤ T, p(T, x, z) = U(x, z)E Q [δ Z (z)|R T ]. where R t is the sigma-algebra generated by {R(s)} s≤t , for 0 ≤ t ≤ T, and A π(t,z) p(t, x, z) = π(t, z)α(t)p ′ (t, x, z) + 1 2 π 2 (t, z)β 2 (t)p ′′ (t, x, z). (8.43) {eq8.43} By [ØPZ] and [ZRW], this backward SPDE (BSPDE for short) admits a unique solution which belongs to L 2 (R + ). The map π → R + H(t, x, y(t, x, z), y(t, ., z), π, p(t, x, z), q(t, x, z))dx is maximal when R + {−α(t)y ′ (t, x, z) + πβ 2 (t)y ′′ (t, x, z)}p(t, x, z)dx = 0, (8.44) i.e. when π =π(t, z), given bŷ π(t, z) = α(t) R + y ′ (t, x, z)p(t, x, z)dx β 2 (t) R + y ′′ (t, x, z)p(t, x, z)dx . 
(8.45) {eq8.29} Using integration by parts and (7.14) -(7.15) we can rewrite this as follows: π(t, z) = − α(t) R + y(t, x, z)p ′ (t, x, z)dx β 2 (t) R + y(t, x, z)p ′′ (t, x, z)dx = − α(t)E[p ′ (t, X(t), z)|R t ] β 2 (t)E[p ′′ (t, X(t), z)|R t ] . (8.46) {eq8.29a} We summarise what we have proved as follows: Theorem 8.5 Assume that the conditions of Theorem 7.2 hold. A portfolioπ(t, z) ∈ A is an optimal portfolio for the noisy observation insider portfolio problem (8.30), if it is given in feedback form byπ (t, z) = − α(t)E[p ′ (t, X(t), z)|R t ] β 2 (t)E[p ′′ (t, X(t), z)|R t ] , (8.47) {eq8.46} where p(t, x, z) solves the BSPDE dp(t, x, z) = −[Aπ (t,z) p(t, x, z) + xq(t, x, z)]dt + q(t, x, z)dR(t); 0 ≤ t ≤ T, p(T, x, z) = U(x)EP [δ Z (z)|R T ], (8.48) {eq8.47} and p ′′ (t, x, z) = 0 for all t, x, z. t 0 ( 0Y (s, ·), A * u φ) L 2 (D) ds + t 0 (a(s, Y (s, ·), ·), φ) L 2 (D) ds + t 0 (b(s, Y (s, ·), ·), φ) L 2 (D) dB(s) + t 0 R c(s, Y (s, ·), ζ, ·), φ) L 2 (D)Ñ (ds, dζ),(1.4)for all smooth functions φ with compact support in D. Here (ψ, φ) L 2 (D) are given deterministic functions and h : R → R is a given deterministic function such that the Novikov condition holds, i.e. u(t, Z))y(t, x, Z)dxdt + R g(x, Z)y(T, x, Z)dx =: JP (u).(7.17) {eq5.16} ( 8 . 842) {eq8.37} 3) {eq8.2} ∂U ∂y (x, Y (T, x, z), z)Y (T, x, z)dxE[δ Z (z)|F T ].(8.12) {eq25'} Thus we obtain that p(t, z) =p(0, z) exp( t 0 (b 0 (s, z)π(s, z) − a 0 (s, z) b 0 (s, z) )dB(s) − 1 2 t 0 (b 0 (s, z)π(s, z) − a 0 (s, z) b 0 (s, z) ) 2 ds), (8.13) {eq26}for some, not yet determined, constantp(0, z). 
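Equation (8.13) expresses $\hat p$ as a stochastic (Doléans-Dade) exponential; its martingale property rests on the fact that the deterministic kernel $f(t,x) = \exp(\theta x - \tfrac12\theta^2 t)$ satisfies $f_t + \tfrac12 f_{xx} = 0$, which is exactly what removes the $dt$-drift in Itô's formula. A quick finite-difference check of this identity, with an arbitrary constant $\theta = 0.7$ standing in for $b_0\pi - a_0/b_0$:

```python
import numpy as np

# f(t, x) = exp(theta*x - 0.5*theta^2*t) is the deterministic kernel of the
# stochastic exponential in (8.13); Ito's formula leaves no dt-drift exactly
# because f_t + (1/2) f_xx = 0.
theta = 0.7   # stand-in constant for b0*pi - a0/b0

def f(t, x):
    return np.exp(theta * x - 0.5 * theta ** 2 * t)

t0, x0, step = 0.4, 0.2, 1e-4
f_t = (f(t0 + step, x0) - f(t0 - step, x0)) / (2 * step)
f_xx = (f(t0, x0 + step) - 2 * f(t0, x0) + f(t0, x0 - step)) / step ** 2
drift = f_t + 0.5 * f_xx   # should vanish up to discretization error
```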
In particular, for t = T we get, using (8.12),Now assume that U(x, y, z) = k(x, z) ln(y),To make this more explicit, we proceed as follows:DefineThen by the generalized Clark-Ocone theorem in[AaØPU],Solving this SDE for M(t, z) we getSince the two martingalesp(t, z) and M(t, z) are identical for t = T, they are identical for all t ≤ T and hence, by (8.16) we getBy identification of the integrals with respect to ds and the stochastic integrals we get:We summarise what we have proved as follows:Theorem 8.2 The optimal insider controlπ for the problem (8.4) with U(x, y, z) = k(x, z) ln(y) as in (8.15), is given by (8.22), with Φ K (t, z) given by (8.19).Corollary 8.3 Suppose k(x, z) is deterministic. Then the optimal insider controlπ for the problem (8.4) with U(x, y, z) = k(x, z) ln(y), is given bŷNote that the optimal insider portfolio in this case is in fact the same as in the case when Y does not depend on x. See[DØ1].Optimal insider portfolio with noisy observationsWe now study an example illustrating the application in Section 7: Let α and β be given adapted processes, with β bounded away from 0. Suppose the signal process X(t) = X π (t, Z) is given byHere π(t, Z) is the control, representing the portfolio in terms of the amount invested in the risky asset at time t, when the risky asset unit price S(t) is given by(8.26)and the safe investment unit price is S 0 (t) = 1 for all t. The process X(t) then represents the corresponding value of the investment at time t. For π to be in the set A H of admissible controls, we require that X(t) > 0 for all t and3). Suppose the observations R(t) of X(t) at time t are not exact, but subject to uncertainty or noise, so that the observation process is given byHere, as in Section 7, the processes v and w are independent Brownian motions, and the random variable Z represents the information available to the insider from time 0. Let U : [0, ∞) → [−∞, ∞) be a given C 1 (concave) utility function. 
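Corollary 8.3 above states that with logarithmic utility the optimal insider portfolio coincides with the classical full-information case. In the standard non-insider benchmark with constant coefficients $a_0, b_0$ (the values below are arbitrary, not from the text), $E[\ln Y^\pi(T)] = (\pi a_0 - \tfrac12 \pi^2 b_0^2)\,T$, which is maximized at the well-known Merton fraction $\pi = a_0/b_0^2$; a grid search confirms this first-order condition:

```python
import numpy as np

# Constant-coefficient log-utility benchmark: maximize
# E[ln Y^pi(T)] = (pi*a0 - 0.5*pi^2*b0^2) * T over pi.
a0, b0, T = 0.08, 0.2, 1.0   # hypothetical market coefficients

def expected_log_growth(pi):
    return (pi * a0 - 0.5 * pi ** 2 * b0 ** 2) * T

pis = np.linspace(-2.0, 5.0, 7001)
best_pi = pis[np.argmax(expected_log_growth(pis))]
merton = a0 / b0 ** 2        # closed-form optimizer a0 / b0^2
```

The grid maximizer agrees with the closed-form fraction $a_0/b_0^2 = 2$ up to the grid spacing.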
The performance functional is assumed to be J(π) = E[U(X^π(T, Z))]. (8.29) Here k(x, z) = k(x, z, ω) is a given bounded positive F_T-measurable random variable such that K(z) := ∫_D k(x, z) dx < ∞ a.s. for all z. Then equation (8.14) becomes

References

K. Aase, B. Øksendal, N. Privault and J. Ubøe: White noise generalizations of the Clark-Haussmann-Ocone theorem with application to mathematical finance. Finance Stoch. 4 (2000), 465-496.
K. Aase, B. Øksendal and J. Ubøe: Using the Donsker delta function to compute hedging strategies. Potential Analysis 14 (2001), 351-374.
A. Bain and D. Crisan: Fundamentals of Stochastic Filtering. Springer 2009.
F.E. Benth: On the positivity of the stochastic heat equation. Potential Analysis 6 (1997), 127-148.
L. Breiman: Probability. Addison-Wesley 1968.
F. Biagini and B. Øksendal: A general stochastic calculus approach to insider trading. Appl. Math. & Optim. 52 (2005), 167-181.
G. Di Nunno, T. Meyer-Brandis, B. Øksendal and F. Proske: Malliavin calculus and anticipative Itô formulae for Lévy processes. Inf. Dim. Anal. Quantum Prob. Rel. Topics 8 (2005), 235-258.
G. Di Nunno, T. Meyer-Brandis, B. Øksendal and F. Proske: Optimal portfolio for an insider in a market driven by Lévy processes. Quant. Finance 6 (2006), 83-94.
O. Draouil and B. Øksendal: A Donsker delta functional approach to optimal insider control and application to finance. Comm. Math. Stat. (CIMS) 3 (2015), 365-421; DOI 10.1007/s40304-015-0065-y.
O. Draouil and B. Øksendal: Optimal insider control and semimartingale decompositions under enlargement of filtration. arXiv:1512.01759v1 (6 Dec. 2015). To appear in Stochastic Analysis and Applications.
G. Di Nunno and B. Øksendal: The Donsker delta function, a representation formula for functionals of a Lévy process and application to hedging in incomplete markets. Séminaires et Congrès, Société Mathématique de France, Vol. 16 (2007), 71-82.
G. Di Nunno and B. Øksendal: A representation theorem and a sensitivity result for functionals of jump diffusions. In A.B. Cruzeiro, H. Ouerdiane and N. Obata (editors): Mathematical Analysis and Random Phenomena. World Scientific 2007, pp. 177-190.
G. Di Nunno, B. Øksendal and F. Proske: Malliavin Calculus for Lévy Processes with Applications to Finance. Universitext, Springer 2009.
H. Holden, B. Øksendal, J. Ubøe and T. Zhang: Stochastic Partial Differential Equations. Universitext, Springer, Second Edition 2010.
A. Lanconelli and F. Proske: On explicit strong solution of Itô-SDEs and the Donsker delta function of a diffusion. Inf. Dim. Anal. Quantum Prob. Rel. Topics 7 (2004), 437-447.
S. Mataramvura, B. Øksendal and F. Proske: The Donsker delta function of a Lévy process with application to chaos expansion of local time. Ann. Inst. H. Poincaré Prob. Statist. 40 (2004), 553-567.
T. Meyer-Brandis and F. Proske: On the existence and explicit representability of strong solutions of Lévy noise driven SDEs with irregular coefficients. Commun. Math. Sci. 4 (2006), 129-154.
B. Øksendal: Optimal control of stochastic partial differential equations. Stochastic Analysis and Applications 23 (2005), 165-179.
B. Øksendal, F. Proske and T. Zhang: Backward stochastic partial differential equations with jumps and application to optimal control of random jump fields. Stochastics 77 (2005), 381-399.
B. Øksendal and A. Sulem: Applied Stochastic Control of Jump Diffusions. Second Edition. Springer 2007.
B. Øksendal and A. Sulem: Risk minimization in financial markets modeled by Itô-Lévy processes. Afrika Matematika (2014), DOI: 10.1007/s13370-014-02489-9.
E. Pardoux: Stochastic Partial Differential Equations and filtering of diffusion processes. Stochastics 3 (1979), 127-167.
P. Protter: Stochastic Integration and Differential Equations. Second Edition. Springer 2005.
I. Pikovsky and I. Karatzas: Anticipative portfolio optimization. Adv. Appl. Probab. 28 (1996), 1095-1122.
C. Prévôt and M. Roeckner: A concise course on stochastic partial differential equations. Lecture Notes in Mathematics 1905, Springer 2007.
F. Russo and P. Vallois: Forward, backward and symmetric stochastic integration. Probab. Theor. Rel. Fields 93 (1993), 403-421.
F. Russo and P. Vallois: The generalized covariation process and Itô formula. Stoch. Proc. Appl. 59(4) (1995), 81-104.
F. Russo and P. Vallois: Stochastic calculus with respect to continuous finite quadratic variation processes. Stoch. Stoch. Rep. 70(4) (2000), 1-40.
Q. Zhou, Y. Ren and W. Wu: On solutions to backward stochastic partial differential equations for Lévy processes. Journal of Computational and Applied Mathematics 235 (2011), 5411-5421.
[]
[ "ON A THETA LIFT RELATED TO THE SHINTANI LIFT", "ON A THETA LIFT RELATED TO THE SHINTANI LIFT" ]
[ "Claudia Alfes-Neumann ", "Markus Schwagenscheidt " ]
[]
[]
We study a certain theta lift which maps weight −2k to weight 1/2 − k harmonic weak Maass forms for k ∈ Z, k ≥ 0, and which is closely related to the classical Shintani lift from weight 2k + 2 to weight k + 3/2 cusp forms. We compute the Fourier expansion of the theta lift and show that it involves twisted traces of CM values and geodesic cycle integrals of the input function. As an application, we obtain a criterion for the non-vanishing of the central L-value of an integral weight newform G in terms of the holomorphicity of the theta lift of a certain harmonic weak Maass form associated to G. Moreover, we derive interesting identities between cycle integrals of different kinds of modular forms.
10.1016/j.aim.2018.02.015
[ "https://arxiv.org/pdf/1605.07054v2.pdf" ]
52,993,736
1605.07054
3fd4fd6daf518eb14a251c38f9395308df29744c
ON A THETA LIFT RELATED TO THE SHINTANI LIFT
Claudia Alfes-Neumann, Markus Schwagenscheidt
arXiv:1605.07054v2 [math.NT], 31 May 2016

Abstract. We study a certain theta lift which maps weight −2k to weight 1/2 − k harmonic weak Maass forms for k ∈ Z, k ≥ 0, and which is closely related to the classical Shintani lift from weight 2k + 2 to weight k + 3/2 cusp forms. We compute the Fourier expansion of the theta lift and show that it involves twisted traces of CM values and geodesic cycle integrals of the input function. As an application, we obtain a criterion for the non-vanishing of the central L-value of an integral weight newform G in terms of the holomorphicity of the theta lift of a certain harmonic weak Maass form associated to G. Moreover, we derive interesting identities between cycle integrals of different kinds of modular forms.

Introduction

A famous result of Zagier [Zag02] states that the twisted traces of singular moduli, i.e. the values of the modular j-invariant at quadratic irrationalities in the upper half-plane, occur as the Fourier coefficients of weakly holomorphic modular forms of weight 1/2 and 3/2. Bruinier and Funke [BF06] showed that the generating series of the traces of singular moduli can be obtained as the image of a certain theta lift of J = j − 744. Using this approach, new proofs of Zagier's results, including the modularity of generating series of twisted traces of singular moduli, and generalizations to higher weight and level have been studied in several recent works, e.g. [AE13], [BO13], [Alf14], [AGOR15]. For example, in [AGOR15] a twisted theta lift from weight 0 to weight 1/2 harmonic weak Maass forms was defined which made it possible to recover Zagier's generating series of weight 1/2 as a theta lift. Further, it turned out that this lift is closely related to the Shintani lift via the ξ-operator on harmonic weak Maass forms.
The classical Shintani lift establishes a connection between integral and half-integral modular forms [Shi75] and is an indispensable tool in the theory of modular forms. Using this relationship between integral and half-integral weight modular forms a number of remarkable theorems were proven, for example the famous theorem of Waldspurger [Wal81], which asserts that the central critical value of the twisted L-function of an even weight newform is proportional to the square of a coefficient of a half-integral weight modular form. In [AGOR15], the connection between the two lifts led to a more explicit version of a theorem of Bruinier and Ono [BO10] which states that the vanishing of the central derivative of the L-series of an elliptic curve is determined by the algebraicity of a Fourier coefficient of the holomorphic part of a certain harmonic weak Maass form of weight 1/2. In the present work, we study a generalization of the theta lift considered in [AGOR15], which we call the Millson theta lift. Our lift maps weight −2k to weight 1/2 − k harmonic weak Maass forms, where k ∈ Z ≥0 , and is again related to the Shintani lift via the ξ-operator. We completely determine the Fourier expansion of the Millson lift of a harmonic weak Maass form F of weight −2k, and we show that the coefficients of the holomorphic part of the lift are given by twisted traces of CM-values of the weight 0 form R k −2k F , whereas the coefficients of the non-holomorphic part are given by twisted traces of geodesic cycle integrals of the weight 2k + 2 cusp form ξ −2k F . Additionally, inspired by the relation of the Millson lift to the Shintani lift, we prove interesting identities between cycle integrals of ξ −2k F and R 2j+1 −2k F , with varying j ≥ 0, for a harmonic weak Maass form F of weight −2k. In certain cases the cycle integrals of R 2j+1 −2k F do not converge, and we propose a regularization in these cases. 
The necessary computations are quite involved due to the very general setup, but we believe that they will be very useful for the study of similar theta lifts in the future. Further, we hope that our lift can be used to prove a higher weight version of the aforementioned theorem of Bruinier and Ono [BO10] on the non-vanishing of the central values of derivatives of L-functions of even weight newforms. To illustrate our results, let us simplify the setup by restricting to modular forms for the full modular group SL 2 (Z). In the body of the paper we also treat forms for arbitrary congruence subgroups by using the theory of vector valued modular forms for the Weil representation of an even lattice of signature (1, 2). We let z = x + iy ∈ H and q = e 2πiz . Recall from [BF04] that a harmonic weak Maass form of weight k ∈ Z is a smooth function F : H → C which is invariant under the usual weight k slash operation of SL 2 (Z), which is annihilated by the weight k hyperbolic Laplace operator ∆ k , and for which there is a Fourier polynomial P F = n≤0 a + (n)q n ∈ C[q −1 ] such that F − P F is rapidly decreasing at i∞. The space of such forms is denoted by H + k . Every F ∈ H + k has a Fourier expansion consisting of a holomorphic part F + and a non-holomorphic part F − , F (z) = F + (z) + F − (z) = n≫−∞ a + (n)q n + n<0 a − (n)Γ(1 − k, 4π|n|y)q n , where Γ(s, x) = ∞ x t s−1 e −t dt is the incomplete Gamma function. Harmonic weak Maass forms of half-integral weight for Γ 0 (4) are defined analogously. Important tools in the theory of harmonic weak Maass forms are the Maass lowering and raising operators L k = −2i ∂ ∂z and R k = 2iy 2 ∂ ∂z + ky −1 , which lower or raise the weight of a real analytic modular form by 2, as well as the surjective antilinear differential operator ξ k : H + k → S 2−k defined by ξ k F (z) = 2iy k ∂ ∂z F (z). Let D ∈ Z be a discriminant. 
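The incomplete Gamma function Γ(s, x) appearing in the non-holomorphic part can be evaluated numerically for a quick check; the following stdlib sketch (not from the paper; the truncation point x + 50 and the step count are ad-hoc choices) uses a composite midpoint rule:

```python
import math

def upper_gamma(s, x, n=100000):
    # Gamma(s, x) = integral of t^(s-1) e^(-t) from x to infinity,
    # truncated at x + 50 (the neglected tail is of size ~ e^{-(x+50)})
    hi = x + 50.0
    h = (hi - x) / n
    total = 0.0
    for i in range(n):
        t = x + (i + 0.5) * h  # midpoint of the i-th subinterval
        total += t ** (s - 1) * math.exp(-t)
    return total * h
```

For s = 1 this reduces to Γ(1, x) = e^(−x), and Γ(2, x) = (x + 1)e^(−x), which gives an easy sanity check on the quadrature.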
We let Q_D be the set of integral binary quadratic forms [a, b, c] = ax^2 + bxy + cy^2 of discriminant b^2 − 4ac = D. The modular group SL_2(Z) acts on Q_D from the right, with finitely many classes if D ≠ 0. For D < 0 we can split Q_D = Q_D^+ ⊔ Q_D^− into the subsets of positive definite (a > 0) and negative definite (a < 0) forms. Further, for D < 0 the stabilizer SL_2(Z)_Q of Q ∈ Q_D in PSL_2(Z) is finite, and for D > 0 the stabilizer SL_2(Z)_Q is trivial if D is a square and infinite cyclic otherwise. Let Q = [a, b, c] ∈ Q_D. For D < 0 there is an associated CM point α_Q = (−b + i√|D|)/2a ∈ H, while for D > 0 the solutions of a|z|^2 + bℜ(z) + c = 0 define a geodesic c_Q in H, which is equipped with a certain orientation. Let ∆ ∈ Z be a fundamental discriminant (possibly 1). For k ∈ Z_{≥0} with (−1)^k ∆ < 0 the ∆-th Shintani lift of a cusp form F ∈ S_{2k+2} is (in our normalization) defined by I^{Sh}_∆(F, τ) = −|∆|^{−(k+1)/2} Σ_{d>0, (−1)^{k+1} d ≡ 0,1 (4)} Σ_{Q ∈ Q_{d|∆|}/SL_2(Z)} χ_∆(Q) C(F, Q) e^{2πidτ}, where C(F, Q) = ∫_{SL_2(Z)_Q \ c_Q} F(z) Q(z, 1)^k dz is a geodesic cycle integral of F and χ_∆(Q) = (∆/n) if (a, b, c, ∆) = 1 and Q represents n with (n, ∆) = 1, and χ_∆(Q) = 0 otherwise, is a genus character. It is well known that the Shintani lift of F is a cusp form in S_{k+3/2}(Γ_0(4)) which satisfies the Kohnen plus space condition, i.e. the d-th Fourier coefficient vanishes unless (−1)^{k+1} d ≡ 0, 1 (4). For an SL_2(Z)-invariant function F and d < 0 we define twisted traces by t^+_∆(F; d) = Σ_{Q ∈ Q^+_{d|∆|}/SL_2(Z)} χ_∆(Q) F(α_Q)/|SL_2(Z)_Q| and t^−_∆(F; d) = Σ_{Q ∈ Q^−_{d|∆|}/SL_2(Z)} χ_∆(Q) F(α_Q)/|SL_2(Z)_Q|, and for d > 0 and a function F transforming with weight 2k + 2 for SL_2(Z) we define t_∆(F; d) = Σ_{Q ∈ Q_{d|∆|}/SL_2(Z)} χ_∆(Q) C(F, Q). The ∆-th Millson theta lift I^M_∆(F, τ) of F is then defined as a suitably regularized integral of F against Θ_∆(τ, z, ψ_{M,k}), where Θ_∆(τ, z, ψ_{M,k}) is the twisted theta function associated to a certain Schwartz function ψ_{M,k}, and the integral has to be regularized in a certain way to ensure convergence. The theta function, and thus also I^M_∆(F, τ), transforms like a modular form of weight 1/2 − k in τ.
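To make the objects above concrete, here is a small illustrative Python sketch (not from the paper): it enumerates reduced positive definite forms of discriminant D < 0, computes the CM points α_Q, and evaluates the genus character χ_∆(Q) by searching for a represented odd positive n coprime to ∆ and computing the Kronecker symbol (∆/n) via the Jacobi symbol. The search box of radius 10 is an arbitrary choice.

```python
import math

def reduced_forms(D):
    # Reduced positive definite forms [a, b, c] with b^2 - 4ac = D < 0:
    # |b| <= a <= c, and b >= 0 whenever |b| = a or a = c.
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    for a in range(1, math.isqrt(abs(D) // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or ((abs(b) == a or a == c) and b < 0):
                continue
            forms.append((a, b, c))
    return forms

def cm_point(Q):
    # alpha_Q = (-b + i sqrt(|D|)) / (2a)
    a, b, c = Q
    return complex(-b, math.sqrt(4 * a * c - b * b)) / (2 * a)

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n; agrees with the
    # Kronecker symbol used in the genus character for such n.
    assert n > 0 and n % 2 == 1
    a %= n
    res = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                res = -res
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            res = -res
        a %= n
    return res if n == 1 else 0

def chi(Q, Delta):
    # Genus character: (Delta/n) for any represented n with (n, Delta) = 1.
    a, b, c = Q
    if math.gcd(math.gcd(a, b), math.gcd(c, Delta)) != 1:
        return 0
    for x in range(-10, 11):
        for y in range(-10, 11):
            n = a * x * x + b * x * y + c * y * y
            if n > 0 and n % 2 == 1 and math.gcd(n, Delta) == 1:
                return jacobi(Delta, n)
    return None  # no suitable representative found in the search box
```

For example, D = −4 has a single reduced form [1, 0, 1] with CM point i, while D = −23 has class number 3; the form [1, 1, −1] of discriminant 5 represents 1, so χ_5 evaluates to 1 on it.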
We are now ready to state our main result for the Millson theta lift for SL 2 (Z). Theorem 1.1. Let k ∈ Z ≥0 such that (−1) k ∆ < 0 and let F ∈ H + −2k with vanishing constant term a + (0). (1) The Millson theta lift I M ∆ (F, τ ) is a harmonic weak Maass form in H + 1/2−k (Γ 0 (4)) satisfying the Kohnen plus space condition. Further, if F is weakly holomorphic, then so is I M ∆ (F, τ ). (2) I M ∆ (F, τ ) is related to the Shintani lift of ξ −2k,z F ∈ S 2k+2 by ξ 1/2−k,τ I M ∆ (F, τ ) = − |∆| 2 I Sh ∆ (ξ −2k,z F, τ ). (3) The Fourier expansion of I M ∆ (F, τ ) is given by I M ∆ (F, τ ) = d>0 2 √ d 1 2π |∆|d k t + ∆ (R k −2k F ; d)q d − b>0 2iǫ 1 2πi|∆| k n<0 ∆ n a + (nb)(4πn) k q −|∆|b 2 − d<0 1 2(π|d|) k+1/2 |∆| k/2 t ∆ (ξ −2k F ; d)Γ( 1 2 + k, 4π|d|v)q d , where R k −2k = R −2 • R −4 • · · · • R −2k is the iterated Maass raising operator and ǫ equals 1 or i according to whether ∆ > 0 or ∆ < 0. Remark 1.2. (1) The assumption a + (0) = 0 was imposed here to simplify the exposition in the introduction and will not be used in the body of the paper. If a + (0) = 0 then I M ∆ (F, τ ) also has a constant coefficient, and for k = 0 further non-holomorphic terms appear. In fact, for k = 0 the ξ-image of I M ∆ (F, τ ) turns out to be a linear combination of unary theta series of weight 3/2. Thus, using the theta lift, one can obtain formulas for the coefficients of mock theta functions of weight 1/2 as traces of modular functions, similarly as in [Alf14]. The details will be discussed in a subsequent paper. (2) In Section 6.1 we extend the Millson theta lift to more general harmonic weak Maass forms whose non-holomorphic parts may also grow exponentially at the cusps. Further, we extend the Shintani lift to weakly holomorphic cusp forms and obtain a new proof of Theorem 1.3. (1) of [BGK15], where the authors defined a generalized Shintani lift of a weakly holomorphic cusp form F as the generating series of certain regularized cycle integrals of F . 
(3) In [BGK14], the authors studied a so-called Zagier lift, which (for level 1 and k > 1) maps weight −2k to weight 1/2 − k harmonic weak Maass forms. The proof of the modularity of this lift uses the Fourier coefficients of non-holomorphic Poincaré series together with the fact that a harmonic Maass form of negative weight is uniquely determined by its principal part. Thus their proof does not work for k = 0. In fact, the Zagier lift agrees with our lift in level 1, so our theorem generalizes Proposition 6.2 of [BGK14] to arbitrary level and to k = 0, using a very different proof. (4) Integrating Θ ∆ (τ, z, ψ M,k ) in τ against a harmonic Maass form of weight 1/2 − k yields a so-called locally harmonic Maass form of weight −2k. This lift was considered in [Höv12], [BKV13] and [Cra15], and it was shown that the resulting theta lift is related to the Shimura lift via the ξ-operator. For the proof of the theorem, we use the interpretation of the Shintani lift as a theta lift and employ various differential equations between the Millson and the Shintani theta functions. Further, we define a certain auxiliary theta lift constructed from the k = 0 Millson theta function and suitable applications of iterated Maass raising and lowering operators, and show that this auxiliary lift essentially agrees with the Millson theta lift on harmonic weak Maass forms of weight −2k. This identity of theta lifts is a bit surprising and interesting in its own right, but due to its quite technical appearance we chose not to state it in the introduction but refer the reader to Theorem 4.6. The relation between the Millson lift and the Shintani lift also yields an interesting criterion for the vanishing of the twisted L-function of a newform at the critical point. Theorem 1.3. Let F ∈ H + −2k , with vanishing constant terms at all cusps if k = 0, such that G = ξ −2k F ∈ S 2k+2 is a normalized newform.
For (−1) k ∆ < 0 the lift I M ∆ (F, τ ) is weakly holomorphic if and only if L(G, χ ∆ , k + 1) = 0. Remark 1.4. For the general result regarding forms of higher level, see Theorem 6.5. A version of this theorem for square-free level N and odd weight k has been proved in [Alf14], Theorem 1.1., using the same techniques. Further, the above theorem in the case of level 1 and k > 0 already appeared in [BGK14], Corollary 1.3, but the proof used very different arguments, i.e. the Zagier lift of non-holomorphic Poincaré series. The Fourier coefficients of the non-holomorphic part of the Millson lift in Theorem 1.1 involve cycle integrals of the cusp form ξ −2k F , which reflects the relation between the Millson and the Shintani lift on the level of Fourier expansions. On the other hand, the fact that the Millson theta lift agrees up to some constant with a theta lift of the realanalytic modular function R k −2k F , see Theorem 4.6, suggests that the Fourier coefficients of the non-holomorphic part of the Millson lift should also be expressible in terms of cycle integrals of R k −2k F . Inspired by this idea, we prove the following identities between the cycle integrals of different types of modular forms. Theorem 1.5. Let D > 0 be a discriminant which is not a square and let Q ∈ Q D . Let k ∈ Z ≥0 and F ∈ H + −2k . For j ∈ Z ≥0 we have C(R 2j+1 −2k F, Q) = 1 D k−j j!(k − j)!(2k)! k!(2k − 2j)! C(ξ −2k F, Q), where R 2j+1 −2k = R −2k+2j • · · · • R −2k is the iterated Maass raising operator. We prove this identity by a direct computation using Stokes' theorem and commutation relations for the differential operators involved. Remark 1.6. (1) It is interesting to note that the cycle integral of ξ −2k F on the righthand side does not depend on j. In particular, the cycle integrals of R 2j+1 −2k F for different choices of j are related by a very simple explicit constant. (2) The general result is given in Corollary 7.2. By plugging in special values for j, e.g. 
j = k, we obtain further interesting formulas (see Corollaries 7.3 and 7.4), which were previously given in Theorem 1.1 of [BGK14] and Theorem 1.1 of [BGK15]. The above identity gives a unified proof for these two previously known, but seemingly unrelated results. (3) We also define a regularized cycle integral C reg (R 2j+1 −2k F, Q) in the case that the discriminant of Q is a square and the associated geodesic is infinite, and derive an analog of the above theorem in this situation, see Section 7.2. The paper is organized as follows. In Section 2 we introduce the basic setup and recall the necessary facts about vector valued harmonic weak Maass forms for the Weil representation associated with an even lattice of signature (1, 2). Sections 3 and 4 are devoted to the study of the (untwisted) Millson and Shintani theta functions and the properties of the corresponding theta lifts. In particular, the relation between the Millson and the Shintani theta lift is proven in Section 4. The Fourier expansion of the untwisted Millson theta lift is computed in Section 5. The necessary calculations are quite delicate and take up a major part of this work. In Section 6 we use a method from [AE13] to twist the results of the previous sections, i.e. we derive the relation between the twisted Millson and Shintani theta functions and the Fourier expansion of the twisted lifts from the corresponding untwisted results. Finally, in Section 7 we consider identities of cycle integrals of different types of modular forms. Preliminaries For a positive rational number N we consider the rational quadratic space of signature (1, 2) given by V = { X = [[x 2 , x 1 ], [x 3 , −x 2 ]] : x 1 , x 2 , x 3 ∈ Q } with the quadratic form Q(X) = N det(X). The associated bilinear form is (X, Y ) = −N tr(XY ) for X, Y ∈ V . The group SL 2 (Q) acts as isometries on V by gX := gXg −1 .
We let D be the Grassmannian of lines in V (R) = V ⊗ R on which the quadratic form Q is positive definite, D = {z ⊂ V (R); dim(z) = 1 and Q| z > 0} , and we identify D with the complex upper half-plane H by associating to z = x + iy ∈ H the positive line generated by X 1 (z) = 1 √ 2Ny −x |z| 2 −1 x . The group SL 2 (R) acts on H by fractional linear transformations and the identification above is SL 2 (R)-equivariant, that is, gX 1 (z) = X 1 (gz) for g ∈ SL 2 (R) and z ∈ H. Let L ⊂ V be an even lattice of full rank. We write L ′ for the dual lattice of L. Let Γ be a congruence subgroup of SL 2 (Q) that takes L to itself and acts trivially on the discriminant group L ′ /L. Further, we let M = Γ \ D be the corresponding modular curve. Example 2.1. A particularly interesting lattice is given by L = b −a/N c −b : a, b, c ∈ Z . Its dual lattice is L ′ = b/2N −a/N c −b/2N : a, b, c ∈ Z . We see that L ′ /L is isomorphic to Z/2NZ with quadratic form x → −x 2 /4N. Thus the level of L is 4N. The group Γ = Γ 0 (N) acts on L by conjugation and fixes the classes of L ′ /L, and the locally symmetric space is the modular curve Y 0 (N) = Γ \ D under the identification above. For fixed m ∈ Q, the elements X = b/2N −a/N c −b/2N ∈ L ′ with Q(X) = m correspond to integral binary quadratic forms [cN, −b, a] = cNx 2 − bxy + ay 2 of discriminant −4Nm, and this identification is compatible with the corresponding actions of Γ 0 (N). Cusps. We identify the set of isotropic lines Iso (V ) in V with P 1 (Q) = Q ∪ {∞} via ψ : P 1 (Q) → Iso(V ), ψ((α : β)) = span αβ α 2 −β 2 −αβ . The map ψ is a bijection and ψ(g(α : β)) = gψ((α : β)) for g ∈ SL 2 (Q). Thus, the cusps of M (i.e. the Γ-classes of P 1 (Q)) can be identified with the Γ-classes of Iso(V ). If we set ℓ ∞ := ψ(∞), then ℓ ∞ is spanned by X ∞ = ( 0 1 0 0 ). For ℓ ∈ Iso(V ) we pick σ ℓ ∈ SL 2 (Z) such that σ ℓ ℓ ∞ = ℓ. We let Γ ℓ be the stabilizer of ℓ in Γ. 
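As a sanity check on the identification z ↦ X1(z) above, the following sketch (illustrative only; the level N = 3 and the sample points are arbitrary choices) verifies numerically that Q(X1(z)) = 1/2, i.e. (X1(z), X1(z)) = 1, and that the identification is SL2(R)-equivariant, g X1(z) g^{-1} = X1(gz):

```python
import math

def X1(z, N):
    # X1(z) = 1/(sqrt(2N) y) * [[-x, |z|^2], [-1, x]] for z = x + iy
    x, y = z.real, z.imag
    s = 1.0 / (math.sqrt(2 * N) * y)
    return [[-s * x, s * abs(z) ** 2], [-s, s * x]]

def Q(X, N):
    # quadratic form Q(X) = N det(X)
    return N * (X[0][0] * X[1][1] - X[0][1] * X[1][0])

def act(g, X):
    # g.X = g X g^{-1} for g = (a, b, c, d) with ad - bc = 1
    a, b, c, d = g
    M = [[a * X[0][0] + b * X[1][0], a * X[0][1] + b * X[1][1]],
         [c * X[0][0] + d * X[1][0], c * X[0][1] + d * X[1][1]]]
    return [[M[0][0] * d - M[0][1] * c, -M[0][0] * b + M[0][1] * a],
            [M[1][0] * d - M[1][1] * c, -M[1][0] * b + M[1][1] * a]]

def moebius(g, z):
    a, b, c, d = g
    return (a * z + b) / (c * z + d)
```

Note that Q(X1(z)) = N s^2 (|z|^2 - x^2) = N y^2 / (2N y^2) = 1/2 for every z, in line with the normalization (X1(z), X1(z)) = 1.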
Then σ −1 ℓ Γ ℓ σ ℓ is generated by 1 α ℓ 0 1 for some α ℓ ∈ Q >0 which we call the width of the cusp ℓ. For each ℓ, there is a β ℓ ∈ Q >0 such that 0 β ℓ 0 0 is a primitive element of ℓ ∞ ∩ σ −1 ℓ L. We write ε ℓ = α ℓ /β ℓ . The quantities α ℓ , β ℓ and ε ℓ only depend on the Γ-class of ℓ. We compactify the modular curve M to a compact Riemann surface M by adding a point for each cusp ℓ ∈ Γ \ Iso(V ), and we denote this point again by ℓ. Write q ℓ = exp(2πiσ −1 ℓ z/α ℓ ) for the chart around ℓ. We define D 1/T = {w ∈ C : |w| < 1 2πT } for T > 0. Note that if T is sufficiently big, then the inverse images q −1 ℓ D 1/T are disjoint in M. We define the truncated modular curve by M T = M \ ℓ∈Γ\Iso(V ) q −1 ℓ D 1/T . (2.1) 2.2. The Weil representation and harmonic weak Maass forms. We recall the basic facts about vector valued harmonic weak Maass forms for the Weil representation from [BF04]. By Mp 2 (Z) we denote the integral metaplectic group consisting of pairs (γ, φ), where γ = ( a b c d ) ∈ SL 2 (Z) and φ : H → C is a holomorphic function with φ(τ ) 2 = cτ + d. We let C[L ′ /L] be the group ring of L ′ /L, generated by the basis vectors e h for h ∈ L ′ /L and equipped with the scalar product e h , e h ′ = δ h,h ′ , which is conjugate-linear in the second variable. The Weil representation ρ L of Mp 2 (Z) is a unitary representation which is defined for the generators S = (( 0 −1 1 0 ) , √ τ ) and T = (( 1 1 0 1 ) , 1) of Mp 2 (Z) and h ∈ C[L ′ /L] by the formulas ρ L (T )e h = e(Q(h))e h , ρ L (S)e h = √ i |L ′ /L| h ′ ∈L ′ /L e(−(h ′ , h))e h ′ , where e(a) := e 2πia . The dual Weil representation will be denoted by ρ L . A twice continuously differentiable function F : H → C[L ′ /L] is called a harmonic weak Maass form of weight k ∈ 1 2 Z with respect to ρ L if it satisfies (1) ∆ k F = 0, where ∆ k = −v 2 ∂ 2 ∂u 2 + ∂ 2 ∂v 2 + ikv ∂ ∂u + i ∂ ∂v with τ = u + iv is the weight k hyperbolic Laplace operator. 
(2) F (γτ ) = φ(τ ) 2k ρ L (γ, φ)F (τ ) for all (γ, φ) ∈ Mp 2 (Z),(3) There is a Fourier polynomial P F (τ ) = h∈L ′ /L n≤0 a + (n, h)q n e h , called the principal part of F , such that F (τ ) − P F (τ ) = O(e −εv ) as v → ∞, uniformly in u, for some ε > 0. We denote the space of such functions by H + k,ρ L . Further, we let M ! k,ρ L be the subspace of weakly holomorphic modular forms (consisting of the holomorphic forms in H + k,ρ L ). A harmonic weak Maass form uniquely decomposes into a holomorphic and a nonholomorphic part F = F + + F − with Fourier expansions F + (τ ) = h∈L ′ /L n≫−∞ a + (n, h)q n e h , F − (τ ) = h∈L ′ /L n<0 a − (n, h)Γ(1 − k, 4π|m|v)q n e h , where Γ(s, x) = ∞ x t s−1 e −t dt denotes the incomplete Γ-function. The space H + k (Γ) of scalar valued harmonic weak Maass forms of weight k ∈ Z for Γ is defined analogously, with a growth condition as in (3) at each cusp. 2.3. Differential Operators. The Maass lowering and raising operators are defined by L k = −2iv 2 ∂ ∂τ and R k = 2i ∂ ∂τ + kv −1 . They lower or raise the weight of an automorphic form of weight k by 2. Moreover, these operators commute with the slash operator and they are related to the weighted Laplace operator by −∆ k = L k+2 R k + k = R k−2 L k . (2.2) This implies the commutation relations R k ∆ k = (∆ k+2 − k)R k , (2.3) ∆ k−2 L k = L k (∆ k + 2 − k). (2.4) We also define iterated versions of the lowering and raising operators by L n k = L k−2(n−1) • · · · L k−2 • L k , R n k = R k+2(n−1) • · · · • R k+2 • R k . For n = 0 we set L 0 k = R 0 k = id. Using (2.3) and (2.4) inductively one can find many interesting commutation relations between the iterated lowering and raising operators and the weighted Laplacian. We collect some identities for later use. Lemma 2.2. For k ∈ Z ≥0 and ℓ = 0, . . . , k we have ∆ −2ℓ R k−ℓ −2k = R k−ℓ −2k (∆ −2ℓ − (k − ℓ)(k + ℓ + 1)) . 
If k is even then ∆ 1/2−k L k/2 1/2 = L k/2 1/2 ∆ 1/2 + k 4 (k + 1) , ∆ 3/2+k R k/2 3/2 = R k/2 3/2 ∆ 3/2 + k 4 (k + 1) , and if k is odd we have ∆ 1/2−k L (k+1)/2 3/2 = L (k+1)/2 3/2 ∆ 3/2 + k 4 (k + 1) , ∆ 3/2+k R (k+1)/2 1/2 = R (k+1)/2 1/2 ∆ 1/2 + k 4 (k + 1) . We also require the antilinear differential operator ξ k F = v k−2 L k F (τ ) = R −k v k F (τ ) = 2iv k ∂ ∂τ F (τ ) from [BF04]. It defines a surjective map ξ k : H + k,ρ L → S 2−k,ρ L and acts on the Fourier expansion of F ∈ H + k,ρ L by ξ k F = ξ k F − = − h∈L ′ /L n>0 a − (−n, h)(4πn) 1−k q n e h . Theta Functions In this section we introduce the theta functions that we will employ as kernel functions for the lifts we investigate in this paper. As before we let L ⊆ V be an even lattice with dual lattice L ′ and we let Γ be a congruence subgroup of SL 2 (Q) which maps L to itself acts trivially on L ′ /L. For z = x + iy ∈ H the vectors X 1 (z) = 1 √ 2Ny −x x 2 + y 2 −1 x , X 2 (z) = 1 √ 2Ny x −x 2 + y 2 1 −x , X 3 (z) = 1 √ 2Ny y −2xy 0 −y , form an orthogonal basis of V (R) with (X 1 (z), X 1 (z)) = 1 and (X 2 (z), X 2 (z)) = (X 3 (z), X 3 (z)) = −1. For z ∈ H and X = ( x 2 x 1 x 3 −x 2 ) ∈ V (R) we define the quantities p z (X) = √ 2(X, X 1 (z)) = − √ N y (x 3 |z| 2 − 2x 2 x − x 1 ), Q X (z) = √ 2Ny(X, X 2 (z) + iX 3 (z)) = N(x 3 z 2 − 2x 2 z − x 1 ). Further, we let R(X, z) = 1 2 p 2 z (X) − (X, X). Note that R(X, z) is non-negative and equals 0 if and only if X ∈ RX 1 (z). For γ ∈ SL 2 (R) we have p γz (X) = p z (γ −1 X), Q X (γz) = j(γ, z) −2 Q γ −1 X (z), R(X, γz) = R(γ −1 X, z), (3.1) which can be verified by a direct calculation. Example 3.1. For the lattice L as in Example 2.1 and X = b/2N −a/N c −b/2N ∈ L ′ we have Q X (z) = cNz 2 − bz + a and −y √ Np z (X) = cN|z| 2 − bx + a, i.e. these quantities are related to CM-points and geodesics associated to the quadratic form [cN, −b, a]. 
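The dictionary of Example 3.1 between lattice vectors and binary quadratic forms can be checked directly. The sketch below (illustrative only; N = 1 and the sample values are arbitrary choices) encodes X = [[b/2N, -a/N], [c, -b/2N]] by its coordinates (x2, x1, x3), verifies Q_X(z) = cN z^2 - b z + a, and checks that R(X, z) vanishes at the CM point of the corresponding form and is positive away from it:

```python
import math

def vec(a, b, c, N):
    # X in L' corresponding to the quadratic form [cN, -b, a] (Example 3.1),
    # stored as (x2, x1, x3) with X = [[x2, x1], [x3, -x2]]
    return (b / (2 * N), -a / N, c)

def QX(X, z, N):
    # Q_X(z) = N (x3 z^2 - 2 x2 z - x1)
    x2, x1, x3 = X
    return N * (x3 * z * z - 2 * x2 * z - x1)

def p(X, z, N):
    # p_z(X) = -(sqrt(N)/y) (x3 |z|^2 - 2 x2 x - x1)
    x2, x1, x3 = X
    return -(math.sqrt(N) / z.imag) * (x3 * abs(z) ** 2 - 2 * x2 * z.real - x1)

def R(X, z, N):
    # R(X, z) = p_z(X)^2 / 2 - (X, X), with (X, X) = 2 Q(X) = 2 N det(X)
    x2, x1, x3 = X
    QofX = N * (-x2 * x2 - x1 * x3)
    return 0.5 * p(X, z, N) ** 2 - 2 * QofX
```

For the form [1, 0, 1] (discriminant −4, CM point i) both Q_X and R vanish at z = i, while R is strictly positive elsewhere.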
For τ = u + iv, z = x + iy ∈ H and k ∈ Z ≥0 we define Schwartz functions on V (R) by ψ M,k (X, τ, z) = v k+1 p z (X)Q k X (z)e −2πvR(X,z) e 2πiQ(X)τ , ϕ KM (X, τ, z) = vp 2 z (X) − 1 2π e −2πvR(X,z) e 2πiQ(X)τ , ϕ Sh,k (X, τ, z) = v 1/2 y −2k−2 Q k+1 X (z)e −2πvR(X,z) e 2πiQ(X)τ , which we call the Millson, the Kudla-Millson and the Shintani Schwartz function, respectively. They have been studied in many recent works, for example [BF04,BF06,Höv12,BFI15,Cra15]. To each Schwartz function ϕ(X, τ, z) on V (R) of this shape we associate a theta function Θ(τ, z, ϕ) = h∈L ′ /L X∈h+L ϕ(X, τ, z)e h , which defines a smooth C[L ′ /L]-valued function in τ and z. We summarize the transformation properties of the theta functions associated to our special Schwartz functions. Proposition 3.2. Let k ∈ Z, k ≥ 0. (1) The Millson theta function Θ(τ, z, ψ M,k ) has weight 1/2 − k in τ for the representation ρ L and Θ(τ, z, ψ M,k ) has weight −2k in z for Γ. (2) The Kudla-Millson theta function Θ(τ, z, ϕ KM ) has weight 3/2 in τ for the representation ρ L and is Γ-invariant in z. (3) The Shintani theta function Θ(τ, z, ϕ Sh,k ) has weight k + 3/2 in τ for the representation ρ L and Θ(τ, z, ϕ Sh,k ) has weight 2k + 2 in z for Γ. Proof. The transformation behaviour of the three theta functions is certainly well known and can be found in the literature above. For convenience, we sketch the proof: The behaviour in z easily follows from the rules (3.1), and the behaviour in τ can be determined using general results from [Bor98]: Let R 1,2 = (R 3 , (x, x) = x 2 1 − x 2 2 − x 2 3 ) be the standard quadratic space of signature (1, 2). Under the isometry V (R) ∼ = R 1,2 given by 3 i=1 α i X i (z) → (α 1 , α 2 , α 3 ) the functions p z (X) and Q X (z) correspond to the polynomials p z (x 1 , x 2 , x 3 ) = √ 2x 1 , Q (x 1 ,x 2 ,x 3 ) (z) = − √ 2Ny(x 2 − ix 3 ), on R 1,2 . They are homogeneous of degree (1, 0) and (0, 1), respectively. 
Further, the Millson, Kudla-Millson and Shintani theta functions are (up to some powers of v) the theta functions associated to the lattice L and the polynomials p z (X)Q k X (z), p 2 z (X) and y −2k−2 Q k+1 X (z) as in [Bor98], Section 4. The transformation behaviour in τ now follows from Theorem 4.1. in [Bor98]. We want to investigate the growth of the theta functions at the cusps of Γ. To describe this in a convenient way, we follow the ideas of [BFI15, Section 2.2] and define certain theta functions associated to the cusps. For an isotropic line ℓ ∈ Iso(V ) the space W ℓ = ℓ ⊥ /ℓ is a unary negative definite quadratic space with the quadratic form Q(X + ℓ) := Q(X), and K ℓ = (L ∩ ℓ ⊥ )/(L ∩ ℓ) is an even lattice with dual lattice K ′ ℓ = (L ′ ∩ ℓ ⊥ )/(L ′ ∩ ℓ). The vector X ℓ = σ ℓ .X 3 (i) is a basis of W ℓ with (X ℓ , X ℓ ) = −1, and for k ∈ Z ≥0 the polynomial p ℓ,k (X) = (− √ 2N i(X, X ℓ )) k is homogeneous of degree (0, k). We let Θ ℓ,k (τ ) be the theta function associated to K ℓ and p ℓ,k as in [Bor98], Section 4. By [Bor98, Theorem 4.1.] the complex conjugate Θ ℓ,k (τ ) is a holomorphic modular form of weight k +1/2 for the dual Weil representation of K ℓ . Using [Bru02, Lemma 5.6.], it gives rise to a holomorphic modular form of weight k + 1/2 for the dual Weil representation ρ L of L, which we also denote by Θ ℓ,k (τ ). It is a cusp form if k > 0. Proposition 3.3. Let ℓ be a cusp of Γ. (1) For the Millson theta function we have Θ(τ, σ ℓ z, ψ M,k ) = O(e −Cy 2 ), if k = 0, and j(σ ℓ ,z) 2k Θ(τ, σ ℓ z, ψ M,k ) = −y k+1 k 2πβ ℓ v k−1/2 Θ ℓ,k−1 (τ ) + O(e −Cy 2 ), if k > 0, as y → ∞, uniformly in x, for some constant C > 0. (2) For the Kudla-Millson theta function we have Θ(τ, σ ℓ z, ϕ KM ) = O(e −Cy 2 ), as y → ∞, uniformly in x, for some constant C > 0. (3) For the Shintani theta function we have j(σ ℓ , z) −2k−2 Θ(τ, σ ℓ z, ϕ Sh,k ) = y −k 1 √ N β ℓ Θ ℓ,k+1 (τ ) + O(e −Cy 2 ), as y → ∞, uniformly in x, for some constant C > 0. 
Moreover, all of the partial derivatives of the functions hidden in the O-notation are square exponentially decreasing as y → ∞. Proof. Using the rules (3.1) we can write j(σ ℓ ,z) 2k Θ(τ, σ ℓ z, ψ M,k ) = h∈(σ −1 ℓ L) ′ /(σ −1 ℓ L) X∈h+(σ −1 ℓ L) ψ M,k (X, τ, z)e h , and similarly for the other two theta functions, so we can equivalently estimate the growth of the theta functions for the lattice σ −1 ℓ L at the cusp ∞. The result now follows from Theorem 5.2 in [Bor98] applied to the lattice σ −1 ℓ L and the primitive isotropic vector 0 β ℓ 0 0 ∈ ℓ ∞ ∩ σ −1 ℓ L. The theta functions we just defined satisfy some interesting differential equations. All of the following identities can be checked on the level of Schwartz functions by a direct computation using the rules ∂ ∂z y −2 Q X (z) = −i √ N y −2 p z (X), ∂ ∂z p z (X) = − i 2 √ N y −2 Q̄ X (z), (3.2) ∂ ∂z R(X, z) = − i 2 √ N y −2 p z (X)Q̄ X (z), y −2 Q X (z)Q̄ X (z) = 2NR(X, z). Lemma 3.4. For k ≥ 0, we have ∆ 1/2−k,τ Θ(τ, z, ψ M,k ) = 1 4 ∆ −2k,z Θ(τ, z, ψ M,k ), ∆ 3/2,τ Θ(τ, z, ϕ KM ) = 1 4 ∆ 0,z Θ(τ, z, ϕ KM ), ∆ k+3/2,τ Θ(τ, z, ϕ Sh,k ) = 1 4 ∆ 2k+2,z Θ(τ, z, ϕ Sh,k ). Proof. Compare [Höv12, Proposition 3.10], [Bru02, Proposition 4.5]. The Millson and the Shintani theta function are related by the following identity. Lemma 3.5. For k ≥ 0 we have ξ 1/2−k,τ Θ(τ, z, ψ M,k ) = 1 2 √ N ξ 2k+2,z Θ(τ, z, ϕ Sh,k ). Proof. Compare [BKV13, Lemma 3.3] or [Cra15, Lemma 7.2.1]. We will also need the following relation between Millson theta functions of different weights and the Millson and Kudla-Millson theta functions: Lemma 3.6. For k ≥ 0 we have L −2k−2,z L −2k,z L 1/2−k,τ Θ(τ, z, ψ M,k ) = π N (∆ −2k−4,z − 4k − 6) Θ(τ, z, ψ M,k+2 ). Further, we have L 0,z L 3/2,τ Θ(τ, z, ϕ KM ) = − 1 2 √ N (∆ −2,z − 2)Θ(τ, z, ψ M,1 ). Proof. This can be shown by a direct calculation using the rules (3.2). Theta Lifts Let k ∈ Z ≥0 and let F ∈ H + −2k (Γ) be a harmonic weak Maass form.
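Before turning to the lifts, the differentiation rules (3.2) above can be verified symbolically. A sketch in Python/SymPy, assuming the trace-zero model with pairing (X, X) = 2N det(X); here Qbar denotes the complex conjugate of Q X (z):

```python
import sympy as sp

# Trace-zero model: X = [[x2, x1], [x3, -x2]], assumed pairing (X, X) = 2*N*det(X)
N = sp.symbols('N', positive=True)
y = sp.symbols('y', positive=True)
x, x1, x2, x3 = sp.symbols('x x1 x2 x3', real=True)
z, zbar = x + sp.I*y, x - sp.I*y

def dz(f):
    # Wirtinger derivative d/dz = (d/dx - i*d/dy)/2
    return (sp.diff(f, x) - sp.I*sp.diff(f, y)) / 2

p    = -(sp.sqrt(N)/y) * (x3*(x**2 + y**2) - 2*x2*x - x1)
Q    = N * (x3*z**2    - 2*x2*z    - x1)
Qbar = N * (x3*zbar**2 - 2*x2*zbar - x1)   # complex conjugate of Q_X(z)
R    = p**2/2 - 2*N*(-x2**2 - x1*x3)       # R(X, z) = p_z(X)^2/2 - (X, X)

assert sp.simplify(dz(Q/y**2) + sp.I*sp.sqrt(N)*p/y**2) == 0
assert sp.simplify(dz(p) + sp.I/(2*sp.sqrt(N))*Qbar/y**2) == 0
assert sp.simplify(dz(R) + sp.I/(2*sp.sqrt(N))*p*Qbar/y**2) == 0
assert sp.simplify(Q*Qbar/y**2 - 2*N*R) == 0
```

The last identity, |Q X (z)|² = 2Ny²R(X, z), in particular forces the conjugate in the second and third rule, since R(X, z) is real.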
We would like to integrate F against the Millson theta function Θ(τ, z, ψ M,k ) on M = Γ \ H to obtain a function that transforms like a modular form of weight 1/2 − k. Unfortunately, Proposition 3.3 shows that the integral does not converge for k > 0, so it has to be regularized in a suitable way. Using the regularization of [Bor98], we define the Millson theta lift by I M (F, τ ) = lim T →∞ M T F (z)Θ(τ, z, ψ M,k )y −2k dµ(z). Note that we integrate in the orthogonal variable z here. The integral in the symplectic variable τ was considered previously in [Höv12], [BKV13] and [Cra15]. It was shown that the corresponding lift has jump singularities along certain geodesics in the upper-half plane, which led to the discovery of locally harmonic Maass forms. Similarly, the theta lifts investigated in the fundamental works of Borcherds [Bor98] and Bruinier [Bru02] (which are integrals in the τ -variable) have singularities along Heegner divisors in H. In contrast to these singular lifts, it turns out that the Millson theta lift is in fact harmonic on the upper half-plane. Proposition 4.1. For k ∈ Z ≥0 the Millson theta lift I M (F, τ ) of F ∈ H + −2k (Γ) is a harmonic function that transforms like a modular form of weight 1/2 − k for ρ L . Proof. Unwinding the definition of the truncated surface M T , we see that it suffices to show that the limit lim T →∞ T 1 α ℓ 0 F ℓ (z)j(σ ℓ ,z) 2k Θ(τ, σ ℓ z, ψ M,k )y −2k−2 dxdy exists for every cusp ℓ ∈ Iso(V ), where F ℓ = F | −2k σ ℓ . For k = 0 Proposition 3.3 states that the Millson theta function is square exponentially decreasing at all cusps, so the integral actually converges without regularization in this case. For k > 0 we see by the same lemma that it suffices to show that lim T →∞ T 1 α ℓ 0 F ℓ (z)y −k−1 dxdy exists. But the integral over x picks out the constant coefficient a + ℓ (0) of F ℓ , and the limit of the remaining integral over y gives 1 k . This shows that I M (F, τ ) is well-defined. 
The transformation behaviour of the Millson theta function implies that I M (F, τ ) has weight 1/2 − k for ρ L . To prove that I M (F, τ ) is harmonic we first use Lemma 3.4 to write ∆ 1/2−k,τ I M (F, τ ) = lim T →∞ 1 4 M T F (z)∆ −2k,z Θ(τ, z, ψ M,k )y −2k dµ(z). By Lemma 4.3. in [Bru02], or more generally Stokes' Theorem, we have M T F (z)∆ −2k,z Θ(τ, z, ψ M,k )y −2k dµ(z) − M T ∆ −2k,z F (z)Θ(τ, z, ψ M,k )y −2k dµ(z) = ∂M T L −2k,z F (z)Θ(τ, z, ψ M,k )y −2k−2 dz − ∂M T F (z)L −2k,z Θ(τ, z, ψ M,k )y −2k−2 dz. Now it follows easily from the growth estimates in Proposition 3.3 that the boundary integrals vanish in the limit. Since F is harmonic, we obtain ∆ 1/2−k,τ I M (F, τ ) = 0. We define the Shintani theta lift of a cusp form G ∈ S 2k+2 (Γ) by I Sh (G, τ ) = M G(z)Θ(τ, z, ϕ Sh,k )y 2k+2 dµ(z). The rapid decay of G at the cusps and similar arguments as above show that I Sh (G, τ ) converges to a harmonic function which transforms like a modular form of weight 3/2 + k for ρ L . The Millson and the Shintani theta lifts are related by the following identity. Proposition 4.2. For F ∈ H + 0 (Γ) we have ξ 1/2,τ (I M (F, τ )) = − 1 2 √ N I Sh (ξ 0,z F, τ ) + 1 2N ℓ∈Γ\Iso(V ) ε ℓ a + ℓ (0)Θ ℓ,1 (τ ), and for k ∈ Z >0 and F ∈ H + −2k (Γ) we have ξ 1/2−k,τ (I M (F, τ )) = − 1 2 √ N I Sh (ξ −2k,z F, τ ). Proof. By Lemma 3.5 we have for k ∈ Z ≥0 ξ 1/2−k,τ (I M (F, τ )) = lim T →∞ 1 2 √ N M T F (z)ξ 2k+2,z Θ(τ, z, ϕ Sh,k )y −2k dµ(z). Using Stokes' Theorem we obtain M T F (z)ξ 2k+2,z Θ(τ, z, ϕ Sh,k )y −2k dµ(z) = − M T ξ −2k,z F (z)Θ(τ, z, ϕ Sh,k )y 2k+2 dµ(z) − ∂M T F (z)Θ(τ, z, ϕ Sh,k )dz. The limit of the first integral on the right-hand side is I Sh (ξ −2k,z F, τ ). The boundary integral can be written as − ∂M T F (z)Θ(τ, z, ϕ Sh,k )dz = ℓ∈Γ\Iso(V ) α ℓ +iT iT F ℓ (z)j(σ ℓ , z) −2k−2 Θ(τ, σ ℓ z, ϕ Sh,k )dz, where F ℓ = F | −2k σ ℓ . 
Using Proposition 3.3 and carrying out the integral we see that the right-hand side vanishes in the limit if k > 0 and equals 1 √ N ℓ∈Γ\Iso(V ) ε ℓ a + ℓ (0)Θ ℓ,1 (τ ) if k = 0. This completes the proof. We summarize the most important mapping properties of the Millson and the Shintani theta lift in the following theorem. Theorem 4.3. (1) The Millson theta lift maps H + −2k (Γ) to H + 1/2−k,ρ L for k ≥ 0. (2) The Millson theta lift maps M ! 0 (Γ) to H + 1/2,ρ L and M ! −2k (Γ) to M ! 1/2−k,ρ L for k > 0. (3) The Shintani theta lift maps S 2k+2 (Γ) to S 3/2+k,ρ L for k ≥ 0. Proof. For the first item it remains to compute the Fourier expansion of the Millson theta lift, which will be done in Section 5. The second claim then follows immediately from Proposition 4.2 if we use that ξ −2k annihilates holomorphic functions and that Θ ℓ,1 (τ ) is a cusp form of weight 3/2 for ρ L . The third item then follows by combining the first item with Proposition 4.2 and the fact that ξ −2k : H + −2k (Γ) → S 2k+2 (Γ) is surjective, see [BF04, Theorem 3.7.]. The square exponential decay of the k = 0 Millson theta function and the Kudla-Millson theta function at the cusps implies that the integral of a harmonic weak Maass form F ∈ H + 0 (Γ) against each of the two theta functions over M converges without regularization. Following an idea of [BO13], we define theta lifts of F ∈ H + −2k (Γ) by first raising it to a Γ-invariant function, integrating it against the two theta functions, and then applying suitable differential operators to make the result harmonic again. To make this precise let k ∈ Z ≥0 and F ∈ H + −2k (Γ). We define Λ M (F, τ ) = L k/2 1/2,τ M R k −2k,z F (z)Θ(τ, z, ψ M )dµ(z), if k is even, L (k+1)/2 3/2,τ M R k −2k,z F (z)Θ(τ, z, ϕ KM )dµ(z), if k is odd. Proposition 4.4. Let k ∈ Z ≥0 and F ∈ H + −2k (Γ). The theta lift Λ M (F, τ ) is a harmonic function which transforms like a modular form of weight 1/2 − k for ρ L . Proof.
By what we have said above, all integrals converge. The transformation behaviour is then obvious. To prove that the lifts are harmonic we use the relations in Lemma 2.2, Lemma 3.4 and Stokes' Theorem as above. We leave the details to the reader. Remark 4.5. Similarly, we can define a theta lift Λ M (F, τ ) =        R (k+1)/2 1/2,τ M R k −2k,z F (z)Θ(τ, z, ψ M )dµ(z), if k is odd, R k/2 3/2,τ M R k −2k,z F (z)Θ(τ, z, ϕ KM )dµ(z), if k is even. This gives a weakly holomorphic modular form of weight 3/2 + k for ρ L if k > 0 (see [Alf14,Alf15]). The case k = 0 was considered by Bruinier and Funke in [BF06]. We now want to show that the regularized Millson theta lift I M (F, τ ) defined above agrees with Λ M (F, τ ) up to some constant. This will be useful when we compute the Fourier coefficients of I M (F, τ ). Theorem 4.6. Let k ∈ Z ≥0 and F ∈ H + −2k (Γ). Then I M (F, τ ) = − π N k/2 k/2−1 j=0 (k − 2j)(k + 2j + 1) −1 Λ M (F, τ ), if k is even, and I M (F, τ ) = − 1 2 √ N − π N (k−1)/2 (k−1)/2 j=0 (k − 2j + 1)(k + 2j) −1 Λ M (F, τ ), if k is odd. Proof. The proof involves several applications of Stokes' Theorem. Using Proposition 3.3 it is straightforward but tedious to verify that all boundary integrals vanish in the limit. We leave these verifications to the suspicious reader and omit all boundary integrals to simplify the exposition. Let k be even. We consider the expression I j (F, τ ) = lim T →∞ L k/2−j 1/2−2j,τ M T R k−2j −2k,z F (z)Θ(τ, z, ψ M,2j )y −4j dµ(z) for 0 ≤ j ≤ k/2. By the same arguments as above, it converges to a harmonic function of weight 1/2 − k for ρ L which equals Λ M (F, τ ) for j = 0 and I M (F, τ ) for j = k/2. We split off the innermost lowering operator in τ and the two outermost raising operators in z and apply [Bru02, Lemma 4.2] (an instance of Stokes' Theorem) twice to see that I j (F, τ ) equals lim T →∞ L k/2−j−1 1/2−2j−2,τ M T R k−2j−2 −2k,z F (z)L −4j−2,z L −4j,z L 1/2−2j,τ Θ(τ, z, ψ M,2j )y −4j−4 dµ(z). 
By Lemma 3.6 we have L −4j−2,z L −4j,z L 1/2−2j,τ Θ(τ, z, ψ M,2j ) = π N (∆ −4j−4,z − 8j − 6)Θ(τ, z, ψ M,2j+2 ). Using [Bru02, Lemma 4.3.] we now move the Laplace operator to R k−2j−2 −2k,z F in the integral over M T . Lemma 2.2 shows that ∆ −4j−4,z R k−2j−2 −2k F = −(k − 2j − 2)(k + 2j + 3)R k−2j−2 −2k,z F, so together we obtain after a short calculation I j (F, τ ) = − π N (k − 2j)(k + 2j + 1)I j+1 (F, τ ). The formula for even k now follows inductively. For odd k we first split off the innermost lowering operator in τ and the outermost raising operator in z in Λ M (F, τ ) and apply [Bru02, Lemma 4.2.] to get Λ M (F, τ ) = − lim T →∞ L (k−1)/2 −1/2,τ M T R k−1 −2k,z F (z)L 0,z L 3/2,τ Θ(τ, z, ϕ KM )y −2 dµ(z). By Lemma 3.6 we have L 0,z L 3/2,τ Θ(τ, z, ϕ KM ) = − 1 2 √ N (∆ −2,z − 2)Θ(τ, z, ψ M,1 ). Moving the Laplace operator to R k−1 −2k F (see [Bru02, Lemma 4.3.]) and using that ∆ −2,z R k−1 −2k,z F = −(k − 1)(k + 2)R k−1 −2k,z F we arrive at Λ M (F, τ ) = − 1 2 √ N k(k + 1) lim T →∞ L (k−1)/2 −1/2,τ M T R k−1 −2k,z F (z)Θ(τ, z, ψ M,1 )y −2 dµ(z). Similarly as in the even k case we consider I j (F, τ ) = lim T →∞ L (k−1)/2−j −1/2−2j,τ M T R k−1−2j −2k,z F (z)Θ(τ, z, ψ M,2j+1 )y −4j−2 dµ(z) for 0 ≤ j ≤ (k − 1)/2. Note that − 1 2 √ N k(k + 1)I 0 = Λ M and I (k−1)/2 = I M . As above we see that I j = − π N (k − (2j + 1))(k + (2j + 1) + 1)I j+1 . The formula for odd k now follows inductively. 5.1. Heegner points. For m ∈ Q >0 and h ∈ L ′ /L we set L m,h = {X ∈ L + h : Q(X) = m}. Each X ∈ L m,h determines a Heegner point D X ∈ D. The group Γ acts on L m,h with finitely many orbits, and the stabilizer Γ X is finite. For a Γ-invariant function F on H we define the modular trace function of F by t(F ; m, h) = X∈Γ\L m,h 1 |Γ X | F (D X ), where Γ X denotes the stabilizer of X in Γ, the image of Γ in PSL 2 (Z). Moreover, we define L + m,h = X = x 2 x 1 x 3 −x 2 ∈ L m,h : x 1 ≥ 0 and L − m,h = L m,h \ L + m,h , and accordingly t + (F ; m, h) = X∈Γ\L + m,h 1 |Γ X | F (D X ) and t − (F ; m, h) = X∈Γ\L − m,h 1 |Γ X | F (D X ). 5.2. Geodesic cycle integrals.
A vector X ∈ V of negative length Q(X) = m ∈ Q <0 defines a geodesic c X in D via c X = {z ∈ D : z ⊥ X}. We write c(X) = Γ X \ c X for the image in M = Γ \ H. If |m|/N is not a square in Q, then X ⊥ is non-split over Q and the stabilizer Γ X is infinite cyclic. On the other hand, if |m|/N is a square, then X ⊥ is split and Γ X is trivial. In the first case the geodesic c(X) is closed, while in the second case c(X) is an infinite geodesic (see also [Fun02, Lemma 3.6]). In the case that c(X) is an infinite geodesic, X is orthogonal to two isotropic lines ℓ X = span(Y ) and ℓ̃ X = span(Ỹ ), with Y and Ỹ positively oriented. We call ℓ X the line associated to X if the triple (X, Y, Ỹ ) is a positively oriented basis for V , and we write X ∼ ℓ X . Note that ℓ̃ X = ℓ −X . For m ∈ Q <0 and X ∈ L m,h we define the cycle integral of a cusp form G ∈ S 2k+2 (Γ) along the geodesic c(X) by C(G, X) = c(X) G(z)Q k X (z)dz, where the orientation of c(X) is defined using an explicit parametrization as follows: Since Q(X) = m < 0, there is some matrix g ∈ SL 2 (R) such that g −1 X = |m|/N ( 1 0 0 −1 ) . Recall that the stabilizer Γ X is either trivial or infinite cyclic. In the second case, the stabilizer of g −1 X in g −1 Γg is generated by some matrix ε 0 0 ε −1 with ε > 1. We can now parametrize c(X) by g.iy with y ∈ (0, ∞) if |m|/N is a square, and y ∈ (1, ε 2 ) if |m|/N is not a square. Note that d dy g.iy = i · j(g, iy) −2 and Q X (g.iy) = j(g, iy) −2 Q g −1 .X (iy) = j(g, iy) −2 (−2 |m|Niy). Writing G g = G| 2k+2 g we find C(G, X) = (−2 |m|Ni) k i ∞ 0 G g (iy)y k dy, if |m|/N is a square and similarly (i.e. with the integral from 1 to ε 2 ) if |m|/N is not a square. Using the transformation behaviour of G it is easy to see that the right-hand side, and thus the implied orientation of c(X), is independent of the choice of the matrix g. Finally, we define the trace of G for m < 0 by t(G; m, h) = X∈Γ\L m,h C(G, X). 5.3. The complementary trace.
Let m ∈ Q <0 and assume that |m|/N is a square, i.e. m = −Nd 2 for some d ∈ Q. Let F ∈ H + −2k (Γ). For an isotropic line ℓ we let a + ℓ (w) be the coefficients of the holomorphic part F + ℓ of F ℓ = F | −2k σ ℓ . Let X ∈ L −N d 2 ,h . Recall that Γ X is trivial and X gives rise to an infinite geodesic c(X). Choosing the orientation of V appropriately, we have σ −1 ℓ X X = d 1 −2r ℓ X 0 −1 for some r ℓ X ∈ Q. Note that the geodesic c X in D is given by c X = σ ℓ X {z ∈ D : ℜ(z) = r ℓ X }. Therefore we call ℜ(c(X)) := r ℓ X the real part of c(X). We now define the complementary trace of F by t c (F ; −Nd 2 , h) = X∈Γ\L −Nd 2 ,h w<0 a + ℓ X (w)(4πw) k e 2πiℜ(c(X))w + (−1) k+1 w<0 a + ℓ −X (w)(4πw) k e 2πiℜ(c(−X))w . The Fourier expansion. We are now ready to state the Fourier expansion of the Millson theta lift. Theorem 5.1. Let k ∈ Z ≥0 and let F ∈ H + −2k (Γ). For k > 0 the h-th component of I M (F, τ ) is given by m>0 1 2 √ m √ N 4π √ m k t + (R k −2k F ; m, h) + (−1) k+1 t − (R k −2k F ; m, h) q m + d>0 1 2i √ Nd 1 4πid k t c (F ; −Nd 2 , h)q −N d 2 + (−1) k k! 2 √ Nπ k+1 ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ a + ℓ (0) α ℓ β k+1 ℓ ζ(s + 1, k ℓ /β ℓ ) + (−1) k+1 ζ(s + 1, 1 − k ℓ /β ℓ ) s=k − m<0 1 2(4π|m|) k+1/2 t(ξ −2k F ; m, h)Γ 1 2 − k, 4π|m|v q m , where ζ(s, ρ) = n≥0,n+ρ =0 (n + ρ) −s is the Hurwitz zeta function, and k ℓ ∈ Q with 0 ≤ k ℓ < β ℓ is defined by σ −1 ℓ h ℓ = 0 k ℓ 0 0 for some h ℓ ∈ ℓ ∩ (L + h). For k = 0 the h-th component of I M (F, τ ) is given by the same formula as above but with the additional non-holomorphic terms d>0 1 4id √ πN X∈Γ\L −Nd 2 ,h a + ℓ X (0) − a + ℓ −X (0) Γ 1 2 , 4πNd 2 v q −N d 2 . The Hurwitz zeta function ζ(s, ρ) is holomorphic for ℜ(s) > 1 and has a simple pole at s = 1 with residue 1 and constant term −ψ(ρ), where ψ(0) = −γ and ψ(ρ) = Γ ′ (ρ) Γ(ρ) is the digamma function if ρ > 0. Note that k ℓ = 0 is equivalent to h ∈ L. 
Thus for k > 0 we can simply plug in s = k in the third line of the theorem, and for k = 0 we get ζ(s+1, k ℓ /β ℓ ) − ζ(s+1, 1 − k ℓ /β ℓ ) s=0 = ψ(1 − k ℓ /β ℓ ) − ψ(k ℓ /β ℓ ), which equals 0 if h ∈ L and π cot(πk ℓ /β ℓ ) if h ∉ L. Note that the first three lines in Theorem 5.1 are the holomorphic part of I M (F, τ ), whereas the fourth line and the additional terms (for k = 0) are the non-holomorphic part of I M (F, τ ). The alternative form of the complementary trace given in [BF06, Proposition 4.6.] shows that the principal part of I M (F, τ ) is finite. In particular, this completes the proof of Theorem 4.3. For the sake of completeness we also state the Fourier expansion of the Shintani lift in our normalization. Theorem 5.2. Let k ∈ Z ≥0 and G ∈ S 2k+2 (Γ). Then the h-th component of I Sh (G, τ ) is given by I Sh (G, τ ) h = − √ N m>0 t(G; −m, h)q m . Proof. The Fourier expansion of I Sh (G, τ ) can be computed using calculations which are very similar to, but much easier than, those in the proof of Theorem 5.1 below. 5.5. Fourier coefficients of positive index. To compute the coefficients of positive index m > 0 we use the relation between I M (F, τ ) and Λ M (F, τ ). In the case that k is odd, these coefficients were already computed in [BO13, Alf14]. In the case that k is even, the (m, h)-th coefficient of Λ M (F, τ ) is given by C(m, h) = X∈L m,h M R k −2k,z F (z)ψ 0 M (X, τ, z)dµ(z), where ψ 0 M (X, τ, z) = vp z (X)e −2πvR(X,z) . For X = ( x 2 x 1 x 3 −x 2 ) ∈ L m,h and m > 0 we have x 3 ≠ 0 and −2πvR(X, z) = 2πv(X, X) − πv ( (N(x 3 x − x 2 ) 2 + Q(X))/( √ N x 3 y) + √ N x 3 y ) 2 , which implies that ψ 0 M (X, τ, z) is of square-exponential decay in all directions of H. Thus the integral in C(m, h) actually converges without regularization. By the usual unfolding argument we obtain C(m, h) = X∈Γ\L + m,h ∪L − m,h 1 Γ X M R k −2k,z F (z)ψ 0 M (X, τ, z)dµ(z) = X∈Γ\L + m,h 1 Γ X M R k −2k,z F (z)ψ 0 M (X, τ, z)dµ(z) − X∈Γ\L − m,h 1 Γ −X M R k −2k,z F (z)ψ 0 M (−X, τ, z)dµ(z).
Following Katok and Sarnak [KS93] we rewrite this as an integral over SL 2 (R) and obtain that C(m, h) equals SL 2 (R) R k −2k,z F (gi)    X∈Γ\L + m,h 1 Γ X ψ 0 M (X, τ, gi) − X∈Γ\L − m,h 1 Γ −X ψ 0 M (−X, τ, gi)    dg. Here, we normalize the Haar measure such that the maximal compact open subgroup has volume 1. Since the group SL 2 (R) acts transitively on L + m,h , there is a g 1 ∈ SL 2 (R) such that g −1 1 .X = √ 2mX 1 (i) for X ∈ L + m,h . Also, there is a g 1 ∈ SL 2 (R) such that g −1 1 .(−X) = √ 2mX 1 (i) for X ∈ L − m,h . We then have C(m, h) = X∈Γ\L + m,h 1 Γ X SL 2 (R) R k −2k,z F (g 1 gi)ψ 0 M √ 2mg −1 .X 1 (i), τ, i dg − X∈Γ\L − m,h 1 Γ −X SL 2 (R) R k −2k,z F (g 1 gi)ψ 0 M √ 2mg −1 .X 1 (i), τ, i dg. Then, g 1 i is the Heegner point corresponding to D X . Using the Cartan decomposition of SL 2 (R) we find C(m, h) = X∈Γ\L + m,h 1 Γ X R k −2k,z F (D X )Y c ( √ m) − X∈Γ\L − m,h 1 Γ −X R k −2k,z F (D −X )Y c ( √ m), with Y c (t) = 4πv ∞ 1 ψ 0 M ( √ 2tα(a) −1 .X 1 (i), τ, i)ω c (α(a)) a 2 − a −2 2 da a . Here, ω c (α(a)) = ω c a 2 +a −2 2 is the spherical function of eigenvalue c = −k(k + 1) given by the Legendre polynomial P k (x) and α(a) = a 0 0 a −1 . By substituting a = e r/2 we obtain Y c (t) = 4πvt ∞ 0 cosh(r) sinh(r)P k (cosh(r))e −4πvt 2 sinh(r) 2 dr. Setting x = sinh(r) 2 we get Y c (t) = 2πvt ∞ 0 P k ( √ 1 + x)e −4πvt 2 x dx. This is a Laplace transformation computed in equation (7) on page 180 in [EMOT54] and we obtain Y c ( √ m)e 2πimτ = 1 2 √ m Wk 2 + 3 4 , 1 2 (4πmv)e(mx), where W s,k (y) = y −k/2 W k/2,s−1/2 (y) (y > 0) is the W-Whittaker function. Using (13.1.33) and (13.4.23) in [AS84] it is easy to show that L k/2 1/2 Wk 2 + 3 4 , 1 2 (4πmv)e(mx) = 1 4πm k/2 k/2−1 j=0 k + 1 2 + j j − k 2 Wk 2 + 3 4 , 1 2 −k (4πmv)e(mx) = 1 4πm k/2 k/2−1 j=0 k + 1 2 + j j − k 2 e 2πimτ . Therefore, we have that C(m, h) = 1 2 √ m 1 4πm k/2 k/2−1 j=0 k + 1 2 + j j − k 2 t + (F ; m, h) − t − (F ; m, h) . 
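The Laplace transform step above (for even k) can be spot-checked numerically. A sketch with mpmath; the values of k, m and v are illustrative:

```python
from mpmath import mp, mpf, pi, exp, sqrt, quad, legendre, whitw, inf

mp.dps = 30
k, m, v = 2, mpf(3)/2, mpf('0.7')   # illustrative even weight and parameters
y = 4*pi*m*v

# Y_c(sqrt(m)) * e^{-2*pi*m*v}, computed from the Legendre-polynomial integral
lhs = 2*pi*v*sqrt(m) * quad(lambda t: legendre(k, sqrt(1 + t)) * exp(-y*t), [0, inf]) * exp(-y/2)
# the claimed Whittaker value (1/(2*sqrt(m))) * y^{-1/4} * W_{1/4, k/2+1/4}(y)
rhs = 1/(2*sqrt(m)) * y**(-mpf(1)/4) * whitw(mpf(1)/4, k/mpf(2) + mpf(1)/4, y)
assert abs(lhs - rhs) < mpf('1e-20')
```

For k = 0 both sides reduce to 1/(2√m), since W 1/4,1/4 (y) = y^{1/4} e^{−y/2}.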
Combining this with Theorem 4.6 we obtain the formula for the coefficients of positive index. 5.6. Fourier coefficients of negative index. For m < 0 the (m, h)-th coefficient of I M (F, τ ) is given by C(m, h) = X∈Γ\L m,h lim T →∞ M T F (z) γ∈Γ X \Γ ψ 0 M,k (γX, τ, z)y −2k dµ(z), where ψ 0 M,k (X, τ, z) = v k+1 p z (X)Q k X (z)e −2πvR(X,z) . We compute the individual summands for fixed X ∈ L m,h . The computation follows arguments similar to those in the proof of Theorem 4.5 in [BF06]. First, a short calculation using the rules (3.2) shows that the function (5.1) η(X, τ, z) = C k v k+1 Q −k−1 X (z) ∂ k ∂v k v −1 e −2πvR(X,z) , C k = √ N(2N) k (−2π) k+1 , satisfies ξ 2k+2,z η(X, τ, z) = ψ 0 M,k (X, τ, z). Using Stokes' Theorem in the form given in [BKV13, Lemma 2.1.], we obtain lim T →∞ M T F (z) γ∈Γ X \Γ ψ 0 M,k (γX, τ, z)y −2k dµ(z) (5.2) = − lim T →∞ M T ξ −2k,z F (z) γ∈Γ X \Γ η(γX, τ, z)y 2k+2 dµ(z) (5.3) − lim T →∞ ∂M T F (z) γ∈Γ X \Γ η(γX, τ, z)dz. (5.4) Since ξ −2k,z F is a cusp form, we can write the limit of the first integral on the right-hand side as an integral over M. 5.6.1. The integral over M. We first compute the complex conjugate of the integral over M on the right-hand side. Since Q(X) = m < 0 we can find some matrix g ∈ SL 2 (R) such that X ′ := g −1 .X = |m| N 1 0 0 −1 . Replacing z by gz and using the unfolding argument, we find − M ξ −2k,z F (z) γ∈Γ X \Γ η(γX, τ, z)y 2k+2 dµ(z) = − Γ X ′ \H ξ −2k,z F g (z)η(X ′ , τ, z)y 2k+2 dµ(z), where F g = F | −2k g. If |m|/N is not a square then Γ X is infinite cyclic and Γ X ′ = g −1 Γ X g = ± ε 0 0 ε −1 n : n ∈ Z for some ε > 1. On the other hand, if |m|/N is a square then Γ X ′ is trivial, so Γ X ′ \ H = H. Here we only consider the non-square case since the other case is very similar. As a fundamental domain for Γ X ′ \ H we can take the horizontal strip {z ∈ H : 1 ≤ y < ε 2 }.
Using the explicit formula η(X ′ , τ, z) = C k v k+1 −2 |m|Nz −k−1 ∂ k ∂v k v −1 e −4π|m|v x 2 y 2 +1 , (5.5) and replacing x y by t in the integral over x, we find that the complex conjugate of (5.3) equals −C k v k+1 (−2 |m|N) k+1 ∂ k ∂v k v −1 ∞ −∞ ε 2 1 (t + i) k+1 y k ξ −2k,z F g (y(t + i))dy e −4π|m|v(t 2 +1) (t 2 + 1) k+1 dt. The inner integral is the contour integral of the holomorphic function z k ξ −2k,z F g (z) along the line y(t + i), y ∈ (1, ε 2 ). Using ξ −2k,z F g (ε 2 z) = ε −2k−2 ξ −2k,z F g (z) it is easily seen by Cauchy's Theorem that the inner integral does in fact not depend on t. Thus the double integral simplifies to −C k v k+1 i k+1 (−2 |m|N) k+1 ε 2 1 ξ −2k,z F g (iy)y k dy ∂ k ∂v k v −1 ∞ −∞ e −4π|m|v(t 2 +1) (t 2 + 1) k+1 dt. It remains to compute the derivative of the last integral. If we replace t 2 by u we see that the integral is equal to ∞ 0 u −1/2 (u + 1) −k−1 e −4π|m|v(u+1) du = Γ( 1 2 )e −4π|m|v U 1 2 , 1 2 − k, 4π|m|v with Kummer's function U(a, b, z), see [AS84, 13.2.5]. The derivative will be computed in the following, slightly more general lemma. Lemma 5.3. For k, ℓ ∈ Z, k ≥ 0, we have ∂ k ∂v k v −1−ℓ e −v U 1 2 , 1 2 − k − ℓ, v = (−1) k v −1−k−ℓ e −v U 1 2 − k, 1 2 − k − ℓ, v . Proof. By induction on k. The case k = 0 is clear. Suppose that the claim holds for some fixed k and all ℓ. Computing the innermost derivative by the product rule and using the recurrence relation [AS84,13.4.25] on the U ′ summand we get ∂ k+1 ∂v k+1 v −1−ℓ e −v U 1 2 , 1 2 − (k + 1) − ℓ, v = − ∂ k ∂v k (1 + ℓ)v −1−(ℓ+1) e −v U 1 2 , 1 2 − k − (ℓ + 1), v + v −1−ℓ e −v U 1 2 , 1 2 − k − ℓ, v . If we apply the induction hypothesis on both summands, for ℓ+1 and ℓ, the last line equals [AS84,13.4.18] is the same as (−1) k+1 v −1−(k+1)−ℓ e −v (1 + ℓ)U 1 2 − k, 1 2 − k − (ℓ + 1), v + vU 1 2 − k, 1 2 − k − ℓ, v , which by(−1) k+1 v −1−(k+1)−ℓ e −v U 1 2 − (k + 1), 1 2 − (k + 1) − ℓ, v . Since ℓ was arbitrary, this completes the induction. 
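Lemma 5.3 can also be confirmed numerically, for instance with mpmath (the parameter values are illustrative; the k-th derivative is computed by high-precision numerical differentiation):

```python
from mpmath import mp, mpf, exp, diff, hyperu

mp.dps = 40

def lemma53(k, ell, v):
    # |d^k/dv^k [v^{-1-ell} e^{-v} U(1/2, 1/2-k-ell, v)]
    #  - (-1)^k v^{-1-k-ell} e^{-v} U(1/2-k, 1/2-k-ell, v)|
    lhs = diff(lambda t: t**(-1 - ell) * exp(-t) * hyperu(mpf(1)/2, mpf(1)/2 - k - ell, t), v, k)
    rhs = (-1)**k * v**(-1 - k - ell) * exp(-v) * hyperu(mpf(1)/2 - k, mpf(1)/2 - k - ell, v)
    return abs(lhs - rhs)

assert lemma53(1, 0, mpf('1.3')) < mpf('1e-20')
assert lemma53(2, 1, mpf('0.8')) < mpf('1e-20')
```

For ℓ = 0 the right-hand side matches the classical identity Γ(a, x) = e^{−x} U(1 − a, 1 − a, x) used in the next step.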
For ℓ = 0 the lemma gives v k+1 ∂ k ∂v k v −1 Γ( 1 2 )e −4π|m|v U 1 2 , 1 2 − k, 4π|m|v = (−1) k √ πe −4π|m|v U 1 2 − k, 1 2 − k, 4π|m|v . By [AS64, 13.6.28.], the right-hand side is (−1) k √ πΓ 1 2 + k, 4π|m|v . If we put everything together and recall the definition of the cycle integral of C(ξ −2k,z F, X), we see that (5.3) equals − 1 2(4π|m|) k+1/2 C(ξ −2k,z F, X)Γ( 1 2 + k, 4π|m|v). 5.6.2. The boundary integral. We now consider the limit of the boundary integral in (5.4). By the definition of the truncated curve M T we find − ∂M T F (z) γ∈Γ X \Γ η(γX, τ, z)dz = ℓ∈Γ\Iso(V ) α ℓ +iT z=iT F ℓ (z) γ∈Γ X \Γ η(σ −1 ℓ γX, τ, z)dz, where F ℓ = F | −2k σ ℓ . As in the proof of Lemma 5.2 in [BF06] we see that for each isotropic line ℓ the integral vanishes in the limit unless X is orthogonal to ℓ and γ ∈ Γ ℓ , which can only happen if |m|/N is a square and ℓ = ℓ X or ℓ = ℓ −X . In particular, if |m|/N is not a square, then the whole boundary integral vanishes. On the other hand, if |m|/N is a square, we obtain − ∂M T F (z) γ∈Γ X \Γ η(γX, τ, z)dz = α ℓ X +iT z=iT F ℓ X (z) γ∈Γ ℓ X η(σ −1 ℓ X γX, τ, z)dz, (5.6) + α ℓ −X +iT z=iT F ℓ −X (z) γ∈Γ ℓ −X η(σ −1 ℓ −X γX, τ, z)dz. (5.7) We only compute the first integral on the right-hand side since the second one can be computed in the same way if we first write η(σ −1 ℓ −X γX, τ, z) = (−1) k+1 η(σ −1 ℓ −X γ(−X), τ, z). Let ℓ = ℓ X for brevity. Choosing the orientation of V appropriately, we can assume that X ′ := σ −1 ℓ .X = |m| N 1 −2r ℓ 0 −1 for some r ℓ ∈ Q. Then the first summand in (5.6) equals α ℓ +iT z=iT F ℓ (z) γ∈σ −1 ℓ Γ ℓ σ ℓ η(γX ′ , τ, z)dz. Recall that σ −1 ℓ Γ ℓ σ ℓ consists of the matrices 1 α ℓ n 0 1 with n ∈ Z. Using the definition of η, we see that the first summand in (5.6) equals α ℓ +iT z=iT F ℓ (z) n∈Z η |m| N 1 2(α ℓ n − r ℓ ) 0 −1 , τ, z dz = C k v k+1 (−2 |m|N ) k+1 ∂ k ∂v k v −1 e −4π|m|v α ℓ +iT z=iT F ℓ (z) n∈Z (z + α ℓ n − r ℓ ) −k−1 e −4π|m|v (x+α ℓ n−r ℓ ) 2 y 2 dz. 
For a function g(t) on R we let ĝ(w) = ∞ −∞ g(t)e 2πitw dt be its Fourier transform. Using Poisson summation we can rewrite the inner sum as n∈Z (z + α ℓ n − r ℓ ) −k−1 e −4π|m|v (x+α ℓ n−r ℓ ) 2 y 2 = 1 α ℓ w∈ 1 α ℓ Z e −2πiw(x−r ℓ ) ∞ −∞ (t + iy) −k−1 e −4π|m|v t 2 y 2 e 2πiwt dt, where we replaced t = (x + α ℓ n − r ℓ ). The required Fourier transform is computed in the next lemma. We let a = 2 π|m|v/y and b = y for brevity. Lemma 5.4. For a, b ≠ 0 and k ∈ Z, k ≥ 0, the Fourier transform of h k (t) = (t + ib) −k−1 e −a 2 t 2 is given by h k (w) = − i k+1 k! πe a 2 b 2 e 2πbw erfc(ab + πw/a) k j=0 k j (2πw) k−j (−ia) j H j (iab) + e −(ab+πw/a) 2 2 √ π k j=1 k j (2πw) k−j (−a) j j−1 ℓ=0 j ℓ i ℓ H ℓ (iab)H j−ℓ−1 (ab + πw/a) , where erfc(x) = 2 √ π ∞ x e −u 2 du is the standard complementary error function and H n (x) = (−1) n e x 2 d n dx n e −x 2 is the n-th Hermite polynomial. Proof. Since (t + ib) −k−1 e −a 2 t 2 = i k k! ∂ k ∂b k (t + ib) −1 e −a 2 t 2 , the formula for h k follows from the one for h 0 and Leibniz's rule. Thus it suffices to prove that the Fourier transform of h 0 (t) = (t + ib) −1 e −a 2 t 2 = (t − ib) e −a 2 t 2 t 2 + b 2 is given by h 0 (w) = −iπe a 2 b 2 e 2πbw erfc(ab + πw/a). Using the well known facts that the Fourier transforms of e −a 2 t 2 and 1 t 2 +b 2 are given by √ π a e −π 2 w 2 /a 2 and π b e −2πb|w| , respectively, and that the Fourier transform of a product of two functions is the convolution of the individual transforms, we see that the Fourier transform of f (t) = e −a 2 t 2 t 2 +b 2 is given by f (w) = π 3/2 ab ∞ −∞ e −π 2 x 2 /a 2 e −2πb|w−x| dx = π 3/2 ab e 2πbw ∞ w e −π 2 x 2 /a 2 e −2πbx dx + π 3/2 ab e −2πbw ∞ −w e −π 2 x 2 /a 2 e −2πbx dx = π 2b e a 2 b 2 e 2πbw erfc(ab + πw/a) + e −2πbw erfc(ab − πw/a) . Since the Fourier transform of tf (t) is given by − i 2π ∂ ∂w f (w). Using h 0 = (tf ) − ib f we get the stated formula. Let a + ℓ (w) and a − ℓ (w) denote the Fourier coefficients of F ℓ .
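The k = 0 case of Lemma 5.4 can be spot-checked numerically with mpmath (the values of a, b, w are illustrative):

```python
from mpmath import mp, mpf, mpc, pi, exp, erfc, quad, inf

mp.dps = 25
I = mpc(0, 1)
a, b, w = mpf(1), mpf('0.7'), mpf('0.25')

# Fourier transform of h_0(t) = (t + ib)^{-1} e^{-a^2 t^2} with kernel e^{2*pi*i*t*w}
lhs = quad(lambda t: exp(-a**2 * t**2) / (t + I*b) * exp(2*pi*I*t*w), [-inf, inf])
rhs = -I*pi * exp(a**2 * b**2) * exp(2*pi*b*w) * erfc(a*b + pi*w/a)
assert abs(lhs - rhs) < mpf('1e-15')
```

As w → −∞ the right-hand side tends to the residue contribution −2πi e^{a²b²} e^{2πbw} of the pole at t = −ib, consistent with closing the contour in the lower half-plane.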
Using the above lemma with a = 2 π|m|v/y and b = y, we find that the right-hand side of (5.6) is equal to −C k v k+1 i k+1 π (−2 |m|N) k+1 k! ∂ k ∂v k v −1 lim T →∞ w∈ 1 α ℓ Z (a + ℓ (w) + a − ℓ (w)Γ(1 + 2k, 4π|w|T ))e 2πir ℓ w × erfc 2 π|m|v + T √ πw/(2 |m|v) k j=0 k j (2πw) k−j −2i π|m|v/T j H j 2i π|m|v + e − 2 √ π|m|v+T √ πw/(2 √ |m|v) 2 2 √ π k j=1 k j (2πw) k−j −2 π|m|v j j−1 ℓ=0 j ℓ i ℓ H ℓ 2i π|m|v H j−ℓ−1 2 π|m|v + T √ πw/(2 |m|v) . Note that erfc(x) = O(e −x 2 ) as x → +∞ and lim x→−∞ erfc(x) = 2. Further, the incomplete Gamma function is of linear exponential growth. For k = 0 the last three lines disappear, and the summands for w > 0 vanish as T → ∞. Thus all that remains in the limit is − i 2 |m| w<0 a + ℓ (w)e 2πir ℓ w − i 4 |m| a + ℓ (0) erfc(2 π|m|v), in this case. Note that √ π erfc(2 π|m|v) = Γ( 1 2 , 4π|m|v). For k > 0 all summands for w ≥ 0 vanish in the limit. Further, the summands for 1 ≤ j ≤ k in the third row, and the two last rows vanish as T → ∞. Thus we are left with (−i) k+1 2 |m| √ N 4π |m| k w<0 a + ℓ (w)(4πw) k e 2πir ℓ w , if k > 0. Here we used v k+1 ∂ k ∂v k v −1 = (−1) k k!. 5.7. Fourier coefficients of index 0. We now want to compute C(0, h) = lim T →∞ M T F (z) X∈L 0,h ψ 0 M,k (X, τ, z)y −2k dµ(z), where L 0,h = {X ∈ L + h : Q(X) = 0}. Note that the sum over X is now infinite. Further, we have ψ 0 M,k (0, τ, z) = 0 so we can leave out the summand for X = 0. The computation for Q(X) = 0 is quite similar to the one for Q(X) < 0 above, so we skip some arguments. Using the function η(X, τ, z) defined in (5.1) and Stokes' Theorem we get C(0, h) = − lim T →∞ M T ξ −2k,z F (z) X∈L 0,h X =0 η(X, τ, z)y 2k+2 dµ(z) − lim T →∞ ∂M T F (z) X∈L 0,h X =0 η(X, τ, z)dz. Since ξ −2k,z F is a cusp form, we can write the first integral on the right-hand side as an integral over M. For each isotropic line ℓ ∈ Iso(V ) we choose a positively oriented primitive vector X ℓ ∈ ℓ∩L. 
If ℓ∩(L+h) = ∅ we can fix some vector h ℓ ∈ ℓ∩(L+h) and write ℓ∩(L+h) = ZX ℓ +h ℓ . Note that σ −1 ℓ (nX ℓ + h ℓ ) = 0 nβ ℓ +k ℓ 0 0 for some k ℓ ∈ Q. We now parametrize the set L 0,h \ {0} by the points nX ℓ + h ℓ , where ℓ runs through all isotropic lines with ℓ ∩ (L + h) = ∅ and n runs through Z such that nβ ℓ + k ℓ = 0. 5.7.1. The integral over M. Using the above parametrization for L 0,h \ {0} the integral over M in C(0, h) becomes − ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ M ξ −2k,z F (z) n∈Z nβ ℓ +k ℓ =0 γ∈Γ ℓ \Γ η(γ(nX ℓ + h ℓ ), τ, z)y 2k+2 dµ(z). Replacing z by σ ℓ z and using the unfolding argument, we get − ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ ∞ 0 α ℓ 0 ξ −2k,z F ℓ (z) n∈Z nβ ℓ +k ℓ =0 η 0 nβ ℓ + k ℓ 0 0 , τ, z y 2k+2 dx dy y 2 where F ℓ = F | −2k σ ℓ . Explicitly, we have η 0 nβ ℓ + k ℓ 0 0 , τ, z = C k v k+1 (−N(nβ ℓ + k ℓ )) −k−1 ∂ k ∂v k v −1 e −πvN (nβ ℓ +k ℓ ) 2 y 2 , (5.8) which is independent of x. Therefore the integral over x picks out the constant coefficient of ξ −2k,z F ℓ , which is 0 since ξ −2k,z F ℓ is a cusp form. Thus the integral over M vanishes. 5.7.2. The boundary integral. Plugging in the definition of the truncated curve (2.1), the boundary integral is given by lim T →∞ ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ ℓ ′ ∈Γ\Iso(V ) α ℓ ′ +iT z=iT F ℓ ′ (z) n∈Z nβ ℓ +k ℓ =0 γ∈Γ ℓ \Γ η σ −1 ℓ ′ γσ ℓ 0 nβ ℓ + k ℓ 0 0 , τ, z dz. It can be seen as in the proof of Lemma 5.2 in [BF06] that in the limit only the contributions for ℓ ′ = ℓ and γ ∈ Γ ℓ remain, so we get lim T →∞ ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ α ℓ +iT z=iT F ℓ (z) n∈Z nβ ℓ +k ℓ =0 η 0 nβ ℓ + k ℓ 0 0 , τ, z dz. Using the explicit form (5.8) of η and carrying out the integral this becomes C k v k+1 (−N) k+1 ∂ k ∂v k v −1 ℓ∈Γ\Iso(V ) ℓ∩(L+h) =∅ α ℓ a + ℓ (0) lim T →∞ n∈Z nβ ℓ +k ℓ =0 (nβ ℓ + k ℓ ) −k−1 e −πvN (nβ ℓ +k ℓ ) 2 T 2 . 
If $k_\ell/\beta_\ell \in \mathbb{Z}$, we can shift the summation index by $k_\ell/\beta_\ell$ and see that the terms with $n$ and $-n$ cancel if $k$ is even and add up if $k$ is odd, so in this case the limit of the sum over $n$ is $0$ if $k$ is even, and $2\beta_\ell^{-k-1}\zeta(k+1)$ if $k$ is odd. On the other hand, $k_\ell/\beta_\ell \in \mathbb{Z}$ is only possible if $h_\ell$ is an integral multiple of $X_\ell$ and hence in $L$, i.e. this only happens for $h = 0 \bmod L$. Now let $h \neq 0 \bmod L$ and thus $k_\ell/\beta_\ell \notin \mathbb{Z}$. For $k > 0$ we can interchange the sum and the limit by the dominated convergence theorem. Splitting the sum into $n \geq 0$ and $n < 0$ and replacing $n$ by $1-n$ in the second part, we obtain
\[
\lim_{T\to\infty}\sum_{n\in\mathbb{Z}} (n\beta_\ell+k_\ell)^{-k-1}\, e^{-\pi v N\frac{(n\beta_\ell+k_\ell)^2}{T^2}} = \beta_\ell^{-k-1}\Big(\zeta(k+1, k_\ell/\beta_\ell) + (-1)^{k+1}\zeta(k+1, 1-k_\ell/\beta_\ell)\Big),
\]
where $\zeta(s,\rho) = \sum_{n\geq 0}(n+\rho)^{-s}$ denotes the Hurwitz zeta function. For $k = 0$ we first reorder the sum as
\[
\sum_{n\in\mathbb{Z}} (n\beta_\ell+k_\ell)^{-1}\, e^{-\pi N v\frac{(n\beta_\ell+k_\ell)^2}{T^2}} = k_\ell^{-1}e^{-\pi v N\frac{k_\ell^2}{T^2}} + \beta_\ell^{-1}\sum_{n>0}\Big((n+k_\ell/\beta_\ell)^{-1}e^{-\pi v N\frac{(n\beta_\ell+k_\ell)^2}{T^2}} + (-n+k_\ell/\beta_\ell)^{-1}e^{-\pi v N\frac{(-n\beta_\ell+k_\ell)^2}{T^2}}\Big).
\]
Now using dominated convergence again, this goes to
\[
\beta_\ell^{-1}\Big(\sum_{n>0}\big((n+k_\ell/\beta_\ell)^{-1} + (-n+k_\ell/\beta_\ell)^{-1}\big) + (k_\ell/\beta_\ell)^{-1}\Big) = \beta_\ell^{-1}\,\pi\cot(\pi k_\ell/\beta_\ell)
\]
as $T\to\infty$. Note that
\[
\frac{C_k v^{k+1}}{(-N)^{k+1}}\frac{\partial^k}{\partial v^k}v^{-1} = \frac{(-1)^k k!}{2\sqrt{N}\,\pi^{k+1}}.
\]
This completes the calculation of $C(0,h)$.

6. The twisted Millson theta lift

We now explain how to obtain twisted versions of the two theta lifts and how this leads to a generalization of results by Zagier in [Zag02]. Throughout this section we let
\[
L = \left\{\begin{pmatrix} b & -a/N \\ c & -b\end{pmatrix} : a,b,c\in\mathbb{Z}\right\}
\]
be the lattice given in Example 2.1, with dual lattice
\[
L' = \left\{\begin{pmatrix} b/2N & -a/N \\ c & -b/2N\end{pmatrix} : a,b,c\in\mathbb{Z}\right\},
\]
and we let $\Gamma = \Gamma_0(N)$. Note that $\Gamma$ takes $L$ to itself and acts trivially on $L'/L$. From now on we let $\Delta \in \mathbb{Z}$ be a fundamental discriminant and $r \in \mathbb{Z}$ such that $\Delta \equiv r^2 \pmod{4N}$. We consider the rescaled lattice $\Delta L$ together with the quadratic form $Q_\Delta(X) := \frac{1}{|\Delta|}Q(X)$.
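The cotangent evaluation in the $k = 0$ case above rests on the classical partial fraction expansion of $\pi\cot(\pi\rho)$. A quick numerical check (not part of the original text; ρ = 0.3 is an arbitrary non-integral value standing in for $k_\ell/\beta_\ell$):

```python
import math

# Partial fraction expansion used above (rho plays the role of k_l/beta_l):
#   1/rho + sum_{n>0} ( 1/(n+rho) + 1/(-n+rho) ) = pi*cot(pi*rho).
rho = 0.3

# Each bracketed pair is ~ -2*rho/n^2, so the truncated sum converges
# with an error of roughly 2*rho/N for the cutoff N below.
N = 200000
partial = 1.0 / rho + sum(1.0 / (n + rho) + 1.0 / (rho - n) for n in range(1, N + 1))

cot = math.pi / math.tan(math.pi * rho)
assert abs(partial - cot) < 1e-4
```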
The corresponding bilinear form is given by $(\cdot,\cdot)_\Delta = \frac{1}{|\Delta|}(\cdot,\cdot)$, and the dual lattice of $\Delta L$ with respect to $(\cdot,\cdot)_\Delta$ is equal to $L'$ as above. We denote the discriminant group $L'/\Delta L$ by $D(\Delta)$. Note that $D(1) = D$ and $|D(\Delta)| = |\Delta|^3|D| = 2N|\Delta|^3$. Following [GKZ87] we define a generalized genus character for $\delta = \begin{pmatrix} b/2N & -a/N \\ c & -b/2N\end{pmatrix} \in L'$ by
\[
\chi_\Delta(\delta) = \chi_\Delta([a,b,Nc]) := \begin{cases}\left(\frac{\Delta}{n}\right), & \text{if } \Delta \mid b^2-4Nac,\ (b^2-4Nac)/\Delta \text{ is a square mod } 4N \text{ and } \gcd(a,b,c,\Delta)=1,\\ 0, & \text{otherwise.}\end{cases}
\]
Here, $[a,b,Nc]$ is the integral binary quadratic form corresponding to $\delta$, and $n$ is any integer prime to $\Delta$ represented by one of the quadratic forms $[N_1a, b, N_2c]$ with $N_1N_2 = N$ and $N_1, N_2 > 0$. Note that the function $\chi_\Delta$ is invariant under the action of $\Gamma_0(N)$. Since $\chi_\Delta(\delta)$ depends only on $\delta \in L'$ modulo $\Delta L$, we can view it as a function on the discriminant group $D(\Delta)$. Let $\rho_\Delta$ be the representation corresponding to $D(\Delta)$. In [AE13] it was shown that we obtain an intertwiner of the Weil representations corresponding to $D = L'/L$ and $D(\Delta)$ via $\chi_\Delta$.

Proposition 6.1 ([AE13, Proposition 3.2]). Let $\pi : D(\Delta)\to D$ be the natural projection. For $h \in D$, we define
\[
(6.1)\qquad \psi_{\Delta,r}(e_h) := \sum_{\substack{\delta\in D(\Delta)\\ \pi(\delta)=rh\\ Q_\Delta(\delta)\equiv \operatorname{sgn}(\Delta)Q(h)\,(\mathbb{Z})}} \chi_\Delta(\delta)\, e_\delta.
\]
Then $\psi_{\Delta,r} : D \to D(\Delta)$ defines an intertwining linear map between the representations $\tilde\rho_L$ and $\rho_\Delta$, where $\tilde\rho_L = \rho_L$ if $\Delta > 0$ and $\tilde\rho_L = \bar\rho_L$ if $\Delta < 0$.

We obtain twisted versions of the Millson, Kudla-Millson and Shintani theta function introduced in Section 3 by setting
\[
\Theta_{\Delta,r}(\tau,z,\varphi) = \sum_{h\in L'/L}\big\langle \psi_{\Delta,r}(e_h),\, \Theta_{D(\Delta)}(\tau,z,\varphi)\big\rangle\, e_h,\qquad\text{where}\qquad \Theta_{D(\Delta)}(\tau,z,\varphi) = \sum_{\delta\in D(\Delta)}\sum_{X\in\delta+\Delta L}\varphi(X,\tau,z)\, e_\delta
\]
is the usual theta function associated to a Schwartz function $\varphi$ and the discriminant group $D(\Delta)$. It is easy to check that these twisted theta functions have the same transformation behaviour as their untwisted counterparts (see Proposition 3.2) and satisfy the same growth estimates (see Proposition 3.3) and differential equations if we replace $\rho_L$ by $\tilde\rho_L$, $N$ by $N/|\Delta|$ and $\Theta_{\ell,k}$ by $\Theta_{\Delta,r,\ell,k} = \sum_{h\in L'/L}\langle\psi_{\Delta,r}(e_h), \Theta_{D(\Delta),\ell,k}\rangle\, e_h$. Using the twisted theta functions we construct twisted analogs of the lifts considered in Section 4.
For example, for a harmonic weak Maass form $F \in H^+_{-2k}(\Gamma_0(N))$ we define the twisted Millson theta lift by
\[
I^M_{\Delta,r}(F, \tau) = \lim_{T\to\infty}\int_{M_T} F(z)\,\Theta_{\Delta,r}(\tau, z, \psi_{M,k})\, y^{-2k}\, d\mu(z).
\]
Analogously we obtain the twisted Shintani lift $I^{\mathrm{Sh}}_{\Delta,r}$. The twisted theta lifts have the same mapping properties as their untwisted versions (see Theorem 4.3), again with $\rho_L$ replaced by $\tilde\rho_L$. Further, we obtain the following generalization of Proposition 4.2.

Proposition 6.2. For $F \in H^+_0(\Gamma)$ we have
\[
\xi_{1/2,\tau}\big(I^M_{\Delta,r}(F,\tau)\big) = -\frac{\sqrt{|\Delta|}}{2\sqrt{N}}\, I^{\mathrm{Sh}}_{\Delta,r}(\xi_{0,z}F, \tau) + \frac{1}{2N}\sum_{\ell\in\Gamma_0(N)\backslash\mathrm{Iso}(V)} \varepsilon_\ell\, a^+_\ell(0)\,\Theta_{\Delta,r,\ell,1}(\tau),
\]
and for $k \in \mathbb{Z}_{>0}$ and $F \in H^+_{-2k}(\Gamma)$ we have
\[
\xi_{1/2-k,\tau}\big(I^M_{\Delta,r}(F,\tau)\big) = -\frac{\sqrt{|\Delta|}}{2\sqrt{N}}\, I^{\mathrm{Sh}}_{\Delta,r}(\xi_{-2k,z}F, \tau).
\]

Proof. Let us assume $k = 0$ for simplicity. Following the approach of [AE13] we write
\[
I^M_{\Delta,r}(F,\tau) = \frac{1}{[\Gamma_0(N) : \Gamma_\Delta]}\sum_{h\in L'/L}\big\langle \psi_{\Delta,r}(e_h),\, I^M(F,\tau,D(\Delta),\Gamma_\Delta)\big\rangle\, e_h,
\]
where
\[
I^M(F,\tau,D(\Delta),\Gamma_\Delta) = \lim_{T\to\infty}\int_{M(\Delta)_T} F(z)\,\Theta_{D(\Delta)}(\tau,z,\psi_{M,k})\, d\mu(z)
\]
is the untwisted Millson theta lift for the lattice $\Delta L$ and the group $\Gamma_\Delta$ consisting of all elements in $\Gamma_0(N)$ which act trivially on $D(\Delta)$, and $M(\Delta)_T$ is the truncated version of the curve $M(\Delta) = \Gamma_\Delta\backslash\mathbb{H}$. By Proposition 4.2 we have
\[
\xi_{1/2,\tau}\big(I^M(F,\tau,D(\Delta),\Gamma_\Delta)\big) = -\frac{\sqrt{|\Delta|}}{2\sqrt{N}}\, I^{\mathrm{Sh}}(\xi_{0,z}F,\tau,D(\Delta),\Gamma_\Delta) + \frac{|\Delta|}{2N}\sum_{\ell\in\Gamma_\Delta\backslash\mathrm{Iso}(V)} \varepsilon^{(\Delta)}_\ell\, a^+_\ell(0)\,\Theta_{D(\Delta),\ell,1}(\tau).
\]
Now a short calculation, using $\varepsilon^{(\Delta)}_\ell = \frac{[\Gamma_0(N)_\ell : (\Gamma_\Delta)_\ell]}{|\Delta|}\varepsilon_\ell$, the decomposition (6.2) and the analogous decomposition for $I^{\mathrm{Sh}}_{\Delta,r}(\xi_{0,z}F,\tau)$, yields the result.

This relation gives an interesting criterion for the non-vanishing of the twisted $L$-function of a newform at the critical point.

Theorem 6.3. Let $F \in H^+_{-2k}(\Gamma_0(N))$, with vanishing constant terms at all cusps if $k = 0$, such that $G = \xi_{-2k}F \in S_{2k+2}(\Gamma_0(N))$ is a normalized newform. For $\Delta < 0$ with $(\Delta, N) = 1$ the lift $I^M_{\Delta,r}(F,\tau)$ is weakly holomorphic if and only if $L(G, \chi_\Delta, k+1) = 0$.

Proof. By the last proposition, $I^M_{\Delta,r}(F,\tau)$ is weakly holomorphic if and only if the Shintani lift $I^{\mathrm{Sh}}_{\Delta,r}(G,\tau)$ vanishes. Since $G$ is a normalized newform, Corollary 2 in Section II.4 of [GKZ87] shows that the square of the absolute value of the $D$-th coefficient ($D < 0$ with $(D,N) = 1$ a fundamental discriminant) of $I^{\mathrm{Sh}}_{\Delta,r}(G,\tau)$ (viewed as a Jacobi form) is up to non-zero factors given by $L(G,\chi_\Delta,k+1)\,L(G,\chi_D,k+1)$. If $L(G,\chi_\Delta,k+1) = 0$, then all fundamental coefficients of $I^{\mathrm{Sh}}_{\Delta,r}(G,\tau)$ vanish, which implies $I^{\mathrm{Sh}}_{\Delta,r}(G,\tau) = 0$.
Conversely, the vanishing of the Shintani lift in particular means the vanishing of its ∆-th coefficient, i.e. L(G, χ ∆ , k + 1) 2 = 0. This completes the proof. To describe the Fourier coefficients of the twisted Millson lift we introduce twisted traces of CM values and cycle integrals. If m ∈ Q >0 with m ≡ sgn(∆)Q(h) (Z) and h ∈ D we define the twisted trace of a Γ 0 (N)-invariant function F by t + ∆,r (F ; m, h) = X∈Γ 0 (N )\L + |∆|m,rh χ ∆ (X) Γ X F (D X ), and t − ∆,r (F ; m, h) accordingly. For m ∈ Q <0 with m ≡ sgn(∆)Q(h) (Z) and h ∈ D we define the twisted trace of a cusp form G ∈ S 2k+2 (Γ 0 (N)) by t ∆,r (F ; m, h) = X∈Γ 0 (N )\L |∆|m,rh χ ∆ (X)C(G, X), with the cycle integral C(G, X) defined in Section 5. Finally, for m = −N|∆|d 2 < 0 with d ∈ Q >0 we define the twisted complementary trace by t c ∆,r (F ; −N|∆|d 2 , h) = X∈Γ 0 (N )\L −N|∆| 2 d 2 ,rh χ ∆ (X) w∈Q <0 a + ℓ X (w)(4πw) k e 2πiℜ(c(X))w + (−1) k+1 w∈Q <0 a + ℓ −X (w)(4πw) k e 2πiℜ(c(−X))w . Theorem 6.4. Let k ∈ Z ≥0 and let F ∈ H + −2k (Γ). For k > 0 the h-th component of I M ∆,r (F, τ ) is given by m>0 1 2 √ m √ N 4π |∆|m k t + ∆,r (R k −2k F ; m, h) + (−1) k+1 t − ∆,r (R k −2k F ; m, h) q m + d>0 1 2i N|∆|d 1 4πi|∆|d k t c ∆,r (F ; −N|∆|d 2 , h)q −N |∆|d 2 + |∆|(−1) k k! 2 √ N π k+1 ℓ∈Γ 0 (N )\Iso(V ) ℓ∩(L+rh) =∅ a + ℓ (0) α ℓ β k+1 ℓ d k+1 ℓ × n>0 n≡m ℓ (d ℓ ) χ ∆ (n) n s+1 + (−1) k+1 sgn(∆) n>0 n≡−m ℓ (d ℓ ) χ ∆ (n) n s+1 s=k − m<0 1 2(4π|m|) k+1/2 |∆| k/2 t ∆,r (ξ −2k F ; m, h)Γ 1 2 − k, 4π|m|v q m , where m ℓ , d ℓ ∈ Z ≥0 are defined by (m ℓ , d ℓ ) = 1 and k ℓ /β ℓ = m ℓ /d ℓ . For k = 0 the h-th component of I M ∆,r (F, τ ) is given by the same formula as above but with the additional non-holomorphic terms d>0 1 4i πN|∆|d X∈Γ\L −N|∆|d 2 ,rh χ ∆ (X) a + ℓ X (0) − a + ℓ −X (0) Γ 1 2 , 4πN|∆|d 2 v q −N |∆|d 2 . Proof. As in the proof of Proposition 6.2 we write I M ∆,r (F, τ ) = 1 [Γ 0 (N) : Γ ∆ ] h∈L ′ /L ψ ∆,r (e h ), I M (F, τ, D(∆), Γ ∆ ) e h . 
We see that the coefficients of the twisted lift can be obtained from the coefficients of the untwisted lift. The twisting of the coefficients of positive and negative index is quite straightforward and can be done as in the proof of Theorem 5.5. in [AE13]. We sketch the twisting of the constant coefficient. For h ∈ D with Q(h) ≡ 0(Z) the (0, h)−th coefficient of I M ∆,r (F, τ ) is given by |∆|(−1) k k! 2 √ N π k+1 1 [Γ 0 (N) : Γ ∆ ] δ∈D(∆) π(δ)=rh Q ∆ (δ)≡0(Z) χ ∆ (δ) ℓ∈Γ ∆ \Iso(V ) ℓ∩(∆L+δ) =∅ a + ℓ (0) α (∆) ℓ (β (∆) ℓ ) k+1 (ζ(s, k (∆) ℓ /β (∆) ℓ ) + (−1) k+1 ζ(s, 1 − k (∆) ℓ /β (∆) ℓ ))| s=k+1 , where the superscript (∆) indicates that the corresponding quantity is taken with respect to the lattice ∆L with quadratic form Q ∆ and the group Γ ∆ . It is easy to see that β (∆) ℓ = |∆|β ℓ and α (∆) ℓ = [Γ 0 (N) ℓ : (Γ ∆ ) ℓ ]α ℓ , but k (∆) ℓ is a bit more complicated: Let X ℓ ∈ ℓ ∩ L be a positively oriented primitive generator of ℓ. If ℓ ∩ (∆L + δ) = ∅ with π(δ) = rh then also ℓ∩(L+rh) = ∅. For a fixed isotropic line ℓ, a system of representatives for the elements δ ∈ D(∆) with π(δ) = rh, Q ∆ (δ) ≡ 0(Z) and ℓ ∩ (∆L + δ) = ∅ is given by the vectors nX ℓ + (rh) ℓ with n running modulo |∆| and some (rh) ℓ ∈ ℓ ∩ (L + rh). In particular, we have k (∆) ℓ /β (∆) ℓ = n/|∆| + m ℓ /|∆|d ℓ . Using the assumption that ∆ is a fundamental discriminant it is not hard to show that (∆, d ℓ ) = 1, χ ∆ (d ℓ ) = 1, and χ ∆ (nX ℓ + (rh) ℓ ) = χ ∆ (nd ℓ + m ℓ ). Putting everything together, we obtain the twisted constant coefficient. In the same way, we obtain the Fourier expansion of the (∆, r)-th Shintani lift: Theorem 6.5. Let k ∈ Z ≥0 and G ∈ S 2k+2 (Γ 0 (N)). Then the h-th component of I Sh ∆,r (G, τ ) is given by I Sh ∆,r (G, τ ) h = − √ N |∆| m>0 1 |∆| k/2 t ∆,r (G; −m, h)q m . Remark 6.6. Let N = 1. In this case the twisted Millson theta function vanishes identically if (−1) k ∆ > 0, which easily follows from replacing X by −X in the sum. 
On the other hand, the results of [EZ85, Section 5] show that for $(-1)^k\Delta < 0$ the map $f_0(\tau)e_0 + f_1(\tau)e_1 \mapsto f_0(4\tau) + f_1(4\tau)$ defines an isomorphism of $H^+_{1/2-k,\tilde\rho_L}$ with the subspace of $H^+_{1/2-k}(\Gamma_0(4))$ of scalar valued harmonic weak Maass forms satisfying the Kohnen plus space condition. Using this identification we can derive the results stated in the introduction from the theorems in this section. Since $\Delta \equiv r^2\,(4)$, $r \bmod 2$ is already determined by $\Delta$, so we can drop it from the notation. The formula for the coefficients of positive index of $I^M_\Delta$ follows from $t^-_\Delta(R^k_{-2k}F; d) = \operatorname{sgn}(\Delta)\, t^+_\Delta(R^k_{-2k}F; d)$, which can be seen using the map $[a,b,c]\mapsto[-a,b,-c]$, and the formula for the principal part is obtained by rewriting the twisted complementary trace as described in [AE13, Proposition 5.7].

Remark 6.7. As in [Alf14, Thm 5.1] one can show that $\Lambda^M_{\Delta,r}$ is orthogonal to cusp forms with respect to the regularized Petersson inner product. In terms of the bilinear pairing $\{\cdot,\cdot\}$ introduced in [BF04] (or rather its extension given in [Alf14, Proposition 2.3]) this means
\[
\sum_{h\in L'/L}\ \sum_{\substack{m\in\mathbb{Q}\\ m\equiv \operatorname{sgn}(\Delta)Q(h)\,(\mathbb{Z})}} c^+_f(-m,h)\, a^+_{\Lambda^M_{\Delta,r}(F,\tau)}(m,h) = 0
\]
for each $f \in H^+_{3/2+k,\bar\rho_L}$ with coefficients $c^+_f(m,h)$ and each $F \in H^+_{-2k}(\Gamma_0(N))$ such that $\Lambda^M_{\Delta,r}(F,\tau)$ is weakly holomorphic. For $N = 1$ and $k = 0$, choosing $F = J = j - 744$ and $f = \Lambda^M_\Delta(J,\tau)$, and using that the holomorphic Fourier coefficients of $\Lambda^M_\Delta(J,\tau)$ are essentially given by the twisted traces of $J$ (compare [AE13]), one can recover duality results of Zagier [Zag02] for the coefficients of a basis $f_d = q^{-d}+O(1)$ for $M^{!,+}_{1/2}(\Gamma_0(4))$ and a basis $g_\Delta = q^{-\Delta}+O(1)$ for $M^{!,+}_{3/2}(\Gamma_0(4))$.

6.1. Extensions of the Millson and the Shintani theta lift. In [BF04] a more general notion of harmonic weak Maass forms is considered. For $k \in \mathbb{Z}$ with $k \neq 1$ and a congruence subgroup $\Gamma$ of $\mathrm{SL}_2(\mathbb{Q})$ the space $H_k(\Gamma)$ is defined similarly as the space $H^+_k(\Gamma)$ but with the growth condition replaced by the weaker requirement that the forms should be at most of linear exponential growth at all cusps. A form $F \in H_k(\Gamma)$ has a Fourier expansion with a holomorphic part $F^+$ and a non-holomorphic part $F^-$,
\[
F(z) = F^+(z) + F^-(z) = \sum_{n\gg-\infty} a^+(n)e^{2\pi i n z} + a^-(0)y^{1-k} + \sum_{\substack{n\ll\infty\\ n\neq 0}} a^-(n)\,H(2\pi n y)\,e^{2\pi i n x},
\]
where $H(w) = e^{-w}\int_{-2w}^{\infty} e^{-t}t^{-k}\,dt$, and there are analogous expansions at the other cusps. We consider the subspace $H^0_k(\Gamma)$ consisting of forms in $H_k(\Gamma)$ with vanishing constant terms $a^-(0)$ of the non-holomorphic parts at all cusps.
It is mapped under ξ k to the space S ! 2−k (Γ) of weakly holomorphic modular forms whose constant terms at all cusps vanish. The nice observation here is that the proof of Proposition 4.1 still goes through for F ∈ H 0 −2k (Γ). Thus the Millson theta lift of F ∈ H 0 −2k (Γ) converges to a harmonic function transforming like a modular form of weight 1/2 − k for ρ L . Similarly, the regularization of the Shintani lift also works for a weakly holomorphic modular form G ∈ S ! 2k+2 (Γ) and converges to a harmonic function transforming of weight 3/2 + k for ρ L . Further, the relation between the Millson and the Shintani theta lift given in Proposition 4.2 still holds for F ∈ H 0 −2k (Γ), which can be seen by exactly the same proof as for F ∈ H + −2k (Γ). The computation of the Fourier expansion of I M (F, τ ) for F ∈ H 0 −2k (Γ) is almost the same as before, but we have to be careful with the main integral in the computation of the coefficients of negative index since ξ −2k F need no longer be a cusp form. A thorough analysis shows that the non-holomorphic coefficients are now given by traces of regularized cycle integrals of ξ −2k F as introduced in [BFK14], and that there is now also a contribution of the coefficients a − ℓ (w) for w > 0 to the complementary trace. Similarly, the coefficients of the Shintani lift of G ∈ S ! 2k+2 (Γ) are given by traces of regularized cycle integrals of G as in [BGK15]. The twisting of these extended lifts proceeds in the same way as before. We obtain the following extension of (the twisted versions of) Proposition 4.2 and Theorem 4.3. Theorem 6.8. Let k ∈ Z ≥0 . (1) The Millson theta lift I M ∆,r maps H 0 −2k (Γ 0 (N)) to H + 1/2−k, ρ L . (2) The Shintani theta lift I Sh ∆,r maps S ! 2k+2 (Γ 0 (N)) to S 3/2+k, ρ L . (3) The relation between the Millson and the Shintani theta lift given in Proposition 6.2 also holds for F ∈ H 0 −2k (Γ 0 (N)). 
7. Cycle integrals

In this section we prove some identities between the cycle integrals of $R^{2j+1}_{-2k}F$, $j \geq 0$, and $\xi_{-2k}F$ for a harmonic weak Maass form $F \in H^+_{-2k}(\Gamma)$, where $\Gamma$ is some congruence subgroup of $\mathrm{SL}_2(\mathbb{Q})$ again.

7.1. Closed geodesics. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is not a square in $\mathbb{Q}$, i.e. the stabilizer $\Gamma_X$ is infinite cyclic and $c(X) = \Gamma_X\backslash c_X$ is a closed geodesic. Further, let $G$ be some smooth function that transforms like a modular form of weight $2k+2$ under $\Gamma$ for some $k \in \mathbb{Z}$. Recall the definition of the cycle integral
\[
C(G,X) = (-2\sqrt{|m|N}\, i)^k\, i\int_1^{\varepsilon^2} G_g(iy)\, y^k\, dy,
\]
where $g \in \mathrm{SL}_2(\mathbb{R})$ is such that $g^{-1}Xg = \sqrt{|m|/N}\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, $\varepsilon > 1$ is such that $\begin{pmatrix}\varepsilon&0\\0&\varepsilon^{-1}\end{pmatrix}$ generates the stabilizer of $g^{-1}Xg$ in $g^{-1}\Gamma g$, and $G_g = G|_{2k+2}\, g$.

Proposition 7.1. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is not a square. Let $k \in \mathbb{Z}$ and $F \in H^+_{-2k}(\Gamma)$. For all integers $\ell \leq k$ we have
\[
(7.1)\qquad C(R^{k-\ell+1}_{-2k}F, X) = \frac{1}{(4|m|N)^{\ell}}\, C(\xi_{-2\ell}R^{k-\ell}_{-2k}F, X).
\]
Further, for $\ell \leq k-1$ we have
\[
(7.2)\qquad C(R^{k-\ell+1}_{-2k}F, X) = 4|m|N(k-\ell)(k+\ell+1)\, C(R^{k-\ell-1}_{-2k}F, X).
\]

Proof. Plugging in the definition of the cycle integral, the left-hand side of (7.1) equals
\[
(-2\sqrt{|m|N}\, i)^{-\ell}\, i\int_1^{\varepsilon^2}\big(R^{k-\ell+1}_{-2k}F_g\big)(iy)\, y^{-\ell}\, dy.
\]
Since $\ell \leq k$ we can split off the outermost raising operator $R_{-2\ell} = 2i\frac{\partial}{\partial z} - 2\ell y^{-1}$ to obtain
\[
\big(R^{k-\ell+1}_{-2k}F_g\big)(iy)\, y^{-\ell} = 2i\Big(\frac{\partial}{\partial z}R^{k-\ell}_{-2k}F_g\Big)(iy)\, y^{-\ell} - 2\ell\big(R^{k-\ell}_{-2k}F_g\big)(iy)\, y^{-\ell-1}.
\]
Now we use $\frac{\partial}{\partial z} = \frac{\partial}{\partial \bar z} - i\frac{\partial}{\partial y}$ and apply the product rule to the $\frac{\partial}{\partial y}$-part to get
\[
\big(R^{k-\ell+1}_{-2k}F_g\big)(iy)\, y^{-\ell} = 2i\Big(\frac{\partial}{\partial \bar z}R^{k-\ell}_{-2k}F_g\Big)(iy)\, y^{-\ell} + 2\frac{\partial}{\partial y}\Big(\big(R^{k-\ell}_{-2k}F_g\big)(iy)\, y^{-\ell}\Big).
\]
Note that we also used $\big(\frac{\partial}{\partial y}R^{k-\ell}_{-2k}F_g\big)(iy) = \frac{\partial}{\partial y}\big(\big(R^{k-\ell}_{-2k}F_g\big)(iy)\big)$. The first summand on the right-hand side equals
\[
2i\Big(\frac{\partial}{\partial \bar z}R^{k-\ell}_{-2k}F_g\Big)(iy)\, y^{-\ell} = -\big(\xi_{-2\ell}R^{k-\ell}_{-2k}F_g\big)(iy)\, y^{\ell},
\]
giving the right-hand side of (7.1).
Further, the integral
\[
\int_1^{\varepsilon^2}\frac{\partial}{\partial y}\Big(\big(R^{k-\ell}_{-2k}F_g\big)(iy)\, y^{-\ell}\Big)\, dy = \big(R^{k-\ell}_{-2k}F_g\big)(i\varepsilon^2)\,\varepsilon^{-2\ell} - \big(R^{k-\ell}_{-2k}F_g\big)(i)
\]
vanishes since $\big(R^{k-\ell}_{-2k}F_g\big)(i\varepsilon^2)\,\varepsilon^{-2\ell} = \big(R^{k-\ell}_{-2k}F_g\big)\big|_{-2\ell}\begin{pmatrix}\varepsilon&0\\0&\varepsilon^{-1}\end{pmatrix}(i)$ and $R^{k-\ell}_{-2k}F_g$ transforms like a modular form of weight $-2\ell$ for $g^{-1}\Gamma g$. This completes the proof of (7.1). The formula (7.2) easily follows from (7.1) if we use that
\[
\xi_{-2\ell}R^{k-\ell}_{-2k}F = (k-\ell)(k+\ell+1)\, y^{-2\ell-2}\, R^{k-\ell-1}_{-2k}F
\]
for all $k \in \mathbb{Z}$, all integers $\ell \leq k-1$ and $F \in H^+_{-2k}(\Gamma)$. This follows from Lemma 2.2 if we write $\xi_{-2\ell} = y^{-2\ell-2}L_{-2\ell}$ and use the relation (2.2).

Corollary 7.2. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is not a square. Further, let $k \in \mathbb{Z}_{\geq 0}$ and $F \in H^+_{-2k}(\Gamma)$. For $j \in \mathbb{Z}_{\geq 0}$ we have
\[
C(R^{2j+1}_{-2k}F, X) = \frac{1}{(4|m|N)^{k-j}}\,\frac{j!\,(k-j)!\,(2k)!}{k!\,(2k-2j)!}\, C(\xi_{-2k}F, X).
\]

Proof. We use (7.1) with $\ell = k$ and then repeatedly apply (7.2).

As a special case we obtain a generalization of Theorem 1.1 from [BGK14].

Corollary 7.3. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is not a square. Further, let $k \in \mathbb{Z}_{\geq 0}$ and $F \in H^+_{-2k}(\Gamma)$. For even $k$ we have
\[
C(R^{k+1}_{-2k}F, X) = \frac{1}{(4|m|N)^{k/2}}\,\frac{\big((\tfrac{k}{2})!\big)^2(2k)!}{(k!)^2}\, C(\xi_{-2k}F, X),
\]
and for odd $k$ we have
\[
C(R^{k}_{-2k}F, X) = \frac{1}{(4|m|N)^{(k+1)/2}}\,\frac{(\tfrac{k-1}{2})!\,(\tfrac{k+1}{2})!\,(2k)!}{(k+1)!\,k!}\, C(\xi_{-2k}F, X).
\]

Moreover, we obtain the non-square part of Theorem 1.1 from [BGK15], which asserts that the cycle integrals of the weight $2k+2$ weakly holomorphic modular forms $D^{2k+1}F = -(4\pi)^{-(2k+1)}R^{2k+1}_{-2k}F$ and $\xi_{-2k}F$ agree up to some constant.

Corollary 7.4. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is not a square. For $k \in \mathbb{Z}_{\geq 0}$ and $F \in H^+_{-2k}(\Gamma)$ we have
\[
C(D^{2k+1}F, X) = -\frac{(2k)!}{(4\pi)^{2k+1}}\, C(\xi_{-2k}F, X).
\]

7.2. Infinite geodesics. Let $X \in V$ with $Q(X) = m < 0$ such that $|m|/N$ is a square in $\mathbb{Q}$, i.e. the stabilizer $\Gamma_X$ is trivial and $c(X) = \Gamma_X\backslash c_X$ is an infinite geodesic in $\Gamma\backslash\mathbb{H}$.
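The combinatorial constant in Corollary 7.2 can be double-checked against the recursion coming from (7.1) and (7.2). In the sketch below (not part of the original text) the symbol `c` stands for the arbitrary positive quantity 4|m|N:

```python
from math import factorial

# A(j) is the constant in Corollary 7.2 relating C(R^(2j+1) F, X)
# to C(xi F, X); c stands for 4|m|N (arbitrary positive value).
c = 7.0

for k in range(0, 8):
    def A(j):
        return c ** (j - k) * factorial(j) * factorial(k - j) * factorial(2 * k) \
            / (factorial(k) * factorial(2 * k - 2 * j))

    # Base case: (7.1) with l = k gives A(0) = c^(-k).
    assert abs(A(0) - c ** (-k)) < 1e-12 * c ** (-k)

    # Recursion: (7.2) with l = k - 2j gives A(j) = c*(2j)*(2k-2j+1)*A(j-1).
    for j in range(1, k + 1):
        assert abs(A(j) - c * (2 * j) * (2 * k - 2 * j + 1) * A(j - 1)) < 1e-9 * abs(A(j))
```

Since the same recursion characterizes the constants uniquely given the base case, this confirms the closed form for all the tested weights.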
Recall that for a cusp form G ∈ S 2k+2 the cycle integral is defined by C(G, X) = (−2 |m|Ni) k i ∞ 0 G g (iy)y k dy, where g ∈ SL 2 (R) is such that g −1 Xg = |m|/N ( 1 0 0 −1 ) and G g = G| 2k+2 g. We would like to prove similar identities as in the last section, but in general the cycle integral of R k−ℓ −2k F does not converge if the geodesic is infinite. If we start with the (convergent) cycle integral of ξ −2k F and repeat the calculations of the last section, we are led to suitable regularized cycle integrals of R k−ℓ −2k F . First, for F ∈ H + −2k (Γ) we write C(ξ −2k F, X) = (−2 |m|Ni) k i ∞ 1 ξ −2k F g (iy)y k dy + (−1) k+1 ∞ 1 ξ −2k F gS (iy)y k dy , with S = ( 0 −1 1 0 ), where we split the integral over (0, ∞) at 1 and replaced y by 1/y in the integral over (0, 1). Now we decompose F g = F + g + F − g and F gS = F + gS + F − gS into their holomorphic and non-holomorphic parts and use ξ −2k F g = ξ −2k F − g and ξ −2k F gS = ξ −2k F − gS . Note that F − g and F − gS are rapidly decreasing at the cusp ∞, but not necessarily at 0, and this is the reason why we split the integral above. We have the following analog of Proposition 7.1: Proposition 7.5. Let X ∈ V with Q(X) = m < 0 such that |m|/N is a square. Let k ∈ Z and F ∈ H + −2k (Γ). For all integers ℓ ≤ k we have ∞ 1 R k−ℓ+1 −2k F − g (iy)y −ℓ dy = − ∞ 1 ξ −2ℓ R k−ℓ −2k F − g (iy)y ℓ dy − 2R k−ℓ −2k F − g (i). Further, for ℓ ≤ k − 1 we have ∞ 1 R k−ℓ+1 −2k F − g (iy)y −ℓ dy = −(k − ℓ)(k + ℓ + 1) ∞ 1 R k−ℓ−1 −2k F − g (iy)y −ℓ−2 dy − 2R k−ℓ −2k F − g (i). The same formulas hold with g replaced by gS. Proof. The computations are the same as in the proof of Proposition 7.1 if we replace ε 2 by ∞ and use the rapid decay of R k−ℓ −2k F − (iy) as y → ∞. 
A repeated application of the proposition leads to following definition: For every integer j ≥ 0 we define the regularized cycle integral of R 2j+1 −2k F by C reg (R 2j+1 −2k F, X) = (−2 |m|Ni) −k+2j i × j ℓ=0 C ℓ,j (R 2ℓ −2k F − g (i)) + (−1) k+1 j ℓ=0 C ℓ,j (R 2ℓ −2k F − gS (i)) + ∞ 1 R 2j+1 −2k F − g (iy)y −k+2j dy + (−1) k+1 ∞ 1 R 2j+1 −2k F − gS (iy)y −k+2j dy , where C ℓ,j = 2(−1) ℓ+j j t=ℓ+1 (2t)(2k − 2t + 1). Note that R 2ℓ −2k F − g (i) + (−1) k+1 R 2ℓ −2k F − gS (i) = −R 2ℓ −2k F + g (i) − (−1) k+1 R 2ℓ −2k F + gS (i), so the second line above can also be understood as the part of the regularized cycle integral coming from F + . With this definition, we find C reg (R −2k F, X) = 1 (4|m|N) k C(ξ −2k F, X) and C reg (R 2j+1 −2k F, X) = 4|m|N(2j)(2k − 2j + 1)C reg (R 2j−1 −2k F, X) for j ≥ 1, and thus all the corollaries of the last section also hold for |m|/N being a square. Remark 7.6. For k = j = 0 and F ∈ H + 0 (Γ) the regularized cycle integral of R 0 F is defined by C reg (R 0 F, X) = 2iF − g (i) − 2iF − gS (i) + i ∞ 1 R 0 F − g (iy)dy − ∞ 1 R 0 F gS (iy)dy. On the other hand, since R 0 F ∈ S ! 2 (Γ) is in fact a weakly holomorphic cusp form, there is a regularized cycle integral studied in [BFK14], [BGK14] and [BGK15]. It is given by C reg BFK (R 0 F, X) = i ∞ 1 R 0 F g (iy)e −ys dy s=0 − i ∞ 1 R 0 F gS (iy)e −ys dy s=0 , where the expression on the right means that one has to take the value at s = 0 of the analytic continuation of the integral. We want to compare the two regularizations. Let us split F g = F + g +F − g . Due to the rapid decay of F − g , we can plug in s = 0 in the integral over F − g . In the integral over F + g , we insert the Fourier expansion F + g (z) = n∈Q a + g (n)e 2πinz , apply R 0 = 2i ∂ ∂z and obtain after a short calculation i ∞ 1 R 0 F + g (iy)e −ys dy s=0 = − 4πi n =0 na + g (n) 2πn + s e −(2πn+s) s=0 = −2iF + g (i) + 2ia + g (0). 
Using $F^+_g(i) - F^+_{gS}(i) = -F^-_g(i) + F^-_{gS}(i)$ we find
\[
C^{\mathrm{reg}}(R_0F, X) = C^{\mathrm{reg}}_{\mathrm{BFK}}(R_0F, X) - 2ia^+_g(0) + 2ia^+_{gS}(0).
\]
Note that the regularized cycle integrals considered in [BFK14] are only studied for weakly holomorphic cusp forms, and the analytic continuation of the integrals relies on the particular shape of the Fourier expansion of such forms. For general $k$ and $j$, the function $R^{2j+1}_{-2k}F$ is not weakly holomorphic and has a somewhat complicated Fourier expansion, so it is not clear that the regularization of [BFK14] works. It would be interesting to investigate this problem in the future. Finally, we remark that our regularized cycle integrals look very similar to the cycle integrals of weight zero harmonic weak Maass forms given in [BFI15]. However, the definitions do not overlap since we only consider cycle integrals of $R^\ell_{-2k}F$ for odd $\ell$. Again, it would be nice to unify the approaches and define regularized cycle integrals of $R^\ell_{-2k}F$ for all $\ell \geq 0$.

$t_\Delta(F; d) = \sum_{Q\in\mathcal{Q}_{d|\Delta|}/\mathrm{SL}_2(\mathbb{Z})} \chi_\Delta(Q)\, C(F,Q)$, whenever the cycle integrals $C(F,Q)$ converge. Let $F \in H^+_{-2k}$ be a harmonic weak Maass form. We define the Millson theta lift by
\[
(1.1)\qquad I^M_\Delta(F, \tau) = \int^{\mathrm{reg}}_{\mathrm{SL}_2(\mathbb{Z})\backslash\mathbb{H}} F(z)\,\Theta_\Delta(\tau, z, \psi_{M,k})\, y^{-2k-2}\, dx\, dy.
\]

5. The Fourier expansion of $I^M(F,\tau)$

Let $k \in \mathbb{Z}_{\geq 0}$ and let $F \in H^+_{-2k}(\Gamma)$ be a harmonic weak Maass form of weight $-2k$ for $\Gamma$. In order to describe the Fourier expansion of $I^M(F,\tau)$ we first have to introduce the modular trace function and geodesic cycle integrals.

5.1. Heegner points and the modular trace function. For $X \in V$ with $Q(X) = m \in \mathbb{Q}_{>0}$ we let $D_X = \mathrm{span}(X) \in \mathcal{D}$ be the Heegner point of discriminant $m$ associated to $X$. We use the same symbol for the image of $D_X$ in $M$. Note that for $m \in \mathbb{Q}_{>0}$ and $h \in L'/L$ with $Q(h) \equiv m\,(\mathbb{Z})$, the group $\Gamma$ acts on the set $L_{m,h} = \{X \in L+h : Q(X) = m\}$.
Acknowledgements. We thank Kathrin Bringmann, Jan Hendrik Bruinier, Stephan Ehlen, Jens Funke and Ben Kane for their help. The authors were partially supported by the DFG Research Unit FOR 1920 "Symmetry, Geometry and Arithmetic".

References

[AE13] Claudia Alfes and Stephan Ehlen. Twisted traces of CM values of weak Maass forms. J. Number Theory, 133(6):1827–1845, 2013.
[AGOR15] Claudia Alfes, Michael Griffin, Ken Ono, and Larry Rolen. Weierstrass mock modular forms and elliptic curves. Research in Number Theory, 1(1):1–31, 2015.
[Alf14] Claudia Alfes. Formulas for the coefficients of half-integral weight harmonic Maaß forms. Math. Z., 277(3-4):769–795, 2014.
[Alf15] Claudia Alfes. CM values and Fourier coefficients of harmonic Maass forms. TU Darmstadt Diss., 2015.
[AS64] Milton Abramowitz and Irene A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1964.
[AS84] Milton Abramowitz and Irene A. Stegun. Pocketbook of mathematical functions. Verlag Harri Deutsch, 1984.
[BF04] Jan H. Bruinier and Jens Funke. On two geometric theta lifts. Duke Math. J., 125(1):45–90, 2004.
[BF06] Jan H. Bruinier and Jens Funke. Traces of CM values of modular functions. J. Reine Angew. Math., 594:1–33, 2006.
[BFI15] Jan H. Bruinier, Jens Funke, and Özlem Imamoğlu. Regularized theta liftings and periods of modular functions. J. Reine Angew. Math., 703:43–93, 2015.
[BFK14] Kathrin Bringmann, Karl-Heinz Fricke, and Zachary A. Kent. Special L-values and periods of weakly holomorphic modular forms. Proc. Amer. Math. Soc., 142(10):3425–3439, 2014.
[BGK14] Kathrin Bringmann, Pavel Guerzhoy, and Ben Kane. Shintani lifts and fractional derivatives for harmonic weak Maass forms. Adv. Math., 255:641–671, 2014.
[BGK15] Kathrin Bringmann, Pavel Guerzhoy, and Ben Kane. On cycle integrals of weakly holomorphic modular forms. Math. Proc. Cambridge Philos. Soc., 158(3):439–449, 2015.
[BKV13] Kathrin Bringmann, Ben Kane, and Maryna Viazovska. Theta lifts and local Maass forms. Math. Res. Lett., 20(2):213–234, 2013.
[BO10] Jan H. Bruinier and Ken Ono. Heegner divisors, L-functions and harmonic weak Maass forms. Ann. of Math. (2), 172(3):2135–2181, 2010.
[BO13] Jan H. Bruinier and Ken Ono. Algebraic formulas for the coefficients of half-integral weight harmonic weak Maass forms. Adv. Math., 246:198–219, 2013.
[Bor98] Richard E. Borcherds. Automorphic forms with singularities on Grassmannians. Invent. Math., 132(3):491–562, 1998.
[Bru02] Jan H. Bruinier. Borcherds products on O(2, l) and Chern classes of Heegner divisors, volume 1780 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2002.
[Cra15] Jonathan Crawford. A Singular Theta Lift and the Shimura Correspondence. PhD thesis, Durham University, 2015.
[EMOT54] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Tables of integral transforms. Vol. I. McGraw-Hill Book Company, Inc., New York-Toronto-London, 1954. Based, in part, on notes left by Harry Bateman.
[EZ85] Martin Eichler and Don Zagier. The theory of Jacobi forms, volume 55 of Progress in Mathematics. Birkhäuser Boston Inc., Boston, MA, 1985.
[Fun02] Jens Funke. Heegner divisors and nonholomorphic modular forms. Compositio Math., 133(3):289–321, 2002.
[GKZ87] Benedict H. Gross, Winfried Kohnen, and Don B. Zagier. Heegner points and derivatives of L-series. II. Math. Ann., 278(1-4):497–562, 1987.
[Höv12] Martin Hövel. Automorphe Formen mit Singularitäten auf dem hyperbolischen Raum. TU Darmstadt Diss., 2012.
[KS93] Svetlana Katok and Peter Sarnak. Heegner points, cycles and Maass forms. Israel J. Math., 84(1-2):193–227, 1993.
[Shi75] Takuro Shintani. On construction of holomorphic cusp forms of half integral weight. Nagoya Math. J., 58:83–126, 1975.
[Wal81] J.-L. Waldspurger. Sur les coefficients de Fourier des formes modulaires de poids demi-entier. J. Math. Pures Appl., 60:375–484, 1981.
[Zag02] Don B. Zagier. Traces of singular moduli. In Motives, polylogarithms and Hodge theory, Part I (Irvine, CA, 1998), volume 3 of Int. Press Lect. Ser., pages 211–244. Int. Press, Somerville, MA, 2002.
[]
[ "Synthetic NLTE accretion disc spectra for the dwarf nova SS Cyg during an outburst cycle", "Synthetic NLTE accretion disc spectra for the dwarf nova SS Cyg during an outburst cycle" ]
[ "M Kromer \nInstitut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany\n\nMax-Planck-Institut für Astrophysik\nKarl-Schwarzschild-Straße 185741GarchingGermany\n", "T Nagel \nInstitut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany\n", "K Werner \nInstitut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany\n" ]
[ "Institut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany", "Max-Planck-Institut für Astrophysik\nKarl-Schwarzschild-Straße 185741GarchingGermany", "Institut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany", "Institut für Astronomie und Astrophysik\nUniversität Tübingen\nSand 172076TübingenGermany" ]
[]
Context. Dwarf nova outbursts result from enhanced mass transport through the accretion disc of a cataclysmic variable system. Aims. We assess the question of whether these outbursts are caused by an enhanced mass transfer from the late-type main sequence star onto the white dwarf (so-called mass transfer instability model, MTI) or by a thermal instability in the accretion disc (disc instability model, DIM). Methods. We compute non-LTE models and spectra of accretion discs in quiescence and outburst and construct spectral time sequences for discs over a complete outburst cycle. We then compare our spectra to published optical spectroscopy of the dwarf nova SS Cygni. In particular, we investigate the hydrogen and helium line profiles that are turning from emission into absorption during the rise to outburst. Results. The evolution of the hydrogen and helium line profiles during the rise to outburst and decline clearly favour the disc-instability model. Our spectral model sequences allow us to distinguish inside-out and outside-in moving heating waves in the disc of SS Cygni, which can be related to symmetric and asymmetric outburst light curves, respectively.
10.1051/0004-6361:20077898
[ "https://arxiv.org/pdf/0709.0382v1.pdf" ]
18,267,189
0709.0382
6277b60de3beff1854dd4553022b1f8582768c94
Synthetic NLTE accretion disc spectra for the dwarf nova SS Cyg during an outburst cycle

4 Sep 2007; Astronomy & Astrophysics manuscript no. 7898, February 1, 2008. Received xxxx; accepted xxxx.

M. Kromer (Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, 72076 Tübingen, Germany; Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Straße 1, 85741 Garching, Germany), T. Nagel and K. Werner (Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, 72076 Tübingen, Germany)

Key words: Accretion, accretion disks - Stars: dwarf novae - Novae, cataclysmic variables - Stars: individual: SS Cygni

Context. Dwarf nova outbursts result from enhanced mass transport through the accretion disc of a cataclysmic variable system. Aims. We assess the question of whether these outbursts are caused by an enhanced mass transfer from the late-type main sequence star onto the white dwarf (so-called mass transfer instability model, MTI) or by a thermal instability in the accretion disc (disc instability model, DIM). Methods. We compute non-LTE models and spectra of accretion discs in quiescence and outburst and construct spectral time sequences for discs over a complete outburst cycle. We then compare our spectra to published optical spectroscopy of the dwarf nova SS Cygni. In particular, we investigate the hydrogen and helium line profiles that are turning from emission into absorption during the rise to outburst. Results. The evolution of the hydrogen and helium line profiles during the rise to outburst and decline clearly favour the disc-instability model. Our spectral model sequences allow us to distinguish inside-out and outside-in moving heating waves in the disc of SS Cygni, which can be related to symmetric and asymmetric outburst light curves, respectively.
Introduction

Dwarf novae (DN) belong to the non-magnetic cataclysmic variables, which are binary systems consisting of a white dwarf as primary component and an orbiting late-type main sequence star. Due to their close orbit, mass transfer from the secondary onto the primary via Roche lobe overflow occurs. Because of conservation of angular momentum, an accretion disc forms around the white dwarf (Warner 1995). Dwarf novae are characterised by more or less regular outbursts during which the system undergoes a rise in optical brightness of 2-6 magnitudes. The observed outbursts can be divided into two categories, depending on whether the light curves are symmetric or not. Asymmetric outbursts are characterised by a fast rise and slower decline and show a delay of the rise in the UV against the optical. In symmetric outbursts, UV and optical fluxes rise simultaneously, on a longer timescale than in the asymmetric outbursts. Sometimes, for both types, plateaus are observed during maximum.

It is commonly accepted that the outbursts are caused by a luminosity increase in the disc that arises from a temporarily increased mass transport through the disc. The origin of this increased mass transport, however, has been discussed controversially. According to the mass transfer instability model (MTI, Bath 1975), an instability in the secondary star leads to a temporarily increased mass transfer rate Ṁ_2 from the secondary onto the disc, so that the mass transport through the disc also increases. Because the instability originates in the secondary, the outbursts must start at the outer edge of the disc and proceed inwards in the framework of this model. In the disc instability model (DIM, Osaki 1974), the mass transfer from the secondary is constant and the outbursts are attributed to thermal viscous instabilities in the disc that lead to a temporarily increased mass transport through the disc.

Send offprint requests to: T. Nagel, e-mail: [email protected]
Meyer & Meyer-Hofmeister (1981) and Faulkner et al. (1983) have shown that this instability is due to the local ionisation of hydrogen. Radial temperature and viscosity gradients lead to the propagation of heating or cooling waves throughout the disc, which carry the whole disc over to outburst or quiescence, respectively. In particular, this allows an outburst to start at any place in the disc, so that outbursts can proceed inwards or outwards.

Today the DIM is generally favoured over the MTI, owing to the existence of a detailed theoretical framework for the DIM that explains the outburst behaviour and the different outburst types in a natural manner. According to Smak (1984a), the different rise times of asymmetric and symmetric outbursts are caused by different propagation directions: in asymmetric outbursts (type-A after Smak 1984a), the heating wave originates in the outer part of the disc and proceeds inwards, moving with the mass stream, so that it can move relatively fast. The hot inner part of the disc is the last to switch to outburst, so that the rise in the UV is delayed against the optical. In the symmetric type-B outbursts, by contrast, the heating wave proceeds inside-out and therefore has to move against the mass stream, resulting in a relatively slow rise. In this case the hot inner parts of the disc are the first to switch to outburst, so that optical and UV fluxes increase simultaneously. Furthermore, the fact that no luminosity increase is observed during an outburst in the hotspot (the region of the disc where the mass stream from the secondary impinges) contradicts the MTI model, because such an increase would be expected if the mass transfer from the secondary increased.

A possibility of distinguishing between both models by observational data thus arises if one is able to decide whether the outbursts proceed outside-in or inside-out.
This can be achieved by comparing time-resolved spectra for an entire outburst cycle with the appropriate model spectra, because quiescence and outburst spectra differ significantly. In quiescence the optical spectrum shows the strong hydrogen Balmer emission lines characteristic of an optically thin disc. In contrast, during outburst, broad absorption features in the Balmer series indicate an optically thick disc. At the same time, the intensity in the blue wavelength range increases particularly strongly, indicating a rise in the disc temperature. To this end we calculated time-resolved model spectra (Sect. 3) tailored to the dwarf nova SS Cyg. This is the brightest known DN, showing an outburst brightness of V = 8.2 mag (Ritter & Kolb 2003), making it one of the best-studied DN. Before presenting these models in Sect. 3, we give a short overview of our approach in Sect. 2. The results are discussed in Sect. 4.

Model assumptions

To calculate the accretion disc spectra we use our accretion disc code AcDc (Nagel 2003; Nagel et al. 2004), which is based on the assumption of a geometrically thin disc (total disc thickness H much smaller than the disc diameter). This allows us to decouple vertical and radial structures and, together with the assumption of axial symmetry, to separate the disc into concentric annuli of plane-parallel geometry. Then radiative transfer becomes a one-dimensional problem. Each of these disc rings, located at a given radial distance r from the white dwarf, is assumed to be stationary. Thus it can be characterised by a constant mass transport rate Ṁ. The rate of energy generation from viscous shear then becomes independent of the kinematic viscosity ν_k and can be parameterised by the effective temperature

T_eff(r) = [ 3 G M_1 Ṁ / (8 π σ r^3) · (1 − (r_1/r)^{1/2}) ]^{1/4}    (1)

(for example Warner 1995). Here M_1 denotes the mass and r_1 the radius of the primary white dwarf, G the gravitational constant, and σ the Stefan-Boltzmann constant.
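As a numerical sanity check on Eq. (1), the tabulated ring temperatures of the hot-disc model can be reproduced. The sketch below (a minimal illustration, assuming standard CGS values for the constants; M_1, r_1 and Ṁ are the SS Cyg values quoted later in Sects. 3 and 3.1) recovers the innermost and outermost effective temperatures of Table 1, 74912 K and 5862 K:

```python
import math

# CGS constants (standard values, assumed)
G     = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
SIGMA = 5.6704e-5     # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
MSUN  = 1.989e33      # solar mass [g]
YEAR  = 3.156e7       # year [s]

def t_eff(r, m1, mdot, r1):
    """Effective temperature of a stationary disc ring, Eq. (1)."""
    return (3.0 * G * m1 * mdot / (8.0 * math.pi * SIGMA * r**3)
            * (1.0 - math.sqrt(r1 / r))) ** 0.25

# SS Cyg outburst parameters quoted in the paper
M1   = 1.19 * MSUN         # white dwarf mass
R1   = 3.9e8               # white dwarf radius [cm]
MDOT = 4e-9 * MSUN / YEAR  # mass transport rate [g/s]

T_inner = t_eff(1.0e9, M1, MDOT, R1)   # innermost hot ring, ~7.5e4 K
T_outer = t_eff(4.0e10, M1, MDOT, R1)  # outermost hot ring, ~5.9e3 K
```

Both values agree with the endpoints of Table 1 to within rounding of the constants.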
To get a self-consistent solution, the radiative transfer equation, the hydrostatic and energy equilibrium equations, as well as the NLTE rate equations that determine the occupation numbers of the atomic levels, are solved simultaneously by an iterative scheme. Detailed information about the atomic levels involved is needed for this, which is provided in the form of a model atom (cf. Rauch & Deetjen 2003). The kinematic viscosity, which is needed for the vertical structure calculation, can be parameterised by the α-approach of Shakura & Sunyaev (1973),

ν_k = α c_s H ,    (2)

(where c_s is the speed of sound and α ≤ 1 a dimensionless parameter) or, after Lynden-Bell & Pringle (1974), by the Reynolds number Re,

ν_k = r v_φ / Re = (G M_1 r)^{1/2} / Re ,    (3)

where v_φ is the Kepler velocity. We choose the latter approach, which is numerically easier to implement because we save a further iteration to solve consistently for c_s and H. Irradiation of the disc by the primary is considered via the upper boundary condition for the radiative transfer equation. For that purpose the irradiation angle β for each disc ring and the spectrum of the primary must be specified. The spectrum of the primary is parameterised by a blackbody temperature T_bb, or detailed white dwarf model atmosphere spectra are calculated. The complete set of input parameters that we must provide for each disc ring thus consists of M_1, r_1, Ṁ, r, Re, β, T_bb. The spectrum of the complete disc is then obtained by integrating the spectra of these disc rings for different inclination angles; the spectral lines are Doppler shifted according to the radial component of the Keplerian rotation velocity in the disc. As the accretion discs of dwarf novae are fed by a late-type main sequence star, we assume a disc composition of hydrogen and helium with relative solar abundances. The model atoms used for the disc model calculations presented here contain the ionisation stages H I, H II, and He I-He III.
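Equation (3) makes the kinematic viscosity an explicit function of the adopted Reynolds number. As an illustration (the helper name is hypothetical; standard CGS constants assumed), one can convert the tabulated Re of a hot ring (Re = 1620 at r = 7.35·10^9 cm, Table 1) and of a nearby cold ring (Re = 13000 at r = 8·10^9 cm, Table 2) into viscosities; their ratio is of the order of the factor ~10 quoted later for the quiescent disc:

```python
import math

G    = 6.674e-8     # cm^3 g^-1 s^-2 (standard CGS value)
MSUN = 1.989e33     # g
M1   = 1.19 * MSUN  # white dwarf mass of SS Cyg

def nu_from_reynolds(r, re, m1=M1):
    """Kinematic viscosity from the Reynolds-number parameterisation, Eq. (3)."""
    return math.sqrt(G * m1 * r) / re

# Hot ring at r = 7.35e9 cm (Re = 1620) vs. a nearby cold ring at
# r = 8e9 cm (Re = 13000); the radii differ slightly, so the ratio
# is only an approximate hot/cold comparison.
nu_hot  = nu_from_reynolds(7.35e9, 1620)
nu_cold = nu_from_reynolds(8.0e9, 13000)
ratio   = nu_hot / nu_cold
```

The ratio comes out between 5 and 10, roughly consistent with the statement in the quiescence section that the cold-disc viscosity is smaller by about a factor of 10.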
The number of NLTE levels and lines considered is 15 and 105 for H I, 29 and 61 for He I, and 14 and 78 for He II. We consider the H^- opacity and Rayleigh scattering for H and He, which is important for the coolest regions of the disc model. In addition, the Ly α line in the cool models for the quiescent disc is so broad that it contributes considerably to the source function in the optical band. The reason is that most of the hydrogen (about 90-99%) is neutral throughout most of the line-forming region (Fig. 1, second panel from top).

Models

In the following we present detailed models for the accretion disc of SS Cyg in outburst and quiescence. SS Cyg is the brightest known DN and belongs to the U Gem type of DN. For our models we have chosen the orbital parameters according to Ritter & Kolb (2003), who give M_1 = (1.19 ± 0.02) M_⊙ for the mass of the white dwarf. According to the mass-radius relation, this corresponds to a white dwarf radius of 3.9 · 10^8 cm. Together with the mass of the companion, M_2 = (0.704 ± 0.002) M_⊙, and the orbital period P = 6.6031 h, the tidal radius (the radius where the disc is disrupted by tidal interactions with the secondary) follows from

r_tidal = 0.60 · a / (1 + q)    (4)

(Hellier 2001) and amounts to r_tidal = 5.78 · 10^10 cm. Here a denotes the distance between primary and secondary, which can be calculated from Kepler's third law, and q is the mass ratio M_2/M_1. The minimal extension of the disc is given by the so-called circularisation radius,

r_circ = r_L1^4 · (1 + q) / a^3 ,    (5)

at which the angular momentum is equal to the angular momentum at the Lagrange point L_1. The Roche lobe, and therefore the distance r_L1 of the Lagrange point L_1, must be calculated numerically. After Plavec & Kratochvil (1964), however, for 0.1 < q < 10 the approximation

r_L1 = a · (0.500 − 0.227 log q)    (6)

is possible. This finally leads to r_circ = 1.65 · 10^10 cm.
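The orbital geometry above can be verified with a few lines of arithmetic. This sketch (standard CGS constants assumed) applies Kepler's third law and then Eqs. (4) and (6); it reproduces the quoted tidal radius of 5.78 · 10^10 cm, and the resulting separation a and Lagrange-point distance r_L1 set the radial range r_circ to r_tidal discussed next:

```python
import math

G    = 6.674e-8      # cm^3 g^-1 s^-2 (standard CGS value)
MSUN = 1.989e33      # g

M1 = 1.19 * MSUN     # white dwarf mass
M2 = 0.704 * MSUN    # secondary mass
P  = 6.6031 * 3600.0 # orbital period [s]
q  = M2 / M1         # mass ratio as defined in the text

# Kepler's third law: a^3 = G (M1 + M2) P^2 / (4 pi^2)
a = (G * (M1 + M2) * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

r_tidal = 0.60 * a / (1.0 + q)                 # Eq. (4), ~5.78e10 cm
r_L1    = a * (0.500 - 0.227 * math.log10(q))  # Eq. (6), valid for 0.1 < q < 10
```

The circularisation radius of Eq. (5) is not evaluated here, since its value is sensitive to the mass-ratio convention used in the fit formula.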
In this radial range we have increased the disc's outer edge r_o until the double-peaked line profiles matched the observation, so we chose r_o = 4 · 10^10 cm. A lower value would give line profiles that are too broad, due to the higher Kepler rotation velocity at smaller radii. The inner edge of the disc model was fixed by the following arguments. Our model cannot be applied to the boundary layer expected at the transition from the disc to the primary, so we must truncate the disc well before the white dwarf. Despite the higher temperatures in the inner disc, this has little influence on the optical spectrum, because the surface area of the inner parts of the disc is much smaller than that of the outer parts. Thus the inner parts can be neglected for modelling optical spectra. In contrast, in the UV range there will be strong imprints from the inner disc portions (cf. Fig. 2). For the inclination angle i we have chosen 40°, which is consistent with the value of i = (37 ± 5)° given by Ritter & Kolb (2003). All disc rings have been irradiated with a 50 000 K blackbody spectrum. This temperature is compatible with the observational results for the WD in SS Cyg (Long et al. 2005; Smak 1984b). Tests with a 50 000 K white dwarf model atmosphere have shown that the blackbody approximation has little influence on the emerging disc spectra. The irradiation angle was set to 1°. According to the system geometry, this is possible but probably marks an upper limit. For such small angles, the irradiation increases the effective temperature of a disc ring only marginally compared to the value expected according to Eq. 1. The relative differences are below 10^-3 and decrease with increasing r.

Outburst

For the hot disc during outburst, we assume a constant mass transport rate through the disc of Ṁ = 4 · 10^-9 M_⊙/yr and a constant viscosity of α ≈ 0.30 according to the DIM.
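The line-width argument for the choice of r_o can be quantified. A rough sketch (standard CGS constants assumed; the projection by sin i follows from the adopted inclination of 40°) compares the projected Kepler velocity at the adopted outer edge with that at the circularisation radius; a smaller disc would indeed give markedly broader, more widely separated double peaks:

```python
import math

G    = 6.674e-8     # cm^3 g^-1 s^-2 (standard CGS value)
MSUN = 1.989e33     # g
M1   = 1.19 * MSUN  # white dwarf mass
INCL = math.radians(40.0)  # inclination adopted in the text

def v_kepler(r):
    """Keplerian rotation velocity [cm/s] at radius r."""
    return math.sqrt(G * M1 / r)

# Projected outer-edge velocity sets the peak separation of the
# double-peaked emission lines.
v_out_proj = v_kepler(4.0e10) * math.sin(INCL)   # adopted r_o, roughly 400 km/s
v_alt_proj = v_kepler(1.65e10) * math.sin(INCL)  # a disc only reaching r_circ
```

The ~50% larger projected velocity at r_circ illustrates why a lower r_o produces line profiles that are too broad.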
With those parameters, we calculated a disc model for 1 · 10^9 cm ≤ r ≤ 4 · 10^10 cm by dividing the disc into 20 rings, in order to obtain a smooth distribution of T_eff(r) with a maximal difference of ∼ 3500 K between neighbouring rings (see Table 1). The resulting disc is optically thick except for the outermost ring. This ring's effective temperature of 5862 K is more typical for a cold disc. As an example of the vertical structure of the disc model, the temperature, hydrogen ionisation fraction, density, and optical depth are plotted against the height above the disc midplane in Fig. 1 at a distance of 7.35 · 10^9 cm from the white dwarf. The temperature shows an inversion at the disc surface due to the heating by irradiation of the WD, before a strong drop occurs at log m ≈ −4; the temperature then rises slowly towards the disc's midplane. At log m ≈ −1 the disc becomes optically thick.

Figure 2 shows the integrated spectrum of the hot disc and the contribution of selected disc rings. Doppler broadening due to the Keplerian velocity is taken into account. In the optical the spectrum is characterised by hydrogen Balmer absorption lines. In the UV, strong absorption lines of the hydrogen Lyman series and He II appear. The latter lines originate in the inner disc rings, where T_eff becomes high enough to populate He II levels. This high T_eff is also the reason the inner disc rings dominate the spectrum in the UV range despite their small surface area compared to the outer rings. The situation is completely different in the optical. There the disc spectrum is dominated by the outermost ring due to its large surface area, so the weak absorption line of He II at 4686 Å, which is visible in the inner ring spectra, is outshone by the much larger continuum flux of the outer rings. For a comparison to observational results, Fig. 3 shows our synthetic spectrum of the hot accretion disc together with the spectra of Clarke et al. (1984), who observed SS Cyg during a rise to outburst; the model flux was multiplied by a constant factor to match the observed continuum flux.

Quiescence

Wood et al. (1986) studied the radial temperature distribution in the accretion disc of the DN Z Cha by eclipse mapping. In contrast to the T_eff ∝ r^{-3/4} power law expected for stationary accretion discs, they found a more or less constant value of the effective temperature at a level of several thousand Kelvin. This has been interpreted as a hint that the accretion discs of DN in quiescence are not stationary. Therefore we assumed a constant effective temperature of ∼ 4200 K throughout the disc, which is in the typical range for cold discs. To achieve this temperature for all rings, we adjusted the mass transport rates through the rings (see Table 2). We also adjusted the Reynolds number to get typical values for α in a disc in quiescence according to the DIM. The resulting kinematic viscosity is smaller than in the hot disc by a factor of 10, except for the outermost ring, where the viscosity is as high as for disc rings in outburst. For this disc ring, it was not possible to construct a low-viscosity model with strong emission lines.

Table 2. Parameters for the ring models of the cold disc in SS Cyg.

  #   r [10^9 cm]   Ṁ [M_⊙/yr]     Re      τ_tot   h [10^8 cm]
  1    4.00         1.4 · 10^-12   19000   0.27     0.71
  2    6.00         4.0 · 10^-12   16000   0.24     1.13
  3    8.00         1.0 · 10^-11   13000   0.28     1.59
  4    9.00         1.4 · 10^-11   13000   0.30     2.00
  5   10.00         1.9 · 10^-11   10000   0.29     2.34
  6   20.00         1.4 · 10^-10    3000   0.28     6.00
  7   40.00         1.0 · 10^-9      500   0.22    14.56

The number of disc rings required is much smaller than for the hot disc, as the change in spectral properties across the radius is marginal due to the constant temperature. Furthermore, we did not extend the cold disc as far in as the hot disc, but truncated the model at r = 4 · 10^9 cm.
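The run of mass transport rates in Table 2 can be recovered by inverting Eq. (1) for Ṁ at fixed T_eff ≈ 4200 K. A minimal sketch (helper name hypothetical; standard CGS constants and the SS Cyg values M_1 = 1.19 M_⊙, r_1 = 3.9 · 10^8 cm assumed) reproduces the innermost and outermost tabulated rates, 1.4 · 10^-12 and 1.0 · 10^-9 M_⊙/yr:

```python
import math

# Standard CGS constants (assumed values)
G, SIGMA   = 6.674e-8, 5.6704e-5
MSUN, YEAR = 1.989e33, 3.156e7
M1, R1     = 1.19 * MSUN, 3.9e8   # SS Cyg white dwarf mass and radius

def mdot_for_teff(r, teff):
    """Invert Eq. (1): mass transport rate [M_sun/yr] that yields
    the given effective temperature at radius r."""
    mdot = (teff**4 * 8.0 * math.pi * SIGMA * r**3 / (3.0 * G * M1)
            / (1.0 - math.sqrt(R1 / r)))          # [g/s]
    return mdot / MSUN * YEAR                      # [M_sun/yr]

mdot_in  = mdot_for_teff(4.0e9, 4200.0)   # innermost cold ring, ~1.4e-12
mdot_out = mdot_for_teff(4.0e10, 4200.0)  # outermost cold ring, ~1.0e-9
```

The strong radial increase of Ṁ needed to hold T_eff flat is the quantitative sign that the quiescent disc is not stationary.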
This is again justified by the constant effective temperature, which prevents a strong contribution of the inner rings to the UV flux, in contrast to the case of the hot disc (see Fig. 4). The resulting disc spectrum (Fig. 4) is compared to an observed spectrum taken from Martinez-Pais et al. (1994) in Fig. 5. For that purpose the model spectrum has been normalised to the local continuum flux. In principle, the model reproduces the hydrogen Balmer emission lines, but they are not as strong as in the observation. The helium emission lines are not seen in our model spectrum, and the reason may be that they form in the hot spot, which we have not included in our models. As an example, the vertical structure of the cold disc is shown in Fig. 6 at a radial distance of 4.0 · 10^10 cm. In contrast to the hot disc, the temperature does not increase towards the disc midplane but declines monotonically. Towards the disc surface, the irradiation of the WD again causes a temperature inversion. The entire cold disc is optically thin.

Rise to outburst

To examine the spectral evolution from quiescence to outburst, we combined rings of the cold and hot discs into a sequence of disc models such that this sequence simulates the propagation of the heating wave throughout the disc. In this way we studied the two different cases of outside-in and inside-out moving heating waves to achieve further insight into the processes taking place in the disc during the rise to outburst. For the outside-in outburst this sequence consists of five disc models in which the cold rings have been replaced by the next-neighbouring hot rings from the outside. The assembly of these models is shown in Table 3. The left panel of Fig. 7 shows the spectral evolution for this sequence from a pure cold disc to full outburst, as well as the left panel of Fig.
8, where the spectra are normalised to the local continuum. The Balmer series turns from emission to absorption immediately after the outermost disc ring has flipped into the hot state, because the overall disc flux in the optical is dominated by the flux of the outer rings, and these are dominated by absorption in the hot state.

Table 3. Assembly of disc models for the simulated outside-in outburst.

  #   Step 1     Step 2     Step 3     Step 4     Step 5
  1   4.00cold   4.00cold   4.00cold   4.00cold   4.00cold
  2   6.00cold   6.00cold   6.00cold   6.00cold   6.00cold
  3   8.00cold   8.00cold   8.00cold   8.00cold   7.35hot
  4   9.00cold   9.00cold   9.00cold   9.65hot    9.65hot
  5   10.0cold   10.0cold   13.5hot    13.5hot    13.5hot
  6   20.0cold   21.0hot    21.0hot    21.0hot    21.0hot
  7   40.0hot    40.0hot    40.0hot    40.0hot    40.0hot

Numbers in the "step" columns denote the radial position of the model in 10^9 cm; the following "cold" or "hot" labels whether a cold or hot ring was used.

Similarly, we modelled the inside-out outburst by a sequence of five disc models. In the first step the cold disc's innermost ring is replaced by the hot rings that lie inside of its radial position. At the same time the disc is extended inwards to the inner boundary of the hot disc at 1 · 10^9 cm. For optical spectra, to which we will restrict our discussion in the following, this simplification can be justified because the inner disc rings only contribute to the UV due to their high effective temperature and because they cover only a small surface area. In the subsequent steps, the next-neighbouring rings from the inside are replaced by hot ones. The complete assembly of the discs for the inside-out model sequence is shown in Table 4.

Table 4. Assembly of disc models for the simulated inside-out outburst.

  #   Step 1     Step 2     Step 3     Step 4     Step 5

The right panels of Figs. 7 and 8 show the spectral evolution for this sequence from a pure cold disc to full outburst.
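The ring-replacement bookkeeping of the outside-in sequence (Table 3 above) can be expressed as a small data structure. In this sketch the list names, the replacement mapping, and the helper function are hypothetical shorthand, not part of the AcDc code; radii are in units of 10^9 cm as in the tables:

```python
# Cold-disc ring radii (Table 2) and the hot rings that replace them
# when the heating wave passes (Table 3); the two innermost cold rings
# never switch within the five simulated steps.
cold = [4.00, 6.00, 8.00, 9.00, 10.0, 20.0, 40.0]
hot_replacement = {40.0: 40.0, 20.0: 21.0, 10.0: 13.5, 9.00: 9.65, 8.00: 7.35}

def outside_in_step(step):
    """Disc assembly after `step` rings (1 <= step <= 5), counted from
    the outside, have switched to the hot state, as in Table 3."""
    n = len(cold)
    rings = []
    for i, r in enumerate(cold):
        if i >= n - step:                      # the `step` outermost rings are hot
            rings.append((hot_replacement[r], "hot"))
        else:
            rings.append((r, "cold"))
    return rings

# Step 3 of Table 3: three outermost rings hot, the rest still cold.
assembly = outside_in_step(3)
```

Reading the steps in increasing order traces the outside-in heating wave; the inside-out sequence of Table 4 would be the mirror image, replacing rings from the inside.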
In contrast to the outside-in scenario, the hydrogen Balmer emission only diminishes slowly during the rise to outburst, while increasing absorption wings appear. One has to keep in mind that the steps of our sequences are not equidistant in time. For an outside-in outburst, for example, the heating wave moves inwards quickly, and according to our models the spectral lines change from emission to absorption as soon as a part of the outer region is in outburst, so one will observe an absorption-line spectrum during most of the rise of an outside-in outburst. In the case of an inside-out outburst, the heating wave moves rather slowly outwards. As our models show, the line spectrum of the disc does not change to pure absorption until the outermost regions are in outburst. Hence one would observe an emission-line spectrum during most of an inside-out outburst.

Decline

Decline from outburst to quiescence is mediated by a cooling wave. This wave always propagates outside-in, as argued by e.g. Warner (1995), so that the cooling can be studied by examining the inside-out moving heating wave of Table 4 in reverse order. According to the right panel of Fig. 7, the hydrogen Balmer lines evolve smoothly from absorption to pure emission.

Results and discussion

We used the models presented above to determine the nature of the outbursts in SS Cyg by comparing them to spectra available in the literature. This turned out to be rather difficult, because adequate time-resolved spectra during the rise to outburst are very rare, even for the well-studied case of SS Cyg. Martinez-Pais et al. (1996) presented time-resolved spectra for different outbursts of SS Cyg. Among them are two spectra taken during a rise to an outburst of the symmetric type. According to their Fig. 2, the hydrogen Balmer emission lines decrease slowly between these two spectra, while the absorption wings increase, as in the right panel of our Fig. 7 for the inside-out outburst.
This leads to an identification of symmetric outbursts in SS Cyg with inside-out outbursts, which is in good agreement with Smak's (1984a) description of type-B outbursts, but contrasts with the original conclusion of Martinez-Pais et al. (1996). They interpret the late appearance of the He II 4686 Å line as a consequence of an outside-in propagating heating wave, because they assumed that the He II 4686 Å line originates in the hot inner part of the disc. This is questionable in light of our models: if this assumption were true, the line should be significantly broader due to the higher Kepler velocity. "Some indication of an increase in the hot spot's vertical size and, perhaps, brightness" leads them to conclude further that symmetric outbursts are connected with an instability of the secondary star, so they favoured the MTI for this outburst. This is put into question by our models, which indicate an inside-out outburst in favour of the DIM.

The observations of Clarke et al. (1984) cover a complete outburst cycle of SS Cyg. The outburst shows an asymmetric light curve. Three of its spectra before and during rise, as well as during maximum, are shown in Fig. 3. The spectral evolution of this outburst differs significantly from what was observed by Martinez-Pais et al. (1996): quiescent Balmer line emission abruptly disappears at the onset of the rise to maximum, before full absorption sets in during the rise. This fits our outside-in model sequence, indicating that the asymmetric outbursts are indeed connected to outside-in, i.e. type-A outbursts following Smak (1984a). The fact that the observed spectra of Clarke et al. (1984) during full maximum do not show pure absorption lines like our model might be a consequence of the radial extension of the disc during outburst.
This is not considered in our model, although it is expected in the DIM due to the higher transport of angular momentum during outburst. It might be possible that this portion of the disc outshines the basic inner part of the disc due to its large surface area. If this portion of the disc then has properties comparable to the current outermost grid point, which is relatively cool and emission-dominated, or at least continuum-dominated, the resulting model spectrum of the extended disc might show no absorption lines anymore.

For the decline from outburst to quiescence, another study by Hessman et al. (1984) exists. Their Fig. 5 is in good agreement with the right panel of our Fig. 7 if read from top to bottom, which means that they witnessed an outside-in propagating cooling wave. This again agrees with the DIM. During decline the 4686 Å line of He II often shows a prominent emission feature, as in the study of Hessman et al. (1984). This does not appear in our model. If it originates in the disc, it must arise from the inner parts, because only there does T_eff become high enough to populate He II levels. Accordingly, the inner rings of our hot disc model show He II 4686 Å, however not in emission but in absorption. This supports the results of Unda-Sanzana et al. (2006), who identified the gas stream/disc impact region as the origin of the He II 4686 Å emission by means of Doppler tomography. That Hessman et al. (1984) observed central emission peaks in the Balmer lines right from the beginning of the decline might indicate that not the complete disc but only the inner parts participated in the outburst. If, for example, the disc stays cold for r > 1 · 10^10 cm, we only have to compare the lower five curves of the right panel in Fig. 7 to the observation. As Hessman et al. (1984) observed a short outburst without plateau, this would again be in good agreement with the DIM.
There the short outbursts are attributed to discs that are not completely in outburst, while plateaus are supposed to appear if matter is accreted with a constant rate through a disc that is completely in outburst. It will be interesting to extend our study to the UV range where especially the C  1550 Å line shows a similar behaviour to the Balmer lines. For that purpose, heavier elements must be included in the model calculations, and the influence of the disc wind, which becomes obvious in the P Cyg shaped profile of the C  1550 Å line in outburst, will be considered. Another question is the influence of metal opacities, which we have neglected here, on the hydrogen and helium lines. From our experience in working on stellar atmospheres (O and sdO stars), we would predict that metal line blanketing and surface cooling will produce slightly deeper H and He absorption lines in the hot disk; however, we expect no qualitative change in the optical spectrum. The situation is different in the cool ring models, which are optically thin. Work is in progress to investigate the metal line blanketing problem. Summary In this paper, we have presented NLTE model calculations for the accretion disc of SS Cyg in outburst and quiescence. The resulting synthetic spectra describe the observed optical spectra and their transition well from absorption during outburst to emission during quiescence. Simulations of the spectral evolution for outside-in and inside-out propagating heating waves were carried out. We compared them to published observations and conclude that symmetric outbursts belong to the inside-out type. This confirms DIM expectations (e.g. Smak 1984a), which are based on rise-time arguments and explicitly excludes the MTI model. In contrast, asymmetric outbursts seem to be outside-in outbursts. Fig. 1 . 1Vertical structure of the hot disc at a distance of 7.35 · 10 9 cm from the white dwarf. 
The physical variables are plotted against the column mass measured from the surface towards the midplane.

Fig. 2. Model spectra for the accretion disc of SS Cyg in outburst (uppermost curve). The other curves show the contribution of selected individual disc rings, starting with the outermost ring (top) and then continuing towards the inner disc edge. The inclination angle is 40°.

Fig. 3. Observed spectra of SS Cyg (from Clarke et al. 1984) during rise to outburst (outburst maximum corresponds to 7/28/83).

Fig. 4. Model spectra for the accretion disc of SS Cyg in quiescence (uppermost curve). The other curves show the contribution of selected individual disc rings, starting with the outermost ring (top) and then continuing towards the inner disc edge. The inclination is 40°. The spectral lines are getting broader due to the higher Kepler velocities of the inner disc rings.

Fig. 5. Observed spectra of the accretion disc of SS Cyg during quiescence (Martinez

Fig. 6. Vertical structure of the cold disc at a distance of 4.0 · 10^10 cm from the white dwarf. The physical variables are plotted against the column mass measured from the surface towards the midplane.

Fig. 8. Spectral evolution between 4000 and 5300 Å for an outside-in outburst (left panel) and an inside-out outburst (right panel).

Table 1. Parameters of the rings for the hot disc in SS Cyg. τ_tot is the total Rosseland optical depth from top to disc midplane and h = H/2 the vertical extension of the disc from the midplane.
The other symbols are defined in the text.

  #    r [10^9 cm]   Re     T_eff [K]   τ_tot   h [10^8 cm]
  1    1.00          3200   74912       104     0.21
  2    1.10          3200   71056       110     0.23
  3    1.22          3000   66935       114     0.26
  4    1.35          2900   63013       119     0.30
  5    1.50          2800   59074       124     0.34
  6    1.66          2600   55479       125     0.39
  7    1.84          2500   51916       127     0.44
  8    2.05          2400   48403       128     0.50
  9    2.30          2350   44873       130     0.58
  10   2.60          2300   41350       131     0.67
  11   2.97          2300   37798       134     0.79
  12   3.43          2300   34259       139     0.93
  13   4.02          2200   30705       145     1.13
  14   4.80          2100   27135       155     1.40
  15   5.85          1700   23610       163     1.81
  16   7.35          1620   20080       192     2.38
  17   9.65          1550   16525       221     3.29
  18   13.50         1450   12969       258     4.93
  19   21.00         1200   9404        169     8.14
  20   40.00         500    5862        1.61    16.18

Fig. 7. Spectral evolution between 4000 and 5300 Å for an outside-in outburst (left panel) and an inside-out outburst (right panel). The lowermost graphs show a pure cold disc that evolves to full outburst (uppermost graph) over steps 1 to 5 of Tables 3 and 4, respectively.

M. Kromer, T. Nagel, and K. Werner: NLTE accretion disc spectra for SS Cyg

References

Bath, G. T. 1975, MNRAS, 171, 311
Clarke, J. T., Bowyer, S., & Capel, D. 1984, ApJ, 287, 845
Faulkner, J., Lin, D. N. C., & Papaloizou, J. 1983, MNRAS, 205, 359
Hellier, C. 2001, Cataclysmic Variable Stars (Springer Praxis)
Hessman, F. V., Robinson, E. L., Nather, R. E., & Zhang, E.-H. 1984, ApJ, 286, 747
Long, K. S., Froning, C. S., Knigge, C., et al. 2005, ApJ, 630, 511
Lynden-Bell, D., & Pringle, J. E. 1974, MNRAS, 168, 603
Martinez-Pais, I. G., Giovannelli, F., Rossi, C., & Gaudenzi, S. 1994, A&A, 291, 455
Martinez-Pais, I. G., Giovannelli, F., Rossi, C., & Gaudenzi, S. 1996, A&A, 308, 833
Meyer, F., & Meyer-Hofmeister, E. 1981, A&A, 104, L10
Nagel, T. 2003, PhD thesis, Eberhard-Karls-Universität Tübingen
Nagel, T., Dreizler, S., Rauch, T., & Werner, K. 2004, A&A, 428, 109
Osaki, Y. 1974, PASJ, 26, 429
Plavec, M., & Kratochvil, P. 1964, Bull. Astr. Inst. Czechosl., 15
Rauch, T., & Deetjen, J. L. 2003, in Stellar Atmosphere Modelling, ed. I. Hubeny, D. Mihalas, & K. Werner, ASP Conference Series, 288, 103
Ritter, H., & Kolb, U. 2003, A&A, 404, 301
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Smak, J. 1984a, PASP, 96, 5
Smak, J. 1984b, Acta Astron., 34, 317
Unda-Sanzana, E., Marsh, T. R., & Morales-Rueda, L. 2006, MNRAS, 369, 805
Warner, B. 1995, Cataclysmic Variable Stars, Cambridge Astrophysics Series, Vol. 28 (Cambridge University Press)
Wood, J., Horne, K., Berriman, G., et al. 1986, MNRAS, 219, 629
[]
[ "A prognosis oriented microscopic stock market model", "A prognosis oriented microscopic stock market model" ]
[ "Christian Busshaus \nInstitut für Theoretische Physik\nUniversität zu Köln\n50923KölnGermany\n", "Heiko Rieger \nInstitut für Theoretische Physik\nUniversität zu Köln\n50923KölnGermany\n\nNIC c/o Forschungszentrum Jülich\n52425JülichGermany\n" ]
[ "Institut für Theoretische Physik\nUniversität zu Köln\n50923KölnGermany", "Institut für Theoretische Physik\nUniversität zu Köln\n50923KölnGermany", "NIC c/o Forschungszentrum Jülich\n52425JülichGermany" ]
[]
We present a new microscopic stochastic model for an ensemble of interacting investors that buy and sell stocks in discrete time steps via limit orders based on individual forecasts about the price of the stock. These orders determine the supply and demand fixing after each round (time step) the new price of the stock according to which the limited buy and sell orders are then executed and new forecasts are made. We show via numerical simulation of this model that the distribution of price differences obeys an exponentially truncated Levy-distribution with a self similarity exponent µ ≈ 5.
10.1016/s0378-4371(99)00060-6
[ "https://arxiv.org/pdf/cond-mat/9903079v1.pdf" ]
122,289,220
cond-mat/9903079
2f8548afd86594bcec9596c7f061e70f5e92d7c7
A prognosis oriented microscopic stock market model

4 Mar 1999 (February 25, 1999)

Christian Busshaus, Institut für Theoretische Physik, Universität zu Köln, 50923 Köln, Germany
Heiko Rieger, Institut für Theoretische Physik, Universität zu Köln, 50923 Köln, Germany; NIC c/o Forschungszentrum Jülich, 52425 Jülich, Germany

PACS numbers: 05.40.-a, 05.40.Fb, 05.65.+b, 89.90.+n
Keywords: stock market models, interacting investors, price fluctuations, truncated Levy distribution

We present a new microscopic stochastic model for an ensemble of interacting investors that buy and sell stocks in discrete time steps via limit orders based on individual forecasts about the price of the stock. These orders determine the supply and demand, fixing after each round (time step) the new price of the stock, according to which the limited buy and sell orders are then executed and new forecasts are made. We show via numerical simulation of this model that the distribution of price differences obeys an exponentially truncated Levy-distribution with a self-similarity exponent µ ≈ 5.

I. INTRODUCTION

In recent years a number of microscopic models for price fluctuations have been developed by physicists [1][2][3][4][5][6] and economists [7,8]. The purpose of these models is, in our view, not to make specific predictions about the future developments of the stock market (for instance with the intention to make a fortune) but to reproduce the universal statistical properties of liquid markets. Some of these properties are an exponentially truncated Levy-distribution for the price differences on short time scales (significantly less than one month) and a linear autocorrelation function of the prices which decays to zero within a few minutes [9][10][11][12][13]. We present a new microscopic model with interacting investors in the spirit of [8,2,14] that speculate on price changes that are produced by themselves.
The main features of the model are individual forecasts (or prognoses) for the stock price in the future, a very simple trading strategy to gain profit, limited orders for buying and selling stocks [7], and various versions of interaction among the investors during the stage of forecasting the future price of a stock. The paper is organized as follows: in section 2 we define our model; in section 3 we present the results of numerical simulations of this model, including specific examples of the price fluctuations using different interactions among the investors, the autocorrelation function of the price differences and, most importantly, their distribution, which turns out to be an (exponentially) truncated Levy distribution. Section 4 summarizes our findings and provides an outlook for further refinements of the model.

II. THE MODEL

The system consists of one single stock with actual price K(t) and N investors labeled by an index i = 1, ..., N. In the most simplified version of the model the investors have identical features and are described at each time step by three variables:

P_i(t): the personal prognosis of investor i at time t about the price of the stock at time t + 1.
C_i(t): the cash capital (real variable) of investor i at time t.
S_i(t): the number of shares (integer variable) of investor i at time t.

The system at time t = 0 is initialized with some appropriately generated initial values for P_i(t = 0), C_i(t = 0) and S_i(t = 0), plus a particular price for the stock. The dynamics of the system evolves in discrete time steps t = 1, 2, 3, ... and is defined as follows. Suppose time step t has been finished, i.e. the variables K(t), P_i(t), C_i(t) and S_i(t) are known. Then the following consecutive procedures are executed.
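The per-investor state above can be captured in a small Python sketch. This is an illustration, not the authors' code; the class and function names are ours, and the initial values follow the simulation parameters given later in Sec. III (K_0 = 100, C_i = 50000, S_i = 500). The paper only says the prognoses are "appropriately generated", so starting each prognosis at the initial price is our assumption.

```python
from dataclasses import dataclass

@dataclass
class Investor:
    prognosis: float  # P_i(t)
    cash: float       # C_i(t), real-valued
    shares: int       # S_i(t), integer-valued

def init_market(n_investors=1000, k0=100.0, cash0=50000.0, shares0=500):
    # Homogeneous initial conditions as in Sec. III; every investor
    # starts with the same state, prognosis set to the initial price.
    investors = [Investor(k0, cash0, shares0) for _ in range(n_investors)]
    return k0, investors
```

With these values each trader's total capital is initially cash + shares · K_0 = 100000 units, matching the figure quoted in Sec. III.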
Make Prognosis. Each investor sets up a new personal prognosis via

P_i(t + 1) = (x P_i(t) + (1 − x) K(t)) · e^{r_i},   (1)

where x ∈ [0, 1] is a model dependent weighting factor (for the investor's old prognosis and the price of the stock) and the r_i are independent identically distributed random variables of mean zero and variance σ that mimic a (supposedly) stochastic component in the individual prognosis (external influence, greed, fear, sentiments, ...; see also [7]).

Make Orders. Each investor gives his limit order on the basis of his old and his new prognosis:

P_i(t + 1) − P_i(t) > 0: investor i puts a buy order limited by P_i(t), which means that he wants to transform all cash C_i(t) into int[C_i(t)/P_i(t)] shares if K(t + 1) ≤ P_i(t).

P_i(t + 1) − P_i(t) < 0: investor i puts a sell order limited by P_i(t), which means that he wants to transform all stocks into S_i(t) · K(t + 1) cash if K(t + 1) ≥ P_i(t).

Now let i_1, i_2, ..., i_{N_A} be the investors that have put a sell order, with limits P_{i_1}(t) ≤ P_{i_2}(t) ≤ ... ≤ P_{i_{N_A}}(t), and let j_1, j_2, ..., j_{N_B} be the investors that have put a buy order, with limits P_{j_1}(t) ≥ P_{j_2}(t) ≥ ... ≥ P_{j_{N_B}}(t).

Calculate New Price. Define the supply and demand functions A(K) and B(K), respectively, via

A(K) = Σ_{a=1}^{N_A} S_{i_a} · θ(K − P_{i_a}(t)),
B(K) = Σ_{b=1}^{N_B} ΔS_{j_b} · [1 − θ(K − P_{j_b}(t))],   (2)

with ΔS_{j_b} = int[C_{j_b}(t)/P_{j_b}(t)] the number of shares demanded by investor j_b, and θ(x) = 1 for x ≥ 0 and θ(x) = 0 for x < 0. Then the total turnover at price K would be

Z(K) = min{A(K), B(K)},   (3)

and the new price is determined in such a way that Z(K) is maximized. Since Z(K) is a piecewise constant function, it is maximal on a whole interval, say K ∈ [P_{i_max}, P_{j_max}] for some i_max ∈ {i_1, ..., i_{N_A}} and j_max ∈ {j_1, ..., j_{N_B}}.
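The prognosis update of Eq. (1) and the resulting order decision can be sketched as follows. This is a minimal illustration under our own naming; the random shock is taken Gaussian, as in the simulations of Sec. III.

```python
import math
import random

def new_prognosis(p_old, price, x, sigma, rng):
    # Eq. (1): P_i(t+1) = (x*P_i(t) + (1-x)*K(t)) * exp(r_i),
    # with r_i an independent zero-mean Gaussian shock of width sigma.
    return (x * p_old + (1.0 - x) * price) * math.exp(rng.gauss(0.0, sigma))

def limit_order(p_new, p_old, cash, shares):
    # Rising prognosis: limited buy order for all affordable shares;
    # falling prognosis: limited sell order for all shares.
    # In both cases the limit is the OLD prognosis P_i(t).
    if p_new > p_old and cash > 0:
        return ("buy", p_old, int(cash / p_old))
    if p_new < p_old and shares > 0:
        return ("sell", p_old, shares)
    return None
```

Note that an investor always goes "all in": the strategy has no position sizing, which is part of what makes the model so simple.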
Then we define the new price to be the weighted mean

K(t + 1) = [P_{i_max} · A(P_{i_max}) + P_{j_max} · B(P_{j_max})] / [A(P_{i_max}) + B(P_{j_max})].   (4)

Note that the weighting by the total supply and demand takes care of the price being slightly higher (lower) than the arithmetic mean (P_{i_max} + P_{j_max})/2 if the supply is smaller (larger) than the demand.

Execute Orders. Finally the sell orders of the investors i_1, ..., i_max and the buy orders of the investors j_1, ..., j_max are executed at the new price K(t + 1), i.e. the buyers j_1, ..., j_max update

S_{j_b}(t + 1) = S_{j_b}(t) + int[C_{j_b}(t)/P_{j_b}(t)],
C_{j_b}(t + 1) = C_{j_b}(t) − K(t + 1) · (S_{j_b}(t + 1) − S_{j_b}(t)),   (5)

and the investors i_1, ..., i_max sell all their shares at price K(t + 1):

S_{i_a}(t + 1) = 0,
C_{i_a}(t + 1) = C_{i_a}(t) + S_{i_a}(t) · K(t + 1).   (6)

If A(P_{i_max}) < B(P_{j_max}), then investor j_max cannot buy all int[C_{j_max}(t)/P_{j_max}(t)] shares but only those remaining, whereas in the case A(P_{i_max}) > B(P_{j_max}) investor i_max cannot sell all his shares. The orders of the investors i_{max+1}, ..., i_{N_A} and j_{max+1}, ..., j_{N_B} cannot be executed due to their limits. The execution of orders completes one round; measurements of observables can be made and then the next time step will be processed.

A huge variety of interactions among the investors can be modeled; here we restrict ourselves to three different versions taking place at the level of the individual prognosis genesis:

I_1: Each investor i knows the prognoses P_{i_1}(t), ..., P_{i_m}(t) of m randomly selected (once at the beginning of the simulation) neighbors. When making an order, he modifies his strategy and, in the case

P_i(t + 1) − [g_i(t) P_i(t) + Σ_{n=1}^{m} g_{i_n}(t) P_{i_n}(t)] < (>) 0,   (7)

puts a buy (sell) order still limited by his own prognosis P_i(t). We choose the weights g_i(t) = 1/2 and g_{i_n}(t) = 1/2m for n = 1, ..., m.
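The price-fixing step of Eqs. (2)-(4) can be sketched as below. This is our own simplified reading: we evaluate Z(K) only at the posted limits, use a non-strict convention for the demand (consistent with the execution rule "buy if K(t+1) ≤ P_i(t)"), and resolve the flat maximum of Z by the weighted mean of its extreme limits; corner cases such as an empty order book or several disjoint maximising intervals are not handled.

```python
def clearing_price(sells, buys):
    # sells, buys: lists of (limit, shares) orders.
    # A(K): shares offered at sell limits <= K (supply),
    # B(K): shares wanted at buy limits >= K (demand).
    def A(K):
        return sum(s for lim, s in sells if K >= lim)
    def B(K):
        return sum(s for lim, s in buys if K <= lim)
    limits = sorted({lim for lim, _ in sells} | {lim for lim, _ in buys})
    zmax = max(min(A(K), B(K)) for K in limits)      # Eq. (3)
    best = [K for K in limits if min(A(K), B(K)) == zmax]
    p_lo, p_hi = best[0], best[-1]
    # Eq. (4): supply/demand-weighted mean of the bounding limits.
    return (p_lo * A(p_lo) + p_hi * B(p_hi)) / (A(p_lo) + B(p_hi))
```

For one sell order at 98 facing an equal-sized buy order at 102 this gives the midpoint 100; if the demand is three times the supply, the price is pulled towards the buy limit, as the remark after Eq. (4) describes.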
I_2: In addition to interaction I_1, investor i changes the weights g after the calculation of the new price K(t + 1) according to the success of the prognoses:

g_{i−}(t + 1) = g_{i−}(t) − Δg,
g_{i+}(t + 1) = g_{i+}(t) + Δg,   (8)

where for each investor i the index i− (i+) denotes the investor from the set {i, i_1, ..., i_m} with the worst (best) prognosis, i.e.:

i− ∈ {i, i_1, ..., i_m} such that abs[P_{i−}(t) − K(t + 1)] is maximal,
i+ ∈ {i, i_1, ..., i_m} such that abs[P_{i+}(t) − K(t + 1)] is minimal.   (9)

The weight g_i is forced to be positive, because an investor should believe in his own prognosis P_i(t).

I_3: In addition to interaction I_2, neighbors with weights g_{i−}(t + 1) < 0 are replaced by randomly selected new neighbors.

III. RESULTS

In this section we present the results of numerical simulations of the model described above. In what follows we consider a system with 1000 investors and build ensemble averages over 10000 independent samples (i.e. simulations) of the system. We checked that the results we are going to present below do not depend on the system size (the number of traders). When changing the system size, i.e. the number N of investors, the statistical properties of the price differences do not change qualitatively. Increasing N only decreases the average volatility (variance of the price changes). For concreteness we have chosen the following parameters: the initial price of the stock is K_0 = 100 (arbitrary units, [7]). Each trader has initially C_i(t = 0) = 50000 units of cash and S_i(t = 0) = 500 stocks (thus the total capital of each trader is initially 100000 units). The standard deviation of the Gaussian random variable r_i is σ = 0.01 (with mean zero). We performed the simulations over 1000 time steps, which is roughly 10 times longer than the transient time of the process for these parameters. In other words, we are looking at its stationary properties.
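The weight update of Eqs. (8)-(9) can be sketched as follows. Names are ours; the paper's extra rules (keeping the investor's own weight positive, and under I_3 replacing neighbours whose weight drops below zero) are deliberately omitted from this minimal version.

```python
def update_weights(weights, prognoses, new_price, dg=0.01):
    # Eqs. (8)-(9): among the investor's own prognosis and those of
    # his m neighbours, shift weight dg from the worst forecaster
    # (largest |P - K(t+1)|) to the best one (smallest |P - K(t+1)|).
    errors = [abs(p - new_price) for p in prognoses]
    worst = errors.index(max(errors))
    best = errors.index(min(errors))
    w = list(weights)
    w[worst] -= dg
    w[best] += dg
    return w
```

Because one weight gains exactly what another loses, the total weight is conserved by the update.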
First we should note that in the deterministic case σ = 0 no trade would take place [1]; hence the stochastic component in the individual forecasts is essential for any interesting time evolution of the stock market price. We focus on the time dependence of the price K(t), the price change Δ_T(t) = K_{t+T} − K_t over an interval T, their time dependent autocorrelation

C_T(τ) = [⟨Δ_T(t + τ) Δ_T(t)⟩ − ⟨Δ_T(t + τ)⟩⟨Δ_T(t)⟩] / [⟨(Δ_T(t))^2⟩ − ⟨Δ_T(t)⟩^2],   (10)

and their probability distribution P(Δ_T(t)). The statistical properties of the price changes produced by our model depend very sensitively on the parameter x in equation (1). In particular, for the case x = 1 it turns out that the total turnover decays like t^{-1/2} in the interaction-free case, which implies that after a long enough time no investor will buy or sell anything anymore. However, only an infinitesimal deviation from x = 1 leads to a saturation of the total turnover at some finite value, and trading will never cease. In Figs. 1-4 we present the results of the interaction-less case with x = 0 (Fig. 1) and x = 1 (Fig. 2) and contrast them with the results of the model with interactions I_1, also for x = 0 (Fig. 3) and x = 1 (Fig. 4). In the opposite case x = 1, investor i makes his new prognosis P_i(t + 1) based on his own old one and never looks at the current stock price. Now we can show that the distribution of the price differences decays exponentially in its asymptotics, but the self-similarity exponent 1/µ ≈ 0.2 is too small to agree with a Levy stable distribution. The autocorrelation function of the price differences decays very quickly, so that there are significant linear anti-correlations only between consecutive differences. The self-similarity exponent has been determined via the scaling relation P(Δ_T = 0) ~ T^{-1/µ} and a linear fit to the data of P(Δ_T = 0) versus T in a log-log plot.
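The normalised autocorrelation of Eq. (10) has a straightforward sample estimator, sketched below with our own naming (a plain lagged covariance over variance, without any bias correction).

```python
def autocorr(deltas, tau):
    # Sample estimate of Eq. (10): normalised linear autocorrelation
    # of the price differences Delta_T(t) at lag tau.
    n = len(deltas) - tau
    x = deltas[:n]              # Delta_T(t)
    y = deltas[tau:tau + n]     # Delta_T(t + tau)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum(a * b for a, b in zip(y, x)) / n - my * mx
    var = sum(a * a for a in x) / n - mx * mx
    return cov / var
```

A perfectly alternating series of price changes gives C(1) = -1 and C(2) = +1, the extreme case of the anti-correlation between consecutive differences mentioned above.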
These least-squares fits yield the relative errors for our estimates of the self-similarity exponent 1/µ in the table above, which lie between 0.1% and 0.3%.

For x = 0, investor i does not look at his old prognosis but only at the actual stock price when making a new prognosis. In this case the distribution of the price differences can be fitted very well by a Gaussian distribution, irrespective of the version of interaction or no interaction. The self-similarity exponent 1/µ ≈ 0.5 agrees with the scaling behavior of a Gaussian distribution. The autocorrelation function of the price differences decays, alternating in sign, to zero within a few time steps.

IV. SUMMARY AND OUTLOOK

We presented a new microscopic model for liquid markets that produces an exponentially truncated Levy-distribution with a self-similarity exponent 1/µ ≈ 0.2 for the price differences on short time scales. Studying the distribution on longer time scales, we find that it converges to a Gaussian distribution. The autocorrelation function of the price changes decays to zero within a few time steps. The statistical properties of our prognosis oriented model depend very sensitively on the rules by which the investors make their prognoses. There are many possible variations of our model that could be studied. It is plausible that a heterogeneous system of traders leads to stronger price fluctuations and thus a smaller value for the self-similarity exponent µ (which appears to be 1/µ ≈ 0.7 for real stock price fluctuations [10]). The starting wealth could be distributed according to a power law (comparable with the cluster sizes in the Cont-Bouchaud model). Or the investors could have different rules for making prognoses and follow different trading strategies. Another possible variation is to implement a threshold in the simple strategy in order to simulate risk aversion (the value of the threshold could depend on the actual volatility). Unfortunately, forecasts for real stock markets cannot be made with our model, because it is a stochastic model. We see possible applications for this model in the pricing and the risk measurement of complex financial derivatives.

FIG. 1. Results of numerical simulations for the model without interactions (I_0) and x = 0 (i.e. investors look only at their old prognosis P_i(t)). Shown are the price fluctuations for one sample (top), the autocorrelation function C_T(τ) for T = 1 (middle) and the probability distribution P(Δ_T) of the price differences for T = 1 (bottom).

FIG. 2. The same as Fig. 1, however with x = 1 (i.e. investors look only at the old price K(t)).

FIG. 3. The same as Fig. 1, however with interactions I_1 (see text) and x = 0 (i.e. investors look only at their old prognosis P_i(t)).

FIG. 4. The same as Fig. 1, however with interactions I_1 (see text) and x = 1 (i.e. investors look only at the old price K(t)). Note the spikes in the time dependence of the price, marking the significant enhancement of price fluctuations that lead to the truncated Levy-distribution.

FIG. 5. The price fluctuations K(t) (top) and the price difference distribution P(Δ_1) (bottom) of the model with interactions of the investors I_2 (left) and I_3 (right). The delta peak at Δ_1 = 0 comes from the events where no trade took place.

References

[1] M. Levy, H. Levy, and S. Solomon, J. Physique I 5, 1087 (1995).
[2] R. Cont and J.-P. Bouchaud, preprint cond-mat/9712318 v2 (1997).
[3] P. Bak, M. Paczuski, and M. Shubik, Physica A 246, 430 (1997).
[4] G. Caldarelli, M. Marsili, and Y.-C. Zhang, Europhysics Letters 40, 479.
[5] D. Chowdhury and D. Stauffer, Europ. Phys. J. B, in press; cond-mat/9810162 v2 (1998).
[6] S. Moss de Oliveira, P. M. C. de Oliveira, and D. Stauffer, Evolution, Money, War and Computers (Teubner, Stuttgart-Leipzig, 1999).
[7] G. W. Kim and H. M. Markowitz, J. Portfolio Management 16, 45 (1989).
[8] T. Lux and M. Marchesi, Nature 397, 498 (1999); T. Lux, Economic J. 105, 881 (1995).
[9] J.-P. Bouchaud, Physica A 263, 415 (1999).
[10] R. N. Mantegna and H. E. Stanley, Physica A 254, 77 (1998).
[11] P. Gopikrishnan, M. Meyer, L. A. N. Amaral, and H. E. Stanley, Eur. Phys. J. B 3, 139 (1998); T. Lux, Appl. Financial Economics 6, 463 (1996).
[12] A. Matacz, cond-mat/9710197 (1997).
[13] R. Cont, M. Potters, and J.-P. Bouchaud, cond-mat/9705087 (1997).
[14] J. D. Farmer, Market force, ecology and evolution, preprint adap-org/9812005.
[]
[ "Uniformly perfect finitely generated simple left orderable groups", "Uniformly perfect finitely generated simple left orderable groups" ]
[ "James Hyde ", "Yash Lodha ", "Andrés Navas ", "Cristóbal Rivas " ]
[]
[]
We show that the finitely generated simple left orderable groups G ρ constructed by the first two authors in[8]are uniformly perfect -each element in the group can be expressed as a product of three commutators of elements in the group. This implies that the group does not admit any homogeneous quasimorphism. Moreover, any nontrivial action of the group on the circle, that lifts to an action on the real line, admits a global fixed point. Most strikingly, it follows that the groups are examples of left orderable monsters, which means that any faithful action on the real line without a global fixed point is globally contracting. This answers Question 4 from the 2018 ICM proceedings article of the third author. (This question has also been answered simultaneously and independently, using completely different methods, by Matte Bon and Triestino in[11].) To prove our results, we provide a certain characterisation of elements of the group G ρ which is a useful new tool in the study of these examples.Corollary 0.2. Let ρ be a quasi-periodic labelling. Then every faithful action of G ρ on S 1 , that lifts to an action on the real line, admits a global fixed point on S 1 .Recall that for every action of a finitely generated group G by orientation preserving homeomorphisms of the real line without global fixed points, there are one of three possibilities:(i) There is a σ-finite measure µ that is invariant under the action.(ii) The action is semiconjugate to a minimal action for which every small enough interval is sent into a sequence of intervals that converge to a point under well chosen group elements, but this property does not hold for every bounded interval. (Here, by a semiconjugacy we roughly mean a factor action for which the factor map is a continuous, non decreasing, proper map of the real line.) 
(iii) The action is globally contracting; more precisely, it is semiconjugate to a minimal one for which the contraction property above holds for all bounded intervals.We obtain the following as an immediate consequence of Corollary 0.2.Corollary 0.3. Let ρ be a quasi-periodic labelling. Then any faithful action of the group G ρ on R without global fixed points is of type (iii).This answers the following question of the third author.Question 0.4. (Navas, ICM proceedings 2018 Question 4) Does there exist an infinite, finitely-generated group that acts on the real line all of whose actions by orientationpreserving homeomorphisms of the line without global fixed points are of type (iii)?Remark 0.5. The above question has been answered simultaneously and independently by Matte Bon and Triestino in[11]. They provide a new family of finitely generated simple left orderable groups, which are overgroups of the groups G ρ , and prove the analog of Corollary 0.2 for that family. Their methods are completely different from ours.
10.1017/etds.2019.59
[ "https://arxiv.org/pdf/1901.03314v2.pdf" ]
119,171,854
1901.03314
ee85b3b355606e5bb371500c1b93b0cb24546220
Uniformly perfect finitely generated simple left orderable groups 14 Jan 2019 James Hyde Yash Lodha Andrés Navas Cristóbal Rivas Uniformly perfect finitely generated simple left orderable groups 14 Jan 2019 We show that the finitely generated simple left orderable groups G ρ constructed by the first two authors in[8]are uniformly perfect -each element in the group can be expressed as a product of three commutators of elements in the group. This implies that the group does not admit any homogeneous quasimorphism. Moreover, any nontrivial action of the group on the circle, that lifts to an action on the real line, admits a global fixed point. Most strikingly, it follows that the groups are examples of left orderable monsters, which means that any faithful action on the real line without a global fixed point is globally contracting. This answers Question 4 from the 2018 ICM proceedings article of the third author. (This question has also been answered simultaneously and independently, using completely different methods, by Matte Bon and Triestino in[11].) To prove our results, we provide a certain characterisation of elements of the group G ρ which is a useful new tool in the study of these examples.Corollary 0.2. Let ρ be a quasi-periodic labelling. Then every faithful action of G ρ on S 1 , that lifts to an action on the real line, admits a global fixed point on S 1 .Recall that for every action of a finitely generated group G by orientation preserving homeomorphisms of the real line without global fixed points, there are one of three possibilities:(i) There is a σ-finite measure µ that is invariant under the action.(ii) The action is semiconjugate to a minimal action for which every small enough interval is sent into a sequence of intervals that converge to a point under well chosen group elements, but this property does not hold for every bounded interval. 
(Here, by a semiconjugacy we roughly mean a factor action for which the factor map is a continuous, non decreasing, proper map of the real line.) (iii) The action is globally contracting; more precisely, it is semiconjugate to a minimal one for which the contraction property above holds for all bounded intervals.We obtain the following as an immediate consequence of Corollary 0.2.Corollary 0.3. Let ρ be a quasi-periodic labelling. Then any faithful action of the group G ρ on R without global fixed points is of type (iii).This answers the following question of the third author.Question 0.4. (Navas, ICM proceedings 2018 Question 4) Does there exist an infinite, finitely-generated group that acts on the real line all of whose actions by orientationpreserving homeomorphisms of the line without global fixed points are of type (iii)?Remark 0.5. The above question has been answered simultaneously and independently by Matte Bon and Triestino in[11]. They provide a new family of finitely generated simple left orderable groups, which are overgroups of the groups G ρ , and prove the analog of Corollary 0.2 for that family. Their methods are completely different from ours. In 1980 Rhemtulla asked whether there exist finitely generated simple left orderable groups (see [8] for a discussion around the history of the problem and references). This question was answered in the affirmative by the first two authors in [8]. The construction takes as an input a certain quasi-periodic labelling ρ of the set 1 2 Z which is a map ρ : 1 2 Z → {a, b, a −1 , b −1 } that satisfies a certain set of axioms. (See the preliminaries section for details.) Such labellings exist and are easy to construct explicitly. For each such labelling ρ, one constructs an explicit group action G ρ < Homeo + (R) which is a finitely generated simple left orderable group. 
Given a group G and an element f ∈ [G, G], the integer cl(f) is defined as the smallest k such that f can be expressed as a product of k commutators of elements in G. Our main theorem is the following:

Theorem 0.1. Let ρ be a quasi-periodic labelling. Then for each element f ∈ G_ρ, it holds that cl(f) ≤ 3.

Recall that a homogeneous quasimorphism is a quasimorphism φ : G → R with the property that the restriction of φ to any cyclic subgroup is a homomorphism. As a consequence of Theorem 0.1 we obtain that the stable commutator length vanishes, and hence the group does not admit any nontrivial homogeneous quasimorphism. Using the work of Ghys [7], this allows us to show the following. Corollaries 0.2 and 0.3 should be compared with similar theorems for lattices in higher rank simple Lie groups. For them, it is known that every action on the circle has a finite orbit; therefore, up to a finite index group, they admit a global fixed point [3,6]. However, it is still unknown whether they admit nontrivial actions on the line or not, yet several definitive results are known [9,10,13]. In case one of these lattices admits such an action, it is not hard to see that it would also provide an affirmative answer to the Question above (see [5]). The proof of Theorem 0.1 uses the following new description of the group, which is the main technical result of the article. Let ρ be a quasi-periodic labelling. (Recall this notion from [8], or see Definition 1.5.) Given an x ∈ R and n ∈ N, we define a word W(x, n) as follows. Let y ∈ (1/2)Z \ Z be such that x ∈ [y − 1/2, y + 1/2). Then we define

W(x, n) = ρ(y − n/2) ρ(y − (n − 1)/2) ... ρ(y) ... ρ(y + (n − 1)/2) ρ(y + n/2).

For each integer n ∈ Z, we denote by ι_n the unique orientation reversing isometry ι_n : [n, n + 1) → (n, n + 1].
For x ∈ R, we define the map ι : R → R as x · ι = x · ι_n for the n ∈ Z such that x ∈ [n, n + 1). In what follows, by a countably singular piecewise linear homeomorphism we mean a piecewise linear homeomorphism with a countable set of singularities (or breakpoints).

Definition 0.6. Let K_ρ be the set of homeomorphisms f ∈ Homeo_+(R) satisfying the following:

1. f is a countably singular piecewise linear homeomorphism of R with a discrete set of singularities that lie in Z[1/2].
2. f′(x), wherever it exists, is an integer power of 2.
3. There is a k_f ∈ N such that the following holds.
3.a Whenever x, y ∈ R satisfy x − y ∈ Z and W(x, k_f) = W(y, k_f), it holds that x − x · f = y − y · f.
3.b Whenever x, y ∈ R satisfy x − y ∈ Z and W(x, k_f) = W^{−1}(y, k_f), it holds that x − x · f = y′ · f − y′, where y′ = y · ι.

It is not hard to check that K_ρ is a group.

Remark 0.7. Note that given an element f ∈ K_ρ and a number k_f ∈ N satisfying the conditions of Definition 0.6, any number k′_f ∈ N such that k′_f > k_f also satisfies the conditions of the Definition.

Theorem 0.8. K_ρ ≅ G_ρ.

This characterisation provides a useful new definition of the groups G_ρ as groups of homeomorphisms of the real line satisfying a natural set of criteria. This also provides useful new structural results such as the following.

Proposition 0.9. Let ρ be a quasi-periodic labelling. Given any element f ∈ G_ρ, there are elements g_1, g_2 ∈ G_ρ such that the following hold:
1. f = g_1 g_2.
2. g_2 is a commutator in G_ρ.
3. There is a subgroup K < G_ρ such that K ≅ F′ ⊕ ... ⊕ F′ and g_1 ∈ K.
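The combinatorial bookkeeping behind W(x, n) and the map ι is concrete enough to sketch in code. This is purely illustrative: the toy labelling used in the test below is not quasi-periodic in the paper's sense, the letter encoding (capitals standing in for inverses) is ours, and the sketch only demonstrates how y and the window of labels are located.

```python
import math

def word_W(rho, x, n):
    # W(x, n): take the unique half-integer y in (1/2)Z \ Z with
    # x in [y - 1/2, y + 1/2), i.e. y = floor(x) + 1/2, and read off
    # rho(y - n/2) ... rho(y) ... rho(y + n/2), in steps of 1/2.
    y = math.floor(x) + 0.5
    return "".join(rho(y + 0.5 * k) for k in range(-n, n + 1))

def iota(x):
    # x -> x * iota: on [n, n+1) this is the orientation reversing
    # isometry onto (n, n+1], explicitly x -> 2n + 1 - x.
    n = math.floor(x)
    return 2 * n + 1 - x
```

Note that W(x, n) always has 2n + 1 letters, alternating between the {a, a^{-1}} letters at integer positions and the {b, b^{-1}} letters at proper half-integers.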
Similarly, a homeomorphism f : R → R is said to be compactly supported in R if Supp(f ) is contained in a compact interval of R. A point x ∈ R is said to be a transition point of f if x ∈ ∂Supp(f ), where ∂Supp(f ) denotes the set cl(Supp(f )) \ Supp(f ).

Our construction uses in an essential way the structure and properties of Thompson's group F . We shall only describe the features of F here that we need, and we direct the reader to [4] and [2] for more comprehensive surveys. Recall that the group PL + ([0, 1]) is the group of orientation preserving piecewise linear homeomorphisms of [0, 1]. Recall that F is defined as the subgroup of PL + ([0, 1]) consisting of the elements that satisfy the following:

1. Each element has at most finitely many breakpoints. All breakpoints lie in the set of dyadic rationals, i.e. Z[ 1 2 ].
2. For each element, the derivatives, wherever they exist, are powers of 2.

By breakpoint (or a singularity point) we mean a point where the derivative does not exist. For r, s ∈ Z[ 1 2 ] ∩ [0, 1] such that r < s, we denote by F [r,s] the subgroup of elements whose support lies in [r, s]. The following are well known facts that we shall need. The group F satisfies the following:

1. F is 2-generated.
2. For each pair r, s ∈ Z[ 1 2 ] ∩ [0, 1] such that r < s, the group F [r,s] is isomorphic to F and hence is also 2-generated.
3. F ′ is simple and consists of precisely the set of elements g ∈ F such that Supp(g) ⊂ (0, 1).

An interval I ⊆ [0, 1] is said to be a standard dyadic interval if it is of the form [ a/2 n , (a+1)/2 n ] such that a, n ∈ N, a ≤ 2 n − 1. The following are elementary facts about the action of F on the standard dyadic intervals.

Lemma 1.1. Let I, J be standard dyadic intervals in (0, 1). Then there is an element f ∈ F ′ such that: 1. I · f = J. 2. f ↾ I is linear.

Lemma 1.2. Let I 1 , I 2 and J 1 , J 2 be standard dyadic intervals in (0, 1) such that sup(I 1 ) < inf (I 2 ) and sup(J 1 ) < inf (J 2 ). Then there is an element f ∈ F ′ such that: 1. I 1 · f = J 1 and I 2 · f = J 2 . 2.
f ↾ I 1 and f ↾ I 2 are linear. We fix ι : (0, 1) → (0, 1) as the unique orientation reversing isometry. We say that an element f ∈ F is symmetric, if f = ι • f • ι. We say that a set I ⊂ (0, 1) is symmetric if I · ι = I. Note that given any symmetric set I with nonempty interior, we can find a nontrivial symmetric element f ∈ F ′ such that Supp(f ) ⊂ int(I). We extend the map ι to R as follows. For each integer n ∈ Z, we denote the unique orientation reversing isometry ι n : [n, n + 1) → (n, n + 1] For x ∈ R, we define the map ι : R → R as x · ι = x · ι n such that x ∈ [n, n + 1) for n ∈ Z In the paper we shall also use the notations ι [x,y) : [x, y) → (x, y] or ι I : I → I to denote the unique orientation reversing isometries between intervals of the form [x, y) and (x, y] (for x, y ∈ R), or a compact subinterval I of R. The usage of this notation will be made clear when it occurs. (Note that it differs from the ι defined above.) Definition 1.3. We fix an element c 0 ∈ F with the following properties: 1. The support of c 0 equals (0, 1 4 ) and x · c 0 > x for each x ∈ (0, 1 4 ). 2. c 0 ↾ (0, 1 16 ) equals the map t → 2t. Let c 1 = ι • c 0 • ι ν 1 = c 0 c 1 Note that ν 1 ∈ F is a symmetric element. We define a subgroup H of F as H = F ′ , ν 1 Finally, we fix ν 2 , ν 3 : [0, 1] → [0, 1] as chosen homeomorphisms whose supports are contained in ( Lemma 1.4. H is generated by ν 1 , ν 2 , ν 3 . H ′ is simple and consists of precisely the set of elements of H (or F ) that are compactly supported in (0, 1). In particular, H ′ = F ′ . Definition 1.5. We consider the additive group 1 2 Z = { 1 2 k | k ∈ Z}. A labelling is a map ρ : 1 2 Z → {a, b, a −1 , b −1 } which satisfies: 1. ρ(k) ∈ {a, a −1 } for each k ∈ Z. 2. ρ(k) ∈ {b, b −1 } for each k ∈ 1 2 Z \ Z. We regard ρ( 1 2 Z) as a bi-infinite word with respect to the usual ordering of the integers. A subset X ⊆ 1 2 Z is said to be a block if it is of the form {k, k + 1 2 , ..., k + 1 2 n} for some k ∈ 1 2 Z, n ∈ N. 
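As a small concrete instance of the last definition (our own toy example, not taken from [8]): with k = 3 and n = 3 one obtains the block

```latex
X \;=\; \bigl\{\,3,\ \tfrac{7}{2},\ 4,\ \tfrac{9}{2}\,\bigr\}
\;=\; \bigl\{\,k,\ k+\tfrac{1}{2},\ k+1,\ k+\tfrac{3}{2}\,\bigr\},
\qquad k = 3,\ n = 3.
```

Every block is thus a finite arithmetic progression of step 1/2 inside 1 2 Z.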
Note that each block is endowed with the usual ordering inherited from R. The set of blocks of 1 2 Z is denoted as B. To each block X = {k, k + 1 2 , ..., k + 1 2 n}, we assign a formal word W ρ (X) = ρ(k)ρ(k + 1 2 )...ρ(k + 1 2 n) which is a word in the letters {a, b, a −1 , b −1 }. Such a formal word is called a subword of the labelling. Recall that given a word w 1 ...w n in the letters {a, b, a −1 , b −1 }, the formal inverse of the word is w −1 n ...w −1 1 . The formal inverse of W ρ (X) is denoted as W −1 ρ (X). A labelling ρ is said to be quasi-periodic if the following holds: 1. For each block X ∈ B, there is an n ∈ N such that whenever Y ∈ B is a block of size at least n, then W ρ (X) is a subword of W ρ (Y ). For each block X ∈ B, there is a block Y ∈ B such that W ρ (Y ) = W −1 ρ (X). Note that by subword in the above we mean a string of consecutive letters in the word. A nonempty finite word w 1 ...w n for w i ∈ {a, b, a −1 , b −1 } is said to be a permissible word if n is odd and the following holds. For odd i ≤ n one has w i ∈ {a, a −1 }, and for even i ≤ n one has w i ∈ {b, b −1 }. The following is Lemma 3.1 in [8]. Lemma 1.6. Given any permissible word w 1 ...w m , there is a quasi-periodic labelling ρ of 1 2 Z and a block X ∈ B satisfying that W ρ (X) = w 1 ...w m . Following [8], we recall that to each labelling ρ, we associate a group G ρ < Homeo + (R) as follows. Definition 1.7. Let H < Homeo + ([0, 1]) be the group defined in Definition 1.3. Recall from Lemma 1.4 that the group H is generated by the three elements ν 1 , ν 2 , ν 3 defined in Definition 1. 3. In what appears below, by ∼ = T we mean that the restrictions are topologically conjugate via the unique orientation preserving isometry that maps [0, 1] to the respective interval. 
We define the homeomorphisms ζ 1 , ζ 2 , ζ 3 , χ 1 , χ 2 , χ 3 : R → R as follows for each i ∈ {1, 2, 3} and n ∈ Z: ζ i ↾ [n, n + 1] ∼ = T ν i if ρ(n + 1 2 ) = b ζ i ↾ [n, n + 1] ∼ = T (ι • ν i • ι) if ρ(n + 1 2 ) = b −1 χ i ↾ [n − 1 2 , n + 1 2 ] ∼ = T ν i if ρ(n) = a χ i ↾ [n − 1 2 , n + 1 2 ] ∼ = T (ι • ν i • ι) if ρ(n) = a −1 The group G ρ is defined as G ρ := ζ 1 , ζ 2 , ζ 3 , χ 1 , χ 2 , χ 3 < Homeo + (R) We denote the above generating set of G ρ as S ρ := {ζ 1 , ζ 2 , ζ 3 , χ 1 , χ 2 , χ 3 } We also define subgroups K := ζ 1 , ζ 2 , ζ 3 L := χ 1 , χ 2 , χ 3 of G ρ that are both isomorphic to H, and K ′ ∼ = L ′ ∼ = F ′ Note that the definition of K, L requires us to fix a labelling ρ but we denote them as such for simplicity of notation. Note that the group G ρ is defined for every labelling ρ. The following is proved in [8]. Theorem 1.8. Let ρ be a quasi-periodic labelling. Then the group G ρ is simple. For simplicity of notation, in what follows we will not explicitly mention the labelling ρ in what we now define. Recall that given an x ∈ R and n ∈ N, we define a word W(x, n) as follows. Let y ∈ 1 2 Z \ Z such that x ∈ [y − 1 2 , y + 1 2 ). Then we define W(x, n) = ρ(y − 1 2 n)ρ(y − 1 2 (n − 1))...ρ(y)...ρ(y + 1 2 (n − 1))ρ(y + 1 2 n) Given a compact integer interval (i.e. with integer endpoints) J ⊂ R and n 1 , n 2 ∈ N, we define a word W(J, n 1 , n 2 ) as follows. Let y 1 = inf (J) + 1 2 y 2 = sup(J) − 1 2 Then we define W(J, n 1 , n 2 ) = ρ(y 1 − 1 2 n 1 )ρ(y 1 − 1 2 (n 1 − 1))...ρ(y 1 )...ρ(y 2 )...ρ(y 2 + 1 2 (n 2 − 1))ρ(y 2 + 1 2 n 2 ) In case n 1 = n 2 = n we denote W(J, n 1 , n 2 ) as simply W(J, n). We denote by W −1 (x, n) and W −1 (J, n) as the formal inverses of the words W(x, n) and W(J, n) respectively. We now state a few structural results about the groups G ρ that were proved in [8]. For what follows, we assume that ρ is a quasiperiodic labelling. Lemma 1.9. 
Let f ∈ G ρ be a nonidentity element such that f = w 1 ...w k w i ∈ S ρ for 1 ≤ i ≤ k Then the following hold: 1. The set of breakpoints of f is discrete and the set of transition points is also discrete. 2. There is an m f ∈ N such that for any compact interval J of length at least m f , f fixes a point in J. 3. For each x ∈ R and each i ≤ k, x · w 1 ...w i ∈ [x − (k + 1), x + (k + 1)] Lemma 1.10. The action of G ρ on R is minimal. and I · w 1 ...w i ⊂ [inf {m 1 , m 2 }, sup{m 1 + 1, m 2 + 1}] for each 1 ≤ i ≤ k. The following is an elementary corollary of the third part of Lemma 1.9. Corollary 1.12. Let f ∈ G ρ . There is an m ∈ N such that for any x 1 , x 2 ∈ R so that x 1 − x 2 ∈ Z, the following holds: 1. If W(x 1 , m) = W(x 2 , m) then x 1 − x 1 · f = x 2 − x 2 · f 2. If W −1 (x 1 , m) = W(x 2 , m) then x 1 − x 1 · f = x 3 · f − x 3 where x 3 = x 2 · ι Finally, we shall also need the following folklore result (see the Appendix in [1] for a proof.) Theorem 1.13. Every element in F ′ can be expressed as a product of at most two commutators of elements in F ′ . A characterisation of elements of G ρ The goal of this section is to establish the characterisation of elements in G ρ as described in the introduction (Definition 0.6). In effect, this requires us to prove Theorem 0.8. Throughout this section we fix a quasi periodic labelling ρ. Note that it follows from Corollary 1.12 that G ρ ⊆ K ρ . So much of the rest of the article shall be devoted to proving that K ρ ⊆ G ρ . The proof of this requires us to establish some preliminary structural results about the group G ρ . The main structural result is Proposition 2.3. The proof of the main Theorems and Corollaries will follow from it. Proposition 2.3 will be proved in a subsequent section, and its proof involves the construction of a certain family of special elements in G ρ . 2. There is an ǫ > 0 such that, for each x ∈ (m 1 − ǫ, m 1 + ǫ) ∪ (m 2 − ǫ, m 2 + ǫ), one has x · f = x. 
3. For any m ∈ (m 1 , m 2 ) ∩ Z and any ǫ > 0, there is a point x ∈ (m − ǫ, m + ǫ) such that x · f ≠ x.

Note that given a stable homeomorphism f , there is a unique way to express R as a union of integer intervals {I α } α∈P such that f ↾ I α is an atom for each α ∈ P and different intervals intersect in at most one endpoint. For simplicity, we will refer just to the intervals I α as the atoms of f . Given an atom f ↾ I, we call the intervals [inf (I), inf (I) + 1] and [sup(I) − 1, sup(I)] the head and the foot of the atom, respectively. Note that it is possible that an atom I α has the same interval as the head and the foot, in which case |I α | = 1. Two atoms f ↾ [m 1 , m 2 ] and f ↾ [m 3 , m 4 ] are said to be conjugate if there is an integer translation h(t) = t + z for z ∈ Z such that f ↾ [m 1 , m 2 ] = h −1 • f • h ↾ [m 3 , m 4 ] and flip-conjugate if there is an integer translation h(t) = t + z for z ∈ Z such that f ↾ [m 1 , m 2 ] = h −1 • (ι [m 1 ,m 2 ] • f • ι [m 1 ,m 2 ] ) • h ↾ [m 3 , m 4 ] where ι [m 1 ,m 2 ] : [m 1 , m 2 ] → [m 1 , m 2 ] is the unique orientation reversing isometry. For a fixed n ∈ N we consider the set of decorated atoms: T n (f ) = {(I α , n) | α ∈ P } We say that a pair of decorated atoms (I α , n) and (I β , n) are equivalent if either of the following holds: 1. I α , I β are conjugate and W(I α , n) = W(I β , n). 2. I α , I β are flip-conjugate and W(I α , n) = W −1 (I β , n). The element f is said to be uniformly stable, if it is stable and there are finitely many equivalence classes of decorated atoms for each n ∈ N. Note that if there are finitely many equivalence classes of decorated atoms of f for some n ∈ N, then this holds for any n ∈ N. This is true since there are finitely many words of length n in {a, b, a −1 , b −1 }. Let ζ be an equivalence class of elements in T n (f ).
We define the homeomorphism f ζ as f ζ ↾ I α = f ↾ I α if (I α , n) ∈ ζ f ζ ↾ I α = id ↾ I α if (I α , n) / ∈ ζ If ζ 1 , ..., ζ m are the equivalence classes of elements in T n (f ), then the list of homeomorphisms f ζ 1 , ..., f ζm is called the cellular decomposition of f . Lemma 2.2. Let g ∈ K ρ . Then there exist g 1 , g 2 ∈ G ρ , where g 2 is a commutator of elements in G ρ , such that g −1 1 (gg −1 2 )g 1 ∈ K ρ is uniformly stable. Proof. Since g ∈ K ρ , we know that there is a constant k f that witnesses the conditions of Definition 0.6. Let x ∈ R be such that x · g > x. Since ρ is quasi-periodic, there is a y ∈ R such that x − y ∈ Z and W −1 (y, k f ) = W(x, k f ) It follows from 3.b in Definition 0.6 that y ′ · g < y ′ for y ′ = y · ι. It follows that g admits a fixed point p 0 ∈ R. A similar conclusion is achieved starting with a point x for which x · g < x. Assume that p 0 ∈ R \ Z. The case when p 0 ∈ R \ ( 1 2 Z \ Z) is dealt with similarly. We find an element l 2 ∈ F ′ such that l 2 is a commutator in F ′ and g 2 = λ(l 2 ) coincides with g on a neighborhood of p 0 . Note that this is possible since p 0 is a fixed point of g and g satisfies the first condition of Definition 0.6. It follows that gg −1 2 fixes pointwise a subinterval I of nonempty interior. Since the action of G ρ on R is minimal (see Lemma 1.11), we can find g 1 ∈ G ρ such that 0 · g −1 1 ∈ I. It follows that g −1 1 (gg −1 2 )g 1 fixes a neighborhood of 0. From an application of quasi periodicity and Definition 0.6, it follows that this element is uniformly stable. The core of the proof of Theorem 0.8 reduces to the following Proposition. Proposition 2.3. Consider a uniformly stable element f ∈ K ρ . There is an n ∈ N such that the following holds. Let ζ 1 , ..., ζ m be the equivalence classes of T n (f ). Then f ζ 1 , ..., f ζm ∈ G ρ . In particular, since f = f ζ 1 · · · f ζm , it follows that f ∈ G ρ . 
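A word on the final assertion of Proposition 2.3: distinct atoms meet in at most one endpoint, and by the definition of an atom f pointwise fixes a neighbourhood of each such endpoint, so the factors in the cellular decomposition have pairwise disjoint supports:

```latex
\operatorname{Supp}(f_{\zeta_i}) \cap \operatorname{Supp}(f_{\zeta_j}) = \emptyset
\quad (i \neq j),
\qquad\text{hence}\qquad
f_{\zeta_i} f_{\zeta_j} = f_{\zeta_j} f_{\zeta_i}.
```

In particular the product f ζ 1 · · · f ζm does not depend on the order of the factors and agrees with f atom by atom.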
Special elements in G ρ The proof of Proposition 2.3 requires the construction of a certain family of special elements in G ρ . We define and construct them in this section. The construction of such elements is also a useful tool to study the groups G ρ . Throughout the section we assume that ρ is a quasi periodic labelling. Recall the definitions of the subgroups K, L ≤ G ρ from the preliminaries. Recall that in [8] we fixed notation for the natural isomorphisms λ : H → K π : H → L as follows for each f ∈ H, n ∈ Z. λ(f ) ↾ [n, n + 1] ∼ = T f if ρ(n + 1 2 ) = b λ(f ) ↾ [n, n + 1] ∼ = T (ι • f • ι) if ρ(n + 1 2 ) = b −1 π(f ) ↾ [n − 1 2 , n + 1 2 ] ∼ = T f if ρ(n) = a π(f ) ↾ [n − 1 2 , n + 1 2 ] ∼ = T (ι • f • ι) if ρ(n) = a −1 We also denote the naturally defined inverse isomorphisms as: λ −1 : K → H π −1 : L → H We consider the set of triples Ω = {a, b, a −1 , b −1 } <N × N × N Any element ω ∈ Ω is a triple represented as (W, k 1 , k 2 ). Definition 3.1. Given an element f ∈ F ′ and ω as above we define an element λ ω (f ) ∈ Homeo + (R) as follows. For each n ∈ Z: λ ω (f ) ↾ [n, n + 1] = λ(f ) ↾ [n, n + 1] if      W([n, n + 1], k 1 , k 2 ) = W or W([n, n + 1], k 2 , k 1 ) = W −1 λ ω (f ) ↾ [n, n + 1] = id ↾ [n, n + 1] otherwise Similarly, we define the special elements π ω (f ) ∈ Homeo + (R) as follows. For each n ∈ 1 2 Z \ Z: π ω (f ) ↾ [n, n + 1] = π(f ) ↾ [n, n + 1] if      W([n, n + 1], k 1 , k 2 ) = W or W([n, n + 1], k 2 , k 1 ) = W −1 π ω (f ) ↾ [n, n + 1] = id ↾ [n, n + 1] otherwise Given ω = (W, k 1 , k 2 ) where W = w −k 1 ...w 0 ...w k 2 , we call w 0 the central letter of the word W . Remark 3.2. Note the order of appearance of k 1 , k 2 in W([n, n + 1], ·, ·) in the above definition. The following is a direct consequence of the definitions. Lemma 3.3. Consider ω 1 = (W 1 , k 1 , k 2 ) and ω 2 = (W 2 , k 2 , k 1 ) such that W 1 = W −1 2 . 
Then it follows that for each f ∈ F ′ λ ω 1 (f ) = λ ω 2 (ι • f • ι) π ω 1 (f ) = π ω 2 (ι • f • ι) In particular, by symmetry it follows that: 1. λ ω 1 (f ) ∈ G ρ for each f ∈ F ′ if and only if λ ω 2 (f ) ∈ G ρ for each f ∈ F ′ . 2. π ω 1 (f ) ∈ G ρ for each f ∈ F ′ if and only if π ω 2 (f ) ∈ G ρ for each f ∈ F ′ . Remark 3.4. Note that λ ω (f ), τ ω (f ) for ω = (W, k 1 , k 2 ) will be equal to the identity homeomorphism, or the trivial element of G ρ , if W does not occur as a subword of the labelling ρ. If |W | = k 1 + k 2 + 1 then these elements will also be trivial. Finally, note that λ ω (f ) is trivial if w 0 ∈ {a, a −1 } and π ω (f ) is trivial if w 0 ∈ {b, b −1 }. The key technical step in the proof of the main theorem is the following localization result. Proposition 3.5. Let ω ∈ Ω and f ∈ F ′ . Then λ ω (f ), π ω (f ) ∈ G ρ . Proof. We show this for λ ω , the proof for τ ω is similar. Thanks to Lemma 3.3, we can assume without loss of generality that ω = (W, k 1 , k 2 ) satisfies that the central letter of W equals b. As an appetizer, we first demonstrate the above proposition for k 1 , k 2 ∈ {0, 1}. The statement in its full generality will then follow using an induction on n which is essentially similar to the base case. The case k 1 = k 2 = 0. If W = b then λ ω (f ) = λ(f ). The case k 1 = 0, k 2 = 1 or k 1 = 1, k 2 = 0 Consider the case W = ba. Given an f ∈ F ′ , we wish to show that λ ω (f ) ∈ G ρ . Since F ′ is generated by commutators, it suffices to show this in the case when f is a commutator. Since f ∈ F ′ , there is an f 1 ∈ F ′ such that Supp(f 1 f f −1 1 ) ⊂ ( 1 2, 1) Let f 2 = f 1 f f −1 1 . By self similarity of F ′ , we note that f 2 is a commutator in F ′ ( 1 2 ,1) . Let f 2 = [f 3 , f 4 ] for f 3 , f 4 ∈ F ′ [ 1 2 ,1] ⊂ F ′ [0,1] Let f ′ 4 ∈ F ′ [0, 1 2 ] ⊆ F ′ [0,1] such that f ′ 4 = hf 4 h −1 where h(t) = t + 1 2 . We claim that λ ω (f 2 ) = [λ(f 3 ), π(f ′ 4 )] Consider an interval [n, n + 1] where n ∈ Z. 
If either ρ(n + 1 2 )ρ(n + 1) = ba or ρ(n)ρ(n + 1 2 ) = a −1 b −1 then [λ(f 3 ), π(f ′ 4 )] ↾ [n, n + 1] = λ ω (f 2 ) ↾ [n, n + 1] If ρ(n)ρ(n + 1 2 )ρ(n + 1) ∈ {ab −1 a, ab −1 a −1 , a −1 ba −1 } then (Supp(λ(f 3 )) ∩ [n, n + 1]) (Supp(π(f ′ 4 )) ∩ [n, n + 1]) = ∅ and hence [λ(f 3 ), π(f ′ 4 )] ↾ [n, n + 1] = id ↾ [n, n + 1] = λ ω (f 2 ) ↾ [n, n + 1] Since the central letter of W is b, we obtain λ ω (f ) = λ(f −1 1 )λ ω (f 2 )λ(f 1 ) ∈ G ρ The cases W ∈ {a −1 b, ab, ba −1 } are very similar and are left as a pleasant visual exercise for the reader. The general case We perform an induction on sup{k 1 , k 2 }. Let the inductive hypothesis hold for n ∈ N. Consider a word W = w −k 1 ...w 0 ...w k 2 w i ∈ {a, a −1 , b, b −1 } such that sup{k 1 , k 2 } = n + 1. There are three cases: 1. k 2 > k 1 . 2. k 1 > k 2 . 3. k 1 = k 2 . The first two cases are symmetric, and we deal with k 2 > k 1 and k 1 = k 2 . The case k 2 > k 1 Assume as above that w 0 = b. We wish to show that given an f ∈ F ′ , λ ω ∈ G ρ . Since F ′ is generated by commutators, as above it suffices to show this in the case when f is a commutator. Since f ∈ F ′ , there is an f 1 ∈ F ′ such that Supp(f 1 f f −1 1 ) ⊂ ( 1 2, 1) Let f 2 = f 1 f f −1 1 As before, by self similarity of F ′ , we note that f 2 is a commutator in F ′ ( 1 2 ,1) . Let f 2 = [f 3 , f 4 ] f 3 , f 4 ∈ F ′ [ 1 2 ,1] ⊂ F ′ [0,1] Let f ′ 4 ∈ F ′ [0, 1 2 ] ⊆ F ′ [0,1] such that f ′ 4 = hf 4 h −1 where h(t) = t + 1 2 . and f ′′ 3 = f ′ 3 if w −1 = a Let f ′′ 4 = ι • f ′ 4 • ι if w 1 = a −1 and f ′′ 4 = f ′ 4 if w 1 = a From our inductive hypothesis, we know that λ ω 1 (k), λ ω 2 (k) ∈ G ρ for each k ∈ F ′ . One checks that λ ω (f 2 ) = [λ(f −1 5 )π ω 1 (f ′′ 3 )λ(f 5 ), π ω 2 (f ′′ 4 )] Since w 0 = b, it follows that λ ω (f ) = λ(f −1 1 )λ ω (f 2 )λ(f 1 ) ∈ G ρ 4 The epilogue The goal of this section is to prove Proposition 2.3, and subsequently the results stated in the Introduction. We consider a uniformly stable element f ∈ K ρ . 
Let {I α } α∈P be the set of atoms of f . From Definition 0.6, we know that there is a k f ∈ N such that parts (3.a), (3.b) of the Definition hold. Let l = sup{|I β | | β ∈ P }. Note that from Definition 0.6 it follows that l is finite.

Special elements in G ρ provide a natural source of atom preserving elements, as is observed in the proof of the Lemma below. We can now finish the proofs of Theorems 0.8 and 0.1, Proposition 0.9, and Corollaries 0.2 and 0.3.

Proof of Theorem 0.8. We know that G ρ ≤ K ρ . It remains to show that given g ∈ K ρ , one has g ∈ G ρ . Using Lemma 2.2, we know that there exist g 1 , g 2 ∈ G ρ such that h = g −1 1 (gg −1 2 )g 1 ∈ K ρ is uniformly stable. Using Proposition 2.3 we conclude that h ∈ G ρ . Therefore it follows that g ∈ G ρ .

Proof of Proposition 0.9. Let h ∈ K ρ = G ρ . Thanks to Lemma 2.2, we know that there are elements f 1 , f 2 ∈ G ρ such that f 2 is a commutator of elements in G ρ and the element f = f −1 1 (hf −1 2 )f 1 is uniformly stable.

Claim: There is a subgroup K < G ρ such that K ∼ = F ′ ⊕ ... ⊕ F ′ and f ∈ K.

Note that the claim implies that hf −1 2 ∈ f 1 Kf −1 1 ∼ = F ′ ⊕ ... ⊕ F ′ < G ρ . So the conclusion of Proposition 0.9 for h follows from this claim.

Proof of Claim. We know that f ∈ G ρ is a uniformly stable element. Let {I α } α∈P be the atoms of f . Let l f be the constant from Definition 4.2. Let the cellular decomposition of f as decorated atoms T l f (f ) be f ζ 1 , ..., f ζm . Here we represent the equivalence classes of decorated atoms in T l f (f ) as ζ 1 , ..., ζ m . For each 1 ≤ i ≤ m, set L i = |I α | where (I α , l f ) ∈ ζ i . (Recall that |I α | = |I β | whenever (I α , l f ), (I β , l f ) ∈ ζ i .) For each 1 ≤ i ≤ m, define the canonical isomorphism φ i : F ′ → F ′ [0,L i ] , where F [0,L i ] is the standard copy of F supported on the interval [0, L i ]. For each 1 ≤ i ≤ m, we have {W(I α , l f ) | (I α , l f ) ∈ ζ i } = {W i , W −1 i } for words W 1 , ..., W m . Define a map φ : ⊕ 1≤i≤m F ′ → Homeo + (R) as follows. For α ∈ P and 1 ≤ i ≤ m:

φ(g 1 , ..., g m ) ↾ I α ∼ = T φ i (g i ) if (I α , l f ) ∈ ζ i and W(I α , l f ) = W i
φ(g 1 , ..., g m ) ↾ I α ∼ = T ι L i • φ i (g i ) • ι L i if (I α , l f ) ∈ ζ i and W(I α , l f ) = W −1 i

where ι L i : [0, L i ] → [0, L i ] is the unique orientation reversing isometry.
It is easy to check that this is an injective group homomorphism. Moreover, the image of each element under φ satisfies the conditions of Definition 0.6. Therefore, the image of φ lies in K ρ = G ρ and contains f = φ(φ −1 1 (f ζ 1 ), . . . , φ −1 m (f ζm )). Proof of Theorem 0.1. Let f ∈ G ρ . We know from Lemma 2.2 that there is a commutator f 1 ∈ G ρ and an f 2 ∈ G ρ such that f 0 = f 2 (f f −1 1 )f −1 2 is uniformly stable. By Proposition 0.9, we know that there is a subgroup of G ρ that contains f 0 and is isomorphic to a direct sum of copies of F ′ . Since by Theorem 1.13 every element in F ′ can be expressed as a product of at most two commutators of elements in F ′ , the same holds for a direct sum of copies of F ′ . It follows that f 0 can be expressed as a product of at most two commutators of elements in G ρ . Therefore, f can be expressed as a product of at most three commutators of elements in G ρ . Proof of Corollary 0.2. This follows from a theorem of Ghys [7], according to which such an action by orientation preserving homeomorphisms of the circle induces a homogeneous quasimorphism (the rotation number), which is nontrivial in case of absence of a global fixed point. Since by Theorem 0.1 the stable commutator length of G ρ vanishes, this quasimorphism must be trivial. Therefore, every such action of G ρ on S 1 must admit a global fixed point. Proof of Corollary 0.3. The group G ρ for a quasiperiodic labelling ρ cannot admit a type (i) action since it is not locally indicable (recall that G ρ is a simple group). For a type (ii) action of G ρ it is easy to construct an element h ∈ Homeo + (R) such that h commutes with each element of G ρ . Upon taking a quotient, this provides a faithful fixed point free action of G ρ on the circle which contradicts Corollary 0.2. Lemma 1. 11 . 
For each pair of elements m 1 , m 2 ∈ Z and a closed interval I ⊂ (m 1 , m 1 + 1), there is a word w 1 ...w k in the generators S ρ such that I · w 1 ...w k ⊂ (m 2 , m 2 + 1).

Definition 2.1. A homeomorphism f ∈ Homeo + (R) is said to be stable if there exists an n ∈ N such that the following holds. For any compact interval I of length at least n, there is a nonempty open subinterval J ⊂ I such that J is pointwise fixed by f and J ∩ Z ≠ ∅. Given a stable homeomorphism f ∈ Homeo + (R), and an interval [m 1 , m 2 ], the restriction f ↾ [m 1 , m 2 ] is said to be an atom of f , if the following holds: 1. m 1 , m 2 ∈ Z. the standard copy of F supported on the interval [0, L i ]. For each 1 ≤ i ≤ m, we have {W(I α , l f ) | (I α , l f ) ∈ ζ i } = {W i , W −1 i } for words W 1 , ..., W m . Define a map φ : ⊕ 1≤i≤m F ′ → Homeo + (R) as follows. For α ∈ P and 1 ≤ i ≤ m: φ(g 1 , ..., g m ) ↾ I α ∼ = T φ i (g i ) if (I α , l f ) ∈ ζ i and W(I α , l f ) = W i φ(g 1 , ..., g m ) ↾

Lemma 4.1. Let f ∈ K ρ and {I α } α∈P be as above. There is a number l f > k f such that the following holds. Consider n, m ∈ Z, α ∈ P such that [n, n + 1], [m, m + 1] are respectively the head and the foot of I α . Assume that n ≠ m (and hence I α has a distinct head and foot). Then it follows that

W([n, n + 1], l f ) ≠ W([m, m + 1], l f )
W([n, n + 1], l f ) ≠ W −1 ([m, m + 1], l f )

Proof. First we claim that

W([n, n + 1], k f + 2) ≠ W([m, m + 1], k f + 2)

From the definition of the atoms of f , there is an ǫ > 0 such that f fixes each point in [n − ǫ, n + ǫ]. However, there is a point in [m − ǫ, m + ǫ] that is moved by f .
Therefore, the claim follows. It follows from Definition 0.6 that either

W(n − 1 2 , k f ) ≠ W(m − 1 2 , k f ) or W(n + 1 2 , k f ) ≠ W(m + 1 2 , k f )

We define W 1 = w −k 1 ...w −1 w 0 , W 2 = w 1 ...w k 2 , l 1 = k 1 , l 2 = 0, l 3 = 0, l 4 = k 2 − 1, ω 1 = (W 1 , l 1 , l 2 ), ω 2 = (W 2 , l 3 , l 4 ). Note that the central letter of W 1 is b and the central letter of W 2 is w 1 ∈ {a, a −1 }. From our inductive hypothesis, we know that λ ω 1 (h), λ ω 2 (h) ∈ G ρ for each h ∈ F ′ . One checks that: 2. If w 1 = a −1 then

The case k 1 = k 2 . Assume as above that w 0 = b. We wish to show that given an f ∈ F ′ , λ ω ∈ G ρ . Once again, as above it suffices to show this in the case when f is a commutator. Just as above we fix an f 1 ∈ F ′ such that Supp(f 1 f f −1 1 ) ⊂ ( 1 2 , 1). And let f 2 = f 1 f f −1 1 . As before, by self similarity of F ′ , we note that f 2 is a commutator in F ′ ( 1 2 ,1) . Note that w −1 , w 1 are the central letters of W 1 , W 2 respectively. Let (To see this, assume by way of contradiction that the equality holds. This would imply that there is a number t ∈ [n, m] such that ρ(t) = ρ(t) −1 , which is impossible.) It follows that both inequalities hold for l f = k f + l.

Definition 4.2. Given any f ∈ K ρ that is uniformly stable, we define the number emerging from the proof of the above Lemma as l f = k f + l. Note that l f satisfies both the conditions of Definition 0.6 and the conclusion of Lemma 4.1. Since f is uniformly stable, we can consider the cellular decomposition of f as decorated atoms T l f (f ). Let ζ 1 , ..., ζ m be the equivalence classes of T l f (f ). The list of homeomorphisms f ζ 1 , ..., f ζm forms the resulting cellular decomposition. To prove Proposition 2.3 we would like to show that f ζ 1 , ..., f ζm ∈ G ρ . We say that an element g ∈ G ρ is atom preserving for f if the following hold: 1. For each α ∈ P , g pointwise fixes a neighborhood of inf (I α ), sup(I α ). 2.
If (I α , l f ) and (I β , l f ) are equivalent, then the restrictions g ↾ I α and g ↾ I β agree up to the corresponding (flip-)conjugation, where ι β : I β → I β is the unique orientation reversing isometry. Note that these properties are closed under composition of elements, and hence we define a subgroup of G ρ

M f = {g ∈ G ρ | g is atom preserving for f }

Lemma 4.5. The restriction M f ↾ int(I α ) for each α ∈ P does not admit a global fixed point.

Proof. Let x ∈ int(I α ). We would like to show the existence of an element g ∈ M f such that x · g ≠ x. Let n 1 = inf (I α ), n 2 = sup(I α ). There are two cases: for n ∈ Z ∩ int(I α ) and ǫ > 0. Let g ∈ F ′ be an element such that (ǫ, 1 − ǫ) ⊂ Supp(g). In the latter case, from an application of Lemmas 4.1 and 4.3, it is easy to see that the special element π ω 1 (g) for In the former case, the special element is λ ω 2 (g) for is atom preserving, and x · λ ω 2 (g) ≠ x since 1 2 ∈ (ǫ, 1 − ǫ) ⊂ Supp(g).

Proof of Proposition 2.3. Let f ζ j be an element in the cellular decomposition of f . We would like to show that f ζ j ∈ G ρ . For each α ∈ P , let From an application of Lemma 4.5, we find an element g ∈ M f < G ρ such that for each α ∈ P such that J α ≠ ∅ one of the following holds: 1. J α · g is a subset of the head of I α . 2. J α · g is a subset of the foot of I α . Indeed, if α, β ∈ P are such that W −1 (I α , l f ) = W(I β , l f ), then J α · g being a subset of the head of I α implies that J β · g is a subset of the foot of I β . It follows from an application of Lemmas 4.1 and 4.3 that where h ∈ F ′ and ω = (W(I α , l f ), l f , l f ) for some (or any) I α such that J α · g is a subset of the head of I α . In particular, since by Proposition 3.5 λ ω (h) ∈ G ρ , we conclude that f ζ j ∈ G ρ .

References

[1] T. Altinel, A. Muranov. Interpretation of the arithmetic in certain groups of piecewise affine permutations of an interval.
Journal of the Institute of Mathematics of Jussieu 8(4) (2009), 623-652.

[2] J. Belk. Thompson's group F. Ph.D. Thesis, Cornell University. arXiv:0708.3609 (2004).

[3] M. Burger, N. Monod. Continuous bounded cohomology and applications to rigidity theory. Geom. Funct. Anal. 12 (2002), no. 2, 219-280.

[4] J.W. Cannon, W.J. Floyd, W.R. Parry. Introductory notes on Richard Thompson's groups. Enseign. Math. 42 (3-4) (1996), 215-256.

[5] B. Deroin, A. Navas, C. Rivas. Groups, Orders, and Dynamics. arXiv:1408.5805 (version 2015).

[6] É. Ghys. Actions de réseaux sur le cercle. Invent. Math. 137 (1999), no. 1, 199-231.

[7] É. Ghys. Groupes d'homéomorphismes du cercle et cohomologie bornée. The Lefschetz centennial conference, Part III (Mexico City, 1984), 81-106. Contemp. Math. 58, III, Amer. Math. Soc., Providence, RI (1987).

[8] J. Hyde, Y. Lodha. Finitely generated infinite simple groups of homeomorphisms of the real line. arXiv:1807.06478v1 (2018).

[9] L. Lifschitz, D. Morris Witte. Bounded generation and lattices that cannot act on the line. Pure Appl. Math. Q. 4 (2008), no. 1, Special Issue: In honor of Grigory Margulis. Part 2, 99-126.

[10] L. Lifschitz, D. Morris Witte. Isotropic nonarchimedean S-arithmetic groups are not left orderable. C. R. Math. Acad. Sci. Paris 339 (2004), no. 6, 417-420.

[11] N. Matte Bon, M. Triestino. Groups of piecewise linear homeomorphisms of flows. arXiv:1811.12256 (2018).

[12] A. Navas. Group actions on 1-manifolds: a list of very concrete open questions. Proceedings of the ICM (2018).

[13] D. Witte. Arithmetic groups of higher Q-rank cannot act on 1-manifolds. Proc. Amer. Math. Soc. 122 (1994), no. 2, 333-340.
[]
[ "Classical Versions of q-Gaussian Processes: Conditional Moments and Bell's Inequality", "Classical Versions of q-Gaussian Processes: Conditional Moments and Bell's Inequality" ]
[ "Wlodzimierz Bryc [email protected] \nDepartment of Mathematics\nUniversity of Cincinnati\nPO Box 21002545221-0025CincinnatiOHUSA\n" ]
[ "Department of Mathematics\nUniversity of Cincinnati\nPO Box 21002545221-0025CincinnatiOHUSA" ]
[ "Math. Phys" ]
We show that classical processes corresponding to operators which satisfy a q-commutative relation have linear regressions and quadratic conditional variances. From this we deduce that Bell's inequality for their covariances can be extended from q = −1 to the entire range −1 ≤ q < 1.Corrections of Sunday, February 29, 2004 at 15:40The following corrections were found after the printed version appeared in Comm.
10.1007/s002200100401
[ "https://arxiv.org/pdf/math/0009023v2.pdf" ]
14,089,413
math/0009023
89b8367983edfc7575a3da011f02d2e715808773
Classical Versions of q-Gaussian Processes: Conditional Moments and Bell's Inequality

Wlodzimierz Bryc ([email protected]), Department of Mathematics, University of Cincinnati, PO Box 210025, Cincinnati, OH 45221-0025, USA

Comm. Math. Phys. 219 (2001). Received: 18 September 2000 / Accepted: 17 November 2000.

Corrections of Sunday, February 29, 2004: the following corrections were found after the printed version appeared in Comm. Math. Phys. 1. Of course, the expected value E : A → C, not into R. 2. Conditional variance in Proposition 2 is now correct.

Abstract. We show that classical processes corresponding to operators which satisfy a q-commutative relation have linear regressions and quadratic conditional variances. From this we deduce that Bell's inequality for their covariances can be extended from q = −1 to the entire range −1 ≤ q < 1.

Introduction

In this paper we consider a linear mapping H ∋ f → a_f ∈ B from the real Hilbert space H into the algebra B of bounded operators acting on a complex Hilbert space which satisfies the q-commutation relations

a_f a*_g − q a*_g a_f = ⟨f, g⟩ I,   (1)

and a_f Φ = 0 for a vacuum vector Φ. This defines a non-commutative stochastic process X_f = a_f + a*_f, first studied in [5], which following [2] we call the q-Gaussian process. For different values of q, these processes interpolate between the bosonic (q = 1) and fermionic (q = −1) processes, and include the free processes of Voiculescu [7] (q = 0). One of the basic problems arising in this context is the existence of classical versions of q-Gaussian processes, see Definition 2. For q = 1, these are the classical Gaussian processes with the covariances ⟨f, g⟩, f, g ∈ H. For q = −1, the classical versions are two-valued, so Bell's inequality [1] shows that only some covariances may have classical versions.
In [5] classical versions were constructed for covariances corresponding to stationary two-valued Markov processes (q = −1). In [2,Prop. 3.9], the existence of such classical versions was proved for all −1 < q < 1 in the case where the q-Gaussian process is Markovian (which can be characterized in terms of the covariance function). The situation for other covariances remained open in [2] and it was unclear which q-Gaussian processes have no classical realizations. This issue is addressed in the present paper. Using a formula for conditional variances of classical versions we derive a constraint on the covariance which extends one of the Bell's inequalities from q = −1 to general −1 ≤ q < 1. The inequality implies that there are covariances such that the corresponding non-commutative q-Gaussian processes cannot have classical versions over the entire range −1 ≤ q < 1. Since q interpolates between the values q = −1, where classical versions may fail to exist and q = 1, where the classical versions always exist, it is interesting that there is a version of Bell's inequality which does not depend on q. The proof relies on formulas for conditional moments of the first two orders, which are of independent interest. Computations to derive them were possible thanks to recent advances in the Fock space representation of q-commutation relations (1), see [2,3]. Preliminaries This section introduces the Fock space representation of q-Gaussian processes, and states known results in the form convenient for us. It is based on [2]. 2.1. Notation. Throughout the paper, q is a fixed parameter and −1 < q < 1. For n = 0, 1, 2, . . . we define q-integers [n] q := 1−q n 1−q . The q-factorials are [n] q ! : = [1] q [2] q . . . [n] q , with the convention [0] q ! := 1. The q-Hermite polynomials are defined by the recurrence xH n (x) = H n+1 (x) + [n] q H n−1 (x), n ≥ 0(2) with H −1 (x) := 0, H 0 (x) := 1. 
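The q-deformed quantities just defined are easy to compute exactly. The following Python sketch (our illustration, not part of the paper; all function names are ours) implements [n]_q, [n]_q!, and the q-Hermite recurrence (2) with rational arithmetic:

```python
from fractions import Fraction

def q_int(n, q):
    # [n]_q = (1 - q^n)/(1 - q) = 1 + q + ... + q^(n-1)
    return sum(q**k for k in range(n))

def q_factorial(n, q):
    # [n]_q! = [1]_q [2]_q ... [n]_q, with [0]_q! = 1
    result = Fraction(1)
    for k in range(1, n + 1):
        result *= q_int(k, q)
    return result

def q_hermite(n, x, q):
    # Recurrence (2) solved for the next polynomial:
    # H_{n+1}(x) = x H_n(x) - [n]_q H_{n-1}(x), with H_{-1} = 0, H_0 = 1.
    h_prev, h = 0, 1
    for k in range(n):
        h_prev, h = h, x * h - q_int(k, q) * h_prev
    return h
```

Since [1]_q = 1 for every q, the recurrence gives H_2(x) = x^2 − 1 independently of q, and H_3(x) = x^3 − (2 + q)x, which the sketch reproduces.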
These polynomials are orthogonal with respect to the unique absolutely continuous probability measure ν_q(dx) = f_q(x) dx supported on [−2/√(1 − q), 2/√(1 − q)], where the density f_q(x) has an explicit product expansion, see [2, Theorem 1.10] or [6]; the second moments of the q-Hermite polynomials are

∫_{−2/√(1−q)}^{2/√(1−q)} (H_n(x))^2 ν_q(dx) = [n]_q!.

In our notation we are suppressing the dependence of H_n(x) on q.

2.2. q-Fock space. For a real Hilbert space H with complexification H_c = H ⊕ iH we define its q-Fock space Γ_q(H) as the closure of CΦ ⊕ ⊕_n H_c^{⊗n}, the linear span of vectors f_1 ⊗ . . . ⊗ f_n, in the scalar product

⟨f_1 ⊗ . . . ⊗ f_n | g_1 ⊗ . . . ⊗ g_m⟩_q = Σ_{σ∈S_n} q^{|σ|} ∏_{j=1}^{n} ⟨f_j, g_{σ(j)}⟩ if m = n, and 0 if m ≠ n.   (3)

Here Φ is the vacuum vector, S_n are the permutations of {1, . . . , n} and |σ| = #{(i, j) : i < j, σ(i) > σ(j)}. For the proof that (3) indeed is non-negative definite, see [3]. Given the q-Fock space Γ_q(H) and f ∈ H we define the annihilation operator a_f : Γ_q(H) → Γ_q(H) and its ⟨·|·⟩_q-adjoint, the creation operator a*_f : Γ_q(H) → Γ_q(H), as follows:

a_f Φ := 0,   a_f f_1 ⊗ . . . ⊗ f_n := Σ_{j=1}^{n} q^{j−1} ⟨f, f_j⟩ f_1 ⊗ . . . ⊗ f_{j−1} ⊗ f_{j+1} ⊗ . . . ⊗ f_n,   (4)

and

a*_f Φ = f,   a*_f f_1 ⊗ . . . ⊗ f_n := f ⊗ f_1 ⊗ . . . ⊗ f_n.   (5)

These operators are bounded, satisfy the commutation relation (1), and a_{f+g} = a_f + a_g, see [3].

2.3. q-Gaussian processes. We now consider (non-commutative) random variables as the elements of the algebra A generated by the self-adjoint operators X_f := a_f + a*_f, with vacuum expectation state E : A → C given by E(X) = ⟨Φ|XΦ⟩_q.

Definition 1. We will call {X(t) : t ∈ T} a q-Gaussian (non-commutative) process indexed by T if there are vectors h(t) ∈ H such that X(t) = X_{h(t)}. For a q-Gaussian process the covariance function c_{t,s} := E(X_t X_s) becomes c_{t,s} = ⟨h(t), h(s)⟩. The Wick products ψ(f_1 ⊗ . . . ⊗ f_n) ∈ A are defined recurrently by ψ(Φ) := I, and

ψ(f ⊗ f_1 ⊗ . . . ⊗ f_n) :=   (6)
X f ψ(f 1 ⊗ . . .
⊗ f n ) − n j=1 q j−1 f, f j ψ(f 1 ⊗ . . . ⊗ f j−1 ⊗ f j+1 ⊗ . . . ⊗ f n ). An important property of Wick products is that if X = ψ(f 1 ⊗ . . . ⊗ f n ) then XΦ = f 1 ⊗ . . . ⊗ f n .(7) We will also use the connection with q-Hermite polynomials. If f = 1 then ψ f ⊗n = H n (X f ) ,(8)E (H n (X f )) 2 = σ∈Sn q |σ| = [n] q !.(9) Thus ν q is indeed the distribution of X f . Our main use of the Wick product is to compute certain conditional expectations. Conditional expectations. Recall that a (non-commutative) conditional expectation on the probability space (A, E) with respect to the subalgebra B ⊂ A is a mapping E : A → B such that E(Y 1 XY 2 ) = E(Y 1 E(X)Y 2 ) (10) for all X ∈ A, Y 1 , Y 2 ∈ B. We will study only algebras B generated by the identity and the finite number of random variables X f1 , . . . , X fn . In this situation, we will use a more probabilistic notation: E(X|X f1 , . . . , X fn ) := E(X), X ∈ A. In this setting conditional expectations are easily computed for X given by Wick products. This important result comes from [2, Theorem 2.13]. Theorem 1. If Y = ψ(g 1 ⊗ . . . ⊗ g m ), X 1 = X f1 , . . . , X k = X f k for some f i , g j ∈ H and P : H → H denotes orthogonal projection onto the span of f 1 , . . . , f k then E(Y|X 1 , . . . , X k ) = ψ(P g 1 ⊗ . . . ⊗ P g m ). The following formula is an immediate consequence of Theorem 1 and (8), and is implicit in [2, Proof of Theorem 4.6]. Corollary 1. If X = X f , Y = X g with unit vectors f = g = 1 and H n is the n th q-Hermite polynomial, see (2), then E(H n (Y)|X) = f, g n H n (X).(11) For a finite number of vectors f 0 , f 1 , . . . , f k ∈ H, let X k := X f k . These (noncommutative) random variables have linear regressions and constant conditional variances like the classical (commutative) Gaussian random variables. Proposition 1. E(X 0 |X 1 , . . . , X k ) = k j=1 a j X j(12) and E(X 2 0 |X 1 , . . . , X k ) = ( k j=1 a j X j ) 2 + cI.(13) If f 1 . . . 
, f k ∈ H are linearly independent then the coefficients a j , c are uniquely determined by the covariance matrix C = [c i,j ] := [ f i , f j ]. Notice that Eq. (13) can indeed be rewritten as the statement that conditional variance is constant, V ar(X 0 |X 1 , . . . , X k ) := E (X 0 − E(X 0 |X 1 , . . . , X k )) 2 |X 1 , . . . , X k = cI. Proof. This follows from Theorem 1 and (6). Write the orthogonal projection of f 0 onto the span of f 1 , . . . f k as the linear combination g = j a j f j . Then E(X 0 |X 1 , . . . , X k ) = E(ψ(f 0 )|X 1 , . . . , X k ) = ψ(g) = j a j ψ(f j ), which proves (12). Similarly, E(X 2 0 − f 0 2 I|X 1 , . . . , X k ) = E(ψ(f 0 ⊗ f 0 )|X 1 , . . . , X k ) = ψ(g ⊗ g) = ( j a j X j ) 2 − g 2 I. This proves (13) with c = f 0 2 − g 2 . If f 1 . . . , f k ∈ H are linearly independent then the representation g = j a j f j is unique. To analyze standardized triplets in more detail we need the explicit form of the coefficients. (We omit the straightforward calculation.) Corollary 2. If X := X f , Y := X g , Z := X h and f, h ∈ H are linearly indepen- dent unit vectors, then E(Y|X, Z) = aX + bZ,(14)E(Y 2 |X, Z) = (aX + bZ) 2 + cI,(15) where a = f, g − g, h f, h 1 − f, h 2 ,(16)b = g, h − f, g f, h 1 − f, h 2 .(17) Another calculation shows that c = det(C)/(1 − f, h 2 ), where C is the covariance matrix; in particular c ≥ 0. Conditional Moments of Classical Versions We give the definition of a classical version which is convenient for bounded processes; for a more general definition, see [2, Def. 3.1]. Definition 2. A classical version of the process X(t) indexed by t ∈ T ⊂ R is a stochastic processX(t) defined on some classical probability space such that for any finite number of indexes t 1 < t 2 < . . . < t k and any polynomials P 1 , . . . , P k , E (P 1 (X(t 1 ))P 2 (X(t 2 )) . . . P k (X(t k ))) = (18) E P 1 (X(t 1 ))P 2 (X(t 2 )) . . . P k (X(t k )) . 
Here E(·) denotes the classical expected value given by Lebesgue integral with respect to the classical probability measure. Our main interest is in finite index set T = {t 1 , t 2 , t 3 }, where t 1 < t 2 < t 3 . In this case we write X := X(t 1 ), Y := X(t 2 ), Z := X(t 3 ). We say that an ordered triplet (X, Y, Z) has a classical versionX,Ỹ ,Z, if E (P 1 (X)P 2 (Y)P 3 (Z)) = E P 1 (X)P 2 (Ỹ )P 3 (Z) for all polynomials P 1 , P 2 , P 3 . The classical version of a non-commutative process is order-dependent, since the left-hand side of (18) may depend on the ordering of the variables, while the right-hand side does not. For specific example in the context of q-Gaussian random variables, see [5, formulas (2,64) and (2.65)]. Triplets. All pairs (X f , X g ) of q-Gaussian random variables have classical versions because E(X m f X n g ) = E(X n f X m g ) for all integer m, n; however, the classical version of a triplet may fail to exist. With this in mind we consider q-Gaussian triplets X := X f , Y := X g , Z := X h .(19) To simplify the notation we take unit vectors f = g = h = 1. We assume that there is a classical version (X,Ỹ ,Z) of (X, Y, Z), in this order. From Corollary 2 we know that non-commutative random variables X, Y, Z have linear regression and constant conditional variance. It turns out that the corresponding classical random variablesX,Ỹ ,Z also have linear regressions, while their conditional variances get perturbed into quadratic polynomials. Theorem 2. If (X,Ỹ ,Z) is a classical version of the q-Gaussian triplet (19) then E(Ỹ |X,Z) = aX + bZ,(20)E(Ỹ 2 |X,Z) = AX 2 + BXZ + CZ 2 + D,(21) where a, b are given by (16), (17), A = ab(1 − q) f, h + a 2 (1 − q f, h 2 ) 1 − q f, h 2 ,(22)B = ab(1 + q)(1 − f, h 2 ) 1 − q f, h 2 ,(23)C = ab(1 − q) f, h + b 2 (1 − q f, h 2 ) 1 − q f, h 2 ,(24) and D = 1 − A − B f, h − C.(25) The proof relies on the following technical result. Proof of Theorem 2. 
SinceX,Ỹ ,Z are bounded random variables, to prove (20) we need only to verify that for arbitrary polynomials P, Q, E (H n (X)ZXH m (Z)) =      f, h n+1 [n + 2] q ! if m = n + 2 f, h n−1 [n] q ! if m = n − 2 f, h n−1 ([n] q + 1) f, h 2 + q[n] q [n] q ! if m = n 0 otherwise ,(26)E (H n (X)XZH m (Z)) =      f, h n+1 [n + 2] q ! if m = n + 2 f, h n−1 [n] q ! if m = n − 2 f, h n−1 [n + 1] q f, h 2 + [n] q [n] q ! if m = n 0 otherwise , E H n (X)X 2 H m (Z) = E H m (X)Z 2 H n (Z) = (28)      f, h n+2 [n + 2] q ! if m = n + 2 f, h n−2 [n] q ! if m = n − 2 f, h n ([n + 1] q + [n] q ) [n] q ! if m = n 0 otherwise(27)E P (X)Ỹ Q(Z) = E P (X)(aX + bZ)Q(Z) . This is equivalent to E (P (X)YQ(Z)) = E (P (X)(aX + bZ)Q(Z)) , see (18). The latter follows from (14) and (10), proving (20). To prove (21), we verify that for arbitrary polynomials P, Q we have E P (X)Ỹ 2 Q(Z) = E P (X)(AX 2 + BXZ + CZ 2 + D)Q(Z) . By definition (18), this is equivalent to E P (X)Y 2 Q(Z) = E P (X)(AX 2 + BXZ + CZ 2 + D)Q(Z) .(30) It suffices to show that (30) holds true when P = H n and Q = H m are the q-Hermite polynomials defined by (2). Formula (15) implies that the left-hand side of (30) is given by cE(H n (X)H m (Z)) + a 2 E(H n (X)X 2 H m (Z)) + b 2 E(H n (X)Z 2 H m (Z)) +abE(H n (X)XZH m (Z)) + abE(H n (X)ZXH m (Z)), and the right-hand side becomes AE(H n (X)X 2 H m (Z)) + CE(H n (X)Z 2 H m (Z))+ BE(H n (X)XZH m (Z)) + DE(H n (X)H m (Z)). Using formulas from Lemma 1 we can see that both sides are zero, except when m = n or m = n ± 2. We now consider these three cases separately. Case m = n + 2: Using Lemma 1, (30) simplifies to (a 2 f, h 2 +2ab f, h +b 2 ) f, h n [n+2] q ! = (A f, h 2 +B f, h +C) f, h n [n+2] q !. This equation is satisfied when coefficients A, B, C satisfy the equation A f, h 2 + B f, h + C = a 2 f, h 2 + 2ab f, h + b 2 .(31) Case m = n − 2: Using Lemma 1, (30) simplifies to (a 2 + 2ab f, h + b 2 f, h 2 ) f, h n−2 [n] q ! 
= (A + B f, h + C f, h 2 ) f, h n−2 [n] q !. This equation is satisfied whenever A + B f, h + C f, h 2 = a 2 + 2ab f, h + b 2 f, h 2 .(32f, h (a 2 + b 2 )([n + 1] q + [n] q ) + ab f, h 2 [n + 1] q + (1 + q)[n] q + c f, h = (1 + q)(A + C) + B(q f, h 2 + 1)[n] q + D f, h . Now we use [n + 1] q = 1 + q[n] q . Suppressing the correction to the constant term (i.e., the term free of n), we get (1 + q) f, h (a 2 + b 2 ) + ab(1 + f, h 2 ) [n] q + c f, h + . . . = (1 + q)(A + C) f, h + B(q f, h 2 + 1) [n] q + D f, h , where c + . . . denotes the suppressed constant term corrections. This equation holds true when the coefficients at [n] q match, which gives 7) and (8) we get E(H n (X)ZXH m (Z)) = ZH n (X)Φ|XH m (Z)Φ q = X h f ⊗n |X f h ⊗m q . Therefore (4), and (5) imply (1+q) f, h (A+C)+B(q f, h 2 +1) = (1+q) (a 2 + b 2 ) f, h + ab( f, h 2 + 1) ,(33)E(H n (X)ZXH m (Z)) = (34) [n] q f, h f ⊗n−1 + h ⊗ f ⊗n |[m] q f, h h ⊗m−1 + f ⊗ h ⊗m q . The latter is zero, except when m = n or m = n ± 2. We will consider these two cases separately. If m = n, by orthogonality we have E(H n (X)ZXH m (Z)) = [n] 2 q f, h 2 f ⊗n−1 |h ⊗n−1 q + h ⊗ f ⊗n |f ⊗ h ⊗n q . Clearly, f ⊗n−1 |h ⊗n−1 q = f, h n−1 [n − 1] q !; this can be seen either from (9) and (11), or directly from the definition (3). By (3) the second term splits into the sum over permutations σ ′ ∈ S n+1 such that σ ′ (1) = 1 and the sum over the permutations such that σ ′ (1) = k > 1. This gives h ⊗ f ⊗n |f ⊗ h ⊗n q = σ∈Sn f, h q |σ| f, h n + n+1 k=2 σ∈Sn q k−1+|σ| f, h n−1 = f, h n+1 [n] q ! + f, h n−1 q[n] q [n] q !. Elementary algebra now yields (26) for m = n. If m = n + 2, then the right-hand side of (34) consists of only one term we get E(H n (X)ZXH n+2 (Z)) = [n + 2] q f, h h ⊗ f ⊗n |h ⊗n+1 q = [n + 2] q f, h σ∈Sn+1 q |σ| f, h n = f, h n+1 [n + 2] q !. Since m = n−2 is given by the same expression with the roles of m, n switched around, this ends the proof of (26). 
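The permutation identity Σ_{σ∈S_n} q^{|σ|} = [n]_q! from (9), used repeatedly in the computation above, can be checked by brute-force enumeration. A small Python sketch (ours; the function names are illustrative):

```python
from fractions import Fraction
from itertools import permutations

def inversions(sigma):
    # |sigma| = #{(i, j) : i < j, sigma(i) > sigma(j)}
    return sum(1 for i in range(len(sigma))
                 for j in range(i + 1, len(sigma))
                 if sigma[i] > sigma[j])

def inversion_poly(n, q):
    # sum over all sigma in S_n of q^{|sigma|}
    return sum(q**inversions(s) for s in permutations(range(n)))

def q_factorial(n, q):
    # [n]_q! = prod_{k=1}^{n} (1 + q + ... + q^{k-1})
    result = Fraction(1)
    for k in range(1, n + 1):
        result *= sum(q**j for j in range(k))
    return result
```

For n = 2 both sides equal 1 + q (the identity and the transposition), and the enumeration confirms the equality for small n and any sample value of q.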
The remaining expectations match the corresponding commutative values, and can also be evaluated using recurrence (2) and formulas (11), and (9). To prove (27) notice that since X and H n (X) commute, using (2) and (11) we get E(H n (X)XZH m (Z)) = E(XH n (X)(H m+1 (Z) + [m] q H m−1 (Z))) = f, h m+1 E(XH n (X)H m+1 (X)) + [m] q f, h m−1 E(XH n (X)H m−1 (X)). The only non-zero values are when m = n, or m = n ± 2. Using (2) again, and then (9) we get (27). Since by (11) we have E(H n (X)X 2 H m (Z)) = f, h m EX 2 H n (X)H m (X), recurrence (2) used twice proves (28). Formula (29) is an immediate consequence of (11) and (9). ⊓ ⊔ 3.2. Relation to processes with independent increments. In [2, Definition 3.5] the authors define the non-commutative q-Brownian motion and show that it has a classical version, see [2,Cor. 4.5]. Since the classical version of the q-Brownian motion is Markov, Theorem 2 implies that all regressions are linear, and all conditional variances are quadratic. A computation gives the following expression for the conditional variances. Proposition 2. LetX t be the classical version of the q-Brownian motion, i. e., f t , f s = min{s, t}. Then for t 1 < t 2 < . . . < t n < s < t we have V ar(X s |X t1 , . . . ,X tn , X t ) = (t − s) (s − t n ) (t − qt n )   1 + X t −X tn tX t − t nXtn (1 − q) (t − t n ) 2   . In [8], classical processes with independent increments, linear regressions, and quadratic conditional variances are analyzed. These processes have the same covariances as q-Brownian motion, but the conditional variances are quadratic functions of the incrementX t −X tn only. Proposition 2 shows that the classical realizations of q-Brownian motion are not among the processes in [8] and thus have dependent increments. Bell's Inequality It is well known that all q-Gaussian n-tuples with q = 1 have classical versions: these are given by the classical Gaussian distribution with the same covariance matrix [ f i , f j ]. 
For q = −1 the classical version of the the standardized q-Gaussian triplet (X, Y, Z) consists of the ±1-valued symmetric random variables. The celebrated Bell's inequality [1] therefore restricts their covariances: 1 − f, h ≥ | f, g − g, h |.(35) In particular, there are triplets of q-Gaussian random variables with q = −1 which do not have a classical version. The following shows that restriction (35) is in force for sub-Markov covariances over the entire range −1 ≤ q < 1. Theorem 3. Suppose that (X,Ỹ ,Z) is a classical version of q-Gaussian (X, Y, Z) := (X f , X g , X h ), where f, h ∈ H are linearly independent, and −1 ≤ q < 1. If either f, g g, h ≤ f, h and 0 < f, h < 1,(36) or f, h = 0, or q = −1, then inequality (35) holds true. Proof. Since the case q = −1 is well known, we restrict our attention to the case −1 < q < 1. Our starting point is expression (21). A computation shows that the conditional variance V ar(Ỹ |X,Z) := E(Ỹ 2 |X,Z) − E(Ỹ |X,Z) 2 is as follows. V ar(Ỹ |X,Z) = 1 − a 2 − b 2 − ab f, h (1+q)(1− f,h 2 )+2(1−q) 1−q f,h 2 − (37) ab(1−q) 1−q f,h 2 Z f, h −X f, h X −Z . The right-hand side of this expression must be non-negative over the support ofX,Z. It is known, see [4,Lemma 8.1] or [2, Theorem 1.10], thatX,Z have the joint probability density function f (x, z) with respect to the product of marginals ν q . Moreover, f is defined for all −2/ √ 1 − q ≤ x, z ≤ 2/ √ 1 − q and from its explicit product expansion we can see that f (x, z) ≥ ∞ k=0 (1 − f, h 2 q k ) (1 + f, h q k ) 4 is strictly positive. In particular, the right-hand side of (37) must be non-negative when evaluated atX = √ 2/ √ 1 − q,Z = − √ 2/ √ 1 − q. Using formulas (16), (17) with the above values ofX,Ỹ we get the rational expression for the conditional variance which can be written as follows. 1 − q f, h 2 (1 − f, h ) 2 V ar Ỹ |X,Z =(38) (1 − f, h ) 2 1 − q f, h 2 + (1 + q) f, h f, g g, h − ( f, g − g, h ) 2 (1 + f, h 2 ). 
Therefore (1 − f, h ) 2 1 − q f, h 2 + (1 + q) f, h f, g g, h ≥ ( f, g − g, h ) 2 (1 + f, h 2 ). Since the assumptions imply that 1 − q f, h 2 + (1 + q) f, h f, g g, h ≤ 1 + f, h 2 , this implies (1 − f, h ) 2 ≥ ( f, g − g, h ) 2 , proving (35). 4.1. Examples. The first example shows that there are covariances such that q-Gaussian random variables have no classical version for all −1 ≤ q < 1. Example 1. Consider the case f, h = g, h > 0. This can be realized when the covariance matrix is non-negative definite; a computation shows that this is equivalent to the condition 2 f, h 2 ≤ 1 + f, g . Since (36) is satisfied, Bell's inequality (35) implies 1 + f, g ≥ 2 f, h . Therefore, all choices of vectors f, g, h ∈ H such that f, h = g, h , 0 < f, h < 1, and 2 f, h 2 − 1 < f, g < 2 f, h − 1 lead to q-Gaussian triplets with no classical version for −1 ≤ q < 1. A nice feature of Theorem 3 is that its statement does not depend on q, as long as q < 1. But such a result cannot be sharp. A less transparent statement that the conditional variance is non-negative is a stronger restriction on the covariances, and it depends on q. This is illustrated by the next example. Example 2. Suppose f, h = g, h = 1/2. Inequality (35) used in Example 1 implies that if a classical version of a q-Gaussian process exists then f, g ≥ 0. Evaluating the conditional variance V ar(Ỹ |X,Z) atX = 2/ √ 1 − q,Z = −X we get a more restrictive constraint f, g ≥ q+5 36 . Lemma 1 . 1If H n , H m are q-Hermite polynomials given by(2), then and the constant terms match: c + . . . = D. The latter holds true when the expectations are equal (n = m = 0), and hence this condition is equivalent to (25). The remaining three equations (31), (32), and (33) have a unique solution given by the expressions (22), (23), (24). ⊓ ⊔ Proof of Lemma 1. Using the definition of vacuum expectation state, ( )8 Wlodzimierz Bryc Case m = n: We use again Lemma 1. On both sides of Eq. 
(30) we factor out ⟨f, h⟩^{n−1} [n]_q!, and equate the remaining coefficients. (This is allowed since we are after sufficient conditions only!) We get

Acknowledgements. I would like to thank the referee, A. Dembo, and T. Hodges for suggestions which improved the presentation, and V. Kaftal for several helpful discussions.

References

1. Bell, J. S.: On the Einstein-Podolski-Rosen paradox. Physics 1, 195-200 (1964)
2. Bożejko, M., Kümmerer, B., Speicher, R.: q-Gaussian processes: Non-commutative and classical aspects. Comm. Math. Phys. 185, 129-154 (1997)
3. Bożejko, M., Speicher, R.: An example of a generalized Brownian motion. Comm. Math. Phys. 137(3), 519-531 (1991)
4. Bryc, W.: Stationary random fields with linear regressions. Ann. Probab. 29, 504-519 (2001)
5. Frisch, U., Bourret, R.: Parastochastics. J. Math. Phys. 11(2), 364-390 (1970)
6. Koekoek, R., Swarttouw, R. F.: The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue. Report no. 98-17, Delft University of Technology (1998), http://aw.twi.tudelft.nl/~koekoek/askey.html
7. Voiculescu, D. V., Dykema, K. J., Nica, A.: Free random variables. Providence, RI: American Mathematical Society, 1992
8. Wesołowski, J.: Stochastic processes with linear conditional expectation and quadratic conditional variance. Probab. Math. Statist. (Wrocław) 14, 33-44 (1993)

Communicated by H. Araki
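Example 1 from the Bell's inequality section lends itself to a quick numerical sanity check: with ⟨f,h⟩ = ⟨g,h⟩ = t and 2t² − 1 < ⟨f,g⟩ < 2t − 1, the Gram (covariance) matrix of f, g, h is positive semidefinite, yet inequality (35) fails. A pure-Python sketch (our code, with hypothetical helper names):

```python
def gram_psd(fg, fh, gh):
    # A symmetric 3x3 correlation matrix
    #   [[1, fg, fh], [fg, 1, gh], [fh, gh, 1]]
    # is positive semidefinite iff all its principal minors are >= 0;
    # the 1x1 minors are 1, so check the 2x2 minors and the determinant.
    det3 = 1 + 2 * fg * fh * gh - fg**2 - fh**2 - gh**2
    return min(1 - fg**2, 1 - fh**2, 1 - gh**2, det3) >= 0

def bell_35_holds(fg, fh, gh):
    # Inequality (35): 1 - <f,h> >= |<f,g> - <g,h>|
    return 1 - fh >= abs(fg - gh)
```

For t = 0.6 the window for ⟨f,g⟩ is (−0.28, 0.2); taking ⟨f,g⟩ = 0 gives a legitimate covariance matrix that violates (35), so the corresponding q-Gaussian triplet has no classical version for −1 ≤ q < 1.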
[]
[ "Dedekind sums: a combinatorial-geometric viewpoint", "Dedekind sums: a combinatorial-geometric viewpoint", "Dedekind sums: a combinatorial-geometric viewpoint", "Dedekind sums: a combinatorial-geometric viewpoint" ]
[ "Matthias Beck ", "Sinai Robins ", "Matthias Beck ", "Sinai Robins " ]
[]
[ "DIMACS Series in Discrete Mathematics and Theoretical Computer Science", "DIMACS Series in Discrete Mathematics and Theoretical Computer Science" ]
The literature on Dedekind sums is vast. In this expository paper we show that there is a common thread to many generalizations of Dedekind sums, namely through the study of lattice point enumeration of rational polytopes. In particular, there are some natural finite Fourier series which we call Fourier-Dedekind sums, and which form the building blocks of the number of partitions of an integer from a finite set of positive integers. This problem also goes by the name of the 'coin exchange problem'. Dedekind sums have enjoyed a resurgence of interest recently, from such diverse fields as topology, number theory, and combinatorial geometry. The Fourier-Dedekind sums we study here include as special cases generalized Dedekind sums studied by Berndt, Carlitz, Grosswald, Knuth, Rademacher, and Zagier. Our interest in these sums stems from the appearance of Dedekind's and Zagier's sums in lattice point count formulas for polytopes. Using some simple generating functions, we show that generalized Dedekind sums are natural ingredients for such formulas. As immediate 'geometric' corollaries to our formulas, we obtain and generalize reciprocity laws of Dedekind, Zagier, and Gessel. Finally, we prove a polynomial-time complexity result for Zagier's higher-dimensional Dedekind sums.
10.1090/dimacs/064/04
[ "https://arxiv.org/pdf/math/0112076v2.pdf" ]
14,092,093
math/0112076
d46fa7f4276110b84388f5a23f8b407e4d9e9743
Dedekind sums: a combinatorial-geometric viewpoint

Matthias Beck, Sinai Robins

DIMACS Series in Discrete Mathematics and Theoretical Computer Science (2 Jan 2005)

Abstract. The literature on Dedekind sums is vast. In this expository paper we show that there is a common thread to many generalizations of Dedekind sums, namely through the study of lattice point enumeration of rational polytopes. In particular, there are some natural finite Fourier series which we call Fourier-Dedekind sums, and which form the building blocks of the number of partitions of an integer from a finite set of positive integers. This problem also goes by the name of the 'coin exchange problem'. Dedekind sums have enjoyed a resurgence of interest recently, from such diverse fields as topology, number theory, and combinatorial geometry. The Fourier-Dedekind sums we study here include as special cases generalized Dedekind sums studied by Berndt, Carlitz, Grosswald, Knuth, Rademacher, and Zagier. Our interest in these sums stems from the appearance of Dedekind's and Zagier's sums in lattice point count formulas for polytopes. Using some simple generating functions, we show that generalized Dedekind sums are natural ingredients for such formulas. As immediate 'geometric' corollaries to our formulas, we obtain and generalize reciprocity laws of Dedekind, Zagier, and Gessel. Finally, we prove a polynomial-time complexity result for Zagier's higher-dimensional Dedekind sums.

Introduction

In recent years, Dedekind sums and their various siblings have enjoyed a new renaissance. Historically, they appeared in analytic number theory (Dedekind's η-function [De]), algebraic number theory (class number formulae [Me]), topology (signature defects of manifolds [HZ]), combinatorial geometry (lattice point enumeration [Mo]), and algorithmic complexity (pseudo random number generators [K]).
In this expository paper, we define some broad generalizations of Dedekind sums, which are in fact finite Fourier series. We show that they appear naturally in the enumeration of lattice points in polytopes, and prove reciprocity laws for them. In combinatorial number theory, one is interested in partitions of an integer n from a finite set. That is, one writes n as a nonnegative integer linear combination of a given finite set of positive integers. We showed in [BDR] that the number of such partitions of n from a finite set is a quasipolynomial in n, whose coefficients are built up from the following generalization of Dedekind sums. Definition 1.1. For a 0 , . . . , a d , n ∈ Z, we define the Fourier-Dedekind sum as σ n (a 1 , . . . , a d ; a 0 ) := 1 a 0 λ a 0 =1 λ n (1 − λ a1 ) · · · (1 − λ a d ) . Here the sum is taken over all a 0 'th roots of unity for which the summand is not singular. In [G], Gessel systematically studied sums of the form λ a =1 R(λ) , where R is a rational function, and the sum is taken over all a'th roots of unity for which R is not singular. He called them 'generalized Dedekind sums', since his definition includes various generalizations of the Dedekind sum as special cases. Hence we study Gessel's sums where the poles of R are restricted to be roots of unity. In Section 2, we give a brief history on those generalizations of the classical Dedekind sum (due to Rademacher [R], and Zagier [Z]) which can be written as Fourier-Dedekind sums. Our interest in these sums stems from the appearance of Dedekind's and Zagier's sums in lattice point enumeration formulas for polytopes [Mo, P, BV, DR]. Using generating functions, we show in Section 3 that generalized Dedekind sums are natural ingredients for such formulas, which also apply to the theory of partition functions. In Section 4 we obtain and generalize reciprocity laws of Dedekind [De], Zagier [Z], and Gessel [G] as 'geometric' corollaries to our formulas. 
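Definition 1.1 can be implemented directly by averaging over roots of unity; the Python sketch below (ours, not from the paper) skips the roots at which the summand is singular, exactly as the definition prescribes:

```python
import cmath

def fourier_dedekind_sum(n, a_list, a0, eps=1e-9):
    # sigma_n(a_1, ..., a_d; a_0): average over the a_0-th roots of
    # unity lam of lam^n / prod_i (1 - lam^{a_i}), skipping roots at
    # which some factor of the denominator vanishes.
    total = 0j
    for k in range(a0):
        lam = cmath.exp(2j * cmath.pi * k / a0)
        denom = 1 + 0j
        singular = False
        for a in a_list:
            factor = 1 - lam**a
            if abs(factor) < eps:
                singular = True
                break
            denom *= factor
        if not singular:
            total += lam**n / denom
    return total / a0
```

Since the nontrivial roots of unity come in conjugate pairs, the result is real up to floating-point error; for instance σ_0(1; 2) picks up only λ = −1 and equals 1/4.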
Finally, in Section 5, we prove that Zagier's higher-dimensional Dedekind sums are in fact polynomial-time computable in fixed dimension. For Dedekind sums in 2 dimensions, this fact follows easily from their reciprocity law; but for higher dimensional Dedekind sums the polynomial-time complexity does not seem to follow so easily, and we therefore invoke some recent work of [BP] and [DR].

Classical Dedekind sums and generalizations

According to Riemann's will, it was his wish that Dedekind should get Riemann's unpublished notes and manuscripts [RG]. Among these was a discussion of the important function

η(z) = e^{πiz/12} ∏_{n≥1} (1 − e^{2πinz}),

which Dedekind took up and eventually published in Riemann's collected works [De].

Definition 2.1. Let ((x)) be the sawtooth function defined by

((x)) := {x} − 1/2 if x ∉ Z, and 0 if x ∈ Z.

Here {x} = x − [x] denotes the fractional part of x. For two integers a and b, we define the Dedekind sum as

s(a, b) := Σ_{k mod b} ((ka/b)) ((k/b)).

Here the sum is over a complete residue system modulo b.

Through the study of the transformation properties of η under SL_2(Z), Dedekind naturally arrived at s(a, b). The classic introduction to the arithmetic properties of the Dedekind sum is [RG]. The most important of these, already proved by Dedekind [De], is the famous reciprocity law:

Theorem 2.2 (Dedekind). If a and b are relatively prime then

s(a, b) + s(b, a) = −1/4 + (1/12) (a/b + 1/(ab) + b/a).

This reciprocity law is easily seen to be equivalent to the transformation law of the η-function [De]. Due to the periodicity of ((x)), we can reduce a modulo b in the Dedekind sum: s(a, b) = s(a mod b, b). Therefore, Theorem 2.2 allows us to compute s(a, b) in polynomial time, similar in spirit to the Euclidean algorithm.

The Dedekind sum s(a, b) has various generalizations, two of which we introduce here. The first one is due to Rademacher [R], who generalized sums introduced by Meyer [Me] and Dieter [D]:

Definition 2.3.
For a, b ∈ Z, x, y ∈ R, the Dedekind-Rademacher sum is defined by

s(a, b; x, y) := Σ_{k mod b} (((k + y)a/b + x)) (((k + y)/b)).

This sum possesses again a reciprocity law:

Theorem 2.4 (Rademacher). If a and b are relatively prime and x and y are not both integers, then

s(a, b; x, y) + s(b, a; y, x) = ((x))((y)) + (1/2) ((a/b) B_2(y) + (1/(ab)) B_2(ay + bx) + (b/a) B_2(x)).

Here B_2(x) := (x − [x])^2 − (x − [x]) + 1/6 is the periodized second Bernoulli polynomial.

If x and y are both integers, the Dedekind-Rademacher sum is simply the classical Dedekind sum, whose reciprocity law we already stated. As with the reciprocity law for the classical Dedekind sum, Theorem 2.4 can be used to compute s(a, b; x, y) in polynomial time.

The second generalization of the Dedekind sum we mention here is due to Zagier [Z]. From topological considerations, he arrived naturally at expressions of the following kind:

Definition 2.5. Let a_1, . . . , a_d be integers relatively prime to a_0 ∈ N. Define the higher-dimensional Dedekind sum as

s(a_0; a_1, . . . , a_d) := ((−1)^{d/2} / a_0) Σ_{k=1}^{a_0−1} cot(πka_1/a_0) · · · cot(πka_d/a_0).

This sum vanishes if d is odd. It is not hard to see that this indeed generalizes the classical Dedekind sum: the latter can be written in terms of cotangents [RG], which yields

s(a, b) = (1/(4b)) Σ_{k mod b} cot(πka/b) cot(πk/b) = −(1/4) s(b; a, 1).

Again, there exists a reciprocity law for Zagier's sums:

Theorem 2.6 (Zagier). If a_0, . . . , a_d ∈ N are pairwise relatively prime then

Σ_{j=0}^{d} s(a_j; a_0, . . . , â_j, . . . , a_d) = φ(a_0, . . . , a_d).

Here φ is a rational function in a_0, . . . , a_d, which can be expressed in terms of Hirzebruch L-functions [Z]. It should be mentioned that a version of the higher-dimensional Dedekind sums had already been introduced by Carlitz [C]:

Σ_{k_1, . . . , k_d mod a_0} (((a_1 k_1 + · · · + a_d k_d)/a_0)) ((k_1/a_0)) · · · ((k_d/a_0)).

Berndt [B] noticed that these sums are, up to a trivial factor, Zagier's higher-dimensional Dedekind sums.
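Both the classical Dedekind sum and Zagier's cotangent sum are straightforward to compute, which lets one test Theorem 2.2 and the identity s(a, b) = −(1/4) s(b; a, 1) numerically. A Python sketch (our illustration; the function names are ours):

```python
from fractions import Fraction
from math import floor, pi, prod, tan

def saw(x):
    # ((x)) = {x} - 1/2 for x not in Z, and 0 for x in Z
    if x == floor(x):
        return Fraction(0)
    return x - floor(x) - Fraction(1, 2)

def dedekind_sum(a, b):
    # s(a, b) = sum over k mod b of ((ka/b)) ((k/b)), exact arithmetic
    return sum(saw(Fraction(k * a, b)) * saw(Fraction(k, b))
               for k in range(b))

def zagier_sum(a0, a_list):
    # Zagier's s(a0; a1, ..., ad) via cotangents (floating point);
    # the sum vanishes when d is odd
    d = len(a_list)
    total = sum(prod(1 / tan(pi * k * a / a0) for a in a_list)
                for k in range(1, a0))
    return (-1) ** (d // 2) * total / a0
```

For example s(1, 3) = ((1/3))^2 + ((2/3))^2 = 1/18, and reciprocity then forces s(3, 7) = −1/14; both values come out of the exact-arithmetic routine.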
If we write the higher-dimensional Dedekind sum as a sum over roots of unity,
$$ s(a_0; a_1, \ldots, a_d) = \frac{1}{a_0} \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{\lambda^{a_1} + 1}{\lambda^{a_1} - 1} \cdots \frac{\lambda^{a_d} + 1}{\lambda^{a_d} - 1}, $$
it becomes clear that it suffices to study sums of the form
$$ \frac{1}{a_0} \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{1}{(\lambda^{a_1} - 1) \cdots (\lambda^{a_d} - 1)}. $$
Zagier's Dedekind sum can be expressed as a sum of expressions of this kind.

On the other hand, we consider special cases of the Dedekind-Rademacher sum, namely, for $n \in \mathbb{Z}$,
$$ s\left(a, b; \frac{n}{b}, 0\right) = \sum_{k \bmod b} \left(\left( \frac{ka+n}{b} \right)\right) \left(\left( \frac{k}{b} \right)\right). $$
Knuth [K] discovered that these generalized Dedekind sums describe the statistics of pseudorandom number generators. In [BR], we used the convolution theorem for finite Fourier series to show that, if $a$ and $b$ are relatively prime,
$$ s\left(a, b; \frac{n}{b}, 0\right) = -\frac{1}{b} \sum_{\substack{\lambda^b = 1 \\ \lambda \neq 1}} \frac{\lambda^{-n}}{(1 - \lambda^a)(1 - \lambda)} - \frac{1}{2} \left\{ \frac{n}{b} \right\} + \frac{1}{4} - \frac{1}{4b}. \tag{2.1} $$
Here $\{x\} = x - [x]$ denotes the fractional part of $x$. Comparing this with the representation we obtained for Zagier's Dedekind sums motivates the study of the Fourier-Dedekind sum
$$ \sigma_n(a_1, \ldots, a_d; a_0) = \frac{1}{a_0} \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{\lambda^n}{(1 - \lambda^{a_1}) \cdots (1 - \lambda^{a_d})}, $$
a finite Fourier series in $n$. Gessel [G] gave a new reciprocity law for a special case of Fourier-Dedekind sums:

Theorem 2.7 (Gessel). Let $p$ and $q$ be relatively prime and suppose that $1 \leq n \leq p+q$. Then
$$ \frac{1}{p} \sum_{\substack{\lambda^p = 1 \\ \lambda \neq 1}} \frac{\lambda^n}{(1 - \lambda^q)(1 - \lambda)} + \frac{1}{q} \sum_{\substack{\lambda^q = 1 \\ \lambda \neq 1}} \frac{\lambda^n}{(1 - \lambda^p)(1 - \lambda)} = -\frac{n^2}{2pq} + \frac{n}{2}\left( \frac{1}{p} + \frac{1}{q} + \frac{1}{pq} \right) - \frac{1}{4}\left( \frac{1}{p} + \frac{1}{q} + 1 \right) - \frac{1}{12}\left( \frac{p}{q} + \frac{1}{pq} + \frac{q}{p} \right). $$

It is easy to see that the reciprocity law for classical Dedekind sums (Theorem 2.2) is a special case of Gessel's theorem. We can rephrase the statement of Gessel's theorem in terms of Dedekind-Rademacher sums by means of (2.1): for $p$ and $q$ relatively prime, and $1 \leq n \leq p+q$,
$$ s\left(q, p; \frac{-n}{p}, 0\right) + s\left(p, q; \frac{-n}{q}, 0\right) \stackrel{\mathrm{def}}{=} \sum_{k=0}^{p-1} \left(\left( \frac{qk - n}{p} \right)\right)\left(\left( \frac{k}{p} \right)\right) + \sum_{k=0}^{q-1} \left(\left( \frac{pk - n}{q} \right)\right)\left(\left( \frac{k}{q} \right)\right) = \frac{n^2}{2pq} - \frac{n}{2}\left( \frac{1}{p} + \frac{1}{q} + \frac{1}{pq} \right) + \frac{1}{4} + \frac{1}{12}\left( \frac{p}{q} + \frac{1}{pq} + \frac{q}{p} \right) - \frac{1}{2}\left(\left( \frac{-n}{p} \right)\right) - \frac{1}{2}\left(\left( \frac{-n}{q} \right)\right). $$
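Gessel's identity can likewise be verified numerically with complex roots of unity. In the sketch below (ours, not from the paper), the left-hand side is computed in floating point and the right-hand side exactly:

```python
import cmath
from fractions import Fraction

def gessel_lhs(p, q, n):
    # sum of the two Fourier-Dedekind sums on the left of Theorem 2.7
    total = 0.0 + 0.0j
    for m, other in ((p, q), (q, p)):
        part = 0.0 + 0.0j
        for k in range(1, m):
            lam = cmath.exp(2j * cmath.pi * k / m)  # nontrivial m-th root of unity
            part += lam ** n / ((1 - lam ** other) * (1 - lam))
        total += part / m
    return total

def gessel_rhs(p, q, n):
    return (-Fraction(n * n, 2 * p * q)
            + Fraction(n, 2) * (Fraction(1, p) + Fraction(1, q) + Fraction(1, p * q))
            - Fraction(1, 4) * (Fraction(1, p) + Fraction(1, q) + 1)
            - Fraction(1, 12) * (Fraction(p, q) + Fraction(1, p * q) + Fraction(q, p)))

p, q, n = 2, 3, 1
print(gessel_lhs(p, q, n).real, float(gessel_rhs(p, q, n)))  # both equal -17/72
```

For $(p,q,n) = (2,3,1)$ both sides come out to $-17/72$, and the imaginary part of the left side vanishes up to rounding, as it must.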
We will now view the Fourier-Dedekind sum from a generating-function point of view, which will allow us to obtain and extend geometric proofs of Dedekind's, Zagier's and Gessel's reciprocity laws.

A new combinatorial identity for partitions from a finite set

The form of the Fourier-Dedekind sum
$$ \sigma_{-n}(a_1, \ldots, a_d; a_0) = \frac{1}{a_0} \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{\lambda^{-n}}{(1 - \lambda^{a_1}) \cdots (1 - \lambda^{a_d})} $$
suggests the use of a generating function
$$ f(z) := \frac{z^{-n}}{(1 - z^{a_0})(1 - z^{a_1}) \cdots (1 - z^{a_d})}. $$
In fact, let us expand this generating function into partial fractions: suppose, for simplicity, that $n > 0$ and that $a_0, \ldots, a_d$ are pairwise relatively prime. Then we can write
$$ f(z) = \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{A_\lambda}{z - \lambda} + \cdots + \sum_{\substack{\lambda^{a_d} = 1 \\ \lambda \neq 1}} \frac{A_\lambda}{z - \lambda} + \sum_{k=1}^{d+1} \frac{B_k}{(z-1)^k} + \sum_{k=1}^{n} \frac{C_k}{z^k}. $$
The coefficient $A_\lambda$ for, say, a nontrivial $a_0$'th root of unity $\lambda$ can be derived easily:
$$ A_\lambda = \lim_{z \to \lambda} (z - \lambda) f(z) = -\frac{\lambda}{a_0} \cdot \frac{\lambda^{-n}}{(1 - \lambda^{a_1}) \cdots (1 - \lambda^{a_d})}. $$
Hence we obtain the Fourier-Dedekind sums if we consider the constant coefficient of $f$ (in the Laurent series about $z = 0$):
$$ \mathrm{const}(f) = \sum_{\substack{\lambda^{a_0} = 1 \\ \lambda \neq 1}} \frac{A_\lambda}{-\lambda} + \cdots + \sum_{\substack{\lambda^{a_d} = 1 \\ \lambda \neq 1}} \frac{A_\lambda}{-\lambda} + \sum_{k=1}^{d+1} (-1)^k B_k = \sigma_{-n}(a_1, \ldots, a_d; a_0) + \cdots + \sigma_{-n}(a_0, \ldots, a_{d-1}; a_d) + \sum_{k=1}^{d+1} (-1)^k B_k. \tag{3.1} $$
The coefficients $B_k$ are simply the coefficients of the Laurent series of $f$ about $z = 1$, and are easily computed, by hand or using mathematics software such as Maple or Mathematica. It is not hard to see that they are polynomials in $n$ whose coefficients are rational functions of $a_0, \ldots, a_d$.¹ To simplify notation, define
$$ q(a_0, \ldots, a_d, n) := \sum_{k=1}^{d+1} (-1)^k B_k. \tag{3.2} $$
¹After this paper was submitted, general formulas for these polynomials were discovered in [BGK].

On the other hand, we can compute the constant coefficient of $f$ by brute force: by expanding
$$ f(z) = \left( \sum_{k_0 \geq 0} z^{k_0 a_0} \right) \cdots \left( \sum_{k_d \geq 0} z^{k_d a_d} \right) z^{-n}, $$
we can see that $\mathrm{const}(f)$ enumerates the ways of writing $n$ as a linear combination of $a_0, \ldots, a_d$ with nonnegative coefficients:
$$ \mathrm{const}(f) = \#\left\{ (k_0, \ldots, k_d) \in \mathbb{Z}^{d+1} : k_j \geq 0,\ k_0 a_0 + \cdots + k_d a_d = n \right\} = p_{\{a_0, \ldots, a_d\}}(n). \tag{3.3} $$
This defines the partition function with parts in the finite set $A := \{a_0, \ldots, a_d\}$. Geometrically, $p_A(n)$ enumerates the integer points in $n$-dilates of the rational polytope
$$ \mathcal{P} := \left\{ (x_0, \ldots, x_d) \in \mathbb{R}^{d+1} : x_j \geq 0,\ x_0 a_0 + \cdots + x_d a_d = 1 \right\}. $$
This geometric interpretation allows us to use the machinery of Ehrhart theory, which will be advantageous in the following section.

We next give an explicit formula for the famous 'coin-exchange problem', that is, the number of ways to form $n$ cents from a finite set of coins with given denominations $a_0, \ldots, a_d$: comparing (3.1) with (3.3) yields our central result [BDR].

Theorem 3.1. Suppose $a_0, \ldots, a_d$ are pairwise relatively prime positive integers. Recall that the number of partitions of an integer $n$ with parts from the finite set of $a_j$'s is defined by
$$ p_{\{a_0, \ldots, a_d\}}(n) := \#\left\{ (k_0, \ldots, k_d) \in \mathbb{Z}^{d+1} : k_j \geq 0,\ k_0 a_0 + \cdots + k_d a_d = n \right\}. $$
Then
$$ p_{\{a_0, \ldots, a_d\}}(n) = q(a_0, \ldots, a_d, n) + \sum_{j=0}^{d} \sigma_{-n}(a_0, \ldots, \hat{a}_j, \ldots, a_d; a_j), $$
where $q(a_0, \ldots, a_d, n)$ is given by (3.2).

The first few expressions for $q(a_0, \ldots, a_d, n)$ are
$$ q(a_0, n) = \frac{1}{a_0}, $$
$$ q(a_0, a_1, n) = \frac{n}{a_0 a_1} + \frac{1}{2} \left( \frac{1}{a_0} + \frac{1}{a_1} \right), $$
$$ q(a_0, a_1, a_2, n) = \frac{n^2}{2 a_0 a_1 a_2} + \frac{n}{2} \left( \frac{1}{a_0 a_1} + \frac{1}{a_0 a_2} + \frac{1}{a_1 a_2} \right) + \frac{1}{12} \left( \frac{3}{a_0} + \frac{3}{a_1} + \frac{3}{a_2} + \frac{a_0}{a_1 a_2} + \frac{a_1}{a_0 a_2} + \frac{a_2}{a_0 a_1} \right), $$
$$ q(a_0, a_1, a_2, a_3, n) = \frac{n^3}{6 a_0 a_1 a_2 a_3} + \frac{n^2}{4} \left( \frac{1}{a_0 a_1 a_2} + \frac{1}{a_0 a_1 a_3} + \frac{1}{a_0 a_2 a_3} + \frac{1}{a_1 a_2 a_3} \right) + \frac{n}{4} \left( \frac{1}{a_0 a_1} + \frac{1}{a_0 a_2} + \frac{1}{a_0 a_3} + \frac{1}{a_1 a_2} + \frac{1}{a_1 a_3} + \frac{1}{a_2 a_3} \right) + \frac{n}{12} \left( \frac{a_0}{a_1 a_2 a_3} + \frac{a_1}{a_0 a_2 a_3} + \frac{a_2}{a_0 a_1 a_3} + \frac{a_3}{a_0 a_1 a_2} \right) + \frac{1}{24} \left( \frac{a_0}{a_1 a_2} + \frac{a_0}{a_1 a_3} + \frac{a_0}{a_2 a_3} + \frac{a_1}{a_0 a_2} + \frac{a_1}{a_0 a_3} + \frac{a_1}{a_2 a_3} + \frac{a_2}{a_0 a_1} + \frac{a_2}{a_0 a_3} + \frac{a_2}{a_1 a_3} + \frac{a_3}{a_0 a_1} + \frac{a_3}{a_0 a_2} + \frac{a_3}{a_1 a_2} \right) + \frac{1}{8} \left( \frac{1}{a_0} + \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} \right). $$
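For $d = 1$, Theorem 3.1 is the classical two-coin formula, and it is easy to test numerically. The sketch below (ours, not from the paper) counts partitions by brute force and compares against $q(a_0, a_1, n)$ plus the two Fourier-Dedekind sums, evaluated with complex roots of unity:

```python
import cmath

def p_count(parts, n):
    # brute-force partition count: nonnegative solutions of sum_j k_j a_j = n
    if len(parts) == 1:
        return 1 if n % parts[0] == 0 else 0
    a = parts[0]
    return sum(p_count(parts[1:], n - k * a) for k in range(n // a + 1))

def sigma(n, others, a0):
    # Fourier-Dedekind sum sigma_n(a1,...,ad; a0) over the nontrivial a0-th roots of unity
    total = 0.0 + 0.0j
    for k in range(1, a0):
        lam = cmath.exp(2j * cmath.pi * k / a0)
        denom = 1.0 + 0.0j
        for a in others:
            denom *= 1 - lam ** a
        total += lam ** n / denom
    return total / a0

def two_coin(a0, a1, n):
    # Theorem 3.1 with d = 1, where q(a0, a1, n) = n/(a0 a1) + (1/a0 + 1/a1)/2
    q = n / (a0 * a1) + 0.5 * (1 / a0 + 1 / a1)
    return q + sigma(-n, [a1], a0).real + sigma(-n, [a0], a1).real

print(p_count([2, 3], 5), round(two_coin(2, 3, 5)))  # 1 1
```

With coins 2 and 3 there is exactly one way to make 5, and the analytic side reproduces that count.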
Reciprocity laws

We will now use Theorem 3.1 to prove and extend some of the reciprocity theorems stated earlier. We will make use of two results due to Ehrhart for rational polytopes, that is, polytopes whose vertices are rational. Ehrhart [E] initiated the study of the number of integer points ("lattice points") in integer dilates of such polytopes:

Definition 4.1. Let $\mathcal{P} \subset \mathbb{R}^d$ be a rational polytope, and $n$ a positive integer. We denote the number of lattice points in the dilates of the closure of $\mathcal{P}$ and of its interior by
$$ L(\mathcal{P}, n) := \#\left( n\mathcal{P} \cap \mathbb{Z}^d \right) \quad \text{and} \quad L(\mathcal{P}^\circ, n) := \#\left( n\mathcal{P}^\circ \cap \mathbb{Z}^d \right), $$
respectively.

Ehrhart proved that $L(\mathcal{P}, n)$ and $L(\mathcal{P}^\circ, n)$ are quasipolynomials in the integer variable $n$, that is, expressions of the form
$$ c_d(n)\, n^d + \cdots + c_1(n)\, n + c_0(n), $$
where $c_0, \ldots, c_d$ are periodic functions of $n$. Ehrhart conjectured the following fundamental theorem, which establishes an algebraic connection between our two lattice-point-count operators. Its original proof is due to Macdonald [Ma].

Theorem 4.2 (Ehrhart-Macdonald reciprocity law). Suppose the rational polytope $\mathcal{P}$ is homeomorphic to a $d$-manifold. Then
$$ L(\mathcal{P}^\circ, -n) = (-1)^d L(\mathcal{P}, n). $$

This enables us to rephrase Theorem 3.1 for the quantity
$$ p^\circ_{\{a_0, \ldots, a_d\}}(n) := \#\left\{ (k_0, \ldots, k_d) \in \mathbb{Z}^{d+1} : k_j > 0,\ k_0 a_0 + \cdots + k_d a_d = n \right\}. $$
By Theorem 4.2, we have the following result.

Corollary 4.3. Suppose $a_0, \ldots, a_d \in \mathbb{N}$ are pairwise relatively prime. Then
$$ p^\circ_{\{a_0, \ldots, a_d\}}(n) = (-1)^d \left( q(a_0, \ldots, a_d, -n) + \sum_{j=0}^{d} \sigma_n(a_0, \ldots, \hat{a}_j, \ldots, a_d; a_j) \right), $$
where $q(a_0, \ldots, a_d, n)$ is given by (3.2).

We note that we could have derived this identity from scratch in a similar way as Theorem 3.1, without using Ehrhart-Macdonald reciprocity. The reason for switching to $p^\circ_{\{a_0, \ldots, a_d\}}(n)$ is that $p^\circ_{\{a_0, \ldots, a_d\}}(n) = 0$ for $0 < n < a_0 + \cdots + a_d$, by the very definition of $p^\circ_{\{a_0, \ldots, a_d\}}(n)$. This yields a reciprocity law:

Theorem 4.4. Let $a_0, \ldots, a_d$ be pairwise relatively prime positive integers and $0 < n < a_0 + \cdots + a_d$.
Then
$$ \sum_{j=0}^{d} \sigma_n(a_0, \ldots, \hat{a}_j, \ldots, a_d; a_j) = -q(a_0, \ldots, a_d, -n), $$
where $q(a_0, \ldots, a_d, n)$ is given by (3.2).

For $d = 2$, $a_0 = p$, $a_1 = q$, $a_2 = 1$, this is the statement of Gessel's Theorem 2.7, which, in turn, implies Dedekind's reciprocity law, Theorem 2.2.

To prove Zagier's Theorem 2.6 in the language of Fourier-Dedekind sums, we make use of another result of Ehrhart [E] on lattice polytopes, that is, polytopes whose vertices have integer coordinates. Recall that the reduced Euler characteristic of a polytope $\mathcal{P}$ can be defined as
$$ \chi(\mathcal{P}) := \sum_{\sigma} (-1)^{\dim \sigma}, $$
where the sum is over all sub-simplices of $\mathcal{P}$.

Theorem 4.5 (Ehrhart). Let $\mathcal{P}$ be a lattice polytope. Then $L(\mathcal{P}, n)$ is a polynomial in $n$ whose constant term is $\chi(\mathcal{P})$.

We note that the polytope $\mathcal{P}$ corresponding to $p_{\{a_0, \ldots, a_d\}}(n)$ is convex and hence has Euler characteristic 1. If we now dilate $\mathcal{P}$ only by multiples of $a_0 \cdots a_d$, say $n = a_0 \cdots a_d\, m$, we obtain the dilates of a lattice polytope. Theorem 3.1 simplifies for these $n$ to
$$ p_{\{a_0, \ldots, a_d\}}(a_0 \cdots a_d\, m) = q(a_0, \ldots, a_d, a_0 \cdots a_d\, m) + \sum_{j=0}^{d} \sigma_0(a_0, \ldots, \hat{a}_j, \ldots, a_d; a_j), $$
by the periodicity of the Fourier-Dedekind sums. However, $\chi(\mathcal{P}) = 1$, and Theorem 4.5 yields a result equivalent to Zagier's reciprocity law for his higher-dimensional Dedekind sums, Theorem 2.6:

Theorem 4.6. For pairwise relatively prime positive integers $a_0, \ldots, a_d$,
$$ \sum_{j=0}^{d} \sigma_0(a_0, \ldots, \hat{a}_j, \ldots, a_d; a_j) = 1 - q(a_0, \ldots, a_d, 0), $$
where $q(a_0, \ldots, a_d, n)$ is given by (3.2).

The computational complexity of Zagier's higher-dimensional Dedekind sums

In this section we give a proof of the polynomial-time complexity of Zagier's higher-dimensional Dedekind sums, in fixed dimension $d$. In [BP], there is a nice theorem due to Barvinok which guarantees the polynomial-time computability of the generating function attached to a rational polyhedron. We will use his theorem for a cone.
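Before stating Barvinok's theorem, here is a tiny instance of the generating function of a cone (our toy example, not from the paper): for the cone $K \subset \mathbb{R}^2$ generated by $(1,0)$ and $(1,2)$, the fundamental parallelepiped of the generators contains the lattice points $(0,0)$ and $(1,1)$, so $f(K, \mathbf{x}) = (1 + x_1 x_2) / ((1 - x_1)(1 - x_1 x_2^2))$. Equivalently, every lattice point of $K$ is $p + a(1,0) + b(1,2)$ for exactly one $p \in \{(0,0),(1,1)\}$ and integers $a, b \geq 0$, which the sketch below checks:

```python
def cone_points(N):
    # lattice points of K = {(x, y) : 0 <= y <= 2x}, truncated at x <= N
    return {(x, y) for x in range(N + 1) for y in range(2 * x + 1)}

def decomposition_points(N):
    # points reached as p + a*(1,0) + b*(1,2) with p in {(0,0), (1,1)}, a, b >= 0,
    # together with the number of times each point is reached
    reached = {}
    for p in ((0, 0), (1, 1)):
        for a in range(N + 1):
            for b in range(N + 1):
                q = (p[0] + a + b, p[1] + 2 * b)
                if q[0] <= N:
                    reached[q] = reached.get(q, 0) + 1
    return reached

reached = decomposition_points(8)
assert set(reached) == cone_points(8) and set(reached.values()) == {1}
print(len(reached), "cone points, each represented exactly once")
```

The multiplicity-one property is exactly what makes the rational-function expression for $f(K, \mathbf{x})$ valid term by term.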
First, we mention that a common way to enumerate lattice points in a cone $K$ (and in polytopes) is to use the generating function
$$ f(K, \mathbf{x}) := \sum_{m \in K \cap \mathbb{Z}^d} \mathbf{x}^m, $$
where we use the standard multivariate notation $\mathbf{x}^m := x_1^{m_1} \cdots x_d^{m_d}$. It is an elementary fact that for rational cones these generating functions are always rational functions of the variable $\mathbf{x}$. Barvinok's theorem reads as follows:

Theorem 5.1 (Barvinok). Let us fix the dimension $d$. There exists a polynomial-time algorithm which, for a given rational polyhedron
$$ K = \{ x \in \mathbb{R}^d : \langle c_i, x \rangle \leq \beta_i,\ i = 1, \ldots, m \} \subset \mathbb{R}^d, $$
where $c_i \in \mathbb{Z}^d$ and $\beta_i \in \mathbb{Q}$, computes the generating function $f(K, \mathbf{x}) = \sum_{m \in K \cap \mathbb{Z}^d} \mathbf{x}^m$ in the form of a virtual decomposition
$$ f(K, \mathbf{x}) = \sum_{i \in I} \epsilon_i \frac{\mathbf{x}^{a_i}}{(1 - \mathbf{x}^{b_{i1}}) \cdots (1 - \mathbf{x}^{b_{id}})}, $$
where $\epsilon_i \in \{-1, 1\}$, $a_i \in \mathbb{Z}^d$, and $b_{i1}, \ldots, b_{id}$ is a basis of $\mathbb{Z}^d$ for each $i$. The computational complexity of the algorithm for finding this virtual decomposition is $L^{O(d)}$, where $L$ is the input size of $K$. In particular, the number $|I|$ of terms in the sum is $L^{O(d)}$.

Thus Barvinok's algorithm finds the coefficients of the rational function $f(K, \mathbf{x})$ in polynomial time. In [DR], on the other hand, the generating function $f(K, \mathbf{x})$ is given in terms of an average, over a finite abelian group, of a product of $d$ cotangent functions whose arguments involve the coordinates of the vertices which generate the cone $K$ (these are the extreme points of $K$ whose convex hull is $K$). This is the main theorem in [DR], and we apply it below to a special lattice cone which will give us the Zagier-Dedekind sums we want to study. The following theorem is part of a bigger project on the computability of generalized Dedekind sums in all dimensions. A slightly different proof is sketched in [BP].

Theorem 5.2. For fixed dimension $d$, the higher-dimensional Dedekind sums
$$ s(a_0; a_1, \ldots, a_d) = \frac{(-1)^{d/2}}{a_0} \sum_{k=1}^{a_0 - 1} \cot \frac{\pi k a_1}{a_0} \cdots \cot \frac{\pi k a_d}{a_0} $$
are polynomial-time computable.

Proof.
Let $K \subset \mathbb{R}^{d+1}$ be the cone generated by the positive real span of the vectors
$$ v_1 = (1, 0, \ldots, 0, a_1),\quad v_2 = (0, 1, 0, \ldots, 0, a_2),\quad \ldots,\quad v_d = (0, \ldots, 0, 1, a_d),\quad v_{d+1} = (0, \ldots, 0, a_0). $$
Then the right-hand side of the main theorem of [DR] is in this case
$$ \frac{1}{2^{d+1} a_0} \sum_{k=1}^{a_0 - 1} \prod_{j=0}^{d} \left( 1 + \coth \frac{\pi}{a_0} (s + i a_j k) \right). $$
When we compute the coefficient of $s^{-1}$ in this meromorphic function of $s$, we arrive at the higher-dimensional Dedekind sum
$$ \sum_{k=1}^{a_0 - 1} \cot \frac{\pi k a_1}{a_0} \cdots \cot \frac{\pi k a_d}{a_0} $$
plus other products of lower-dimensional Zagier-Dedekind sums. By induction on the dimension, all of the lower-dimensional Zagier-Dedekind sums are polynomial-time computable, and since the left-hand side of the main theorem is polynomial-time computable by Barvinok's theorem, the above Zagier-Dedekind sum in dimension $d$ is now also polynomial-time computable.

References

[BP] A. I. Barvinok, J. E. Pommersheim, An algorithmic theory of lattice points in polyhedra, in: New Perspectives in Algebraic Combinatorics (Berkeley, CA, 1996-97), Math. Sci. Res. Inst. Publ. 38, Cambridge Univ. Press, Cambridge (1999), 91-147.

[BDR] M. Beck, R. Diaz, S. Robins, The Frobenius problem, rational polytopes, and Fourier-Dedekind sums, J. Number Th. 96 (2002), 1-21.
[BGK] M. Beck, I. M. Gessel, T. Komatsu, The polynomial part of a restricted partition function related to the Frobenius problem, Electronic J. Combin. 8, no. 1 (2001), N 7.

[BR] M. Beck, S. Robins, Explicit and efficient formulas for the lattice point count inside rational polygons, Discr. Comp. Geom. 27 (2002), 443-459.

[B] B. Berndt, Reciprocity theorems for Dedekind sums and generalizations, Adv. in Math. 23, no. 3 (1977), 285-316.

[BV] M. Brion, M. Vergne, An equivariant Riemann-Roch theorem for simplicial toric varieties, J. reine angew. Math. 482 (1997), 67-92.

[C] L. Carlitz, A note on generalized Dedekind sums, Duke Math. J. 21 (1954), 399-404.

[De] R. Dedekind, Erläuterungen zu den Fragmenten XXVIII, in: Collected Works of Bernhard Riemann, Dover Publ., New York (1953), 466-478.

[DR] R. Diaz, S. Robins, The Ehrhart polynomial of a lattice polytope, Ann. Math. 145 (1997), 503-518.

[D] U. Dieter, Das Verhalten der Kleinschen Funktionen log σ_{g,h}(w₁, w₂) gegenüber Modultransformationen und verallgemeinerte Dedekindsche Summen, J. reine angew. Math. 201 (1959), 37-70.

[E] E. Ehrhart, Sur un problème de géométrie diophantienne linéaire II, J. reine angew. Math. 227 (1967), 25-49.

[HZ] F. Hirzebruch, D. Zagier, The Atiyah-Singer Theorem and Elementary Number Theory, Publish or Perish, Boston (1974).

[G] I. Gessel, Generating functions and generalized Dedekind sums, Electronic J. Comb. 4, no. 2 (1997), R 11.

[K] D. Knuth, The Art of Computer Programming, vol. 2, Addison-Wesley, Reading, Mass. (1981).

[Ma] I. G. Macdonald, Polynomials associated with finite cell complexes, J. London Math. Soc. 4 (1971), 181-192.

[Me] C. Meyer, Über einige Anwendungen Dedekindscher Summen, J. reine angew. Math. 198 (1957), 143-203.

[Mo] L. J. Mordell, Lattice points in a tetrahedron and generalized Dedekind sums, J. Indian Math. Soc. 15 (1951), 41-46.

[P] J. Pommersheim, Toric varieties, lattice points, and Dedekind sums, Math. Ann. 295 (1993), 1-24.

[R] H. Rademacher, Some remarks on certain generalized Dedekind sums, Acta Arith. 9 (1964), 97-105.

[RG] H. Rademacher, E. Grosswald, Dedekind Sums, Carus Mathematical Monographs, The Mathematical Association of America (1972).

[Z] D. Zagier, Higher dimensional Dedekind sums, Math. Ann. 202 (1973), 149-172.
OPERATORS OF HARMONIC ANALYSIS IN VARIABLE EXPONENT LEBESGUE SPACES. TWO-WEIGHT ESTIMATES

Vakhtang Kokilashvili, Alexander Meskhi, Muhammad Sarwar

arXiv:1007.1351

Abstract. In the paper two-weighted norm estimates with general weights for Hardy-type transforms, maximal functions, potentials and Calderón-Zygmund singular integrals in variable exponent Lebesgue spaces defined on quasimetric measure spaces (X, d, µ) are established. In particular, we derive integral-type easily verifiable sufficient conditions governing two-weight inequalities for these operators. If the exponents of the Lebesgue spaces are constants, then most of the derived conditions are simultaneously necessary and sufficient for the appropriate inequalities. Examples of weights governing the boundedness of maximal, potential and singular operators in weighted variable exponent Lebesgue spaces are given.
8 Jul 2010

Key words and phrases: Variable exponent Lebesgue spaces, Hardy transforms, fractional and singular integrals, quasimetric measure spaces, spaces of homogeneous type, two-weight inequality.

AMS Subject Classification: 42B20, 42B25, 46E30.

Introduction

We study the two-weight problem for Hardy-type, maximal, potential and singular operators in Lebesgue spaces with non-standard growth defined on quasimetric measure spaces. In particular, our aim is to derive easily verifiable sufficient conditions for the boundedness of these operators in weighted L^{p(·)}(X) spaces which enable us to construct effectively examples of appropriate weights. The conditions are simultaneously necessary and sufficient for the corresponding inequalities when the weights are of special type and the exponent p of the space is constant. We assume that the exponent p satisfies the local log-Hölder continuity condition and, if the diameter of X is infinite, we suppose that p is constant outside some ball.
In the framework of variable exponent analysis such a condition first appeared in the paper [8], where the author established the boundedness of the Hardy-Littlewood maximal operator in L^{p(·)}(R^n). As far as we know, no analog of the log-Hölder decay condition (at infinity) for p : X → [1, ∞) is known, even in the unweighted case, although such a condition is well-known and natural for Euclidean spaces (see [5], [41], [3]). The local log-Hölder continuity condition for the exponent p, together with the log-Hölder decay condition, guarantees the boundedness of operators of harmonic analysis in L^{p(·)}(R^n) spaces (see, e.g., [7]).

A considerable interest of researchers is attracted by the study of mapping properties of integral operators defined on (quasi-)metric measure spaces. Such spaces with doubling measure, in all their generality, arise naturally when studying boundary value problems for partial differential equations with variable coefficients, for instance, when the quasimetric is induced by a differential operator or tailored to fit the kernels of integral operators. The problem of the boundedness of integral operators arises naturally also in Lebesgue spaces with non-standard growth.

Historically, the boundedness of the Hardy-Littlewood maximal, potential and singular operators in L^{p(·)} spaces defined on (quasi)metric measure spaces was derived in [21], [22], [27], [29], [33]-[36], [1] (see also the references cited therein). Weighted inequalities for classical operators in L^{p(·)}_w spaces, where w is a power-type weight, were established in the papers [31]-[33], [34]-[36], [30], [19], [46], [42], [13] (see also the survey papers [45], [24]), while the same problems with general weights for Hardy, maximal, potential and singular operators were studied in [16]-[18], [28], [33], [34], [38], [10], [2], [40], [11].
Moreover, in the paper [11] a complete solution of the one-weight problem for maximal functions defined on Euclidean spaces is given in terms of Muckenhoupt-type conditions. Finally, we notice that in the paper [18] modular-type sufficient conditions governing the two-weight inequality for maximal and singular operators were established. It should be emphasized that in the classical Lebesgue spaces the two-weight problem for fractional integrals is already solved (see [26], [25]), but it is often useful to construct concrete examples of weights from transparent and easily verifiable conditions. This problem for singular integrals still remains open; however, some sufficient conditions governing two-weight estimates for the Calderón-Zygmund operators were given in the papers [14], [6] (see also the monographs [15], [49] and the references cited therein).

To derive two-weight estimates for maximal, singular and potential operators we use the appropriate inequalities for Hardy-type transforms on X (which are also derived in this paper).

The paper is organized as follows: In Section 1 we give some definitions and auxiliary results regarding quasimetric measure spaces and variable exponent Lebesgue spaces. Section 2 is devoted to sufficient conditions governing two-weight inequalities for Hardy-type transforms defined on quasimetric measure spaces, while in Section 3 we study the two-weight problem for potentials defined on quasimetric measure spaces. In Section 4 we discuss weighted estimates for maximal and singular integrals.

Finally, we point out that constants (often different constants in the same series of inequalities) will generally be denoted by c or C. The symbol f(x) ≈ g(x) means that there are positive constants c₁ and c₂ independent of x such that the inequality c₁ f(x) ≤ g(x) ≤ c₂ f(x) holds. Throughout the paper, the symbol p′(x) denotes the function p(x)/(p(x) − 1).
Preliminaries

Let X := (X, d, µ) be a topological space with a complete measure µ such that the space of compactly supported continuous functions is dense in L¹(X, µ), and suppose that there exists a non-negative real-valued function (quasimetric) d on X × X satisfying the conditions:

(i) d(x, y) = 0 if and only if x = y;
(ii) there exists a constant a₁ > 0 such that d(x, y) ≤ a₁(d(x, z) + d(z, y)) for all x, y, z ∈ X;
(iii) there exists a constant a₀ > 0 such that d(x, y) ≤ a₀ d(y, x) for all x, y ∈ X.

We assume that the balls B(x, r) := {y ∈ X : d(x, y) < r} are measurable and 0 ≤ µ(B(x, r)) < ∞ for all x ∈ X and r > 0; moreover, for every neighborhood V of x ∈ X there exists r > 0 such that B(x, r) ⊂ V. Throughout the paper we also suppose that µ{x} = 0 and that
B(x, R) \ B(x, r) ≠ ∅    (1)
for all x ∈ X and all positive r and R with 0 < r < R < L, where L := diam(X) = sup{d(x, y) : x, y ∈ X}. We call the triple (X, d, µ) a quasimetric measure space.

If µ satisfies the doubling condition µ(B(x, 2r)) ≤ c µ(B(x, r)), where the positive constant c does not depend on x ∈ X and r > 0, then (X, d, µ) is called a space of homogeneous type (SHT). For the definition and some properties of an SHT see, e.g., [4], [48], [20]. A quasimetric measure space where the doubling condition is not assumed, and may fail, is called a non-homogeneous space. Notice that the condition L < ∞ implies µ(X) < ∞, because every ball in X has finite measure.

We say that the measure µ is upper Ahlfors Q-regular if there is a positive constant c₁ such that µB(x, r) ≤ c₁ r^Q for all x ∈ X and r > 0. Further, µ is lower Ahlfors q-regular if there is a positive constant c₂ such that µB(x, r) ≥ c₂ r^q for all x ∈ X and r > 0. It is easy to check that if L < ∞, then µ is lower Ahlfors regular (see also, e.g., [22]). For the boundedness of potential operators in weighted Lebesgue spaces with constant exponents on non-homogeneous spaces we refer, for example, to the monograph [17] (Ch.
6) and the references cited therein.

Let p be a non-negative µ-measurable function on X. Suppose that E is a µ-measurable set in X and a is a constant satisfying 1 < a < ∞. Throughout the paper we use the notation:

p₋(E) := inf_E p;  p₊(E) := sup_E p;  p₋ := p₋(X);  p₊ := p₊(X);
B̄(x, r) := {y ∈ X : d(x, y) ≤ r};  kB(x, r) := B(x, kr);
B_{xy} := B(x, d(x, y));  B̄_{xy} := B̄(x, d(x, y));
g_B := (1/µ(B)) ∫_B |g(x)| dµ(x).

Assume now that 1 ≤ p₋ ≤ p₊ < ∞. The variable exponent Lebesgue space L^{p(·)}(X) (sometimes denoted by L^{p(x)}(X)) is the class of all µ-measurable functions f on X for which
S_p(f) := ∫_X |f(x)|^{p(x)} dµ(x) < ∞.
The norm in L^{p(·)}(X) is defined as follows:
‖f‖_{L^{p(·)}(X)} = inf{λ > 0 : S_p(f/λ) ≤ 1}.
It is known (see, e.g., [39], [43], [31], [22]) that L^{p(·)}(X) is a Banach space. For other properties of L^{p(·)} spaces we refer to [47], [39], [43], [45], [24], etc.

Now we introduce several definitions:

Definition 1.1. Let (X, d, µ) be a quasimetric measure space and let N ≥ 1 be a constant. Suppose that p satisfies the condition 0 < p₋ ≤ p₊ < ∞. We say that p ∈ P(N, x), where x ∈ X, if there are positive constants b and c (which might depend on x) such that
µ(B(x, Nr))^{p₋(B(x,r)) − p₊(B(x,r))} ≤ c    (2)
holds for all r, 0 < r ≤ b. Further, p ∈ P(N) if there are positive constants b and c such that (2) holds for all x ∈ X and all r satisfying the condition 0 < r ≤ b.

Definition 1.2. Let (X, d, µ) be an SHT. Suppose that 0 < p₋ ≤ p₊ < ∞. We say that p ∈ LH(X, x) (p satisfies the log-Hölder-type condition at a point x ∈ X) if there are positive constants b and c (which might depend on x) such that
|p(x) − p(y)| ≤ c / (− ln µ(B_{xy}))    (3)
holds for all y satisfying the condition d(x, y) ≤ b. Further, p ∈ LH(X) (p satisfies the log-Hölder-type condition on X) if there are positive constants b and c such that (3) holds for all x, y with d(x, y) ≤ b.

Definition 1.3.
Let (X, d, µ) be a quasimetric measure space and let 0 < p₋ ≤ p₊ < ∞. We say that p ∈ $\widetilde{LH}(X, x)$ (we write $\widetilde{LH}$ to distinguish this distance-based condition from the measure-based condition of Definition 1.2) if there are positive constants b and c (which might depend on x) such that
|p(x) − p(y)| ≤ c / (− ln d(x, y))    (4)
for all y with d(x, y) ≤ b. Further, p ∈ $\widetilde{LH}(X)$ if (4) holds for all x, y with d(x, y) ≤ b.

It is easy to see that if the measure µ is upper Ahlfors Q-regular and p ∈ LH(X) (resp. p ∈ LH(X, x)), then p ∈ $\widetilde{LH}(X)$ (resp. p ∈ $\widetilde{LH}(X, x)$). Further, if µ is lower Ahlfors q-regular and p ∈ $\widetilde{LH}(X)$ (resp. p ∈ $\widetilde{LH}(X, x)$), then p ∈ LH(X) (resp. p ∈ LH(X, x)).

Remark 1.1. It can be checked easily that if (X, d, µ) is an SHT, then µB_{x₀x} ≈ µB_{xx₀}.

Remark 1.2. Let (X, d, µ) be an SHT with L < ∞. It is known (see, e.g., [22], [27]) that if p ∈ $\widetilde{LH}(X)$, then p ∈ P(1). Further, if µ is upper Ahlfors Q-regular, then the condition p ∈ P(1) implies that p ∈ $\widetilde{LH}(X)$.

Proposition 1.4. If 0 < p₋(X) ≤ p₊(X) < ∞ and p ∈ LH(X) (resp. p ∈ $\widetilde{LH}(X)$), then the functions cp(·) and 1/p(·) belong to the class LH(X) (resp. $\widetilde{LH}(X)$). Further, if p ∈ LH(X, x) (resp. p ∈ $\widetilde{LH}(X, x)$), then cp(·) and 1/p(·) belong to LH(X, x) (resp. $\widetilde{LH}(X, x)$), where c is a positive constant. If 1 < p₋(X) ≤ p₊(X) < ∞ and p ∈ LH(X) (resp. p ∈ $\widetilde{LH}(X)$), then p′ ∈ LH(X) (resp. p′ ∈ $\widetilde{LH}(X)$).

The proof of this statement follows immediately from the definitions of the classes LH(X, x), LH(X), $\widetilde{LH}(X, x)$, $\widetilde{LH}(X)$; we therefore omit the details.

Proposition 1.5. Let (X, d, µ) be an SHT and let p ∈ P(1). Then
(µB_{xy})^{p(x)} ≤ c (µB_{yx})^{p(y)}
for all x, y ∈ X with µ(B(x, d(x, y))) ≤ b, where b is a small constant and the constant c does not depend on x, y ∈ X.

Proof. Due to the doubling condition for µ, Remark 1.1, the condition p ∈ P(1), and the fact that x ∈ B(y, a₁(a₀+1)d(y, x)), we have the following estimates:
(µB_{xy})^{p(x)} ≤ (µB(y, a₁(a₀+1)d(x, y)))^{p(x)} ≤ c (µB(y, a₁(a₀+1)d(x, y)))^{p(y)} ≤ c (µB_{yx})^{p(y)},
which proves the statement.
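To make the Luxemburg-type norm from the definition above concrete, here is a small numerical sketch (ours, not from the paper): it approximates S_p(f/λ) on X = [0, 1] with Lebesgue measure by a Riemann sum and locates ‖f‖ by bisection on λ. For a constant function f ≡ c one has S_p(c/λ) ≤ 1 exactly when λ ≥ c, so the norm equals c regardless of the (variable) exponent, which the sketch reproduces.

```python
def modular(f, p, xs, dx):
    # S_p(f) = integral of |f(x)|^p(x), approximated by a midpoint Riemann sum
    return sum(abs(f(x)) ** p(x) for x in xs) * dx

def luxemburg_norm(f, p, m=2000, tol=1e-9):
    # ||f|| = inf { lam > 0 : S_p(f/lam) <= 1 }, found by bisection on lam
    dx = 1.0 / m
    xs = [(i + 0.5) * dx for i in range(m)]
    lo, hi = 1e-8, 1e8
    for _ in range(200):
        mid = (lo + hi) / 2
        if modular(lambda x: f(x) / mid, p, xs, dx) <= 1:
            hi = mid  # mid is admissible, the infimum is at most mid
        else:
            lo = mid
        if hi - lo < tol * hi:
            break
    return hi

# constant f = 2 with the variable exponent p(x) = 2 + x: the norm is 2
print(round(luxemburg_norm(lambda x: 2.0, lambda x: 2.0 + x), 6))
```

Bisection applies because the modular λ ↦ S_p(f/λ) is non-increasing in λ.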
The proof of the next statement is trivial and follows directly from the definition of the classes P(N, x) and P(N); the details are omitted.

Proposition 1.6. Let (X, d, µ) be a quasimetric measure space, let x₀ ∈ X, and let N ≥ 1 be a constant. Then the following statements hold:

(i) If p ∈ P(N, x₀) (resp. p ∈ P(N)), then there are positive constants r₀, c₁ and c₂ such that for all 0 < r ≤ r₀ and all y ∈ B(x₀, r) (resp. for all x₀, y with d(x₀, y) < r ≤ r₀) we have
µ(B(x₀, Nr))^{p(x₀)} ≤ c₁ µ(B(x₀, Nr))^{p(y)} ≤ c₂ µ(B(x₀, Nr))^{p(x₀)}.

(ii) If p ∈ P(N, x₀), then there are positive constants r₀, c₁ and c₂ (in general depending on x₀) such that for all r ≤ r₀ and all x, y ∈ B(x₀, r) we have
µ(B(x₀, Nr))^{p(x)} ≤ c₁ µ(B(x₀, Nr))^{p(y)} ≤ c₂ µ(B(x₀, Nr))^{p(x)}.

(iii) If p ∈ P(N), then there are positive constants r₀, c₁ and c₂ such that for all balls B with radius r (r ≤ r₀) and all x, y ∈ B we have
µ(NB)^{p(x)} ≤ c₁ µ(NB)^{p(y)} ≤ c₂ µ(NB)^{p(x)}.

It is known (see, e.g., [39], [43]) that if f is a measurable function on X and E is a measurable subset of X, then the following inequalities hold:
‖f‖^{p₊(E)}_{L^{p(·)}(E)} ≤ S_p(fχ_E) ≤ ‖f‖^{p₋(E)}_{L^{p(·)}(E)} if ‖f‖_{L^{p(·)}(E)} ≤ 1;
‖f‖^{p₋(E)}_{L^{p(·)}(E)} ≤ S_p(fχ_E) ≤ ‖f‖^{p₊(E)}_{L^{p(·)}(E)} if ‖f‖_{L^{p(·)}(E)} > 1.

Hölder's inequality in variable exponent Lebesgue spaces has the following form:
∫_E f g dµ ≤ (1/p₋(E) + 1/(p′)₋(E)) ‖f‖_{L^{p(·)}(E)} ‖g‖_{L^{p′(·)}(E)}.

Lemma 1.7. Let (X, d, µ) be an SHT.

(i) Let β be a measurable function on X such that β₊ < −1 and let r be a small positive number. Then there exists a positive constant c independent of r and x such that
∫_{X\B(x₀,r)} (µB_{x₀y})^{β(x)} dµ(y) ≤ c ((β(x)+1)/β(x)) µ(B(x₀, r))^{β(x)+1};

(ii) Suppose that p and α are measurable functions on X satisfying the conditions 1 < p₋ ≤ p₊ < ∞ and α₋ > 1/p₋.
Then there exists a positive constant c such that for all x ∈ X the inequality
∫_{B(x₀, 2d(x₀,x))} (µB(x, d(x, y)))^{(α(x)−1)p′(x)} dµ(y) ≤ c (µB(x₀, d(x₀, x)))^{(α(x)−1)p′(x)+1}
holds.

Proof. Part (i) was proved in [27] (see also [15], p. 372, for constant β). The proof of part (ii) was given in [15] (Lemma 6.5.2, p. 348), but repeating those arguments we can see that it is also true for variable α and p. The details are omitted.

Let M be the maximal operator on X given by
Mf(x) := sup_{r>0} (1/µ(B(x, r))) ∫_{B(x,r)} |f(y)| dµ(y).

Definition 1.8. Let (X, d, µ) be a quasimetric measure space. We say that p ∈ M(X) if the operator M is bounded in L^{p(·)}(X).

L. Diening [8] proved that if Ω is a bounded domain in R^n, 1 < p₋ ≤ p₊ < ∞ and p satisfies the local log-Hölder continuity condition on Ω (i.e., |p(x) − p(y)| ≤ c/(− ln |x − y|) for all x, y ∈ Ω with |x − y| ≤ 1/2), then the Hardy-Littlewood maximal operator defined on Ω is bounded in L^{p(·)}(Ω).

Now we prove the following lemma:

Lemma 1.9. Let (X, d, µ) be an SHT. Suppose that 0 < p₋ ≤ p₊ < ∞. Then p satisfies the condition p ∈ P(1) (resp. p ∈ P(1, x)) if and only if p ∈ LH(X) (resp. p ∈ LH(X, x)).

Proof. Necessity. Let p ∈ P(1) and let x, y ∈ X with d(x, y) < c₀ for some positive constant c₀. Observe that x, y ∈ B, where B := B(x, 2d(x, y)). By the doubling condition for µ we have
(µB_{xy})^{−|p(x)−p(y)|} ≤ c (µB)^{−|p(x)−p(y)|} ≤ c (µB)^{p₋(B)−p₊(B)} ≤ C,
where C is a positive constant greater than 1. Taking the logarithm in the last inequality, we see that p ∈ LH(X). If p ∈ P(1, x), then by the same arguments we find that p ∈ LH(X, x).

Sufficiency. Let B := B(x₀, r). First observe that if x, y ∈ B, then µB_{xy} ≤ c µB(x₀, r). Consequently, this inequality and the condition p ∈ LH(X) yield
|p₋(B) − p₊(B)| ≤ C / (− ln(c₀ µB(x₀, r))).
Further, there exists r₀ such that 0 < r₀ < 1/2 and
c₁ ≤ (ln µ(B) − ln c₀)/ln µ(B) ≤ c₂, 0 < r ≤ r₀,
where c₁ and c₂ are positive constants.
Hence
$$\mu(B)^{p_-(B)-p_+(B)} \le \mu(B)^{\frac{C}{\ln(c_0\mu(B))}} = \exp\Big( \frac{C\ln\mu(B)}{\ln(c_0\mu(B))} \Big) \le C.$$
Let now p ∈ LH(X, x) and let $B_x := B(x, r)$, where r is a small number. We have that
$$p_+(B_x) - p(x) \le \frac{c}{-\ln\big(c_0\,\mu B(x,r)\big)} \quad\text{and}\quad p(x) - p_-(B_x) \le \frac{c}{-\ln\big(c_0\,\mu B(x,r)\big)}$$
for some positive constant c₀. Consequently,
$$\big(\mu(B_x)\big)^{p_-(B_x)-p_+(B_x)} = \mu(B_x)^{p(x)-p_+(B_x)}\, \mu(B_x)^{p_-(B_x)-p(x)} \le c\, \big(\mu(B_x)\big)^{\frac{-2c}{-\ln(c_0\,\mu B_x)}} \le C.$$

Definition 1.10. A measure µ on X is said to satisfy the reverse doubling condition (µ ∈ RDC(X)) if there exist constants A > 1 and B > 1 such that the inequality
$$\mu B(a, Ar) \ge B\, \mu B(a, r)$$
holds.

Remark 1.3. It is known that if all annuli in X are nonempty, then µ ∈ DC(X) implies that µ ∈ RDC(X) (see, e.g., [48]).

In the sequel we will use the notation:
$$I_{1,k} := \begin{cases} B(x_0, A^{k-1}L/a_1) & \text{if } L < \infty,\\ B(x_0, A^{k-1}/a_1) & \text{if } L = \infty, \end{cases}$$
$$I_{2,k} := \begin{cases} B(x_0, A^{k+2}a_1 L)\setminus B(x_0, A^{k-1}L/a_1) & \text{if } L < \infty,\\ B(x_0, A^{k+2}a_1)\setminus B(x_0, A^{k-1}/a_1) & \text{if } L = \infty, \end{cases}$$
$$I_{3,k} := \begin{cases} X\setminus B(x_0, A^{k+2}La_1) & \text{if } L < \infty,\\ X\setminus B(x_0, A^{k+2}a_1) & \text{if } L = \infty, \end{cases}$$
$$E_k := \begin{cases} B(x_0, A^{k+1}L)\setminus B(x_0, A^k L) & \text{if } L < \infty,\\ B(x_0, A^{k+1})\setminus B(x_0, A^k) & \text{if } L = \infty, \end{cases}$$
where the constant A is defined in the reverse doubling condition and the constant a₁ is taken from the triangle inequality for the quasimetric d.

Lemma 1.11. Let (X, d, µ) be an SHT. Suppose that there is a point x₀ ∈ X such that p ∈ LH(X, x₀). Then there exist positive constants r₀ and C (which may depend on x₀) such that for all r, 0 < r ≤ r₀, the inequality
$$\big(\mu B_A\big)^{p_-(B_A)-p_+(B_A)} \le C$$
holds, where $B_A := B(x_0, Ar)\setminus B(x_0, r)$, the constant C is independent of r, and the constant A is defined in Definition 1.10.

Proof. Let B := B(x₀, r). First observe that by the doubling and reverse doubling conditions we have that
$$\mu B_A = \mu B(x_0, Ar) - \mu B(x_0, r) \ge (B-1)\,\mu B(x_0, r) \ge c\,\mu(AB).$$
Suppose that 0 < r < c₀, where c₀ is a sufficiently small constant. Then by using Lemma 1.9 we find that
$$\big(\mu B_A\big)^{p_-(B_A)-p_+(B_A)} \le c\, \big(\mu(AB)\big)^{p_-(B_A)-p_+(B_A)} \le c\, \big(\mu(AB)\big)^{p_-(AB)-p_+(AB)} \le c.$$
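Lemma 1.9 identifies the class P(1) with log-Hölder continuity. The following Python sketch is an added illustration (not part of the original text): it checks numerically, for a standard model exponent on X = R with the Lebesgue measure (so µB(0, r) = 2r), that the quantity µ(B)^{p₋(B)−p₊(B)} controlled in Lemmas 1.9 and 1.11 stays bounded as the radius shrinks. The exponent p(x) = 2 + 1/ln(e + 1/|x|) is an assumption chosen for the experiment.

```python
import math

# Model example on X = R with the Euclidean distance and the Lebesgue
# measure, so mu(B(x0, r)) = 2r.  The exponent below is log-Holder
# continuous at x0 = 0 (a standard model exponent, not from the paper):
#     p(x) = 2 + 1/ln(e + 1/|x|),   p(0) = 2.
def p(x):
    if x == 0.0:
        return 2.0
    return 2.0 + 1.0 / math.log(math.e + 1.0 / abs(x))

# Check the P(1)-type bound of Lemma 1.9 at x0 = 0:
#     mu(B)^(p_-(B) - p_+(B)) = (2r)^-(p_+(B) - p_-(B))
# stays bounded as r -> 0, because the oscillation of p on B(0, r)
# is O(1/ln(1/r)) and hence cancels the logarithm of the radius.
vals = []
for k in range(2, 40):
    r = 2.0 ** (-k)
    xs = [i * r / 1000.0 for i in range(1, 1001)]   # sample B(0, r) \ {0}
    ps = [p(x) for x in xs] + [p(0.0)]
    osc = max(ps) - min(ps)                          # p_+(B) - p_-(B)
    vals.append((2.0 * r) ** (-osc))

print(max(vals))  # uniformly bounded in r (at most e)
```

The printed supremum stays below e, matching the proof of Lemma 1.9: the exponentiated quantity equals exp(osc · ln(1/(2r))), and the log-Hölder oscillation bound keeps the product of the two factors bounded.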
Lemma 1.12. Let (X, d, µ) be an SHT and let 1 < p − (x) ≤ p(x) ≤ q(x) ≤ q + (X) < ∞. Suppose that there is a point x 0 ∈ X such that p, q ∈ LH(X, x 0 ). Assume that p(x) ≡ p c ≡ const, q(x) ≡ q c ≡ const outside some ball B(x 0 , a) if L = ∞. Then there exist a positive constant C such that k f χ I 2,k L p(·) (X) gχ I 2,k L q ′ (·) (X) ≤ C f L p(·) (X) g L q ′ (·) (X) for all f ∈ L p(·) (X) and g ∈ L q ′ (·) (X). Proof. Suppose that L = ∞. To prove the lemma first observe that µ(E k ) ≈ µB(x 0 , A k ) and µ(I 2,k ) ≈ µB(x 0 , A k−1 ). This holds because µ satisfies the reverse doubling condition and, consequently, µE k = µ B(x 0 , A k+1 ) \ B(x 0 , A k ) = µB(x 0 , A k+1 ) − µB(x 0 , A k ) = µB(x 0 , AA k ) − µB(x 0 , A k ) ≥ BµB(x 0 , A k ) − µB(x 0 , A k ) = (B − 1)µB(x 0 , A k ) Moreover, using the doubling condition we have µE k ≤ µB(x 0 , AA k ) ≤ cµB(x 0 , A k ), where c > 1. Hence, µE k ≈ µB(x 0 , A k ). Further, since we can assume that a 1 ≥ 1, we find that µI 2,k = µ B(x 0 , A k+2 a 1 ) \ B(x 0 , A k−1 /a 1 ) = µB(x 0 , A k+2 a 1 ) − µB(x 0 , A k−1 /a 1 ) = µB(x 0 , AA k+1 a 1 ) − µB(x 0 , A k−1 /a 1 ) ≥ BµB(x 0 , A k+1 a 1 ) − µB(x 0 , A k−1 /a 1 ) ≥ B 2 µB(x 0 , A k /a 1 ) − µB(x 0 , A k−1 /a 1 ) ≥ B 3 µB(x 0 , A k−1 /a 1 ) − µB(x 0 , A k−1 /a 1 ) = (B 3 − 1)µB(x 0 , A k−1 /a 1 ). Moreover, using the doubling condition we have µI 2,k ≤ µB(x 0 , A k+2 r) ≤ cµB(x 0 , A k+1 r) ≤ c 2 µB(x 0 , A k /a 1 ) ≤ c 3 µB(x 0 , A k−1 /a 1 ). This gives the estimates (B 3 − 1)µB(x 0 , A k−1 /a 1 ) ≤ µ(I 2,k ) ≤ c 3 µB(x 0 , A k−1 /a 1 ). For simplicity assume that a = 1. Suppose that m 0 is an integer such that A m 0 −1 a1 > 1. Let us split the sum as follows: i f χ I2,i L p(·) (X) · gχ I2,i L q ′ (·) (X) = i≤m0 · · · + i>m0 · · · =: J 1 + J 2 . 
Since p(x) ≡ p c = const, q(x) = q c = const outside the ball B(x 0 , 1), by using Hölder's inequality and the fact that p c ≤ q c , we have J 2 = i>m0 f χ I2,i L pc (X) · gχ I2,i L (qc) ′ (X) ≤ c f L p(·) (X) · g L q ′ (·) (X) . Let us estimate J 1 . Suppose that f L p(·) (X) ≤ 1 and g L q ′ (·) (X) ≤ 1. Also, by Proposition 1.4 we have that 1/q ′ ∈ LH(X, x 0 ). Therefore by Lemma 1.11 and the fact that 1/q ′ ∈ LH(X, x 0 ) we obtain that µ I 2,k 1 q + (I 2,k ) ≈ χ I 2,k L q(·) (X) ≈ µ I 2,k 1 q − (I 2,k ) and µ I 2,k 1 q ′ + (I 2,k ) ≈ χ I 2,k L q ′ (·) (X) ≈ µ I 2,k 1 q ′ − (I k ) , where k ≤ m 0 . Further, observe that these estimates and Hölder's inequality yield the following chain of inequalities: J 1 ≤ c k≤m0 B(x0,A m 0 +1 ) f χ I 2,k L p(·) (X) · gχ I 2,k L q ′ (·) (X) χ I 2,k L q(·) (X) · χ I 2,k L q ′ (·) (X) χ E k (x)dµ(x) = c B(x0,A m 0 +1 ) k≤m0 f χ I 2,k L p(·) (X) · gχ I 2,k L q ′ (·) (X) χ I 2,k L q(·) (X) · χ I 2,k L q ′ (·) (X) χ E k (x)dµ(x) ≤ c k≤m0 f χ I 2,k L p(·) (X) χ I 2,k L q(·) (X) χ E k (x) L q(·) (B(x0,A m 0 +1 )) × k≤m0 gχ I 2,k L q ′ (·) (X) χ I 2,k L q ′ (·) (X) χ E k (x) L q ′ (·) (B(x0,A m 0 +1 )) =: cS 1 (f ) · S 2 (g). Now we claim that S 1 (f ) ≤ cI(f ), where I(f ) := k≤m0 f χ I 2,k L p(·) (X) χ I 2k L p(·) (X) χ E k (·) L p(·) (B(x0,A m 0 +1 )) and the positive constant c does not depend on f . Indeed, suppose that I(f ) ≤ 1. Then taking into account Lemma 1.11 we have that k≤m0 1 µ(I 2,k ) E k f χ I 2,k p(x) L p(·) (X) dµ(x) ≤ c B(x0,A m 0 +1 ) k≤m0 f χ I 2,k L p(·) (X) χ I 2,k L p(·) (X) χ E k (x) p(x) dµ(x) ≤ c. Consequently, since p(x) ≤ q(x), E k ⊆ I 2,k and f L p(·) (X) ≤ 1, we find that k≤m0 1 µ(I 2,k ) E k f χ I 2,k q(x) L p(·) (X) dµ(x) ≤ k≤m0 1 µ(I 2,k ) E k f χ I 2,k p(x) L p(·) (X) dµ(x) ≤ c. This implies that S 1 (f ) ≤ c. Thus the desired inequality is proved. Further, let us introduce the following function: P(y) := k≤2 p + (χ I 2,k )χ E k (y) . It is clear that p(y) ≤ P(y) because E k ⊂ I 2,k . 
Hence
$$I(f) \le c\, \bigg\| \sum_{k\le m_0} \frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\, \chi_{E_k}(\cdot) \bigg\|_{L^{\mathcal P(\cdot)}(B(x_0, A^{m_0+1}))}$$
for some positive constant c. Then by using this inequality, the definition of the function $\mathcal P$, the condition p ∈ LH(X) and the obvious estimate $\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_+(I_{2,k})} \ge c\,\mu(I_{2,k})$, we find that
$$\int_{B(x_0,A^{m_0+1})} \bigg( \sum_{k\le m_0} \frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\, \chi_{E_k}(x) \bigg)^{\mathcal P(x)} d\mu(x) = \int_{B(x_0,A^{m_0+1})} \sum_{k\le m_0} \frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_+(I_{2,k})}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_+(I_{2,k})}}\, \chi_{E_k}(x)\, d\mu(x)$$
$$\le c \int_{B(x_0,A^{m_0+1})} \sum_{k\le m_0} \frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_+(I_{2,k})}}{\mu(I_{2,k})}\, \chi_{E_k}(x)\, d\mu(x) \le c \sum_{k\le m_0} \|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_+(I_{2,k})} \le c \sum_{k\le m_0} \int_{I_{2,k}} |f(x)|^{p(x)}\, d\mu(x) \le c \int_X |f(x)|^{p(x)}\, d\mu(x) \le c.$$
Consequently, $I(f) \le c\,\|f\|_{L^{p(\cdot)}(X)}$. Hence $S_1(f) \le c\,\|f\|_{L^{p(\cdot)}(X)}$. Analogously, taking into account the fact that $1/q' \in LH(X, x_0)$ and arguing as above, we find that $S_2(g) \le c\,\|g\|_{L^{q'(\cdot)}(X)}$. Summarizing these estimates we conclude that
$$\sum_{i\le m_0} \|f\chi_{I_{2,i}}\|_{L^{p(\cdot)}(X)}\, \|g\chi_{I_{2,i}}\|_{L^{q'(\cdot)}(X)} \le c\, \|f\|_{L^{p(\cdot)}(X)}\, \|g\|_{L^{q'(\cdot)}(X)}.$$

The next statement for metric measure spaces was proved in [22] (see also [27], [29] for quasimetric measure spaces).

Theorem A. Let (X, d, µ) be an SHT and let µ(X) < ∞. Suppose that $1 < p_- \le p_+ < \infty$ and p ∈ P(1). Then M is bounded in $L^{p(\cdot)}(X)$.

For the following statement we refer to [23]:

Theorem B. Let (X, d, µ) be an SHT and let L = ∞. Suppose that $1 < p_- \le p_+ < \infty$ and p ∈ P(1). Suppose also that p = p_c = const outside some ball B := B(x₀, R). Then M is bounded in $L^{p(\cdot)}(X)$.

Hardy-type transforms

In this section we derive two-weight estimates for the operators
$$T_{v,w}f(x) = v(x)\int_{B_{x_0 x}} f(y)\,w(y)\,d\mu(y) \quad\text{and}\quad T'_{v,w}f(x) = v(x)\int_{X\setminus B_{x_0 x}} f(y)\,w(y)\,d\mu(y).$$
Let a be a positive constant and let p be a measurable function defined on X. Let us introduce the notation:
$$p_0(x) := p_-(B_{x_0 x}); \qquad \tilde p_0(x) := \begin{cases} p_0(x) & \text{if } d(x_0,x) \le a,\\ p_c = \text{const} & \text{if } d(x_0,x) > a; \end{cases}$$
$$p_1(x) := p_-\big(B(x_0,a)\setminus B_{x_0 x}\big); \qquad \tilde p_1(x) := \begin{cases} p_1(x) & \text{if } d(x_0,x) \le a,\\ p_c = \text{const} & \text{if } d(x_0,x) > a. \end{cases}$$

Remark 2.1.
If we deal with a quasi-metric measure space with L < ∞, then we will assume that a = L. Obviously, $\tilde p_0 \equiv p_0$ and $\tilde p_1 \equiv p_1$ in this case.

Theorem 2.1. Let (X, d, µ) be a quasi-metric measure space. Assume that p and q are measurable functions on X satisfying the condition $1 < p_- \le \tilde p_0(x) \le q(x) \le q_+ < \infty$. In the case when L = ∞, suppose that p ≡ p_c ≡ const and q ≡ q_c ≡ const outside some ball B(x₀, a). If the condition
$$A_1 := \sup_{0\le t\le L} \int_{t<d(x_0,x)\le L} v(x)^{q(x)} \bigg( \int_{d(x_0,y)\le t} w^{(\tilde p_0)'(x)}(y)\, d\mu(y) \bigg)^{\frac{q(x)}{(\tilde p_0)'(x)}} d\mu(x) < \infty$$
holds, then $T_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.

Proof. Here we use the arguments of the proofs of Theorem 1.1.4 in [15] (see p. 7) and of Theorem 2.1 in [17]. First we notice that $p_- \le \tilde p_0(x) \le p(x)$ for all x ∈ X. Let f ≥ 0 and let $S_p(f) \le 1$. First assume that L < ∞. We denote
$$I(s) := \int_{B(x_0,s)} f(y)\,w(y)\,d\mu(y), \qquad s \in [0, L].$$
Suppose that I(L) < ∞. Then $I(L) \in (2^m, 2^{m+1}]$ for some m ∈ Z. Let us denote $s_j := \sup\{s : I(s) \le 2^j\}$, j ≤ m, and $s_{m+1} := L$. Then $\{s_j\}_{j=-\infty}^{m+1}$ is a non-decreasing sequence. It is easy to check that $I(s_j) \le 2^j$, that $I(s) > 2^j$ for $s > s_j$, and that
$$2^j \le \int_{s_j \le d(x_0,y) \le s_{j+1}} f(y)\,w(y)\,d\mu(y).$$
If $\beta := \lim_{j\to-\infty} s_j$, then $d(x_0,x) < L$ if and only if $d(x_0,x) \in [0,\beta] \cup \bigcup_{j=-\infty}^{m} (s_j, s_{j+1}]$. If I(L) = ∞, then we take m = ∞. Since $0 \le I(\beta) \le I(s_j) \le 2^j$ for every j, we have that I(β) = 0. It is obvious that $X = \bigcup_{j\le m}\{x : s_j < d(x_0,x) \le s_{j+1}\}$. Further, we have that
$$S_q(T_{v,w}f) = \int_X \big(T_{v,w}f(x)\big)^{q(x)}\, d\mu(x) = \int_X v(x)^{q(x)} \bigg( \int_{B(x_0,\, d(x_0,x))} f(y)\,w(y)\,d\mu(y) \bigg)^{q(x)} d\mu(x)$$
$$\le \sum_{j=-\infty}^{m} \int_{s_j < d(x_0,x)\le s_{j+1}} v(x)^{q(x)} \bigg( \int_{d(x_0,y)< s_{j+1}} f(y)\,w(y)\,d\mu(y) \bigg)^{q(x)} d\mu(x).$$
Notice that
$$I(s_{j+1}) \le 2^{j+1} \le 4 \int_{s_{j-1}\le d(x_0,y)\le s_j} w(y)\,f(y)\,d\mu(y).$$
Consequently, by this estimate and Hölder's inequality with respect to the exponent $p_0(x)$ we find that
$$S_q(T_{v,w}f) \le c \sum_{j=-\infty}^{m} \int_{s_j<d(x_0,x)\le s_{j+1}} v(x)^{q(x)} \bigg( \int_{s_{j-1}\le d(x_0,y)\le s_j} f(y)\,w(y)\,d\mu(y) \bigg)^{q(x)} d\mu(x) \le c \sum_{j=-\infty}^{m} \int_{s_j<d(x_0,x)\le s_{j+1}} v(x)^{q(x)}\, J_k(x)\, d\mu(x),$$
where
$$J_k(x) := \bigg( \int_{s_{j-1}\le d(x_0,y)\le s_j} f(y)^{p_0(x)}\, d\mu(y) \bigg)^{\frac{q(x)}{p_0(x)}} \bigg( \int_{s_{j-1}\le d(x_0,y)\le s_j} w(y)^{(p_0)'(x)}\, d\mu(y) \bigg)^{\frac{q(x)}{(p_0)'(x)}}.$$
Observe now that $q(x) \ge p_0(x)$.
Hence, this fact and the condition S p (f ) ≤ 1 imply that J k (x) ≤ c {y:sj−1≤d(x0,y)≤sj }∩{y:f (y)≤1} f (y) p0(x) dµ(y) + {y:sj−1≤d(x0,y)≤sj }∩{y:f (y)>1} f (y) p(y) dµ(y) q(x) p 0 (x) × sj−1≤d(x0,y)≤sj w(y) (p0) ′ (x) dµ(y) q(x) (p 0 ) ′ (x) ≤ c µ {y : s j−1 ≤ d(x 0 , y) ≤ s j } + {y:sj−1≤d(x0,y)≤sj}∩{y:f (y)>1} f (y) p(y) dµ(y) × sj−1≤d(x0,y)≤sj w(y) (p0) ′ (x) dµ(y) q(x) (p 0 ) ′ (x) . It follows now that S q (T v,w f ) ≤ c m j=−∞ µ {y : s j−1 ≤ d(x 0 , y) ≤ s j } sj <d(x0,x)≤sj+1 v(x) q(x) × sj−1≤d(x0,y)≤sj w(y) (p ′ 0 )(x) dµ(y) q(x) (p 0 ) ′ (x) dµ(x) + m j=−∞ y:{sj−1≤d(x0,y)≤sj}∩{y:f (y)>1} f (y) p(y) dµ(y) × sj <d(x0,x)≤sj+1 v(x) q(x) sj−1≤d(x0,y)≤sj w(y) (p0) ′ (x) dµ(y) q(x) (p 0 ) ′ (x) dµ(x) := c N 1 + N 2 . It is obvious that N 1 ≤ A 1 m+1 j=−∞ µ {y : s j−1 ≤ d(x 0 , y) ≤ s j } ≤ CA 1 and N 2 ≤ A 1 m+1 j=−∞ {y:sj−1≤d(x0,y)≤sj } f (y) p(y) dµ(y) = C X f (y) p(y) dµ(y) = A 1 S p (f ) ≤ A 1 . Finally S q (T v,w f ) ≤ c cA 1 + A 1 < ∞. Thus T v,w is bounded if A 1 < ∞. Let us now suppose that L = ∞. We have T v,w f (x) = χ B(x0,a) (x)v(x) Bx 0 x f (y)w(y)dµ(y) +χ X\B(x0,a) (x)v(x) Bx 0 x f (y)w(y)dµ(y) =: T (1) v,w f (x) + T (2) v,w f (x) By using already proved result for L < ∞ and the fact that diam B(x 0 , a) < ∞ we find that T (1) v,w f L q(·) B(x0,a) ≤ c f L p(·) B(x0,a) ≤ c because A (a) 1 := sup 0≤t≤a t<d(x0,x)≤a v(x) q(x) d(x0,x)≤t w (p0) ′ (x) (y)dµ(y) q(x) (p 0 ) ′ (x) dµ(x) ≤ A 1 < ∞. Further, observe that T (2) v,w f (x)=χ X\B(x0,a) (x)v(x) Bx 0 x f (y)w(y)dµ(y) = χ X\B(x0,a) (x)v(x) d(x0,y)≤a f (y)w(y)dµ(y) +χ X\B(x0,a) (x)v(x) a≤d(x0,y)≤d(x0,x) f (y)w(y)dµ(y) =: T (2,1) v,w f (x) + T (2,2) v,w f (x). It is easy to see that (see also T v,w f (x) = v(x) a≤d(x0,y)<d(x0,x) f (y)w(y)dµ(y) from L pc X\B(x 0 , a) to L qc X\B(x 0 , a) . Thus T (2,2) v,w is bounded. It remains to prove that T (2,1) v,w is bounded. 
We have
$$\big\|T^{(2,1)}_{v,w}f\big\|_{L^{q(\cdot)}(X)} = \bigg( \int_{B(x_0,a)^c} v(x)^{q_c}\, d\mu(x) \bigg)^{\frac{1}{q_c}} \int_{B(x_0,a)} f(y)\,w(y)\,d\mu(y) \le \bigg( \int_{B(x_0,a)^c} v(x)^{q_c}\, d\mu(x) \bigg)^{\frac{1}{q_c}} \|f\|_{L^{p(\cdot)}(B(x_0,a))}\, \|w\|_{L^{p'(\cdot)}(B(x_0,a))}.$$
Observe now that the condition A₁ < ∞ guarantees that the integral $\int_{B(x_0,a)^c} v(x)^{q_c}\,d\mu(x)$ is finite. Moreover, $N := \|w\|_{L^{p'(\cdot)}(B(x_0,a))} < \infty$. Indeed, by the norm–modular inequalities it suffices to verify that the modular is finite; write
$$\int_{B(x_0,a)} w(y)^{p'(y)}\, d\mu(y) = \int_{B(x_0,a)\cap\{w\le 1\}} w(y)^{p'(y)}\, d\mu(y) + \int_{B(x_0,a)\cap\{w> 1\}} w(y)^{p'(y)}\, d\mu(y) =: I_1 + I_2.$$
For I₁ we have that $I_1 \le \mu\big(B(x_0,a)\big) < \infty$. Since L = ∞ and condition (1) holds, there exists a point y₀ ∈ X such that a < d(x₀, y₀) < 2a. Consequently, $B(x_0,a) \subset B(x_0, d(x_0,y_0))$ and $p(y) \ge p_-\big(B(x_0, d(x_0,y_0))\big) = p_0(y_0)$ for y ∈ B(x₀, a). Hence the condition A₁ < ∞ yields
$$I_2 \le \int_{B(x_0,a)} w(y)^{(p_0)'(y_0)}\, d\mu(y) < \infty.$$
Finally we have that $\|T^{(2,1)}_{v,w}f\|_{L^{q(\cdot)}(X)} \le C$. Hence, $T_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.

The proof of the following statement is similar to that of Theorem 2.1; therefore we omit it (see also the proofs of Theorem 1.1.3 in [15] and Theorems 2.6 and 2.7 in [17] for similar arguments).

Theorem 2.2. Let (X, d, µ) be a quasi-metric measure space. Assume that p and q are measurable functions on X satisfying the condition $1 < p_- \le \tilde p_1(x) \le q(x) \le q_+ < \infty$. If L = ∞, then we assume that p ≡ p_c ≡ const and q ≡ q_c ≡ const outside some ball B(x₀, a). If
$$B_1 := \sup_{0\le t\le L} \int_{d(x_0,x)\le t} v(x)^{q(x)} \bigg( \int_{t\le d(x_0,y)\le L} w^{(\tilde p_1)'(x)}(y)\, d\mu(y) \bigg)^{\frac{q(x)}{(\tilde p_1)'(x)}} d\mu(x) < \infty,$$
then $T'_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.

Remark 2.2. If p ≡ const, then the condition A₁ < ∞ in Theorem 2.1 (resp. B₁ < ∞ in Theorem 2.2) is also necessary for the boundedness of $T_{v,w}$ (resp. $T'_{v,w}$) from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$. See [15], pp. 4–5, for the details.

Potentials

In this section we discuss two-weight estimates for the potential operators $T_{\alpha(\cdot)}$ and $I_{\alpha(\cdot)}$ on quasi-metric measure spaces, where $0 < \alpha_- \le \alpha_+ < 1$. If α ≡ const, then we denote $T_{\alpha(\cdot)}$ and $I_{\alpha(\cdot)}$ by $T_\alpha$ and $I_\alpha$, respectively.
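The explicit definitions of these operators precede this section; a reading that is consistent with the kernels appearing in the estimates below (the splittings S₁, S₂, S₃ and I₁, I₂, I₃) is

```latex
T_{\alpha(\cdot)}f(x)=\int_X f(y)\,\big(\mu B_{xy}\big)^{\alpha(x)-1}\,d\mu(y),
\qquad
I_{\alpha(\cdot)}f(x)=\int_X \frac{f(y)}{d(x,y)^{\,1-\alpha(x)}}\,d\mu(y).
```

Here $T_{\alpha(\cdot)}$ is the potential generated by the measure of balls, while $I_{\alpha(\cdot)}$ is its analogue with the distance in place of the ball measure, used when µ is upper Ahlfors 1-regular (Theorems D, 3.3 and 3.7). This display is a reconstruction from the proofs, not a quotation of the original definitions.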
The boundedness of Riesz potential operators in L p(·) (Ω) spaces, where Ω is a domain in R n was established in [9], [44], [7], [3]. For the following statement we refer to [34]: Theorem C. Let (X, d, µ) be an SHT. Suppose that 1 < p − ≤ p + < ∞ and p ∈ P(1). Assume that if L = ∞, then p ≡ const outside some ball. Let α be a constant satisfying the condition 0 < α < 1/p + . We set q(x) = p(x) 1−αp(x) . Then T α is bounded in L p(·) (X). Theorem D [29]. Let (X, d, µ) be a non-homogeneous space with L < ∞ and let N be a constant defined by N = a 1 (1 + 2a 0 ), where the constants a 0 and a 1 are taken from the definition of the quasi-metric d. Suppose that 1 < p − < p + < ∞, p, α ∈ P(N ) and that µ is upper Ahlfors 1-regular. We define q(x) = p(x) 1−α(x)p(x) , where 0 < α − ≤ α + < 1/p − . Then I α(·) is bounded from L p(·) (X) to L q(·) (X). For the statements and their proofs of this section we keep the notation of the previous sections and, in addition, introduce the new notation: v (1) α (x) := v(x)(µB x0x ) α−1 , w (1) α (x) := w −1 (x); v (2) α (x) := v(x); w (2) α (x) := w −1 (x)(µB x0x ) α−1 ; F x := {y ∈ X : d(x0,y)L A 2 a1 ≤ d(x 0 , y) ≤ A 2 La 1 d(x 0 , x)}, if L < ∞ {y ∈ X : d(x0,y) A 2 a1 ≤ d(x 0 , y) ≤ A 2 a 1 d(x 0 , x)}, if L = ∞, , where A and a 1 are constants defined in Definition 1.10 and the triangle inequality for d respectively. We begin this section with the following general-type statement: Theorem 3.1. Let (X, d, µ) be an SHT without atoms. Suppose that 1 < p − ≤ p + < ∞ and α is a constant satisfying the condition 0 < α < 1/p + . Let p ∈ P(1). We set q(x) = p(x) 1−αp(x) . Further, if L = ∞, then we assume that p ≡ p c ≡ const outside some ball B(x 0 , a). 
Then the inequality v(T α f ) L q(·) (X) ≤ c wf L p(·) (X)(5) holds if the following three conditions are satisfied: (a) T v (1) α ,w (1) α is bounded from L p(·) (X) to L q(·) (X) ; (b) T v (2) α ,w (2) α is bounded from L p(·) (X) to L q(·) (X); (c) there is a positive constant b such that one of the following inequality holds: 1) v + (F x ) ≤ bw(x) for µ− a.e. x ∈ X ; 2) v(x) ≤ bw − (F x ) for µ− a.e. x ∈ X. Proof. For simplicity suppose that L < ∞. The proof for the case L = ∞ is similar to that of the previous case. Recall that the sets I i,k , i = 1, 2, 3 and E k are defined in Section 1. Let f ≥ 0 and let g L q ′ (·) (X) ≤ 1. We have X (T α f )(x)g(x)v(x)dµ(x) = 0 k=−∞ E k (T α f )(x)g(x)v(x)dµ(x) ≤ 0 k=−∞ E k (T α f 1,k )(x)g(x)v(x)dµ(x) + 0 k=−∞ E k (T α f 2,k )(x)g(x)v(x)dµ(x) + 0 k=−∞ E k (T α f 3,k )(x)g(x)v(x)dµ(x) := S 1 + S 2 + S 3 , where f 1,k = f · χ I 1,k , f 2,k = f · χ I 2,k , f 3,k = f · χ I 3,k . Observe that if x ∈ E k and y ∈ I 1,k , then d(x 0 , y) ≤ d(x 0 , x)/Aa 1 . Consequently, the triangle inequality for d yields d(x 0 , x) ≤ A ′ a 1 a 0 d(x, y), where A ′ = A/(A − 1). Hence, by using Remark 1.1 we find that µ(B x0x ) ≤ cµ(B xy ). Applying now condition (a) we have that S 1 ≤ c µB x0x α−1 v(x) Bx 0 x f (y)dµ(y) L q(x) (X) g L q ′ (·) (X) ≤ c f L p(·) (X) . Further, observe that if x ∈ E k and y ∈ I 3,k , then µ B x0y ≤ cµ B xy . By condition (b) we find that S 3 ≤ c f L p(·) (X) . Now we estimate S 2 . Suppose that v + (F x ) ≤ bw(x). Theorem C and Lemma 1.12 yield S 2 ≤ k T α f 2,k (·)χ E k (·)v(·) L q(·) (X) gχ E k (·) L q ′ (·) (X) ≤ k v + (E k ) (T α f 2,k )(·) L q(·) (X) g(·)χ E k (·) L q ′ (·) (X) ≤ c k v + (E k ) f 2,k L p(·) (X) g(·)χ E k (·) L q ′ (·) (X) ≤ c k f 2,k (·)w(·)χ I 2,k (·) L p(·) (X) g(·)χ E k (·) L q ′ (·) (X) ≤ c f (·)w(·) L p(·) (X) g(·) L q ′ (·) (X) ≤ c f (·)w(·) L p(·) (X) . The estimate of S 2 for the case when v(x) ≤ bw − (F x ) is similar to that of the previous one. Details are omitted. 
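Theorem C and Theorem 3.1 hinge on the Sobolev-type exponent q(x) = p(x)/(1 − αp(x)). In the constant-exponent model case X = R with the Euclidean distance and the Lebesgue measure (so µB_{xy} = 2|x − y|), this relation is exactly the one forced by dilation invariance. The following Python sketch is an illustration added here (not part of the original text): it checks the underlying scaling identity numerically and evaluates the exponent relation.

```python
import math

# Model case X = R, d(x,y) = |x-y|, mu = Lebesgue, so mu(B_xy) = 2|x-y| and
#     (T_alpha f)(x) = int f(y) (2|x-y|)^(alpha-1) dy.
# The operator obeys the exact scaling T_alpha[f(l .)](x) = l^-alpha (T_alpha f)(l x),
# which makes ||T_alpha f||_q / ||f||_p dilation invariant exactly when
# 1/q = 1/p - alpha, i.e. q = p/(1 - alpha p).

def riesz(f, x, alpha, a=-6.0, b=6.0, h=1e-3):
    """Midpoint quadrature of int_a^b f(y) (2|x-y|)^(alpha-1) dy, with the
    singular cell around y = x integrated exactly (its integral is h^alpha/alpha)."""
    total = f(x) * h ** alpha / alpha      # exact contribution of the cell at y = x
    n = int(round((b - a) / h))
    for i in range(n + 1):
        y = a + i * h
        if abs(y - x) < h / 2:
            continue                        # singular cell already counted
        total += f(y) * (2.0 * abs(x - y)) ** (alpha - 1.0) * h
    return total

alpha = 0.5
f = lambda y: math.exp(-y * y)
f2 = lambda y: f(2.0 * y)                   # dilation f_2(y) = f(2y)

# scaling identity: T_alpha f_2 (x) = 2^-alpha (T_alpha f)(2x)
lhs = riesz(f2, 0.5, alpha)
rhs = 2.0 ** (-alpha) * riesz(f, 1.0, alpha)
print(lhs, rhs)                             # the two values agree up to quadrature error

# exponent relation 1/q = 1/p - alpha (note alpha < 1/p is needed)
p_exp = 1.8
q_exp = p_exp / (1.0 - alpha * p_exp)
print(q_exp)                                # = 18.0 for p = 1.8, alpha = 0.5
```

The agreement of the two printed potentials is a numerical confirmation of the scaling identity; combined with the behaviour of the Lebesgue norms under dilation it yields 1/q = 1/p − α, i.e. the exponent used throughout this section.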
Theorems 3.1, 2.1 and 2.2 imply the following statement:

Theorem 3.2. Let (X, d, µ) be an SHT. Suppose that $1 < p_- \le p_+ < \infty$ and that α is a constant satisfying the condition $0 < \alpha < 1/p_+$. Let p ∈ P(1). We set $q(x) = \frac{p(x)}{1-\alpha p(x)}$. If L = ∞, then we suppose that p ≡ p_c ≡ const outside some ball B(x₀, a). Then inequality (5) holds if the following three conditions are satisfied:

(i) $P_1 := \sup\limits_{0<t\le L} \int\limits_{t<d(x_0,x)\le L} \Big( \dfrac{v(x)}{(\mu(B_{x_0x}))^{1-\alpha}} \Big)^{q(x)} \bigg( \int\limits_{d(x_0,y)\le t} w^{-(\tilde p_0)'(x)}(y)\, d\mu(y) \bigg)^{\frac{q(x)}{(\tilde p_0)'(x)}} d\mu(x) < \infty$;

(ii) $P_2 := \sup\limits_{0<t\le L} \int\limits_{d(x_0,x)\le t} v(x)^{q(x)} \bigg( \int\limits_{t<d(x_0,y)\le L} \big( w(y)\,(\mu B_{x_0y})^{1-\alpha} \big)^{-(\tilde p_1)'(x)}\, d\mu(y) \bigg)^{\frac{q(x)}{(\tilde p_1)'(x)}} d\mu(x) < \infty$;

(iii) condition (c) of Theorem 3.1 holds.

Remark 3.1. If p ≡ p_c ≡ const on X, then the conditions $P_i < \infty$, i = 1, 2, are necessary for (5). Necessity of the condition $P_1 < \infty$ follows by taking the test function $f = w^{-(p_c)'}\chi_{B(x_0,t)}$ in (5) and observing that $\mu B_{xy} \le c\,\mu B_{x_0x}$ for those x and y which satisfy the conditions $d(x_0,x) \ge t$ and $d(x_0,y) \le t$ (see also [15], Theorem 6.6.1, p. 418, for similar arguments), while necessity of the condition $P_2 < \infty$ can be derived by choosing the test function $f(x) = w^{-(p_c)'}(x)\,\chi_{X\setminus B(x_0,t)}(x)\,\big(\mu B_{x_0x}\big)^{(\alpha-1)((p_c)'-1)}$ and taking into account the estimate $\mu B_{xy} \le \mu B_{x_0y}$ for $d(x_0,x) \le t$ and $d(x_0,y) \ge t$.

The next statement follows in the same manner as the previous one; in this case Theorem D is used instead of Theorem C. The proof is omitted.

Theorem 3.3. Let (X, d, µ) be a non-homogeneous space with L < ∞. Let N be a constant defined by $N = a_1(1+2a_0)$. Suppose that $1 < p_- \le p_+ < \infty$, that p, α ∈ P(N), and that µ is upper Ahlfors 1-regular. We define $q(x) = \frac{p(x)}{1-\alpha(x)p(x)}$, where $0 < \alpha_- \le \alpha_+ < 1/p_+$.
Then the inequality v(·)(I α(·) f )(·) L q(·) (X) ≤ c w(·)f (·) L p(·) (X) (6) holds if (i) sup 0≤t≤L t<d(x0,x)≤L v(x) d(x 0 , x) 1−α(x) q(x) B(x0,t) w −(p0) ′ (x) (y)dµ(y) q(x) (p 0 ) ′ (x) dµ(x) < ∞; (ii) sup 0≤t≤L B(x0,t) v(x) q(x) t<d(x0,y)≤L w(y)d(x 0 , y) 1−α(y) −(p1) ′ (x) dµ(y) q(x) (p 1 ) ′ (x) dµ(x) < ∞, (iii) condition (c) of Theorem 3.1 is satisfied. Remark 3.2. It is easy to check that if p and α are constants, then conditions (i) and (ii) in Theorem 3.3 are also necessary for (6). This follows easily by choosing appropriate test functions in (6) (see also Remark 3.1) Theorem 3.4. Let (X, d, µ) be an SHT without atoms. Let 1 < p − ≤ p + < ∞ and let α be a constant with the condition 0 < α < 1/p + . We set q(x) = p(x) 1−αp(x) . Assume that p has a minimum at x 0 and that p ∈ LH(X). Suppose also that if L = ∞, then p is constant outside some ball B(x 0 , a). Let v and w be positive increasing functions on (0, 2L). Then the inequality v(d(x 0 , ·))(T α f )(·) L q(·) (X) ≤ c w(d(x 0 , ·))f (·) L p(·) (X) holds if I 1 := sup 0<t≤L I 1 (t):= sup 0<t≤L t<d(x0,x)≤L v(d(x 0 , x)) µ(B x0x ) 1−α q(x) × d(x0,y)≤t w −( p0) ′ (x) (d(x 0 , y))dµ(y) q(x) ( p 0 ) ′ (x) dµ(x) < ∞ for L = ∞; J 1 := sup 0<t≤L t<d(x0,x)≤L v(d(x 0 , x)) µ(B x0x ) 1−α q(x) d(x0,y)≤t w −p ′ (x0) (d(x 0 , y))dµ(y) q(x) p ′ (x 0 ) dµ(x) < ∞ for L < ∞. Proof. Let L = ∞. Observe that by Lemma 1.9 the condition p ∈ LH(X) implies p ∈ P(1). We will show that the condition I 1 < ∞ implies the inequality v(A 2 a1t) w(t) ≤ C for all t > 0, where A and a 1 are constants defined in Definition 1.10 and the triangle inequality for d respectively. Indeed, let us assume that t ≤ b 1 , where b 1 is a small positive constant. 
Then, taking into account the monotonicity of v and w, and the facts that p 0 (x) = p 0 (x) (for small d(x 0 , x)) and µ ∈ RDC(X), we have I 1 (t) ≥ A 2 a1t≤d(x0,x)<A 3 a1t v(A 2 a 1 t) w(t) q(x) µB(x 0 , t) (α−1/p0(x))q(x) dµ(x) ≥ v(A 2 a 1 t) w(t) q− A 2 a1t≤d(x0,x)<A 3 a1t µB(x 0 , t) (α−1/p0(x))q(x) dµ(x) ≥ c v(A 2 a 1 t) w(t) q− . Hence, c := lim t→0 v(A 2 a1t) w(t) < ∞. Further, if t > b 2 , where b 2 is a large number, then since p and q are constants, for d(x 0 , x) > t, we have that I 1 (t) ≥ A 2 a1t≤d(x0,x)<A 3 a1t v(d(x 0 , x)) qc µB(x 0 , t) (α−1)qc dµ(x) × B(x0,t) w −(pc) ′ (x)dµ(x) qc/(pc) ′ dµ(x) ≥ C v(A 2 a 1 t) w(t) qc A 2 a1t≤d(x0,x)<A 3 a1t µB(x 0 , t) (α−1/pc)qc dµ(x) ≥ c v(A 2 a 1 t) w(t) qc . In the last inequality we used the fact that µ satisfies the reverse doubling condition. Now we show that the condition I 1 < ∞ implies (v(d(x 0 , x))) q(x) d(x0,y)>t w −( p1) ′ (x) (d(x 0 , y)) × µ(B x0y ) (α−1)( p1) ′ (x) dµ(y) q(x) ( p 1 ) ′ (x) dµ(x) < ∞ Due to monotonicity of functions v and w, the condition p ∈ LH(X), Proposition 1.4, Lemma 1.7, Lemma 1.9 and the assumption that p has a minimum at x 0 , we find that for all t > 0, I 2 (t) ≤ d(x0,x)≤t v(t) w(t) q(x) µ B(x 0 , t) (α−1/p(x0))q(x) dµ(x) ≤ c d(x0,x)≤t v(t) w(t) q(x) µ B(x 0 , t) α−1/p(x0) q(x0) dµ(x) ≤ c d(x0,x)≤t v(A 2 a 1 t) w(t) q(x) dµ(x) µ B(x 0 , t) −1 ≤ C. Now Theorem 3.2 completes the proof. Theorem 3.5. Let (X, d, µ) be an SHT with L < ∞. Suppose that p, q and α are measurable functions on X satisfying the conditions: 1 < p − ≤ p(x) ≤ q(x) ≤ q + < ∞ and 1/p − < α − ≤ α + < 1. Assume that there is a point x 0 such that µ{x 0 } = 0 and p, q, α ∈ LH(X, x 0 ). 
Suppose also that w is a positive increasing function on (0, 2L).Then the inequality T α(·) f v L q(·) (X) ≤ c w(d(x 0 , ·))f (·) L p(·) (X) holds if the following two conditions are satisfied: I 1 := sup 0<t≤L t≤d(x0,x)≤L v(x) µB x0x 1−α(x) q(x) × d(x0,x)≤t w −(p0) ′ (x) (d(x 0 , y))dµ(y) q(x) (p 0 ) ′ (x) dµ(x) < ∞; I 2 := sup 0<t≤L d(x0,x)≤t v(x) q(x) t≤d(x0,x)≤L w(d(x 0 , y)) × µB x0y 1−α(x) −(p1) ′ (x) dµ(y) q(x) (p 1 ) ′ (x) dµ(x) < ∞. Proof. For simplicity assume that L = 1. First observe that by Lemma 1.9 we have p, q, α ∈ P (1). Suppose that f ≥ 0 and S p w(d(x 0 , ·))f (·) ≤ 1. We will show that S q v(T α(·) f ) ≤ C. We have S q vT α(·) f ≤ C q X v(x) d(x0,y)≤d(x0,x)/(2a1) f (y) µB xy α(x)−1 dµ(y) q(x) dµ(x) + X v(x) d(x0,x)/(2a1)≤d(x0,y)≤2a1d(x0,x) f (y) µB xy α(x)−1 dµ(y) q(x) dµ(x) + X v(x) d(x0,y)≥2a1d(x0,x) f (y) µB xy α(x)−1 dµ(y) q(x) dµ(x) := C q [I 1 + I 2 + I 3 ]. First observe that by virtue of the doubling condition for µ, Remark 1.1 and simple calculation we find that µ B x0x ≤ cµ B xy . Taking into account this estimate and Theorem 2.1 we have that I 1 ≤ c X v(x) µB x0x 1−α(x) d(x0,y)<d(x0,x) f (y)dµ(y) q(x) dµ(x) ≤ C. Further, it is easy to see that if d(x 0 , y) ≥ 2a 1 d(x 0 , x), then the triangle inequality for d and the doubling condition for µ yield that µB x0y ≤ cµB xy . Hence due to Proposition 1.5 we see that µB x0y α(x)−1 ≥ c µB xy α(y)−1 for such x and y. Therefore, Theorem 2.2 implies that I 3 ≤ C. It remains to estimate I 2 . Let us denote: E (1) (x) := B x0x \ B x 0 , d(x 0 , x)/(2a 1 ) ; E (2) (x) := B x 0 , 2a 1 d(x 0 , x) \ B x0x . Then we have that Using Hölder's inequality for the classical Lebesgue spaces we find that I 2 ≤ C X v(x) E (1) (x) f (y) µB xy α(x)−1 dµ(y) q(x) dµ(x) + X v(x) E (2) (x) f(I 21 ≤ X v q(x) (x) E (1) (x) w p0(x) (d(x 0 , y))(f (y)) p0(x) dµ(y) q(x)/p0(x) × E (1) (x) w −(p0) ′ (x) (d(x 0 , y)) µB xy (α(x)−1)(p0) ′ (x) dµ(y) q(x)/(p0) ′ (x) dµ(x). 
Denote the first inner integral by J (1) and the second one by J (2) . By using the fact that p 0 (x) ≤ p(y), where y ∈ E (1) (x), we see that J (1) ≤ µ(B x0x ) + E (1) (x) (f (y)) p(y) w(d(x 0 , y)) p(y) dµ(y), while by applying Lemma 1.7, for J (2) , we have that J (2) ≤ cw −(p0) ′ (x) d(x 0 , x) 2a 1 E (1) (x) µB xy (α(x)−1)(p0) ′ (x) dµ(y) ≤ cw −(p0) ′ (x) d(x 0 , x) 2a 1 µB x0x (α(x)−1)(p0) ′ (x)+1 . Summarizing these estimates for J (1) and J (2) we conclude that I 21 ≤ X v q(x) (x) µB x0x q(x)α(x) w −q(x) d(x 0 , x) 2a 1 dµ(x) + X v q(x) (x) × E (1) (x) w p(y) (d(x 0 , y))(f (y)) p(y) dµ(y) q(x)/p0(x) µB x0x q(x)(α(x)−1/p0(x)) ×w −q(x) d(x 0 , x) 2a 1 dµ(x) =: I(1)21 + I(2) 21 . By applying monotonicity of w, the reverse doubling property for µ with the constants A and B (see Remark 1.3), and the condition I 1 < ∞ we have that I (1) 21 ≤ c 0 k=−∞ B(x0,A k )\B(x0,A k−1 ) v(x) q(x) B x0, A k−1 2a 1 w −(p0) ′ (x) (d(x 0 , y))dµ(y) q(x) (p 0 ) ′ (x) × µB x0,x q(x) p 0 (x) +(α(x)−1)q(x) dµ(x) ≤ c 0 k=−∞ µB(x 0 , A k ) q−/p+ × B(x0,A k )\B(x0,A k−1 ) v(x) q(x) B x0,A k w −(p0) ′ (x) (d(x 0 , y))dµ(y) q(x) (p 0 ) ′ (x) × µB x0,x q(x)(α(x)−1) dµ(x) ≤ c 0 k=−∞ µB(x 0 , A k ) \ B(x 0 , A k−1 ) q−/p+ ≤ c 0 k=−∞ µB(x0,A k )\B(x0,A k−1 ) µB x0,x q−/p+−1 dµ(y) ≤ c X µB x0,x q−/p+−1 dµ(y) < ∞. Due to the facts that q(x) ≥ p 0 (x), S p w d(x 0 , ·)f (·) ≤ 1, I 1 < ∞ and w is increasing, for I (2) 21 , we find that I (2) 21 ≤ c 0 k=−∞ µB(x0,A k+1 a1)\B(x0,A k−2 ) w p(y) (d(x 0 , y))(f (y)) p(y) dµ(y) × µB(x0,A k )\B(x0,A k−1 ) v q(x) (x) B(x0,A k−1 ) w −(p0) ′ (x) (d(x 0 , y))dµ(y) q(x) (p 0 ) ′ (x) × µB x0,x (α(x)−1)q(x) dµ(x) ≤ cS p (f (·)w(d(x 0 , ·)) ≤ c. Analogously, it follows the estimate for I 22 . In this case we use the condition I 2 < ∞ and the fact that p 1 (x) ≤ p(y) when d(x 0 , y) ≤ d(x 0 , y) < 2a 1 d(x 0 , x). The details are omitted. The theorem is proved. 
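To see the two-weight conditions in action, one can test the power weights of Example 3.8 below in the simplest constant-exponent model X = [0, 1], x₀ = 0, µ = Lebesgue measure, so that µB_{x₀x} = d(x₀, x) = x. The following Python sketch is an added illustration (the antiderivatives used are elementary): it evaluates the resulting Hardy-type quantity in closed form and confirms that its supremum over t is finite.

```python
# Constant-exponent model check of Example 3.8 / Theorem 3.4 on X = [0, 1],
# x0 = 0, mu = Lebesgue, mu(B_{x0 x}) = x.  With v(t) = t^gamma, w(t) = t^beta,
# the quantity I_1 of Theorem 3.4 reduces (up to constants) to
#   A(t) = int_t^1 x^{(gamma+alpha-1)q} dx * ( int_0^t y^{-beta p'} dy )^{q/p'},
# and the restrictions of Example 3.8 on beta, gamma keep sup_t A(t) finite.
p, alpha = 2.0, 0.25
q = p / (1.0 - alpha * p)              # Sobolev exponent: q = 4
pp = p / (p - 1.0)                     # conjugate exponent: p' = 2
beta = 0.25                            # satisfies 0 <= beta < 1/p'
gamma = 0.3                            # satisfies gamma >= 1 - alpha - 1/q - (1/p' - beta)

def A(t):
    e1 = (gamma + alpha - 1.0) * q     # exponent of the outer integrand
    outer = (1.0 - t ** (e1 + 1.0)) / (e1 + 1.0)
    e2 = -beta * pp                    # exponent of the inner integrand
    inner = t ** (e2 + 1.0) / (e2 + 1.0)
    return outer * inner ** (q / pp)

# scan t over several decades down to ~1e-12
sup_A = max(A(10.0 ** (-k / 10.0)) for k in range(1, 120))
print(sup_A)   # stays bounded, so the Hardy-type condition holds
```

For these parameters A(t) = 5(t^{0.2} − t), so the supremum is attained at an interior point and is finite, in agreement with Example 3.8; violating the restriction on γ would make A(t) blow up as t → 0.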
Taking into account the proof of Theorem 3.5 we can easily derive the following statement, whose proof is omitted:

Theorem 3.6. Let (X, d, µ) be an SHT with L < ∞. Suppose that p, q and α are measurable functions on X satisfying the conditions $1 < p_- \le p(x) \le q(x) \le q_+ < \infty$ and $1/p_- < \alpha_- \le \alpha_+ < 1$. Assume that there is a point x₀ such that p, q, α ∈ LH(X, x₀) and p has a minimum at x₀. Let v and w be positive increasing functions on (0, 2L) satisfying the condition J₁ < ∞ (see Theorem 3.4). Then inequality (7) is fulfilled.

Theorem 3.7. Let (X, d, µ) be an SHT with L < ∞ and let µ be upper Ahlfors 1-regular. Suppose that $1 < p_- \le p_+ < \infty$ and that p ∈ LH(X). Let p have a minimum at x₀. Assume that α is a constant satisfying the condition $\alpha < 1/p_+$. We set $q(x) = \frac{p(x)}{1-\alpha p(x)}$. If v and w are positive increasing functions on (0, 2L) satisfying the condition
$$E := \sup_{0<t\le L} \int_{t<d(x_0,x)\le L} \bigg( \frac{v(d(x_0,x))}{d(x_0,x)^{1-\alpha}} \bigg)^{q(x)} \bigg( \int_{d(x_0,y)\le t} w^{-p'(x_0)}(d(x_0,y))\, d\mu(y) \bigg)^{\frac{q(x)}{p'(x_0)}} d\mu(x) < \infty,$$
then the inequality
$$\big\| v(d(x_0,\cdot))\,(I_\alpha f)(\cdot) \big\|_{L^{q(\cdot)}(X)} \le c\, \big\| w(d(x_0,\cdot))\, f(\cdot) \big\|_{L^{p(\cdot)}(X)}$$
holds.

Proof is similar to that of Theorem 3.4. We only discuss some details. First observe that due to Remark 1.2 we have that p ∈ P(N), where $N = a_1(1+2a_0)$. It is easy to check that the condition E < ∞ implies that $\frac{v(A^2 a_1 t)}{w(t)} \le C$ for all t, where the constant A is defined in Definition 1.10 and a₁ is from the triangle inequality for d. Further, Lemmas 1.7, 1.9, the fact that p has a minimum at x₀ and the inequality
$$\int_{t<d(x_0,y)\le L} \big( w(d(x_0,y))\, d(x_0,y)^{1-\alpha} \big)^{-(\tilde p_1)'(x)}\, d\mu(y) \le c\, t^{(\alpha-1)(\tilde p_1)'(x)+1},$$
where the constant c does not depend on t and x, yield that
$$\sup_{0\le t\le L} \int_{d(x_0,x)\le t} \big( v(d(x_0,x)) \big)^{q(x)} \bigg( \int_{d(x_0,y)>t} \big( w(d(x_0,y))\, d(x_0,y)^{1-\alpha} \big)^{-(\tilde p_1)'(x)}\, d\mu(y) \bigg)^{\frac{q(x)}{(\tilde p_1)'(x)}} d\mu(x) < \infty.$$

Example 3.8. Let $v(t) = t^\gamma$ and $w(t) = t^\beta$, where γ and β are constants satisfying the conditions $0 \le \beta < \frac{1}{(p_-)'}$ and $\gamma \ge \max\Big\{0,\ 1-\alpha-\frac{1}{q_+}-\frac{q_-}{q_+}\Big(-\beta+\frac{1}{(p_-)'}\Big)\Big\}$. Then (v, w) satisfies the conditions of Theorem 3.4.

Maximal and Singular Operators

Let Kf (x) = p.v.
$\int_X k(x, y) f(y)\, d\mu(y)$, where $k : X \times X \setminus \{(x,x) : x \in X\} \to \mathbb R$ is a measurable function satisfying the conditions:
$$|k(x,y)| \le \frac{c}{\mu B(x, d(x,y))}, \qquad x, y \in X,\ x \ne y;$$
$$|k(x_1,y) - k(x_2,y)| + |k(y,x_1) - k(y,x_2)| \le c\, \omega\Big( \frac{d(x_2,x_1)}{d(x_2,y)} \Big)\, \frac{1}{\mu B(x_2, d(x_2,y))}$$
for all x₁, x₂ and y with $d(x_2,y) \ge c\,d(x_1,x_2)$, where ω is a positive non-decreasing function on (0, ∞) which satisfies the ∆₂ condition ω(2t) ≤ c ω(t) (t > 0) and the Dini condition $\int_0^1 \omega(t)/t\, dt < \infty$.

We also assume that for some constant s, 1 < s < ∞, and all $f \in L^s(X)$, the limit Kf(x) exists almost everywhere on X and that K is bounded in $L^s(X)$. It is known (see, e.g., [15], Ch. 7) that if r is a constant such that 1 < r < ∞, (X, d, µ) is an SHT and the weight function w belongs to the Muckenhoupt class $A_r(X)$, i.e.
$$\sup_B \bigg( \frac{1}{\mu(B)} \int_B w(x)\, d\mu(x) \bigg) \bigg( \frac{1}{\mu(B)} \int_B w^{1-r'}(x)\, d\mu(x) \bigg)^{r-1} < \infty,$$
where the supremum is taken over all balls B in X, then the one-weight inequality
$$\|w^{1/r} Kf\|_{L^r(X)} \le c\, \|w^{1/r} f\|_{L^r(X)}$$
holds. The boundedness of Calderón–Zygmund operators in $L^{p(\cdot)}(\mathbb R^n)$ was established in [12].

Theorem E [37]. Let (X, d, µ) be an SHT. Suppose that p ∈ P(1). Then the singular operator K is bounded in $L^{p(\cdot)}(X)$.

Before formulating the main results of this section we introduce the notation:
$$\bar v(x) := \frac{v(x)}{\mu(B_{x_0x})}, \qquad \bar w(x) := \frac{1}{w(x)}, \qquad \bar w_1(x) := \frac{1}{w(x)\,\mu(B_{x_0x})}.$$
The following statements follow in the same way as Theorem 3.1 was proved; in this case Theorem 1.2 (for the maximal operator) and Theorem E (for singular integrals) are used instead of Theorem C. Details are omitted.

Theorem 4.1. Let (X, d, µ) be an SHT and let $1 < p_- \le p_+ < \infty$. Further suppose that p ∈ P(1). If L = ∞, then we assume that p is constant outside some ball B(x₀, a).
Then the inequality v(N f ) L p(·) (X) ≤ C wf L p(·) (X) , (8) where N is M or K, holds if the following three conditions are satisfied: (a) T v, w is bounded in L p(·) (X); (b) T ′ v, w1 is bounded in L p(·) (X); (c) there is a positive constant b such that one of the following two conditions hold: 1) v + (F x ) ≤ bw(x) µ− a.e. x ∈ X; 2) v(x) ≤ b w − (F x ) µ− a.e. x ∈ X, where F x is the set depended on x which is defined in Section 3. By using the representation formula of a general integral by improper integral and the fact that µ is Ahlfors 1− regular, it follows that W (t) ≤ C 1 ln −1 2L t and V (t) ≤ C 2 ln 2L t for 0 < t ≤ L, where the positive constants does not depend on t. Hence the result follows. Observe that for the constant p both weights v and w are outside the Muckenhoupt class A p (X) (see e.g. [15], Ch. 8). f (y)w(y)dµ(y) for s ∈ [0, L]. Suppose that I(L) < ∞. Then I(L) ∈ (2 m , 2 m+1 ] for some m ∈ Z. Let us denote s j := sup{s : I(s) ≤ 2 j }, j ≤ m, and s m+1 := L. Then s j m+1 j=−∞ is a non-decreasing sequence. It is easy to check that I(s j ) ≤ 2 j , I(s) > 2 j for s > s j , and 2 j ≤ sj ≤d(x0,y)≤sj+1 f (y)w(y)dµ(y). If β := lim j→−∞ s j , then d(x 0 , x) < L if and only if d(x 0 , x) ∈ [0, β] ∪ m j=−∞ (s j , s j+1 ]. If I(L) = ∞ then y) p ′ (y) dµ(y) = B(x0,a)∩{w≤1} w(y) p ′ (y) dµ(y) + B(x0,a)∩{w>1} w(y) p ′ (y) dµ(y) := I 1 + I 2 . y) µB xy α(x)−1 dµ(y) q(x) dµ(x) := c[I 21 + I 22 ]. 1−αp(x) . If v and w are positive increasing functions on (0, 2L) satisfying the condition p1) ′ (x) dµ(y) ≤ ct (α−1)(p1) ′ (x)+1 ,where the constant c does not depend on t and x, yield thatsup 0≤t≤L d(x0,x)≤t (v(d(x 0 , x))) q(x) d(x0,y)>t w(d(x 0 , y)) d(x 0 , y) 1 ) ′ (x)dµ(x) < ∞. guarantees the boundedness of the operatorTheorem 1.1.3 or 1.1.4 of [15]) the condition A (a) 1 := sup t≥a d(x0,x)≥t v(x) qc dµ(x) 1 qc a≤d(x0,y)≤t w(y) (pc) ′ dµ(y) 1 (pc) ′ < ∞ Acknowledgement. 
The first and second authors were partially supported by the Georgian National Science Foundation Grant (project numbers No. GNSF/ST09/23/3-100 and No. GNSF/ST07/3-169). Part of this work was carried out at the Abdus Salam School of Mathematical Sciences, GC University, Lahore. The second and third authors are grateful to the Higher Education Commission of Pakistan for financial support.

The next two statements are direct consequences of Theorems 4.1, 2.1 and 2.2.

Theorem 4.2. Let (X, d, µ) be an SHT and let $1 < p_- \le p_+ < \infty$. Further suppose that p ∈ P(1). If L = ∞, then we assume that p ≡ p_c ≡ const outside some ball B(x₀, a). Let N be M or K. Then inequality (8) holds if:

(i) $\sup\limits_{0\le t\le L} \int\limits_{t<d(x_0,x)\le L} \Big( \dfrac{v(x)}{\mu(B_{x_0x})} \Big)^{p(x)} \bigg( \int\limits_{d(x_0,y)\le t} w^{-(\tilde p_0)'(x)}(y)\, d\mu(y) \bigg)^{\frac{p(x)}{(\tilde p_0)'(x)}} d\mu(x) < \infty$;

(ii) $\sup\limits_{0\le t\le L} \int\limits_{d(x_0,x)\le t} v(x)^{p(x)} \bigg( \int\limits_{t\le d(x_0,y)\le L} \big( w(y)\,\mu(B_{x_0y}) \big)^{-(\tilde p_1)'(x)}\, d\mu(y) \bigg)^{\frac{p(x)}{(\tilde p_1)'(x)}} d\mu(x) < \infty$;

(iii) condition (c) of the previous theorem is satisfied.

Remark 4.1. It is known (see [14]) that if p ≡ const, then conditions (i) and (ii) (written for X = R, the Euclidean distance and the Lebesgue measure) of Theorem 4.2 are also necessary for the two-weight inequality
$$\|v\, Hf\|_{L^p(\mathbb R)} \le c\, \|w\, f\|_{L^p(\mathbb R)},$$
where H is the Hilbert transform on R:
$$Hf(x) = \mathrm{p.v.} \int_{\mathbb R} \frac{f(t)}{x-t}\, dt.$$

Theorem 4.3. Let (X, d, µ) be an SHT and let $1 < p_- \le p_+ < \infty$. Assume that p has a minimum at x₀ and that p ∈ LH(X). If L = ∞, we also assume that p ≡ p_c ≡ const outside some ball B(x₀, a). Let v and w be positive increasing functions on (0, 2L). Then the inequality
$$\big\| v(d(x_0,\cdot))\,(Nf)(\cdot) \big\|_{L^{p(\cdot)}(X)} \le c\, \big\| w(d(x_0,\cdot))\, f(\cdot) \big\|_{L^{p(\cdot)}(X)}, \qquad (9)$$
where N is M or K, holds if the following condition is satisfied:
$$\sup_{0<t\le L} \int_{t<d(x_0,x)\le L} \Big( \frac{v(d(x_0,x))}{\mu(B_{x_0x})} \Big)^{p(x)} \bigg( \int_{d(x_0,y)\le t} w^{-(\tilde p_0)'(x)}(d(x_0,y))\, d\mu(y) \bigg)^{\frac{p(x)}{(\tilde p_0)'(x)}} d\mu(x) < \infty.$$
Proof of this statement is similar to that of Theorem 3.4; therefore we omit it. Notice that Lemma 1.9 yields that p ∈ LH(X) ⇒ p ∈ P(1).

Example 4.4. Let (X, d, µ) be a quasimetric measure space with L < ∞. Suppose that $1 < p_- \le p_+ < \infty$ and p ∈ LH(X). Assume that the measure µ is upper and lower Ahlfors 1-regular. Let there exist x₀ ∈ X such that p has a minimum at x₀. Then the condition of Theorem 4.3 is satisfied for the weight functions
$$v(t) = t^{1/p'(x_0)}, \qquad w(t) = t^{1/p'(x_0)} \ln \frac{2L}{t},$$
and, consequently, by Theorem 4.3 inequality (9) holds, where N is M or K.

Indeed, first observe that v and w are both increasing on [0, L].
Further, it is easy to check that …

References

[1] A. Almeida and S. Samko, Fractional and hypersingular operators in variable exponent spaces on metric measure spaces, Mediterr. J. Math. 6 (2009), 215-232.
[2] M. Asif, V. Kokilashvili and A. Meskhi, Boundedness criteria for maximal functions and potentials on the half-space in weighted Lebesgue spaces with variable exponents, Integral Transforms Spec. Funct. 20 (2009), 805-819.
[3] C. Capone, D. Cruz-Uribe SFO and A. Fiorenza, The fractional maximal operator on variable L^p spaces, Rev. Mat. Iberoamericana 23(3) (2007), 747-770.
[4] R. R. Coifman and G. Weiss, Analyse harmonique non-commutative sur certains espaces homogènes, Lecture Notes in Math. 242, Springer-Verlag, Berlin, 1971.
[5] D. Cruz-Uribe SFO, A. Fiorenza and C. J. Neugebauer, The maximal function on variable L^p spaces, Ann. Acad. Sci. Fenn. Math. 28 (2003), 223-238; 29 (2004), 247-249.
[6] D. Cruz-Uribe, J. M. Martell and C. Pérez, Sharp two-weight inequalities for singular integrals, with applications to the Hilbert transform and the Sarason conjecture, Adv. Math. 216 (2007), 647-676.
[7] D. Cruz-Uribe et al., The boundedness of classical operators on variable L^p spaces, Ann. Acad. Sci. Fenn. Math. 31 (2006), 239-264.
[8] L. Diening, Maximal function on generalized Lebesgue spaces L^{p(·)}, Math. Inequal. Appl. 7(2) (2004), 245-253.
[9] L. Diening, Riesz potentials and Sobolev embeddings on generalized Lebesgue and Sobolev spaces L^{p(·)} and W^{k,p(·)}, Math. Nachr. 268 (2004), 31-34.
[10] L. Diening, Maximal function on Musielak-Orlicz spaces and generalized Lebesgue spaces, Bull. Sci. Math. 129 (2005), 657-700.
[11] L. Diening and P. Hästö, Muckenhoupt weights in variable exponent spaces, preprint, available at http://www.helsinki.fi/∼pharjule/varsob/publications.shtml.
[12] L. Diening and M. Růžička, Calderón-Zygmund operators on generalized Lebesgue spaces L^{p(·)} and problems related to fluid dynamics, J. Reine Angew. Math. 563 (2003), 197-220.
[13] L. Diening and S. Samko, Hardy inequality in variable exponent Lebesgue spaces, Fract. Calc. Appl. Anal. 10 (2007), 1-18.
[14] D. E. Edmunds and V. Kokilashvili, Two-weight inequalities for singular integrals, Canad. Math. Bull. 38 (1995), 119-125.
[15] D. E. Edmunds, V. Kokilashvili and A. Meskhi, Bounded and Compact Integral Operators, Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht-Boston-London, 2002.
[16] D. E. Edmunds, V. Kokilashvili and A. Meskhi, A trace inequality for generalized potentials in Lebesgue spaces with variable exponent, J. Funct. Spaces Appl. 2 (2004), 55-69.
[17] D. E. Edmunds, V. Kokilashvili and A. Meskhi, On the boundedness and compactness of the weighted Hardy operators in L^{p(x)} spaces, Georgian Math. J. 12 (2005), 27-44.
[18] D. E. Edmunds, V. Kokilashvili and A. Meskhi, Two-weight estimates in L^{p(x)} spaces with applications to Fourier series, Houston J. Math. 2 (2009), 665-689.
[19] D. E. Edmunds and A. Meskhi, Potential-type operators in L^{p(x)} spaces, Z. Anal. Anwend. 21 (2002), 681-690.
[20] G. B. Folland and E. M. Stein, Hardy Spaces on Homogeneous Groups, Princeton University Press and University of Tokyo Press, Princeton, NJ, 1982.
[21] P. Harjulehto, P. Hästö and V. Latvala, Sobolev embeddings in metric measure spaces with variable dimension, Math. Z. 254 (2006), 591-609.
[22] P. Harjulehto, P. Hästö and M. Pere, Variable exponent Lebesgue spaces on metric spaces: the Hardy-Littlewood maximal operator, Real Anal. Exchange 30 (2004-2005), 87-104.
[23] M. Khabazi, Maximal functions in L^{p(·)} spaces, Proc. A. Razmadze Math. Inst. 135 (2004), 145-146.
[24] V. Kokilashvili, On a progress in the theory of integral operators in weighted Banach function spaces, in: Function Spaces, Differential Operators and Nonlinear Analysis, Proceedings of the Conference held in Milovy, Bohemian-Moravian Uplands, May 28-June 2, Math. Inst. Acad. Sci. of the Czech Republic, Prague, 2004.
[25] V. Kokilashvili, New aspects in weight theory and applications, in: Function Spaces, Differential Operators and Nonlinear Analysis (M. Krbec et al., eds.), Paseky nad Jizerou, September 3-9, 1995, Prometheus Publishing House, Prague, 1996, 51-70.
[26] V. Kokilashvili and M. Krbec, Weighted Inequalities in Lorentz and Orlicz Spaces, World Scientific, Singapore-New Jersey-London-Hong Kong, 1991.
[27] V. Kokilashvili and A. Meskhi, Maximal and singular integrals in Morrey spaces with variable exponent, Armen. J. Math. 1 (2008), 18-28.
[28] V. Kokilashvili and A. Meskhi, Weighted criteria for generalized fractional maximal functions and potentials in Lebesgue spaces with variable exponent, Integral Transforms Spec. Funct. 18 (2007), 609-628.
[29] V. Kokilashvili and A. Meskhi, Maximal functions and potentials in variable exponent Morrey spaces with non-doubling measure, Complex Var. Elliptic Equ. (to appear).
[30] V. Kokilashvili, N. Samko and S. Samko, Singular operators in variable spaces L^{p(·)}(ω, ρ) with oscillating weights, Math. Nachr. 280 (2007), 1145-1156.
[31] V. Kokilashvili and S. Samko, Maximal and fractional operators in weighted L^{p(x)} spaces, Rev. Mat. Iberoamericana 20 (2004), 493-515.
[32] V. Kokilashvili and S. Samko, On Sobolev theorem for Riesz-type potentials in Lebesgue spaces with variable exponent, Z. Anal. Anwend. 22 (2003), 899-910.
[33] V. Kokilashvili and S. Samko, The maximal operator in weighted variable spaces on metric spaces, Proc. A. Razmadze Math. Inst. 144 (2007), 137-144.
[34] V. Kokilashvili and S. Samko, Operators of harmonic analysis in weighted spaces with non-standard growth, J. Math. Anal. Appl. 352(1) (2009), 15-34.
[35] V. Kokilashvili and S. Samko, Boundedness of maximal operators and potential operators on Carleson curves in Lebesgue spaces with variable exponent, Acta Math. Sin. (2008), DOI: 10.1007/s10114-008-6464-1.
[36] V. Kokilashvili and S. Samko, The maximal operator in weighted variable exponent spaces on metric spaces, Georgian Math. J. 15 (2008), 683-712.
[37] V. Kokilashvili and S. Samko, Boundedness in Lebesgue spaces with variable exponent of maximal, singular and potential operators, Izv. Vyssh. Uchebn. Zaved. Severo-Kavk. Region (2005), 152-157.
[38] T. S. Kopaliani, On some structural properties of Banach function spaces and boundedness of certain integral operators, Czechoslovak Math. J. 54(129) (2004), 791-805.
[39] O. Kováčik and J. Rákosník, On spaces L^{p(x)} and W^{k,p(x)}, Czechoslovak Math. J. 41(4) (1991), 592-618.
[40] F. Mamedov and Y. Zeren, On a two-weighted estimation of maximal operator in the Lebesgue space with variable exponent, Ann. Mat. Pura Appl., DOI: 10.1007/s10231-010-0149.
[41] A. Nekvinda, Hardy-Littlewood maximal operator on L^{p(·)}(R^n), Math. Inequal. Appl. 7(2) (2004), 255-265.
[42] N. G. Samko, S. G. Samko and B. Vakulov, Weighted Sobolev theorem in Lebesgue spaces with variable exponent, J. Math. Anal. Appl. 335 (2007), 560-583.
[43] S. Samko, Convolution type operators in L^{p(x)}, Integral Transforms Spec. Funct. 7(1-2) (1998), 123-144.
[44] S. Samko, Convolution and potential type operators in L^{p(x)}(R^n), Integral Transforms Spec. Funct. 7(3-4) (1998), 261-284.
[45] S. Samko, On a progress in the theory of Lebesgue spaces with variable exponent: maximal and singular operators, Integral Transforms Spec. Funct. 16(5-6) (2005), 461-482.
[46] S. Samko and B. Vakulov, Weighted Sobolev theorem with variable exponent, J. Math. Anal. Appl. 310 (2005), 229-246.
[47] I. I. Sharapudinov, On a topology of the space L^{p(t)}([0,1]), Mat. Zametki 26 (1979), 613-632.
[48] J.-O. Strömberg and A. Torchinsky, Weighted Hardy Spaces, Lecture Notes in Math. 1381, Springer-Verlag, Berlin, 1989.
[49] A. Volberg, Calderón-Zygmund Capacities and Operators on Nonhomogeneous Spaces, CBMS Regional Conference Series in Mathematics 100, 2003.
Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

Weiliang Chen* and Erik De Schutter
Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan

Frontiers in Neuroinformatics | www.frontiersin.org, 11:13, published 1 February 2017. doi: 10.3389/fninf.2017.00013
Received: 06 October 2016; Accepted: 27 January 2017
Edited and reviewed by: Mikael Djurfeldt (Royal Institute of Technology, Sweden), Hans Ekkehard Plesser (Norwegian University of Life Sciences, Norway), Wolfram Schenck (Bielefeld University of Applied Sciences, Germany), Arjen van Ooyen (Vrije Universiteit Amsterdam, Netherlands)
*Correspondence: [email protected]
Citation: Chen W and De Schutter E (2017) Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers. Front. Neuroinform. 11:13.
Keywords: STEPS, parallel simulation, stochastic, spatial reaction-diffusion, HPC

Abstract: Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies.
Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

INTRODUCTION

Recent research in systems biology and computational neuroscience, such as the study of Purkinje cell calcium dynamics (Anwar et al., 2014), has significantly boosted the development of spatial stochastic reaction-diffusion simulators. These simulators can be separated into two major categories, voxel-based and particle-based. Voxel-based simulators, such as STEPS (Hepburn et al., 2012), URDME (Drawert et al., 2012), MesoRD (Hattne et al., 2005), and NeuroRD (Oliveira et al., 2010), divide the geometry into small voxels where different spatial variants of the Gillespie Stochastic Simulation Algorithm (Gillespie SSA) (Gillespie, 1976) are applied. Particle-based simulators, for example, Smoldyn (Andrews and Bray, 2004) and MCell (Kerr et al., 2008), track the Brownian motion of individual molecules, and simulate molecular reactions caused by collisions. Although greatly successful, both voxel-based and particle-based approaches are computationally expensive. Particle-based simulators suffer from the requirement of tracking the position and movement of every molecule in the system. While tracking individual molecules is not required for voxel-based simulators, the exact solution of the Gillespie SSA is highly sequential and inefficient for large-scale simulation due to the massive amount of SSA events (Dematté and Mazza, 2008).
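For readers unfamiliar with the direct method mentioned above, the per-event rescan of all propensities can be sketched for a toy well-mixed system (plain Python; the model and function names are illustrative, not the STEPS API):

```python
import random

def gillespie_direct(state, reactions, t_end, seed=42):
    """Minimal Gillespie direct method for a well-mixed system.

    `reactions` is a list of (propensity_fn, apply_fn) pairs; each SSA
    iteration rescans all N propensities, hence the O(N) cost per event.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        props = [prop(state) for prop, _ in reactions]  # O(N) rescan
        a0 = sum(props)
        if a0 == 0.0:
            break                                       # no event can fire
        dt = rng.expovariate(a0)                        # time to next event
        if t + dt > t_end:
            break
        t += dt
        r = rng.uniform(0.0, a0)                        # pick event by propensity
        acc = 0.0
        for p, (_, apply_fn) in zip(props, reactions):
            acc += p
            if r <= acc:
                apply_fn(state)
                break
    return state

# Toy model: A + B -> C with rate constant k.
k = 0.1
state = {"A": 100, "B": 100, "C": 0}
def prop_bind(s): return k * s["A"] * s["B"]
def apply_bind(s): s["A"] -= 1; s["B"] -= 1; s["C"] += 1

final = gillespie_direct(state, [(prop_bind, apply_bind)], t_end=1.0)
assert final["A"] + final["C"] == 100  # mass conservation
```

The per-event rescan of all N propensities is exactly the cost that Gibson and Bruck's O(log N) structure and the O(1) composition-rejection scheme, discussed next, avoid.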
There is a major need for more efficient stochastic spatial reaction-diffusion simulation of large-scale systems. Over the years several efforts have achieved considerable success, both in algorithm development and software implementation, but increasing simulation scale and complexity have significantly exceeded the gains in speed. Since the introduction of the original Gillespie SSA, the performance of voxel-based simulators has been substantially improved thanks to new algorithms and data structures. Given N as the number of possible kinetic events (reactions and diffusions) in the system, the computational complexity of a single SSA iteration has been reduced from O(N) with the Direct method (Gillespie, 1976), to O(log(N)) with Gibson and Bruck's modification (Gibson and Bruck, 2000), to O(1) with the composition and rejection SSA (Fricke and Schnakenberg, 1991; Slepoy et al., 2008). Approximate solutions for well-stirred systems, such as the well-known tau-leaping method (Gillespie, 2001), can also be applied to the spatial domain (Marquez-Lago and Burrage, 2007; Koh and Blackwell, 2011), providing further speedup with controllable errors. It is clear, however, that the performance of a serial simulator is restricted by the clock speed of a single computing core, while multi-core CPU platforms have become mainstream. One possible way to bypass the clock speed limitation is parallelization, but development of an efficient and scalable parallel solution has proven challenging. An optimistic Parallel Discrete Event Simulation (PDES) solution has been applied to the exact Gillespie SSA, achieving a maximum 8x speedup on a 12-core cluster (Dematté and Mazza, 2008). This approach has been further investigated and tested with different synchronization algorithms available for PDES systems (Wang et al., 2009), such as Time Warp (TW), Breathing Time Bucket (BTB), and Breathing Time Warp (BTW).
Their results indicate that while considerable speedup can be achieved, for example 5x speedup with 8 cores using the BTW method, performance decays rapidly once inter-node communication is involved, due to significant network latency. Another optimization attempt using the PDES solution with a thread-based implementation achieved a 9x acceleration with 32 processing threads (Lin et al., 2015). All the foregoing studies show scalability limitations due to the dramatic increase in rollbacks triggered by conflicting diffusion events between partitions, even with support from well-developed PDES algorithms. Parallelization of approximate SSA methods has also been investigated. D'Agostino et al. (2014) introduced a parallel spatial tau-leaping solution with both Message Passing Interface (MPI)-based and Graphics Processing Unit (GPU)-based implementations, achieving a 20-fold acceleration with 32 CPU cores, and about 50x on a 192-core GTX Titan. Two variants of the operator-splitting approach, originating from the serial Gillespie Multi-Particle (GMP) method (Rodríguez et al., 2006), have been independently employed by Roberts (Roberts et al., 2013) and Vigelius (Vigelius et al., 2011). Both GPU implementations achieve more than 100-fold speedup compared to the CPU-based serial SSA implementations. It is worth noting that the above-mentioned parallel solutions divide simulated geometries into sub-volumes using cubic mesh grids, which may not accurately represent realistic morphologies (Hepburn et al., 2012). Several studies of parallel particle-based implementations have been reported. Balls et al. (2004) demonstrated their early attempt at a parallel MCell implementation under the KeLP infrastructure (Fink et al., 1998) with a 64-core cluster. Two GPU-based parallel implementations of Smoldyn have also been reported (Gladkov et al., 2011; Dematté, 2012); both show 100~200-fold speedup gains compared to the CPU-based serial Smoldyn implementation.
Here we introduce an MPI-based parallel implementation of the STochastic Engine for Pathway Simulation (STEPS) (Hepburn et al., 2012). STEPS is a GNU-licensed, stochastic spatial reaction-diffusion simulator implemented in C++ with a Python user interface. The main solver of serial STEPS simulates reaction and diffusion events by applying a spatial extension of the composition and rejection SSA (Slepoy et al., 2008) to sub-volumes of unstructured tetrahedral meshes. Our parallel implementation aims to provide an efficient and scalable solution that can utilize state-of-the-art supercomputers to simulate large scale stochastic reaction-diffusion models with complex morphologies. In Section Methods we explain the main algorithm and details essential to our implementation. In Section Results, we then showcase two examples, from a simple model to a complex real-world research model, and analyze the performance of our implementation with their results. Finally, we discuss possible future developments of parallel STEPS in Section Discussion and Future Directions.

METHODS

We choose the MPI protocol for CPU clusters as the development environment of our parallel implementation, since it is currently the best-supported parallel environment in academic research. Modern clusters allow us to explore the scalability of our implementation with a massive number of computing nodes, and provide insights for further optimization for super-large-scale simulations. The MPI-based implementation also serves as the foundation of future implementations with other parallel protocols and hardware, such as GPU and Intel Xeon Phi clusters. Previous attempts (Dematté and Mazza, 2008; Wang et al., 2009; Lin et al., 2015) to parallelize the exact Gillespie SSA have shown that system rollbacks triggered by straggler cross-process diffusion events can negate any performance gained from parallelization. The issue is further exacerbated for MPI-based implementations due to significant network latency.
To take full advantage of parallelization, it is important to relax the exact time dependency of diffusion events and to take an approximate, time-window approach that minimizes data communication and eliminates system rollbacks. Inspired by the GMP method, we developed a tetrahedral-based operator-splitting algorithm as the fundamental algorithm of our parallel implementation. The serial implementation of this algorithm and its accuracy have been discussed previously (Hepburn et al., 2016). Here we discuss implementation details of the parallel version.

Initialization of a Parallel STEPS Simulation

To initialize a parallel STEPS simulation, the user is required to provide the biochemical model and geometry to the parallel solver. For user convenience, our parallel implementation accepts the same biochemical model and geometry data used as inputs in the serial SSA solver. In addition, mesh partitioning information is required so that tetrahedrons can be distributed and simulated. Partitioning information is a simple list that can be generated automatically using the grid-based partitioning solution provided in the STEPS utility module, or by more sophisticated, third-party partitioning applications, such as Metis (Coupez et al., 2000). The STEPS utility module currently provides the necessary support functions for format conversions between STEPS and Metis files. Assuming that a set of tetrahedrons is hosted by an MPI process p, {tet | tet is hosted by p}, parallel STEPS first creates a standard Gillespie SSA system for all reactions in each hosted tetrahedron. This includes the population state of all molecule species and the propensities of reactions. For each reaction R_{tet,p}, it also creates an update dependency list deps(R_{tet,p}), that is, a list of reactions and diffusions that require an update if R_{tet,p} is chosen and applied by the SSA.
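The update dependency lists can be sketched as a precomputed inverse index from species to the kinetic processes that read them (an illustrative simplification in plain Python, not the STEPS internals):

```python
from collections import defaultdict

def build_reaction_deps(tet_reactions, tet_diffusions):
    """For each reaction R in a tetrahedron, list the local reactions and
    diffusions whose propensity must be updated when R fires.

    tet_reactions:  {name: {"reactants": [...], "products": [...]}}
    tet_diffusions: {name: species}  (one diffusing species per entry)
    These structures are illustrative assumptions, not the STEPS internals.
    """
    # Which kinetic processes read each species' population?
    readers = defaultdict(set)
    for name, r in tet_reactions.items():
        for sp in r["reactants"]:
            readers[sp].add(name)
    for name, sp in tet_diffusions.items():
        readers[sp].add(name)

    # deps(R) = every process that reads a species R changes.
    deps = {}
    for name, r in tet_reactions.items():
        affected = set(r["reactants"]) | set(r["products"])
        deps[name] = set().union(*(readers[sp] for sp in affected))
    return deps

reactions = {
    "bind":   {"reactants": ["A", "B"], "products": ["C"]},
    "unbind": {"reactants": ["C"],      "products": ["A", "B"]},
}
diffusions = {"diffA": "A"}

deps = build_reaction_deps(reactions, diffusions)
# Firing "bind" changes A, B and C, so both reactions and the A-diffusion
# need a propensity update.
assert deps["bind"] == {"bind", "unbind", "diffA"}
```

Because the index is built once at initialization, each SSA event only touches the handful of processes in its dependency list instead of rescanning the whole tetrahedron.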
Since a reaction only affects molecule states and propensities of reactions and diffusions within its own tetrahedron, the above information can be stored locally in p. The localized storage of SSA and dependency information significantly reduces memory consumption for each process compared to a serial SSA implementation, which is crucial to simulator performance. We will further address its importance with simulation results in Section Results. The simulation also stores the set of hosted diffusion processes {D_{tet,p} | D_{tet,p} is in a tet hosted by p} and the dependency list deps(D_{tet,p}) for each diffusion D_{tet,p}. In addition, if a tetrahedron tet is a boundary tetrahedron of p, in other words, if the molecule state of tet is affected by diffusions in tetrahedrons hosted by MPI processes other than p, a species update dependency list for every diffusive species S_{tet,p} in tet is also created. The species update dependency list, deps(S_{tet,p}), is defined as the list of reactions and diffusions that are hosted by p and that require an update if the count of S_{tet,p} is modified by cross-process diffusion. The species dependency list allows each MPI process to update hosted reactions and diffusions independently after receiving molecule change information from other processes, thus reducing the need for cross-process communication. Furthermore, a suitable diffusion time window is determined according to the biochemical model and geometry being simulated (Hepburn et al., 2016). Given d_{S,tet} as the local diffusion rate for diffusive species S in tetrahedron tet, each process p computes a local minimal time window τ_p = min(1/d_{S,tet}) over all diffusive species in every hosted tetrahedron. Collective communication is then performed to determine the global minimum, τ = min(τ_p), which is set as the diffusion time window for every process in the simulation.
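Determining the diffusion time window thus takes one local minimum per process followed by a single collective reduction; below is a sketch with the reduction done serially (in an actual MPI code this last step would be a min-allreduce, e.g. `comm.allreduce(tau_p, op=MPI.MIN)` with mpi4py — an assumption about the implementation, not a quote from it):

```python
def local_tau(hosted_rates):
    """tau_p = min over hosted tetrahedrons and species of 1 / d_{S,tet}."""
    return min(1.0 / d for rates in hosted_rates for d in rates.values())

# Per-process local diffusion rates d_{S,tet} (1/s), one dict per hosted
# tetrahedron.  Values are made up for illustration.
proc_rates = {
    0: [{"Ca": 400.0, "IP3": 120.0}, {"Ca": 500.0}],
    1: [{"Ca": 250.0, "IP3": 800.0}],
}

taus = {p: local_tau(rates) for p, rates in proc_rates.items()}
tau = min(taus.values())  # the collective MIN reduction, done serially here

assert taus[0] == 1.0 / 500.0
assert tau == 1.0 / 800.0
```

Since the rates d_{S,tet} depend only on the model and the mesh, this reduction runs once at initialization and τ never needs to be recomputed.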
Note that τ is completely determined by the biochemical model and geometry, and remains constant regardless of changes in the molecule population. Therefore, continual updates of τ are not required during the simulation. The final step is to initialize the molecule population state of the simulation, which can be done using the various API functions provided in parallel STEPS. Once this is completed, the simulation is ready to enter the runtime main loop described below.

Runtime Main Loop

The runtime main loop for each MPI process is shown in Algorithm 1 in Supplementary Material. When a process is asked to execute the simulation from time t to t_end, a remote change buffer for cross-process data communication is created for each of the neighboring processes of p. Details of the buffer will be discussed later. The entire runtime [t, t_end] is divided into iterations of the constant time window τ, the value of which is computed during initialization. At the start of every time window, each process first executes the Reaction SSA operator for the period of τ. The mean number of a molecule species S present in a tetrahedron tet during τ is used to determine the number of S to be distributed among the neighbors of tet. Therefore, in addition to the standard exact SSA routine, the process also updates the time and occupancy for each reactant and product species (Hepburn et al., 2016). The parallel solver treats diffusion events differently, based on the ownership of the tetrahedrons involved. If both the source and destination tetrahedrons of a diffusion event are in a single process, the diffusion is applied directly. If a diffusion is cross-process, that is, the source tetrahedron and destination tetrahedron are hosted by different processes, the change to the source tetrahedron is applied directly, while the change to the destination tetrahedron is registered to the corresponding remote change buffer.
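The ownership test that splits local from cross-process diffusion can be sketched as follows (a serial stand-in; `host_of`, the event tuples, and the per-neighbor buffers are illustrative assumptions, not the STEPS data structures):

```python
def diffusion_operator(events, host_of, my_rank, state, remote_buffers):
    """Apply one iteration's diffusion events, splitting by ownership.

    events: iterable of (src_tet, dst_tet, species, count).
    Local changes are applied immediately; changes destined for remotely
    hosted tetrahedrons are accumulated in per-neighbor buffers, to be
    sent once per time window.  Illustrative sketch only.
    """
    for src, dst, sp, n in events:
        state[src][sp] -= n                 # source is always hosted locally
        if host_of[dst] == my_rank:
            state[dst][sp] += n             # purely local diffusion
        else:                               # defer to the communication step
            buf = remote_buffers.setdefault(host_of[dst], {})
            buf[(dst, sp)] = buf.get((dst, sp), 0) + n
    return remote_buffers

state = {0: {"Ca": 10}, 1: {"Ca": 0}}       # tets 0, 1 hosted by rank 0
host_of = {0: 0, 1: 0, 2: 1}                # tet 2 lives on rank 1
events = [(0, 1, "Ca", 3), (0, 2, "Ca", 2), (1, 2, "Ca", 1)]

bufs = diffusion_operator(events, host_of, 0, state, {})
assert state[0]["Ca"] == 5 and state[1]["Ca"] == 2
assert bufs[1] == {(2, "Ca"): 3}            # two events to tet 2 merged
```

Note how the two events targeting tet 2 collapse into a single accumulated entry, which is the point of buffering: one message per neighbor per time window, regardless of how many diffusion events crossed the boundary.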
Once all diffusion events are applied or registered, the buffers are sent to associated remote processes via nonblocking communication, where molecule changes in destination tetrahedrons are applied. The algorithm is designed for optimal operation in a parallel environment. Most of its operations can be performed independently without network communication. In fact, the only data communication required is the transfer of remote change buffers between neighboring processes. This has two important implications. First and foremost, the communication is strictly regional, meaning that each process only communicates with a small subset of processes with which it shares geometric boundaries, regardless of the overall scale of the simulation. Secondly, thanks to the non-blocking communication, each process can start the Reaction SSA Operator for the next iteration t 1 as soon as it receives remote change buffers for the current iteration t 0 from all neighboring processes and applies those changes (Figure 1). Therefore, data communication can be hidden behind computation, which helps to reduce the impact of network latency.

FIGURE 1 | Schematic illustration of different runtime stages of two processes, p 1 and p 2 , assuming that p 1 is the only neighboring process of p 2 . Once p 2 receives and applies the remote change buffer from p 1 for iteration t 0 , it can immediately start the reaction SSA computation for iteration t 1 , without waiting for p 1 to complete iteration t 0 . Due to the non-blocking communication mechanism, the actual data transfer may take place any time within the communication period. Data communication between p 1 and its neighboring processes except p 2 is skipped for simplification.

Since the remote change buffer holds the only data being transferred across the network, it is important to limit its size so that communication time can be reduced. Furthermore, an
efficient registering method is also required since all molecule changes applied to remotely hosted tetrahedrons need to be recorded. Algorithm 2 in Supplementary Material and Figure 2 illustrate the procedure and data structure for the registration. Instead of recording every cross-process diffusion event, the remote change buffer records the accumulated change of a molecule species in a remotely hosted tetrahedron. Thus, the size of the remote change buffer has an upper bound corresponding to the number of remotely-hosted neighboring tetrahedrons and the number of diffusive species within those tetrahedrons. The lower bound is zero if no cross-process diffusion event occurs during an iteration. The remote change buffer is a vector that stores entries of molecule changes sequentially. Each entry consists of three elements, the destination tetrahedron tet ′ , the diffusive species S, as well as its accumulated change m tet ′ ,S . All elements are represented by integers. For every possible cross-process diffusion event, D tet→tet ′ ,S , the host process stores a location marker Loc tet ′ ,S , that indicates where the corresponding entry is previously stored in the buffer. When a cross-process diffusion event occurs, the host process of the source tetrahedron first compares the destination tetrahedron and information about the diffusive species to the entry data stored at the marked location in the buffer. If the data match the record, the accumulated change of this entry is increased according to the diffusion event. Each buffer is cleared after its content has been sent to corresponding processes, thus a mismatch of entry information indicates that a reset has taken place since the previous registration of the same diffusion event, in which case a new entry is appended to the end of the buffer and the location of this entry is stored at the location marker for future reference. 
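The registration procedure just described can be sketched in a few lines. This is an illustrative Python model of the data structure, not the actual STEPS C++ implementation (which uses STL vectors and integer encodings); the class and method names are invented.

```python
# Sketch of the remote change buffer with location markers, mirroring the
# registration procedure described above (illustrative, not the STEPS code).

class RemoteChangeBuffer:
    def __init__(self):
        self.entries = []    # sequential entries: (dest_tet, species, accumulated_change)
        self.loc = {}        # location marker: (dest_tet, species) -> last known index

    def register(self, dest_tet, species, change):
        i = self.loc.get((dest_tet, species))
        # If the marked slot still holds a matching entry, accumulate in place ...
        if (i is not None and i < len(self.entries)
                and self.entries[i][0] == dest_tet
                and self.entries[i][1] == species):
            self.entries[i] = (dest_tet, species, self.entries[i][2] + change)
        else:
            # ... otherwise the buffer was cleared since the last registration:
            # append a new entry and update the location marker.
            self.loc[(dest_tet, species)] = len(self.entries)
            self.entries.append((dest_tet, species, change))

    def flush(self):
        """Send entries to the neighboring process and clear the buffer."""
        sent, self.entries = self.entries, []
        return sent

buf = RemoteChangeBuffer()
buf.register(42, "Ca", 1)
buf.register(42, "Ca", 2)     # same destination and species: accumulates to 3
buf.register(7, "Ca", 1)
assert buf.flush() == [(42, "Ca", 3), (7, "Ca", 1)]
buf.register(42, "Ca", 5)     # stale marker after flush -> a fresh entry is appended
```

Accumulating per (tetrahedron, species) pair rather than per event is what bounds the buffer size by the number of boundary tetrahedrons and their diffusive species.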
Both accessing entry data and appending new entries have constant complexity with C++ Standard Template Library (STL) vectors, providing an efficient solution for registering remote molecule changes.

FIGURE 2 | Schematic illustration of the remote change buffer data structure. For every cross-process diffusion event taking place in tet, it first compares its destination tetrahedron and species information with the entry data stored at Loc tet ′ ,S of the remote change buffer. If the match is successful, the accumulated change of this entry is increased, otherwise a new entry is appended to the buffer.

RESULTS

Because the accuracy of the solution method has been examined previously (Hepburn et al., 2016), here we mainly focus on the performance and scalability of our implementation. The implementation passed all validations, and simulation results were checked with serial SSA solutions. It is worth mentioning that the diffusion time window approximation unavoidably introduces small errors into simulations, as discussed in the publication above. Simulations reported in this paper were run on OIST's high performance cluster, "Sango." Each computing node on Sango has two 12-core 2.5 GHz Intel Xeon E5-2680v3 processors, sharing 128 GiB of system memory. All nodes are interconnected using 56 Gbit/s InfiniBand FDR. In total, Sango comprises 10,224 computing cores and 60.75 TiB of main memory. Due to the sharing policy, only a limited number of cores could be used for our tests. Cluster conditions were different for each test and in some cases, computing cores were scattered across the entire cluster. Unfortunately, cluster conditions may affect simulation performance. To measure this impact and to understand how our implementation performs under real-life cluster restrictions, we repeated the tests multiple times, each starting at a different date and time with variable cluster conditions.
For simulations with the simple model (Section Reaction-Diffusion Simulation with Simple Model and Geometry) we were able to limit the number of cores used per processor to 10. We were unable to exert the same control over large-scale simulations due to resource restriction. In all cases, hyper-threading was deactivated. Our results show that the standard deviations in wall-clock time amount to ∼1% of the mean results; therefore, only mean results are reported. Simulation performance was measured by both speedup and efficiency. Each simulation was run for a predefined period, and the wall-clock time was recorded. Given a problem with fixed size, the average wall-clock time for a set of repeated simulations to solve this problem is denoted as T p , where p is the number of MPI processes used in each simulation. The speedup of a parallel simulation with p processes relative to one with q processes is defined as S p/q = T q / T p . Specifically, the speedup of parallel simulation with p processes relative to its serial SSA counterpart is defined as S p/SSA = T SSA / T p , where T SSA is the wall-clock time for the same simulation run by the serial SSA solver. Note that while sharing many similarities, the parallel operator-splitting implementation and the serial SSA implementation have different algorithms, data structures as well as core routines, so there is no guarantee that S 1/SSA equals one. We further define the strong scaling efficiency of a simulation with p processes relative to one with q processes as E p/q = S p/q · (q/p). Strong scaling efficiency is used to investigate the scalability performance of parallel implementations of fixed-size problems. Scalability of a parallel implementation can also be studied by scaling both process count and problem size of the simulation together, called the weak scaling efficiency.
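The strong-scaling definitions above reduce to two one-line formulas; a small numerical sketch (with made-up timing values, and invented helper names) makes the super-linear case concrete:

```python
# Performance metrics as defined in the text. Timing values are illustrative.

def speedup(T_q, T_p):
    """S_p/q = T_q / T_p."""
    return T_q / T_p

def strong_scaling_efficiency(T_q, T_p, q, p):
    """E_p/q = S_p/q * q / p; E > 1 indicates super-linear scaling."""
    return speedup(T_q, T_p) * q / p

# Example: 5 processes take 100 s, 50 processes take 9 s for the same problem
S = speedup(100.0, 9.0)                           # ~11.1x with 10x the processes
E = strong_scaling_efficiency(100.0, 9.0, 5, 50)  # ~1.11, i.e. super-linear
```

Note that E > 1 here because the 50-process run is more than 10 times faster than the 5-process run, which is exactly the super-linear regime reported in the results below.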
Given T N,p as the wall-clock time of a p-process simulation with problem size N, and T kN,kp as the wall-clock time of another simulation in which both problem size and the number of processes are multiplied by k times, we define the weak scaling efficiency as E k = T N,p / T kN,kp . We will investigate both scalability performances of our implementation in later sections.

Reaction-Diffusion Simulation with Simple Model and Geometry

We first examine simulation results of a fixed-size reaction-diffusion problem. The simulated model (Table 1) was previously used to benchmark our serial spatial SSA solver (Hepburn et al., 2012) and to test the accuracy of our serial operator-splitting solution (Hepburn et al., 2016). It consists of 10 diffusive species, each with differing diffusion coefficients and initial molecule counts, and 4 reversible reactions with various rate constants. The model was simulated in a 10 × 10 × 100 µm³ cuboid mesh with 3363 tetrahedrons. It is worth mentioning that different partitioning approaches can affect simulation performance dramatically, as will be shown hereafter. Here we partitioned the tetrahedrons linearly based on the y and z coordinates of their barycenters (the center of mass), where the numbers of partitions of each axis for a simulation with p processes were arranged as [Parts x = 1, Parts y = 5, Parts z = p/5]. At the beginning of each simulation, species molecules were placed uniformly into the geometry, and the simulation was run for t end = 20 s, after which the wall-clock time was recorded. We started each series of simulations from p = 5 and progressively increased the number of processes in increments of 5 until p = 300. Each series was repeated 30 times to produce an average result. Speedup and strong scaling efficiency are reported relative to the simulation result with 5 processes, in other words, S p/5 and E p/5 .
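The linear grid partitioning just described can be sketched as a simple binning of barycenters. This is an illustrative reconstruction under stated assumptions (uniform bins along y and z, Parts_x = 1); the function name and signature are invented, not the STEPS partitioning utility.

```python
# Sketch of linear partitioning by barycenter, assuming uniform bins along
# the y and z axes (Parts_x = 1), as in the arrangement [1, 5, p/5] above.

def assign_partition(barycenter, extent, parts_y, parts_z):
    _, y, z = barycenter
    _, ly, lz = extent                    # mesh dimensions, e.g. (10, 10, 100)
    iy = min(int(y / ly * parts_y), parts_y - 1)
    iz = min(int(z / lz * parts_z), parts_z - 1)
    return iy * parts_z + iz              # flat partition (process) index

# p = 10 processes arranged as [Parts_x=1, Parts_y=5, Parts_z=2]
# over a 10 x 10 x 100 um^3 mesh:
part = assign_partition((5.0, 1.0, 75.0), (10, 10, 100), 5, 2)
```

Each tetrahedron is thus mapped to exactly one hosting process, with neighboring tetrahedrons along the z axis tending to share a host, which keeps the boundary (and hence the remote change buffers) small.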
By increasing the number of processes, simulation performance of the fixed-size problem improves dramatically. In fact, the simulation maintains super-linear speedup until p ≈ 250 (Figure 3A). While efficiency decreases in general, it remains above 0.8 with p = 300 (Figure 3B), where on average each process hosts approximately 10 tetrahedrons. In addition to the overall wall-clock time, we also recorded the time cost of each algorithm segment in order to analyze the behavior of the implementation. The total time cost for the simulation T total is divided into three portions. The computation time T comp includes the time cost for the reaction SSA and the cost of diffusion operations within the process (corresponding to the Reaction SSA Operator and Diffusion Operator in Algorithm 1 in Supplementary Material, colored black and red in Figure 1). The synchronization time T sync includes the time cost for receiving remote change buffers from neighboring processes, and the time cost for applying those changes (corresponding to the Cross-Process Synchronization Period in Algorithm 1 in Supplementary Material, colored yellow and blue in Figure 1). The time spent waiting for the buffers' arrival, as well as the wait time for all buffers to be sent after completion of reaction SSA, is recorded as the idle time, T idle (corresponding to the Idle Period in Algorithm 1 in Supplementary Material, colored white in Figure 1). In summary, T total = T comp + T sync + T idle. A detailed look at the time cost distribution of a single series trial (Figure 3C) suggests that the majority of the speedup is contributed by T comp , which is consistently above the theoretical ideal (Figure 3D), thanks to the improved memory caching performance brought by the distributed storage of the SSA and update dependency information mentioned above.
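The timing decomposition T total = T comp + T sync + T idle can be turned into a small bookkeeping helper; values and names below are made up for illustration:

```python
# Worked example of the timing decomposition T_total = T_comp + T_sync + T_idle.
# All values are illustrative, not measured data from the paper.

def time_fractions(T_comp, T_sync, T_idle):
    """Return each segment's share of the total wall-clock time."""
    T_total = T_comp + T_sync + T_idle
    return {name: t / T_total
            for name, t in (("comp", T_comp), ("sync", T_sync), ("idle", T_idle))}

# e.g. 8 s of computation, 0.5 s of synchronization, 1.5 s idle -> T_total = 10 s
f = time_fractions(8.0, 0.5, 1.5)   # comp dominates: 80% of the total
```

Such a breakdown is what underlies the observation that T comp dominates at low process counts while T idle becomes the limiting term once T comp has shrunk.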
The result shows that T sync also decreases significantly as the number of processes increases; however, as the number of boundary tetrahedrons is limited in the simulations, T sync contributes the least to overall time consumption (Figure 3C). Another important finding is that the change of T idle becomes insignificant when p > 100. Since T comp and T sync decrease as p increases, T idle becomes progressively more critical in determining simulation performance. To further study how molecule density affects simulation performance, we repeated the above test with two new settings, one reducing the initial count of each molecular species by 10x, and the other increasing molecule counts by 10x (Figure 4A). We named these tests "Default," "0.1x" and "10x," respectively. Speedups relative to the serial SSA counterparts, S p/SSA , are also reported for comparison (Figure 4B). As the number of molecules in the system increases, the simulation achieves better speedup performance. This is because in the 0.1x simulations T comp quickly decreases below T idle , and the speedup becomes less significant as T idle is mostly consistent throughout the series (Figure 4C). In the 10x simulations T comp maintains its domination, thus simulations achieve a similar speedup ratio to the default ones (Figure 4D).

FIGURE 4 | Strong scaling performance of simulations with different molecule density. (A) Speedups relative to simulations with p = 5. Simulations with low molecule density (0.1x) achieve smaller speedups compared to the default and high density (10x) cases. (B) In general, simulation with higher molecule density and larger scale of parallelization achieves higher speedup relative to its serial SSA counterpart. (C) In the 0.1x cases, T comp rapidly decreases and eventually drops below T idle ; thus, the overall speedup is less significant. (D) In the 10x cases, T comp remains above T idle , therefore its contribution to speedup is significant throughout the series.
This result also indicates that S p/SSA greatly depends on molecule density. In general, parallel simulations with high molecule density and a high number of processes can achieve higher speedup relative to the serial SSA counterpart (Figure 4B). Mesh coarseness also greatly affects simulation performance. Figure 5 shows the results of simulations with the same model, geometry, and dimensions, but different numbers of tetrahedrons within the mesh. Simulations with a finer mesh generally take longer to complete because while the number of reaction events remains similar regardless of mesh coarseness, the number of diffusion events increases with a finer mesh. The number of main loop iterations also increases for a finer mesh due to the inverse relationship between the diffusion time window τ and the local diffusion rate d S,tet (Figure 5B). This leads to increases of all three timing segments (Figure 5C). Nevertheless, given n_tets as the number of tetrahedrons simulated, the relative time cost of the simulation, that is, T total /n_tets, decreases more significantly for a finer mesh (Figure 5D), indicating improved efficiency. This is further confirmed in Figure 5E, as both the 13,009 and 113,096 cases achieve dramatic relative speedups from parallelization, where the 113,096 series is the most cost-efficient with high process counts. This is because in these simulations, reaction events take place stochastically over the whole extent of the mesh with no specific "hot-spot," due to the homogeneous distribution of molecules and similar sizes of tetrahedrons. Therefore, the average memory address distance between two consecutive reaction events in each process is determined by the size of partitions hosted by the process. This distance is essential to memory caching performance. In general, smaller hosted partitions mean shorter address distances and are more cache-friendly.
The performance boost from the caching effect is particularly significant for simulations with a fine mesh because the address space frequently accessed by the main loop cannot fit in the cache completely when a small number of processes is used. To investigate the weak scaling efficiency of our implementation, we used the "Default" simulation with 300 processes as a baseline, and increased the problem size by duplicating the geometry along a specific axis, as well as by increasing the number of initial molecules proportionally. Table 2 gives a summary of all simulation settings. As the problem size increases, the simulation efficiency progressively deteriorates (Figure 6). While ∼95% efficiency is maintained after doubling the problem size, tripling the problem size reduces the efficiency to ∼80%. This is an expected outcome of the current implementation, because although the storage of reaction SSA and update dependency information are distributed, each process in the current implementation still keeps the complete information of the simulation state, including geometry and connectivity data of each tetrahedron, as well as the number of molecules within. Therefore, the memory footprint per process for storing this information increases linearly with problem size. The increased memory footprint of the simulation state widens the address distances between frequently accessed data, reducing cache prefetching performance and consequently the overall simulation efficiency. Optimizing memory footprint and memory caching for super-large-scale problems will be a main focus of our next development iteration. Our result also indicates that geometry partitioning plays an important role in determining simulation performance, as extending the mesh along the z axis gives better efficiency than extending it along the y axis, even though they have similar numbers of tetrahedrons. This can be explained by the increase of boundary tetrahedrons in the latter case. 
Since the number of boundary tetrahedrons determines the upper-bound of the size of remote change buffer and consequently the time for communication, reducing the number of boundary tetrahedrons is a general recommendation for geometry partitioning in parallel STEPS simulations. Large Scale Reaction-Diffusion Simulation with Real-World Model and Geometry Simulations from real-world research often consist of reactiondiffusion models and geometries that are notably more complex than the ones studied above. As a preliminary example, we extracted the reaction-diffusion components of a previously published spatial stochastic calcium burst model (Anwar et al., 2013) as our test model to investigate how our implementation performs with large-scale real-world simulations. The extracted model consists of 15 molecule species, 8 of which are diffusive, as well as 22 reactions. Initial molecule concentrations, reaction rate constants and diffusion coefficients were kept the same as in the published model. The Purkinje cell sub-branch morphology, published along with the model, was also used to generate a tetrahedral mesh that is suitable for parallel simulation. The newly generated mesh has 111,664 tetrahedrons, and was partitioned using Metis and STEPS supporting utilities. As discussed before, reducing boundary tetrahedrons is the general partitioning strategy for parallel STEPS simulations. This is particularly important for simulations with a tree-like morphology because a grid-based partitioning approach used for the previous models cannot capture and utilize spatial features of such morphology. The sub-branch mesh for our simulation is partitioned based on the connectivity of tetrahedrons. Specifically, a connectivity graph of all tetrahedrons in the mesh was presented to Metis as input. Metis then produced a partitioning profile which met the following criteria. First of all, the number of tetrahedrons in each partition was similar. 
Secondly, tetrahedrons in the same partition were all connected. Finally, the average degree of connections is minimal. Figure 7 shows the mesh itself as well as two partitioning profiles generated for p = 50 and p = 1000. As a preliminary test, this partitioning procedure does not account for any size differences of tetrahedrons and the influence from the biochemical model and molecule concentrations, although their impacts can be significant in practice. At present, some of these factors can be abstracted as weights between elements in Metis; however, substantial manual scripting is required and the solution is project-dependent. To mimic the calcium concentration changes caused by voltage-gated calcium influx simulated in the published results (Anwar et al., 2013), we also extracted the region-dependent calcium influx profile from the results, which can be applied periodically to the parallel simulation. Depending on whether this profile is applied, the parallel simulation behaved differently. Without calcium influx, the majority of simulation time was spent on diffusion events of mobile buffer molecules. As these buffer molecules were homogeneously distributed within the mesh, the loading of each process was relatively balanced throughout the simulation. When calcium influx was applied and constantly updated during the simulation, it triggered calcium burst activities that rapidly altered the calcium concentration gradient, consequently unbalancing the process loading. It also activated calcium-dependent pathways in the model and increased the simulation time for reaction SSA operations. Two series were simulated, one without calcium influx and data recording, and the other one with the influx enabled and data recorded periodically. Each series of simulations started from p = 50, and finished at p = 1,000, with an increment of 50 processes each time. Both series of simulations were run for 30 ms, and repeated 20 times to acquire the average wall-clock times. 
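The partitioning criteria above (balanced, connected partitions with minimal connectivity between them) can be evaluated with a simple cut-edge count over the tetrahedron connectivity graph, since cross-partition connections bound the remote change buffer size and hence the communication cost. The sketch below is illustrative only; the names are invented and this is not the Metis or STEPS API.

```python
# Sketch: count cross-process connections of a partitioning profile.
# Fewer cut edges means smaller remote change buffers and less communication.

def count_cut_edges(neighbors, partition):
    """neighbors: dict tet -> list of adjacent tets (undirected);
    partition: dict tet -> hosting process index."""
    cut = 0
    for tet, adj in neighbors.items():
        for other in adj:
            if tet < other and partition[tet] != partition[other]:
                cut += 1                       # count each shared face once
    return cut

# Four tetrahedrons in a chain 0-1-2-3, split between two processes:
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cut = count_cut_edges(neighbors, {0: 0, 1: 0, 2: 1, 3: 1})   # only edge 1-2 is cut
```

A graph partitioner like Metis minimizes exactly this quantity subject to the balance constraint, which is why connectivity-based partitioning suits tree-like morphologies better than grid-based slicing.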
For the simulations with calcium influx, the influx rate of each branch segment was adjusted according to the profile every 1 ms, and the calcium concentration of each branch was recorded to a file every 0.02 ms, as in the original simulation. Figure 8A shows the recorded calcium activity of each branch segment over a single simulation trial period, which exhibits great spatial and temporal variability as reported previously (Anwar et al., 2013). A video of the same simulation is also provided in the Supplementary Material (Video 1). As a consequence of calcium influx changes, process loading of the series was mostly unbalanced, so that simulation speedup and efficiency were significantly affected. However, a substantial improvement was still achieved (Figures 8B,C). Figure 9 demonstrates the loading of an influx simulation with 50 processes, where the imbalance can be observed across processes and time. To improve the performance of simulations with strong concentration gradients, a sophisticated and efficient dynamic load balancing algorithm is required (see Discussion). Finally, to test the capability of our implementation for full cell stochastic spatial simulation in the future, we generated a mesh of a full Purkinje dendrite tree from a publicly available surface reconstruction (3DModelDB; McDougal and Shepherd, 2015, ID: 156481) and applied the above model to it. To the best of our knowledge, this is the first parallel simulation of a mesoscopic level, stochastic, spatial reaction-diffusion system with full cell dendritic tree morphology. The mesh consisted of 1,044,155 tetrahedrons. Because branch diameters of the original reconstruction have been modified for 3D printing, the mesh is not suitable for actual biological study, but only to evaluate computational performance. Because of this, and the fact that no calcium influx profile can be acquired for this reconstruction, we only ran simulations without calcium influx.
The simulation series started from p = 100, and progressively increased to p = 2000 by an increment of 100 processes each time. The maximum number of processes (p = 2000) was determined by the fair-sharing policy of the computing center. We repeated the series 20 times to produce the average result. Figure 10A gives an overview of the full cell morphology as well as a zoom-in look at the mesh. Both speedup and efficiency relative to the simulation with p = 100 (Figures 10B,C) show super-linear scalability, with the best performance at p = 2000. This result suggests that simulation performance may be further improved with a higher number of processes. All parallel simulations above perform drastically better than their serial SSA counterparts. For each of the test cases above, 20 realizations were simulated using the serial SSA solver in STEPS, and average wall-clock times are used for comparison. The speedups relative to the serial SSA simulations are shown in Figure 11. Even in the most realistic case, with dynamically updated calcium influx as well as data recording, without any special load balancing treatment, the parallel simulation with 1000 processes is still 500 times faster than the serial SSA simulation. The full cell parallel simulation without calcium influx achieves an unprecedented 3600-fold speedup with 2000 processes. This means that with full usage of the same computing resources and time, parallel simulation is not only faster than a single serial SSA simulation, but is also 1.8 times the speed of batch serial SSA simulations.

DISCUSSION AND FUTURE DIRECTIONS

Our current parallel STEPS implementation achieves significant performance improvement and good scalability, as shown in our test results. However, as a preliminary implementation, it lacks or simplifies several functionalities that could be important for real-world simulations. These functionalities require further investigation and development in future generations of parallel STEPS.
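The batch-throughput comparison above follows from a one-line calculation: a parallel run with p processes and speedup S p/SSA delivers S p/SSA / p times the throughput of running p independent serial SSA realizations on the same cores. A quick check (with an invented helper name) against the figures quoted in the text:

```python
# Worked check of the batch-throughput claim: 3600-fold speedup with
# p = 2000 processes vs. a batch of 2000 serial SSA realizations.

def batch_throughput_ratio(speedup_vs_ssa, processes):
    """Throughput of one parallel run relative to a batch of `processes`
    independent serial SSA realizations on the same cores."""
    return speedup_vs_ssa / processes

ratio = batch_throughput_ratio(3600.0, 2000)   # 1.8, matching the text
```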
Currently, STEPS models with membrane potential as well as voltage-dependent gating channels cannot be efficiently simulated using the parallel solver because a scalable parallelization of the electric field (E-Field) sub-system is still under development. This is the main reason why we were unable to fully simulate the stochastic spatial calcium burst model with Purkinje sub-branch morphology in our example, but relied on the calcium influx profile extracted from a previous serial simulation instead. The combined simulation of neuronal electrophysiology and molecular reaction-diffusion has recently raised interest, as it bridges the gap between computational neuroscience and systems biology, and is expected to be greatly useful in the foreseeable future. To address such demand, we are actively collaborating with the Human Brain Project (Markram, 2012) on the development of a parallel E-Field, which will be integrated into parallel STEPS upon its completion. As analyzed in the results, the majority of the performance speedup is contributed by the reduction of T comp , thanks to parallel computing. Eventually T idle becomes the main bottleneck, as it is mostly constant relative to the process count, unlike T comp which decreases consistently. This observation suggests two future investigational and developmental directions, maximizing the speedup gained from T comp , and minimizing T idle . Maximizing the speedup gained from T comp is important to real-world research because significant performance improvement needs to be achieved with reasonable computing resources. Adapting advanced algorithms and optimizing memory caching are two common approaches to achieve this goal. At present, we mainly focus on further optimizing memory footprint and caching ability for super-large scale simulations. In the current implementation, although reaction SSA and propensity update information are distributed, each process still stores complete information of the simulation state. 
This noticeably affects the weak scalability of our implementation (Figure 6). The redundant information is so far required for the purpose of interfacing with other non-parallel sub-systems, such as the serial E-Field, but we will investigate whether state information can be split, based on the demand of individual processes. Process load balancing plays a crucial role in determining the idle time of the simulation T idle , and consequently the maximum speed improvement the simulation can achieve. In an unbalanced-loading simulation, processes will always be idle until the slowest one finishes, thus dramatically increasing T idle . This issue is essential to spatial reaction-diffusion simulations, as high concentration gradients of molecules can be observed in many real-world models, similar to our calcium burst model. Because molecule concentrations change significantly during simulation due to reactions and diffusion, the loading of each process may change rapidly. While adding model and initial molecule concentration information to the partitioning procedure may help to balance the loading for early simulation, the initial partitioning will eventually become inefficient as molecule concentrations change.

FIGURE 10 | Performance of a reaction-diffusion simulation with a mesh of a complete Purkinje dendrite tree. (A) Morphology of the mesh and a close look at its branches. The mesh consists of 1,044,155 tetrahedrons. (B) Speedup relative to the simulation with p = 100 shows super-linear scalability. (C) Efficiency also increases as p increases, suggesting that better efficiency may be achieved with more processes.

FIGURE 11 | Speedups of parallel calcium burst simulations relative to their serial SSA counterparts, including sub-branch simulations with and without calcium influx, and the full cell simulation without calcium influx. The dashed curve assumes that p processes are used to simulate a batch of p serial SSA realizations of the full cell simulation.
An efficient load balancing algorithm is required to solve this problem. The solution should be able to redistribute tetrahedrons between processes automatically on the fly based on their current workloads. Data exchange efficiency is the main focus of the solution, because constantly copying tetrahedron data between processes via network communication can be extremely time consuming, and may overshadow any benefit gained from the rebalancing. In its current status, our parallel STEPS implementation constitutes a great improvement over the serial SSA solution. The calcium burst simulation with Purkinje cell sub-branch morphology, dynamic calcium influx, and periodic data recording is representative of the simulation conditions and requirements of typical real-world research. Similar models that previously required years of simulation can now be completed within days. The shortening of the simulation cycle is greatly beneficial to research as it provides opportunities to further improve the model based on simulation results.

CODE AVAILABILITY

Parallel STEPS can be accessed via the STEPS homepage (http://steps.sourceforge.net), or the HBP Collaboration Portal (https://collaboration.humanbrainproject.eu). Simulation scripts for this manuscript are available at ModelDB (https://senselab.med.yale.edu/modeldb/).

FIGURE 3 | Strong scaling performance of parallel simulations with a simple model and geometry. Each series starts from p = 5 and progressively increases to p = 300. Both speedup and efficiency are measured relative to simulations with p = 5. (A) Simulations maintain super-linear speedup until p ≈ 200. (B) In general, efficiency decreases as p increases, but remains above 0.8 in the worst case (p = 300). (C,D) T comp accounted for most of the acceleration, as it is the most time-consuming segment during simulation; it maintains super-linear speedup throughout the whole series.
However, as T comp decreases, T idle becomes a critical factor because its change is insignificant once p exceeds 100.

FIGURE 5 | Strong scaling performance of simulations with different mesh coarseness. (A) Meshes with the same geometry and dimensions, but different numbers of tetrahedrons are simulated. (B) While the number of reaction events remains similar across mesh coarseness, both the number of diffusion events and the number of main loop iterations increase for the finer mesh. (C) Time distribution of simulations with p = 300; all three segments increase as the number of tetrahedrons increases. (D) Finer mesh results in a more significant decrease of relative time cost, defined as T total /n_tets, improving efficiency. (E) Speedups relative to T 5 . Simulation with a finer mesh achieves much higher speedup in massive parallelization, thanks to the memory caching effect.

Frontiers in Neuroinformatics | www.frontiersin.org | 8 February 2017 | Volume 11 | Article 13

FIGURE 6 | Weak scaling performance of the implementation. (A) The default 10 × 10 × 100 µm³ mesh is extended along either the y or z axis as problem size increases. (B) Weak scaling efficiencies relative to the default case (p = 300).

FIGURE 7 | (A) Tetrahedral mesh of a Purkinje cell with sub-branch morphology. This mesh consists of 111,664 tetrahedrons. (B) Partitioning generated by Metis for p = 50 and p = 1000. Each color segment indicates a set of tetrahedrons hosted by a single process.

FIGURE 8 | Calcium burst simulations with a Purkinje cell sub-branch morphology. (A) Calcium activity of each branch segment over a single trial period, visualized by the STEPS visualization toolkit. Calcium activity shows large spatial and temporal variability, which significantly affects the speedup (B) and efficiency (C) of the simulation.
FIGURE 9 | Process loading of a calcium burst simulation with sub-branch morphology and calcium influx, using 50 processes. (A) Time-cost distribution for each process shows the loading imbalance across processes. (B) The computation time cost per recording step for each process varies significantly during the simulation. Each curve in the figure represents one process. The three peaks in each curve are caused by the three burst periods (Figure 8A).

TABLE 1 | Simple reaction-diffusion model.

Species  Diffusion Coefficient (µm²/s)  Initial Count
A        100                            1,000
B        90                             2,000
C        80                             3,000
D        70                             4,000
E        60                             5,000
F        50                             6,000
G        40                             7,000
H        30                             8,000
I        20                             9,000
J        10                             10,000

Reaction     Rate Constant
A + B ⇄ C    k_f: 1,000 (µM·s)⁻¹, k_b: 100 s⁻¹
C + D ⇄ E    k_f: 100 (µM·s)⁻¹, k_b: 10 s⁻¹
F + G ⇄ H    k_f: 10 (µM·s)⁻¹, k_b: 1 s⁻¹
H + I ⇄ J    k_f: 1 (µM·s)⁻¹, k_b: 1 s⁻¹

TABLE 2 | Simulation settings for weak scalability study.

Geometry Dimensions (µm³)  Initial Count  Num. Processes
10 × 10 × 100              Default        300
10 × 10 × 200              2x             600
10 × 20 × 100              2x             600
10 × 10 × 300              3x             900
10 × 30 × 100              3x             900

AUTHOR CONTRIBUTIONS

WC designed, implemented and tested the parallel STEPS described, and drafted the manuscript. ED conceived and supervised the STEPS project and helped draft the manuscript. Both authors read and approved the submission.

ACKNOWLEDGMENTS

This work was funded by the Okinawa Institute of Science and Technology Graduate University. All simulations were run on the "Sango" cluster therein. We are very grateful to Iain Hepburn of the Computational Neuroscience Unit, OIST, for discussion and critical review of the initial draft of this manuscript.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fninf.
2017.00013/full#supplementary-material

Video 1 | Data recording of a calcium burst simulation with Purkinje cell sub-branch morphology, visualized using the STEPS visualization toolkit.

Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2017 Chen and De Schutter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
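The reaction set of Table 1 above is the kind of system the serial SSA core advances event by event. A minimal well-mixed Gillespie direct-method step for its first reversible reaction — an illustrative sketch only, not the parallel STEPS implementation (the function name and call shape are hypothetical):

```python
import math
import random

def ssa_step(counts, k_f, k_b, rng=random.random):
    """One Gillespie direct-method event for the reversible reaction
    A + B <-> C in a single well-mixed volume.

    counts: dict with integer copy numbers for 'A', 'B', 'C'.
    k_f, k_b: stochastic rate constants used as propensity factors.
    Returns the exponentially distributed waiting time tau, or None if
    no reaction can fire.
    """
    a_f = k_f * counts['A'] * counts['B']    # forward propensity
    a_b = k_b * counts['C']                  # backward propensity
    a_total = a_f + a_b
    if a_total == 0:
        return None
    tau = -math.log(1.0 - rng()) / a_total   # waiting time to next event
    if rng() * a_total < a_f:                # choose which channel fires
        counts['A'] -= 1; counts['B'] -= 1; counts['C'] += 1
    else:
        counts['A'] += 1; counts['B'] += 1; counts['C'] -= 1
    return tau
```

Repeatedly calling `ssa_step` and accumulating `tau` reproduces the exact serial SSA trajectory for this subsystem; the paper's contribution is distributing such event loops over a partitioned tetrahedral mesh.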
[]
[ "Integrated Perturbation Theory and One-loop Power Spectra of Biased Tracers", "Integrated Perturbation Theory and One-loop Power Spectra of Biased Tracers" ]
[ "Takahiko Matsubara \nDepartment of Physics\nNagoya University\n464-8602Chikusa, NagoyaJapan;\n\nNagoya University\nChikusa, Nagoya464-8602Japan\n" ]
[ "Department of Physics\nNagoya University\n464-8602Chikusa, NagoyaJapan;", "Nagoya University\nChikusa, Nagoya464-8602Japan" ]
[]
General and explicit predictions from the integrated perturbation theory (iPT) for power spectra and correlation functions of biased tracers are derived and presented in the one-loop approximation. The iPT is a general framework of the nonlinear perturbation theory of cosmological density fields in the presence of nonlocal bias, redshift-space distortions, and primordial non-Gaussianity. Analytic formulas of auto and cross power spectra of nonlocally biased tracers in both real and redshift spaces are derived, and the results are comprehensively summarized. The main difference from previous formulas derived by the present author is the inclusion of the effects of generally nonlocal Lagrangian bias and primordial non-Gaussianity, and the derivation method of the new formula is fundamentally different from the previous one. Relations to recent work on improved methods of nonlinear perturbation theory in the literature are clarified and discussed. PACS numbers: 98.80.Cq, 98.80.Es
10.1103/physrevd.90.043537
[ "https://arxiv.org/pdf/1304.4226v2.pdf" ]
55,153,712
1304.4226
de650eb4425ed58cfcbb4abbf9b7a4d3a7114fb6
Integrated Perturbation Theory and One-loop Power Spectra of Biased Tracers

8 Aug 2014 (Dated: August 11, 2014)

Takahiko Matsubara
Department of Physics, Nagoya University, Chikusa, Nagoya 464-8602, Japan
Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Chikusa, Nagoya 464-8602, Japan

General and explicit predictions from the integrated perturbation theory (iPT) for power spectra and correlation functions of biased tracers are derived and presented in the one-loop approximation. The iPT is a general framework of the nonlinear perturbation theory of cosmological density fields in the presence of nonlocal bias, redshift-space distortions, and primordial non-Gaussianity. Analytic formulas of auto and cross power spectra of nonlocally biased tracers in both real and redshift spaces are derived, and the results are comprehensively summarized. The main difference from previous formulas derived by the present author is the inclusion of the effects of generally nonlocal Lagrangian bias and primordial non-Gaussianity, and the derivation method of the new formula is fundamentally different from the previous one. Relations to recent work on improved methods of nonlinear perturbation theory in the literature are clarified and discussed.

PACS numbers: 98.80.Cq, 98.80.Es

I. INTRODUCTION

Density fluctuations in the universe contain invaluable information on cosmology. For example, the history and ingredients of the universe are encoded in detailed patterns of the density fluctuations. The large-scale structure (LSS) of the universe is one of the most popular ways to probe the density fluctuations in the universe. The spatial distributions of galaxies and other observable astronomical objects reflect the underlying density fluctuations in the universe. In cosmology, it is crucial to investigate the spatial distributions of dark matter, which dominates the mass of the universe.
Unfortunately, distributions of dark matter are difficult to observe directly, because the only interaction that the dark matter is known to have is the gravitational interaction. Consequently, we need to estimate the density fluctuations of the universe by means of indirect probes such as galaxies, which have electromagnetic interactions. Relations between distributions of observable objects and those of dark matter are nontrivial. On very large scales, where the linear theory can be applied, the relations are reasonably represented by the linear bias; the density contrasts of dark matter δ_m and those of observable objects δ_X are proportional to each other, δ_X = b δ_m, where b is a constant called the bias parameter. However, nonlinear effects cannot be neglected when we extract as much cosmological information as possible from observational data of LSS, and bias relations in the nonlinear regime are not as simple as those in the linear regime. Observations of LSS play an important role in cosmology. Shapes of power spectra of galaxies and clusters contain information on the density parameters of cold dark matter Ω_CDM, baryons Ω_b, and neutrinos Ω_ν in the universe. Precision measurements of baryon acoustic oscillations (BAO) in galaxy power spectra or correlation functions can constrain the nature of dark energy [1][2][3], which is a driving force of the accelerated expansion of the present universe. The non-Gaussianity in the primordial density field induces a scale-dependent bias in biased tracers of LSS on very large scales [4][5][6][7][8]. Cosmological information contained in detailed features in LSS is so rich that there are many ongoing and future surveys of LSS, such as BOSS [9], FMOS FastSound [10], BigBOSS [11], LSST [12], Subaru PFS [13], DES [14], Euclid [15], etc. Elucidating nonlinear effects on observables in LSS has crucial importance in precision cosmology.

* Electronic address: [email protected]
While strongly nonlinear phenomena are difficult to analytically quantify, the perturbation theory is useful in understanding the quasi-nonlinear regime. The traditional perturbation theory describes evolutions of the mass density field on large scales where the density fluctuations are small. However, spatial distributions of astronomical objects such as galaxies do not exactly follow the mass density field, and they are biased tracers. Formation processes of astronomical objects are governed by strongly nonlinear dynamics, including baryon physics etc., which cannot be straightforwardly treated by the traditional perturbation theory. Even though the tracers are produced through strongly nonlinear processes, it is still sensible to apply the perturbation theory to study LSS on large scales. For example, the biasing effect in linear theory is simply represented by a bias parameter b as described above. However, biasing effects in higher-order perturbation theory are not that simple. A popular model of the biasing in the context of nonlinear perturbation theory is the Eulerian local bias [16][17][18][19][20]. This model employs free fitting parameters at every order of perturbation, and is just a phenomenological model because the Eulerian bias is not definitely local in reality. The integrated perturbation theory (iPT) [21] is a framework of the perturbation theory to predict observable power spectra and any higher-order polyspectra (or the correlation functions) of nonlocally biased tracers. In addition, the effects of redshift-space distortions and primordial non-Gaussianity are naturally incorporated. This theory is general enough so that any model of nonlocal bias can be taken into account. Precise mechanisms of bias are still not theoretically understood well, and are under active investigation. The framework of iPT separates the known physics of gravitational effects on spatial clustering from the unknown physics of complicated bias.
The unknown physics of nonlocal bias is packed into "renormalized bias functions" c^(n)_X in the iPT formalism. Once the renormalized bias functions are modeled for observable tracers, weakly nonlinear effects of gravitational evolutions are taken care of by the iPT. The iPT is a generalization of a previous formulation called Lagrangian resummation theory (LRT) [22][23][24][25], in which only local models of Lagrangian bias can be incorporated. In recent developments, the model of bias from the halo approach has turned out to be quite useful in understanding cosmological structure formation [26][27][28][29][30][31][32]. The halo bias is naturally incorporated in the framework of iPT. Predictions of iPT combined with the halo model of bias do not contain any fitting parameter once the mass function and physical mass of halos are specified. This property is quite different from other phenomenological approaches that combine the perturbation theory and bias models. A concept of nonlocal Lagrangian bias has recently attracted considerable attention [33][34][35]. Extending the halo approach, a simple nonlocal model of Lagrangian bias was recently proposed [36] for applications to the iPT. Applying this nonlocal model of halo bias to evaluating the scale-dependent bias in the presence of primordial non-Gaussianity, not only are the results of the peak-background split reproduced, but also a more general formula is obtained. In this paper, the usage of this simple model of nonlocal halo bias in the framework of iPT is explicitly explained. The bias in the framework of iPT does not have to be a halo bias. There are many kinds of tracers for LSS, such as various types of galaxies, quasars, Ly-α absorption lines, 21cm absorption and emission lines, etc. Once the bias model for each kind of object is given, it is straightforward to calculate biased power spectra and polyspectra of those tracers in the framework of iPT.
As described above, it is needless to say that detailed mechanisms of bias for those tracers have not been fully understood yet. As emphasized above, the iPT separates the difficult problems of fully nonlinear biasing from gravitational evolutions in the weakly nonlinear regime. While the basic formulation of iPT is developed in Ref. [21], explicit calculations of the nonlinear power spectra are not given in that reference. The purpose of this paper is to give explicit expressions of biased power spectra with an arbitrary model of nonlocal bias in the one-loop approximation, in which leading-order corrections to the nonlinear evolutions are included. The expressions are given both in real space and in redshift space. Three-dimensional integrals in the formal expressions of one-loop power spectra are reduced to one- and two-dimensional integrals, which are easy and convenient for numerical integrations. Contributions from primordial non-Gaussianity are also taken into account in the general expressions. Explicit formulas of the renormalized bias functions are provided for a simple model of nonlocal halo bias. In this way, general formulas of power spectra of biased objects in the one-loop approximation are provided in this paper. Since the iPT framework is based on the Lagrangian perturbation theory (LPT) [37][38][39][40][41][42], a scheme of resummations of higher-order perturbations in terms of the Eulerian perturbation theory (EPT) [44] is naturally considered [22]. In this paper, we clarify the relations between the present formula of iPT and some previous methods of resummation technique, such as the renormalized perturbation theory [45,46], the Gamma expansions [47][48][49][50][51][52], the Lagrangian resummation theory [22][23][24][25], and the convolution perturbation theory [53]. Some aspects for the future developments of iPT are suggested. This paper is organized as follows. In Sec.
II, formal expressions of power spectra in the framework of iPT with an arbitrary model of bias are derived. A simple model of renormalized bias functions for a nonlocal Lagrangian bias in the halo approach is summarized. In Sec. III, explicit formulas of biased power spectra, which are the main results of this paper, are derived and presented. Relations to other previous work in the literature are clarified in Sec. IV, and conclusions are given in Sec. V. In App. A, diagrammatic rules of iPT used in this paper are briefly summarized.

II. THE ONE-LOOP POWER SPECTRA IN THE INTEGRATED PERTURBATION THEORY

In this first section, the formalism of iPT [21] is briefly reviewed (without proofs), and formal expressions of power spectra in the one-loop approximation are derived.

A. Fundamental equations of the integrated perturbation theory

In evaluating the power spectra in iPT, the concept of the multi-point propagator [47,48,54] is useful. The (n + 1)-point propagator \Gamma^{(n)}_X of any biased objects, which are labeled by X in general, is defined by [21]

\left\langle \frac{\delta^n \delta_X(k)}{\delta\delta_L(k_1)\cdots\delta\delta_L(k_n)} \right\rangle = (2\pi)^{3-3n}\, \delta^3_D(k - k_{1\cdots n})\, \Gamma^{(n)}_X(k_1, \ldots, k_n), \quad (1)

where \delta_X(k) is the Fourier transform of the number density contrast of biased objects in Eulerian space, \delta_L(k) is the Fourier transform of the linear density contrast, \delta^3_D is the three-dimensional Dirac delta function, and we adopt the notation

k_{1\cdots n} = k_1 + \cdots + k_n, \quad (2)

throughout this paper. The left-hand side of Eq. (1) is an ensemble average of an nth-order functional derivative. The number density field is considered as a functional of the initial density field. In the basic framework of iPT, the biased objects can be any astronomical objects which are observed as tracers of the underlying density field in the universe. The method of evaluating multi-point propagators of biased objects in the framework of iPT is detailed in Ref. [21].
In the most general form of the iPT formalism, both the Eulerian and Lagrangian pictures of dynamical evolution can be dealt with, and both pictures give equivalent predictions for observables. The models of halo bias fall into the category of Lagrangian bias, i.e., the number density field of halos is related to the mass density field in Lagrangian space. In such a case, the Lagrangian picture is a natural way to describe the evolution of the halo number density field. In the models of Lagrangian bias, the renormalized bias functions [21] are the key elements in iPT, which are defined by

c^{(n)}_X(k_1, \ldots, k_n) = (2\pi)^{3n} \int \frac{d^3k}{(2\pi)^3} \left\langle \frac{\delta^n \delta^L_X(k)}{\delta\delta_L(k_1)\cdots\delta\delta_L(k_n)} \right\rangle, \quad (3)

where \delta^L_X(k) is the Fourier transform of the halo number density contrast in Lagrangian space. We allow the bias to be nonlocal in Lagrangian space. In fact, the halo bias is not purely local even in Lagrangian space [36]. For a mass density field, the Lagrangian number density contrast \delta^L_X is identically zero, and the bias functions are identically zero, c^{(n)}_X = 0, for all orders n = 1, 2, \ldots. Assuming statistical homogeneity in Lagrangian space, the renormalized bias functions in Eq. (3) are equivalently defined by [36]

\left\langle \frac{\delta^n \delta^L_X(k)}{\delta\delta_L(k_1)\cdots\delta\delta_L(k_n)} \right\rangle = (2\pi)^{3-3n}\, \delta^3_D(k - k_{1\cdots n})\, c^{(n)}_X(k_1, \ldots, k_n). \quad (4)

The similarity of this equation with Eq. (1) is apparent in this form. The information on the dynamics of bias in Lagrangian space is encoded in the set of renormalized bias functions. Assuming statistical isotropy in Lagrangian space, the renormalized bias functions c^{(n)}_X(k_1, \ldots, k_n) depend only on the magnitudes k_1, \ldots, k_n and relative angles \hat{k}_i \cdot \hat{k}_j (i > j) of the wavevectors. Applying the vertex resummation of iPT, the multi-point propagators of biased objects X are given by the form

\Gamma^{(n)}_X(k_1, \ldots, k_n) = \Pi(k_{1\cdots n})\, \hat{\Gamma}^{(n)}_X(k_1, \ldots, k_n), \quad (5)

where

\Pi(k) = \left\langle e^{-i k \cdot \Psi} \right\rangle = \exp\left[ \sum_{n=2}^{\infty} \frac{(-i)^n}{n!} \left\langle (k \cdot \Psi)^n \right\rangle_c \right], \quad (6)

is the vertex resummation factor in terms of the displacement field \Psi, and \langle\cdots\rangle_c indicates the connected part of the ensemble average. The displacement fields \Psi(q) are the fundamental variables in LPT, where q is the Lagrangian coordinate and the Eulerian coordinates are given by x = q + \Psi(q). The cumulant expansion theorem is used in the second equality of Eq. (6). Cumulants of the displacement fields with an odd number vanish from the parity symmetry; thus the summation in the exponent of Eq. (6) is actually taken over n = 2, 4, 6, \ldots. The normalized multi-point propagators of the biased objects, \hat{\Gamma}^{(n)}_X, are naturally predicted in the framework of iPT. In the one-loop approximation of iPT, the vertex resummation factor is given by

\Pi(k) = \exp\left[ -\frac{1}{2} \int \frac{d^3p}{(2\pi)^3} \left( k \cdot L^{(1)}(p) \right)^2 P_L(p) \right], \quad (7)

and the normalized two-point propagator is given by

\hat{\Gamma}^{(1)}_X(k) = c^{(1)}_X(k) + k \cdot L^{(1)}(k) + \int \frac{d^3p}{(2\pi)^3} P_L(p) \Big[ c^{(2)}_X(k, p)\, k \cdot L^{(1)}(-p) + c^{(1)}_X(p)\, k \cdot L^{(1)}(-p)\, k \cdot L^{(1)}(k) + \frac{1}{2}\, k \cdot L^{(3)}(k, p, -p) + c^{(1)}_X(p)\, k \cdot L^{(2)}(k, -p) + k \cdot L^{(1)}(p)\, k \cdot L^{(2)}(k, -p) \Big], \quad (8)

where L^{(n)} is the nth-order displacement kernel in LPT. Each term in Eq. (8) respectively corresponds to each diagram of Fig. 1 in the same order. Diagrammatic rules of iPT [21] in the Lagrangian picture, which are explained in App. A, are applied in the correspondence. The normalized two-point propagator of the mass density field, \hat{\Gamma}^{(1)}_m, is obtained by putting c^{(n)}_X = 0 in Eq. (8). The perturbative expansion of the displacement field in Fourier space, \tilde{\Psi}(k), is given by

\tilde{\Psi}(k) = \sum_{n=1}^{\infty} \frac{i}{n!} \int_{k_{1\cdots n}=k} L^{(n)}(k_1, \ldots, k_n)\, \delta_L(k_1)\cdots\delta_L(k_n), \quad (9)

where we adopt the notation

\int_{k_{1\cdots n}=k} \cdots = \int \frac{d^3k_1}{(2\pi)^3} \cdots \frac{d^3k_n}{(2\pi)^3}\, (2\pi)^3 \delta^3_D(k - k_{1\cdots n}) \cdots. \quad (10)

Such notation as Eq. (10) is commonly used throughout this paper.
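For a statistically isotropic linear power spectrum, the angular part of the integral in Eq. (7) can be done analytically, leaving a one-dimensional quadrature. A numerical sketch (the helper name is ours, and a toy P_L stands in for a realistic spectrum):

```python
import numpy as np

def resummation_factor(k, p_grid, P_L_vals):
    """Vertex resummation factor of Eq. (7) for isotropic P_L, with the
    first-order (Zel'dovich) kernel L^(1)(p) = p / p^2.

    The angular average of (k.p / p^2)^2 is k^2 / (3 p^2), so
    (1/2) int d^3p/(2pi)^3 [k . L^(1)(p)]^2 P_L(p)
        = k^2 / (12 pi^2) * int dp P_L(p),
    which reduces Eq. (7) to a single 1-d integral over p.
    """
    # trapezoidal rule for the remaining 1-d integral of P_L over p
    integral = float(np.sum((P_L_vals[1:] + P_L_vals[:-1]) * np.diff(p_grid)) / 2.0)
    return float(np.exp(-k**2 * integral / (12.0 * np.pi**2)))
```

With a toy spectrum P_L(p) = e^{-p} the 1-d integral is 1, so the factor at k = 1 should approach exp(-1/(12 pi^2)).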
In real space, the kernels of LPT in the standard theory of gravity (in the Newtonian limit) are given by [40] L (1) (k) = k k 2 ,(11)L (2) (k 1 , k 2 ) = 3 7 k 12 k 12 2        1 − k 1 · k 2 k 1 k 2 2        ,(12)L (3) (k 1 , k 2 , k 3 ) = 1 3 L (3a) (k 1 , k 2 , k 3 ) + perm. ;(13)L (3a) (k 1 , k 2 , k 3 ) = k 123 k 123 2        5 7        1 − k 1 · k 2 k 1 k 2 2               1 − k 12 · k 3 k 12 k 3 2        − 1 3        1 − 3 k 1 · k 2 k 1 k 2 2 + 2 (k 1 · k 2 )(k 2 · k 3 )(k 3 · k 1 ) k 1 2 k 2 2 k 3 2               + k 123 k 123 2 × T(k 1 , k 2 , k 3 ),(14) where a vector function T represents a transverse part whose explicit expression will not be used in this paper. Complete expressions of the displacement kernels of LPT up to 4th order, including transverse parts are given in, e.g., Ref. [41,42]. Eqs. (7) and (8) remain valid even when the non-standard theory of gravity is assumed as long as the appropriate form of kernels L n in such a theory is used. One of the benefits in the Lagrangian picture is that redshift-space distortions are relatively easy to be incorporated in the theory. A displacement kernel in redshift space L s(n) is simply related to the kernel in real space at the same order by a linear mapping [22], L (n) → L s(n) = L (n) + n f ẑ · L (n) ẑ,(15) where f = d ln D/d ln a =Ḋ/HD is the linear growth rate, D(t) is the linear growth factor, a(t) is the scale factor, and H(t) =ȧ/a is the time-dependent Hubble parameter. The distant-observer approximation is assumed in redshift space, and the unit vectorẑ denotes the line-of-sight direction. Strictly speaking, the mapping of Eq. (15) is exact only in the Einstein-de Sitter universe. However, this mapping is a good approximation in general cosmology. The expressions of Eqs. (7) and (8) apply as well in redshift space when the displacement kernels in redshift space L s(n) are used instead of the real-space counterparts L (n) . 
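As a quick numerical illustration, the mapping of Eq. (15) is a purely linear operation on each kernel vector, enhancing its line-of-sight component. The sketch below (pure Python; the helper name is ours, not from any library) applies it to the first-order Zel'dovich kernel L^(1)(k) = k/k^2 with k along the line of sight.

```python
# Sketch of the real-to-redshift-space kernel mapping of Eq. (15),
#   L^{s(n)} = L^{(n)} + n f (z_hat . L^{(n)}) z_hat,
# under the distant-observer approximation. The function name is a
# hypothetical helper, not a library call.

def to_redshift_space(L, n, f, z_hat=(0.0, 0.0, 1.0)):
    """Map an nth-order displacement kernel vector L (real space) to
    redshift space, with linear growth rate f = d ln D / d ln a."""
    proj = sum(Li * zi for Li, zi in zip(L, z_hat))  # z_hat . L
    return tuple(Li + n * f * proj * zi for Li, zi in zip(L, z_hat))

# First-order Zel'dovich kernel L^(1)(k) = k / k^2 for k = 0.1 z_hat:
k = (0.0, 0.0, 0.1)
k2 = sum(ki * ki for ki in k)
L1 = tuple(ki / k2 for ki in k)           # = (0, 0, 10)
L1s = to_redshift_space(L1, n=1, f=0.5)   # z-component scaled by 1 + f
```

For this configuration k · L^{s(1)} = (1 + f) k · L^{(1)}, the familiar linear Kaiser-type enhancement at µ = 1.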
The three-point propagator at the tree-level approximation in iPT is given bŷ Γ (2) X (k 1 , k 2 ) = c (2) X (k 1 , k 2 ) + c (1) X (k 1 ) k · L (1) (k 2 ) + c (1) X (k 2 ) k · L (1) (k 1 ) + k · L (1) (k 1 ) k · L (1) (k 2 ) + k · L (2) (k 1 , k 2 ),(16) where each term respectively corresponds to each diagram of In terms of the multi-point propagators, the power spectrum of biased objects, up to the one-loop approximation, is given by P X (k) = Π 2 (k) Γ (1) X (k) 2 P L (k) + 1 2 k 12 =k Γ (2) X (k 1 , k 2 ) 2 P L (k 1 )P L (k 2 ) +Γ (1) X (k) k 12 =kΓ (2) X (k 1 , k 2 )B L (k, k 1 , k 2 ) ,(17) where P L (k) and B L (k, k 1 , k 2 ) are the linear power spectrum and the linear bispectrum, respectively. The diagrammatic representations of Eq. (17) are shown in Fig. 3. Crossed circles correspond to the linear power spectrum or the linear bispectrum, depending on number of lines attached to them. The first two terms in Eq. (17) corresponds to the first two diagrams in Fig. 3. The last two diagrams in Fig. 3 are contributions from the primordial non-Gaussianity. The two diagrams give the same contribution because of the parity symmetry, and the sum of the two diagrams corresponds to the last term in Eq. (17). The matter power spectrum P m (k) is simply given by replacing Γ (n) X by Γ (n) m in Eq. (17), or equivalently, setting c (n) X = 0 for every n ≥ 1. The cross power spectrum between two types of objects, X and Y, is similarly obtained as P XY (k) = Π 2 (k) Γ (1) X (k)Γ (1) Y (k)P L (k) + 1 2 k 12 =kΓ (2) X (k 1 , k 2 )Γ (2) Y (k 1 , k 2 )P L (k 1 )P L (k 2 ) + 1 2Γ (1) X (k) k 12 =kΓ(2) Y (k 1 , k 2 )B L (k, k 1 , k 2 ) + 1 2Γ (1) Y (k) k 12 =kΓ(2)X (k 1 , k 2 )B L (k, k 1 , k 2 ) .(18) The diagrams for the above equations are similar to the ones in Fig. 3, where the left and right multi-point propagators correspond to those of X and Y, respectively. When X = Y, Eq. (18) apparently reduces to Eq. (17). 
The predictions of biased power spectra in the one-loop approximation of iPT are given by Eq. (17) for the auto power spectrum and by Eq. (18) for the cross power spectrum. Once a model of the renormalized bias functions c^(n)_X is given, it is straightforward to evaluate those equations numerically. The above results are general and do not depend on bias models: any bias model can be incorporated into the iPT expressions through the renormalized bias functions. In the next subsection, we explain a simple model of the renormalized bias functions based on the halo approach.

B. Renormalized bias functions in a simple model of the halo approach

The renormalized bias functions c^(n)_X are not specified in the general framework of iPT. Precise modeling of bias is a nontrivial problem, depending on what kind of biased tracers are considered. In this subsection, we consider a simple model of halo bias as an example. The expressions of the renormalized bias functions in a simple model of the halo approach were recently derived in Ref. [36]. We summarize the consequences of this model below. It should be emphasized that the general framework of iPT does not depend on this specific model of bias. Without resorting to approximations such as the peak-background split, the halo bias is shown to be nonlocal even in Lagrangian space. As a result, the renormalized bias functions have nontrivial scale dependencies. For halos of mass M, the renormalized bias functions are given by [36]

c^(n)_M(k_1, . . . , k_n) = b^L_n(M) W(k_1 R) · · · W(k_n R) + [A_{n−1}(M)/δ_c^n] (d/d ln σ_M)[W(k_1 R) · · · W(k_n R)], (19)

where δ_c = 3(3π/2)^{2/3}/5 ≃ 1.686 is the critical overdensity for spherical collapse and W(kR) is a window function. In the usual halo approach, the window function is chosen to be of top-hat type in configuration space, which corresponds to

W(x) = 3(sin x − x cos x)/x^3, (20)

in Fourier space.
In this case, the Lagrangian radius R is naturally related to the halo mass M by

M = (4/3) π ρ̄_0 R^3, (21)

where ρ̄_0 is the mean mass density at the present time, or

R = [M / (1.163 × 10^12 h^{−1} M_⊙ Ω_{m0})]^{1/3} h^{−1} Mpc, (22)

where M_⊙ = 1.989 × 10^30 kg is the solar mass, Ω_{m0} is the mass density parameter at the present time, and h = H_0/(100 km s^{−1} Mpc^{−1}) is the normalized Hubble constant. Empirically, one can also use other types of window function. Direct evaluations of the renormalized bias functions suggest that a Gaussian window function W(kR) = e^{−k^2 R^2/2} gives a better fit [43]. In the latter case, the relation between the smoothing radius R and the mass M is not trivial and should also be empirically modified from the relation of Eq. (22). However, the shapes of the one-loop power spectrum on large scales are not sensitive to the choice of window function. The variance of density fluctuations on the mass scale M is defined by

σ_M^2 = ∫ d^3k/(2π)^3 W^2(kR) P_L(k). (23)

The radius R is considered as a function of σ_M through Eq. (23). The functions A_n(M) are defined by

A_n(M) ≡ Σ_{j=0}^{n} (n!/j!) δ_c^j b^L_j(M), (24)

where b^L_n is the scale-independent Lagrangian bias parameter of the nth order. For example, the first three functions are given by

A_0 = 1, A_1 = 1 + δ_c b^L_1, A_2 = 2 + 2δ_c b^L_1 + δ_c^2 b^L_2. (25)

When the halo mass function n(M) takes a universal form

n(M) dM = (ρ̄_0/M) f_MF(ν) dν/ν, (26)

where ν = δ_c/σ_M, the Lagrangian bias parameters are given by

b^L_n(M) = (−1/σ_M)^n f^{(n)}_MF(ν)/f_MF(ν), (27)

where f^{(n)}_MF = d^n f_MF/dν^n. Once a model of the mass function f_MF(ν) is given, the scale-independent bias parameters b^L_n(M) and the functions A_n(M) are uniquely determined by Eqs. (27) and (24). In Table I, those functions are summarized for popular models of the mass function: the Press-Schechter (PS) mass function [26], the Sheth-Tormen (ST) mass function [30], and the Warren et al. (W+) mass function [55].
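For orientation, the mass-radius relation of Eq. (22) and the top-hat window of Eq. (20) are straightforward to evaluate numerically. The sketch below (function names are illustrative only) also guards the x → 0 limit of W(x) with its leading series term.

```python
import math

# Lagrangian mass-radius relation, Eq. (22), and the Fourier-space
# top-hat window, Eq. (20). Names are illustrative, not from a library.

def lagrangian_radius(M, Omega_m0):
    """Lagrangian radius in h^-1 Mpc for halo mass M in h^-1 Msun."""
    return (M / (1.163e12 * Omega_m0)) ** (1.0 / 3.0)

def tophat_window(x):
    """W(x) = 3 (sin x - x cos x) / x^3, with W -> 1 as x -> 0."""
    if abs(x) < 1e-4:
        return 1.0 - x * x / 10.0   # leading term of the series expansion
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

# Example: a 10^13 h^-1 Msun halo with Omega_m0 = 0.27 has R ~ 3.2 h^-1 Mpc.
R = lagrangian_radius(1.0e13, 0.27)
```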
For the simplest PS mass function, it is interesting to note that general expressions of the parameters can be derived for all orders [36]: b^L_n = ν^{n−1} H_{n+1}(ν)/δ_c^n and A_n = ν^n H_n(ν), where H_n(ν) are the Hermite polynomials. The ST mass function gives a better fit to numerical simulations of halos in cold-dark-matter-type cosmologies with Gaussian initial conditions. The values of the parameters in Table I are p = 0.3 and q = 0.707, and A(p) = [1 + π^{−1/2} 2^{−p} Γ(1/2 − p)]^{−1} is the normalization factor. When we put p = 0 and q = 1, the ST mass function reduces to the PS mass function. The W+ mass function is represented in terms of a parameter σ = δ_c/ν, which is also a function of M, and its parameters are A = 0.7234, a = 1.625, b = 0.2538, c = 1.1982. The same functional form is applied to the Marenostrum Institut de Ciències de l'Espai (MICE) simulations in Ref. [56], allowing the parameters to be redshift-dependent. Their values are given by A(z) = 0.58(1 + z)^{−0.13}, a(z) = 1.37(1 + z)^{−0.15}, b(z) = 0.3(1 + z)^{−0.084}, c(z) = 1.036(1 + z)^{−0.024}. When the redshift-dependent parameters are adopted, the W+ mass function is sometimes referred to as the "MICE mass function".

Table I: Multiplicity functions, bias parameters, and A_n functions for the PS, ST, and W+/MICE mass functions (for W+/MICE, σ = δ_c/ν).

f_MF(ν):
  PS: √(2/π) ν e^{−ν²/2}
  ST: A(p) √(2/π) [1 + (qν²)^{−p}] √q ν e^{−qν²/2}
  W+: A (σ^{−a} + b) e^{−c/σ²}

b^L_1(M):
  PS: (ν² − 1)/δ_c
  ST: (1/δ_c) [qν² − 1 + 2p/(1 + (qν²)^p)]
  W+: (1/δ_c) [2c/σ² − a/(1 + bσ^a)]

b^L_2(M):
  PS: (ν⁴ − 3ν²)/δ_c²
  ST: (1/δ_c²) [q²ν⁴ − 3qν² + 2p(2qν² + 2p − 1)/(1 + (qν²)^p)]
  W+: (1/δ_c²) [4c²/σ⁴ − 2c/σ² − a(4c/σ² − a + 1)/(1 + bσ^a)]

A_1(M):
  PS: ν²
  ST: qν² + 2p/(1 + (qν²)^p)
  W+: 2c/σ² + 1 − a/(1 + bσ^a)

A_2(M):
  PS: ν²(ν² − 1)
  ST: qν²(qν² − 1) + 2p(2qν² + 2p + 1)/(1 + (qν²)^p)
  W+: 4c²/σ⁴ + 2c/σ² + 2 − a(4c/σ² − a + 3)/(1 + bσ^a)

Parameters:
  PS: —
  ST: A(p) = [1 + π^{−1/2} 2^{−p} Γ(1/2 − p)]^{−1}, p = 0.3, q = 0.707
  W+: A = 0.7234, a = 1.625, b = 0.2538, c = 1.1982
  MICE: A(z) = 0.58(1 + z)^{−0.13}, a(z) = 1.37(1 + z)^{−0.15}, b(z) = 0.3(1 + z)^{−0.084}, c(z) = 1.036(1 + z)^{−0.024}
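The closed-form PS relations b^L_n = ν^{n−1} H_{n+1}(ν)/δ_c^n and A_n = ν^n H_n(ν) are easy to check against the PS entries of Table I. The sketch below assumes the probabilists' convention for the Hermite polynomials (H_0 = 1, H_1 = ν, H_{n+1} = νH_n − nH_{n−1}), which is the convention that reproduces those entries.

```python
# Closed-form PS bias parameters via probabilists' Hermite polynomials
# (an assumed convention, chosen so that the Table I entries are
# reproduced, e.g. b^L_1 = (nu^2 - 1)/delta_c).

DELTA_C = 1.686  # critical overdensity for spherical collapse

def hermite(n, x):
    """Probabilists' Hermite polynomial H_n(x) by recurrence."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def ps_bias(n, nu, delta_c=DELTA_C):
    """b^L_n = nu^(n-1) H_(n+1)(nu) / delta_c^n for the PS mass function."""
    return nu ** (n - 1) * hermite(n + 1, nu) / delta_c ** n

def ps_A(n, nu):
    """A_n = nu^n H_n(nu) for the PS mass function."""
    return nu ** n * hermite(n, nu)
```

For example, ps_bias(1, ν) reproduces (ν² − 1)/δ_c and ps_A(2, ν) reproduces ν²(ν² − 1), the PS entries of Table I.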
In the latter case, the multiplicity function f MF (ν) explicitly depends on the redshift, and the mass function is no longer 'universal'. The nonlocal nature of the halo bias in Lagrangian space is encoded in the second term in the RHS of Eq. (19), since the simple dependence on the window function of the first term appears even in the local bias models through the smoothed mass density field. In the large-scale limit, k 1 , k 2 , . . . , k n → 0, the second term in the RHS of Eq. (19) disappears and the renormalized bias functions reduce to scale-independent bias parameters, c (n) M ≃ b L n (M). This property is consistent with the peak-background split. However, the loop corrections in the iPT involve integrations over the wavevectors of the renormalized bias functions, and there is no reason to neglect the second term which represents nonlocal nature of Lagrangian bias of halos. The Eq. (19) is shown to be equivalent to the following expression [36], c (n) M (k 1 , . . . , k n ) = A n (M) δ c n W(k 1 R) · · · W(k n R) + A n−1 (M) σ M n δ c n d d ln σ M W(k 1 R) · · · W(k n R) σ M n .(28) For the PS mass function, there is an interesting relation, A n = ν 2 δ c n−1 b L n−1 , and in this case, the renormalized bias function c L n is expressible by lower-order parameters b L n−1 and b L n−2 , which is a reason why the scale-dependent bias in the presence of primordial non-Gaussianity is approximately proportional to the first-order bias parameter, b L 1 rather than the secondorder one, b L 2 [36]. However, this does not mean that c (n) M is independent on b L n , because b L n can be expressible by a linear combination of b L n−1 and b L n−2 in the PS mass function. In the expressions of renormalized bias functions, Eqs. (19) and (28), all the halos are assumed to have the same mass, M. These expressions apply when the mass range of halos in a given sample is sufficiently narrow. 
When the mass range is finitely extended, the expressions should be replaced by [36] c (n) φ (k 1 , . . . , k n ) = dM φ(M) n(M) c (n) M (k 1 , . . . , k n ) dM φ(M) n(M) ,(29)c (n) [M 1 ,M 2 ] (k 1 , . . . , k n ) = M 2 M 1 dM n(M) c (n) M (k 1 , . . . , k n ) M 2 M 1 dM n(M) .(30) III. EXPLICIT FORMULAS The auto power spectrum P X (k) of Eq. (17) is a special case of the cross power spectrum P XY (k) of Eq. (18) as the former is given by setting X = Y in the latter. It is general enough to give the formulas for the cross power spectrum below. In the following, we decompose Eq. (18) into the following form: P XY (k) = Π 2 (k) [R XY (k) + Q XY (k) + S XY (k)] ,(31) where Π(k) is given by Eq. (7) and R XY (k) =Γ (1) X (k)Γ (1) Y (k)P L (k),(32)Q XY (k) = 1 2 k 12 =kΓ (2) X (k 1 , k 2 )Γ (2) Y (k 1 , k 2 )P L (k 1 )P L (k 2 ),(33)S XY (k) = 1 2Γ (1) X (k) k 12 =kΓ (2) Y (k 1 , k 2 )B L (k, k 1 , k 2 ) + (X ↔ Y).(34) Three-dimensional integrals appeared in the above components of Eqs. (32)- (34) can be reduced to lower-dimensional integrals both in real space and in redshift space. Such dimensional reductions of the integrals are useful for practical calculations. The purpose of this section is to give explicit formulas for the above components Π, R XY , Q XY , S XY in terms of twodimensional integrals at most. The results of this section are applicable to any bias models, and do not depend on specific forms of renormalized bias functions, e.g., those explained in Sec. II B. A. The power spectra in real space In real space, the power spectrum is independent on the direction of wavevector k, and thus the components above Π(k), R XY (k), Q XY (k), S XY (k) are also independent on the direction. In this case, dimensional reductions of the integrals in Eqs. (32)- (34) are not difficult, because of the rotational symmetry. The vertex resummation factor Π(k) of Eq. 
(7) is given by Π(k) = exp − k 2 12π 2 d p P L (p) .(35) On small scales, this factor exponentially suppresses the power too much, and such a behavior is not physical. This property is a good indicator of which scales the perturbation theory should not be applied. However, the resummation of the vertex factor is not compulsory in the iPT. When the vertex factor is not resummed, one can expand the factor as Π(k) = 1 − k 2 12π 2 d p P L (p),(36) instead of Eq. (35) in the case of one-loop perturbation theory. In a quasi-linear regime, the resummed vertex factor of Eq. (35) gives better fit to N-body simulations in real space [22,25]. The expression of two-point propagator in Eq. (8) is straightforwardly obtained, substituting the Lagrangian kernels of Eqs. (11)-(14). Taking the z-axis of p as the direction of k, integrations by the azimuthal angle are trivial. Transforming the rest of integration variables as r = p/k and x =p ·k, we have two equivalent expressions, Γ (1) X (k) = 1 + c (1) X (k) + k 3 4π 2 ∞ 0 dr 1 −1 dxR X (k, r, x)P L (kr)(37)= 1 + c (1) X (k) + k 3 4π 2 ∞ 0 drR X (k, r)P L (kr),(38) wherê R X (k, r, x) = 5 21 r 2 (1 − x 2 ) 2 1 + r 2 − 2rx + 3 7 (1 − rx)(1 − x 2 ) 1 + r 2 − 2rx rx + r 2 c (1) X (kr) − rx c (2) X (k, kr; x),(39) andR X (k, r) = 6 + 5r 2 + 50r 4 − 21r 6 252r 2 + (1 − r 2 ) 3 (2 + 7r 2 ) 168r 3 ln 1 − r 1 + r + 3 + 8r 2 − 3r 4 28 + 3(1 − r 2 ) 3 56r ln 1 − r 1 + r c (1) X (kr) − r 1 −1 dx x c (2) X (k, kr; x). (40) In the above expressions, rotationally invariant arguments for c (2) X are used, i.e., c (2) X (k 1 , k 2 ) = c (2) X (k 1 , k 2 ; x),(41) where x =k 1 ·k 2 is the direction cosine between k 1 and k 2 . The second expression of Eq. (38) R XY (k) =Γ (1) X (k)Γ (1) Y (k)P L (k).(42) Evaluating the convolution integrals in Eqs. (33) and (34) with the three-point propagator of Eq. (16) is also straightforward in real space. Substituting the Lagrangian kernels of Eqs. (11) and (12) into Eq. 
(16), and transforming the integration variables as r = k 1 /k, x =k ·k 1 , we have Q XY (k) = k 3 8π 2 ∞ 0 dr 1 −1 dx r 2Γ(2) X (k, r, x)Γ (2) Y (k, r, x) × P L (kr)P L (ky) (43) and S XY (k) = k 3 8π 2Γ (1) X (k) ∞ 0 dr 1 −1 dx r 2Γ(2) Y (k, r, x) × B L (k, kr, ky) + (X ↔ Y),(44) where y = √ 1 + r 2 − 2rx,(45)andΓ (2) X (k, r, x) = − 4 7 1 − x 2 y 2 + x r 1 + c (1) X (ky) + 1 − rx y 2 1 + c (1) X (kr) + c (2) X (kr, ky; x),(46) The factorΓ (2) Y (k, r, x) is similarly given by substituting X → Y in Eq. (46). The functionΓ (2) X (k, r, x) is just the normalized three-point propagatorΓ (2) X (k 1 , k − k 1 ) as a function of transformed variables. All the necessary components to calculate the power spectrum of Eq. (31) in real space, P XY (k) = Π 2 (k) [R XY (k) + Q XY (k) + S XY (k)] ,(47)F (k, p) d 3 p (2π) 3 F (k, p)P L (p) Diagram L (3) (k, p, − p) 10 21 k k 2 R 1 (k) L (1) i (− p)L (2) j (k, p) 3 14 k i k j − k 2 δ i j k 4 R 1 (k) + 3 7 k i k j k 4 R 2 (k) L (2) (k, p)c (1) X (p) 3 7 k k 2 R X 3 (k) L (1) (− p)c (2) X (k, p) − k k 2 R X 4 (k)(k) = R X 1 (k) and R 2 (k) = R X 2 (k) , as these functions are independent on the bias. are given above, i.e., Eqs. (35) [or (36)], (42), (43) and (44). Numerical integrations of Eqs. (38) [or (37)], (43) and (44) are not difficult, once the model of renormalized bias functions c (n) X and primordial spectra P L (k), B L (k 1 , k 2 , k 3 ) are given. The last factor S XY (k) is absent in the case of Gaussian initial conditions. B. Kernel integrals Evaluations of power spectra in redshift space are more tedious than those in real space. The reason is that the power spectra depend on the lines-of-sight direction in redshift space. One cannot arbitrary choose the direction of z-axis in the three-dimensional integrations of Eqs. (8) and (32)- (34), because the rotational symmetry is not met. 
Even in such cases, an axial symmetry around the lines of sight remains, and the three-dimensional integrations can be reduced to two-or one-dimensional integrations as shown below. All the necessary techniques for such reductions are the same with those presented in Refs. [22,23], making use of rotational covariance. We summarize useful formulas for the reduction in this subsection. We assume the standard theory of gravity in the formula below, although the same technique may be applicable to other theories such as the modified gravity, etc. The first set of formulas is related to the two-point propa- gator Γ (1) X of Eq. (8). The results are summarized in Table II. The integrals of a form, d 3 p (2π) 3 F (k, p)P L (p),(48) where F (k, p) consists of LPT kernels L (n) and renormalized bias functions c (n) X , are reduced to one-dimensional integrals, R X n (k). The explicit formulas are given in Table II. In this Table, we denote R 1 (k) = R X 1 (k) and R 2 (k) = R X 2 (k), as these functions are independent on the bias. The functions R X n (k) are defined by three equivalent sets of equations, R X n (k) = d 3 p (2π) 3 R X n (k, p)P L (p) = k 3 4π 2 ∞ 0 dr 1 −1 dxR X n (r, x)P L (kr) = k 3 4π 2 ∞ 0 drR X n (r)P L (kr),(49) where integrands R X n (k, p),R X n (r, x), andR X n (r) are given in Table III. The last expression of Eq. (49) is the formula which is practically useful for numerical evaluations. The other expressions are shown to indicate origins of the integrals. If the second-order bias function c (2) X (k 1 , k 2 ) only depends on magnitudes of wavevectors k 1 and k 2 , and not on the relative angle µ 12 =k 1 ·k 2 , the fourth function generically vanishes: R X 4 (k) = 0. If the first-order bias function c (1) X is scale- independent, it is explicitly shown from the last expressions that R X 3 (k) = [R 1 (k) + R 2 (k)]c (1) X . 
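This last relation can be verified directly at the level of the one-dimensional integrands. The sketch below transcribes the R̂_1, R̂_2, and R̂_3 kernels of Table III (taking a constant c^(1)_X = 1) and checks that R̂_3(r) = R̂_1(r) + R̂_2(r) holds identically in r.

```python
import math

# Integrand-level check of R^X_3(k) = [R_1(k) + R_2(k)] c^(1)_X for a
# scale-independent first-order bias. Kernels transcribed from Table III
# with c^(1)_X set to 1; expressions are valid for r != 1.

def _log_term(r):
    return math.log(abs((1.0 - r) / (1.0 + r)))

def R1_hat(r):
    return (-(1 + r**2) * (3 - 14 * r**2 + 3 * r**4) / (24 * r**2)
            - (1 - r**2)**4 / (16 * r**3) * _log_term(r))

def R2_hat(r):
    return ((1 - r**2) * (3 - 2 * r**2 + 3 * r**4) / (24 * r**2)
            + (1 - r**2)**3 * (1 + r**2) / (16 * r**3) * _log_term(r))

def R3_hat(r):
    return ((3 + 8 * r**2 - 3 * r**4) / 12
            + (1 - r**2)**3 / (8 * r) * _log_term(r))
```

Both the rational and logarithmic parts cancel exactly, so the equality holds for any r, inside and outside the unit interval.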
Specifically, the functions R X 3 (k) and R X 4 (k) are redundant in the Lagrangian local bias models, in which renormalized bias functions c (n) X are scaleindependent. This is the reason only two functions R 1 (k) and R 2 (k) are needed in Ref. [23]. In general situations with Lagrangian nonlocal bias models, all four functions are needed. In a simple model of halo bias in this paper, the secondorder bias function c (2) X does not depend on the angle µ 12 and R 4 (k) = 0 in this case. The second set of formulas is related to the convolution integrals of the three-point propagators Γ (2) X in calculating the one-loop power spectrum. The integrals of the form, k 12 =k F (k 1 , k 2 )P L (k 1 )P L (k 2 ),(50) where F consists of LPT kernels L n and renormalized bias functions c (n) X and c (n) Y , are reduced to two-dimensional integrals, Q XY n (k). The explicit formulas are given Table IV. For the third and fifth formulas in this Table, the indices of the LPT kernels are symmetrized, since only symmetric combinations are used in this paper. In this Table, we denote Q n (k) = Q XY n (k) for n = 1, 2, 3, 4, as these functions are independent on the bias, and Q X n (k) = Q XY n (k) for n = 5, 6, 7, 8, 9, as these functions are only dependent on the bias of objects X. The functions Q XY n (k) are defined by two equivalent sets of equations, Q XY n (k) = k 12 =k Q XY n (k 1 , k 2 )P L (k 1 )P L (k 2 ) = k 3 4π 2 ∞ 0 dr 1 −1 dxQ XY n (r, x)P L (kr) × P L k √ 1 + r 2 − 2rx ,(51) where integrands Q XY n (k 1 , k 2 ),Q XY n (r, x) are given in Table V. The last expression of Eq. (51) is the formula which is practically useful for numerical evaluations. The first expressions are shown to indicate origins of the integrands. n R X n (k, p)R X n (r, x)R X n (r) The third set of formulas is related to the initial bispectrum, which is an indicator of primordial non-Gaussianity. 
The integrals of the form, 1 k 2 |k − p| 2       1 − k · p kp 2       2 r 2 (1 − x 2 ) 2 1 + r 2 − 2rx − (1 + r 2 )(3 − 14r 2 + 3r 4 ) 24r 2 − (1 − r 2 ) 4 16r 3 ln 1 − r 1 + r 2 (k · p) k · (k − p) p 2 |k − p| 2       1 − k · p kp 2       rx(1 − rx)(1 − x 2 ) 1 + r 2 − 2rx (1 − r 2 )(3 − 2r 2 + 3r 4 ) 24r 2 + (1 − r 2 ) 3 (1 + r 2 ) 16r 3 ln 1 − r 1 + r 3 k · (k − p) |k − p| 2       1 − k · p kp 2       c (1) X (p) r 2 (1 − rx)(1 − x 2 ) 1 + r 2 − 2rx c (1) X (kr) 3 + 8r 2 − 3r 4 12 + (1 − r 2 ) 3 8r ln 1 − r 1 + r c (1) X (kr) 4 k · p p 2 c (2) X (k, p) rx c (2) X (k, kr; x) r 1 −1 dx x c (2) X (k, kr; x)k 12 =k F (k 1 , k 2 )B L (k, k 1 , k 2 ),(52) where F consists of LPT kernels L n and renormalized bias functions c (n) X , are reduced to two-dimensional integrals, S X n (k). The explicit formulas are given in Table VI. In this Table, we denote S 1 (k) = S X 1 (k) and S 2 (k) = S X 2 (k), as these functions are independent on the bias. The functions S X n (k) are defined by two equivalent sets of equations, S X n (k) = k 12 =k S X n (k 1 , k 2 )B L (k, k 1 , k 2 ) = k 3 4π 2 ∞ 0 dr 1 −1 dxS X n (r, x) × B L k, kr, k √ 1 + r 2 − 2rx ,(53) where integrands S X n (k 1 , k 2 ),S X n (r, x) are given in Table VII. C. The power spectra in redshift space As all the necessary integral formulas are derived in the previous subsection, we are ready to write down the explicit formula of the power spectrum in redshift space. The decomposition of Eq. (31) is applicable in redshift space, and it is sufficient to give the explicit expressions for the functions Π(k), R XY (k), Q XY (k), S XY (k) in redshift space. These functions depends on not only the magnitude k but also the direction relative to the lines of sight. We employ the distant-observer approximation for the redshift-space distortions, and the lines of sight are fixed in the direction of the third axis,ẑ. Lagrangian kernels are replaced according to Eq. 
(15) in the formulas of propagators Eqs. (8) and (16). In those formulas, the Lagrangian kernels appear only in the form of k · L (n) . With the linear mapping of Eq. (15), we have k · L (n) → k · L s(n) = (k + n f µkẑ) · L (n) ,(54) where µ =k ·ẑ, andk = k/k. Thus, in the distant-observer approximation of this paper, the direction dependence comes into the formulas only through the direction cosine of Eq. (55). We denote the functions of Eqs. (32)-(34) as R XY (k, µ), Q XY (k, µ), S XY (k, µ) in the following. Substituting Eq. (54) into Eqs. (7), (8) and (16), one can see that evaluations of Eqs. (32)-(34) are straightforward by means of the integral formulas in the previous subsection. The results are explicitly presented in the following. The vertex resummation function of Eq. (7) can be evaluated by applying the same technique of the previous section. The relevant integral is (56) and the last integral is proportional to the Kronecker's delta. The proportional factor is evaluated by taking contraction of the indices. Consequently, we have d 3 p (2π) 3 k · L s(1) ( p) 2 P L (p) = (k i + f µkẑ i )(k j + f µkẑ j ) d 3 p (2π) 3 p i p j p 4 P L (p),Π(k, µ) = exp − 1 + f ( f + 2)µ 2 k 2 12π 2 d pP L (p) .(57) The two-point propagator of Eq. (8) with the substitution of Eq. (54) is evaluated by means of Table II, where R n (k) functions are defined by Eq. (49) and Table III. The result is given bŷ Γ (1) X (k, µ) = 1 + c (1) X + 5 21 R 1 + 3 7 R 2 + 3 7 R X 3 − R X 4 + 1 + 5 7 R 1 + 9 7 R 2 + 6 7 R X 3 − R X 4 f µ 2 − 3 7 R 1 f 2 µ 2 + 3 7 R 1 + 6 7 R 2 f 2 µ 4 .(58) The quantities c (1) X , R n , R X n on LHS are functions of k, although the arguments are omitted. The component R XY of Eq. (32) F (k 1 , k 2 ) k 12 =k F (k 1 , k 2 )P L (k 1 )P L (k 2 ) Diagram In the third and fifth formulas, the spatial indices are completely symmetrized. 
We denote Q n (k) = Q XY n (k) for n = 1, 2, 3, 4, as these functions are independent on the bias, and Q X n (k) = Q XY n (k) for n = 5, 6, 7, 8, 9, as these functions are only dependent on the bias of objects X. L (2) i (k 1 , k 2 )L (2) j (k 1 , k 2 ) 9 49 k i k j k 4 Q 1 (k) L (1) i (k 1 )L (1) j (k 2 )L (2) k (k 1 , k 2 ) 3 14 (k i k j − k 2 δ i j )k k k 6 Q 1 (k) + 3 7 k i k j k k k 6 Q 2 (k) L (1) (i (k 1 )L (1) j (k 1 )L (1) k (k 2 )L (1) l) (k 2 ) 3 8 k i k j k k k l − 2k 2 δ (i j k k k l) + k 4 δ (i j δ kl) k 8 Q 1 (k) − 1 2 k i k j k k k l − k 2 δ (i j k k k l) k 8 Q 3 (k) + k i k j k k k l k 8 Q 4 (k) L (1) i (k 1 )L (2) j (k 1 , k 2 )c (1) X (k 2 ) 3 7 k i k j k 4 Q X 5 (k) L (1) (i (k 1 )L (1) j (k 1 )L (1) k) (k 2 )c (1) X (k 2 ) − 1 2 k i k j k l − k 2 δ (i j k k) k 6 Q X 6 (k) + k i k j k k k 6 Q X 7 (k) L (2) (k 1 , k 2 )c (2) X (k 1 , k 2 ) 3 7 k k 2 Q X 8 (k) L (1) i (k 1 )L (1) j (k 2 )c (2) X (k 1 , k 2 ) 1 2 k i k j − k 2 δ i j k 4 Q X 8 (k) + k i k j k 4 Q X 9 (k) L (1) i (k 1 )L (1) j (k 2 )c (1) X (k 1 )c (1) Y (k 2 ) 1 2 k i k j − k 2 δ i j k 4 Q XY 10 (k) + k i k j k 4 Q XY 11 (k) L (1) i (k 1 )L (1) j (k 1 )c (1) X (k 2 )c (1) Y (k 2 ) − 1 2 k i k j − k 2 δ i j k 4 Q XY 12 (k) + k i k j k 4 Q XY 13 (k) L (1) (k 1 )c (1) X (k 2 )c (2) Y (k 1 , k 2 ) k k 2 Q XY 14 (k) c (2) X (k 1 , k 2 )c (2) Y (k 1 , k 2 ) Q XY 15 (k) is straightforwardly obtained by the above result of the twopoint propagator: R XY (k, µ) =Γ (1) X (k, µ)Γ (1) Y (k, µ)P L (k).(59) The tree-level contribution of the above equation is given by (b X + f µ 2 )(b Y + f µ 2 )P L (k) where b X = 1+c (1) X , and Kaiser's linear formula of redshift-space distortions for the power spectrum [57] is exactly reproduced. In calculating the mass power spectrum, X = Y = m, we only need terms with R 1 (k) and R 2 (k) and other terms R X 3 (k) and R X 4 (k) vanish since c (n) X = 0 for unbiased mass density field. The component Q XY (k, µ) of Eq. 
(33) is similarly evaluated, while the number of terms are larger. The result is given by Q XY (k, µ) = 1 2 n,m µ 2n f m q XY nm (k) + q YX nm (k) ,(60) where q XY 00 = 9 98 Q 1 + 3 7 Q 2 + 1 2 Q 4 + 6 7 Q X 5 + 2Q X 7 + 3 7 Q X 8 + Q X 9 + Q XY 11 + Q XY 13 + 2Q XY 14 + 1 2 Q XY 15 ,(61)q XY 11 = 18 49 Q 1 + 12 7 Q 2 + 2Q 4 + 18 7 Q X 5 + 6Q X 7 + 6 7 Q X 8 + 2Q X 9 + 2Q XY 11 + 2Q XY 13 + 2Q XY 14 ,(62)q XY 12 = − 3 14 Q 1 + 1 4 Q 3 + Q X 6 − 1 2 Q X 8 − 1 2 Q XY 10 + 1 2 Q XY 12 ,(63)q XY 22 = 57 98 Q 1 + 15 7 Q 2 − 1 4 Q 3 + 3Q 4 + 12 7 Q X 5 − Q X 6 + 6Q X 7 + 1 2 Q X 8 + Q X 9 + 1 2 Q XY 10 + Q XY 11 − 1 2 Q XY 12 + Q XY 13 ,(64)q XY 23 = − 3 7 Q 1 + 1 2 Q 3 + Q X 6 ,(65)n Q XY n (k 1 , k 2 ) [k = k 1 + k 2 ]Q XY n (r, x) y = (1 + r 2 − 2rx) 1/2 , µ = (x − r)/y 1       1 − k 1 · k 2 k 1 k 2 2       2 r 2 (1 − x 2 ) 2 y 4 2 (k · k 1 )(k · k 2 ) k 1 2 k 2 2       1 − k 1 · k 2 k 1 k 2 2       rx(1 − rx)(1 − x 2 ) y 4 3 k 4 − 6(k · k 1 )(k · k 2 ) k 1 2 k 2 2       1 − k 1 · k 2 k 1 k 2 2       (1 − 6rx + 6r 2 x 2 )(1 − x 2 ) y 4 4 (k · k 1 ) 2 (k · k 2 ) 2 k 1 4 k 2 4 x 2 (1 − rx) 2 y 4 5 k · k 1 k 1 2       1 − k 1 · k 2 k 1 k 2 2       c (1) X (k 2 ) rx(1 − x 2 ) y 2 c (1) X (ky) 6 k 2 − 3k · k 1 k 1 2       1 − k 1 · k 2 k 1 k 2 2       c (1) X (k 2 ) (1 − 3rx)(1 − x 2 ) y 2 c (1) X (ky) 7 k · k 1 k 1 2 2 k · k 2 k 2 2 c (1) X (k 2 ) x 2 (1 − rx) y 2 c (1) X (ky) 8       1 − k 1 · k 2 k 1 k 2 2       c (2) X (k 1 , k 2 ) r 2 (1 − x 2 ) y 2 c (2) X (kr, ky; µ) 9 (k · k 1 )(k · k 2 ) k 1 2 k 2 2 c (2) X (k 1 , k 2 ) rx(1 − rx) y 2 c (2) X (kr, ky; µ) 10       1 − k 1 · k 2 k 1 k 2 2       c (1) X (k 1 )c (1) Y (k 2 ) r 2 (1 − x 2 ) y 2 c (1) X (kr)c (1) Y (ky) 11 (k · k 1 )(k · k 2 ) k 1 2 k 2 2 c (1) X (k 1 )c (1) Y (k 2 ) rx(1 − rx) y 2 c (1) X (kr)c (1) Y (ky) 12 k 2 k 1 2       1 − k · k 1 kk 1 2       c (1) X (k 2 )c (1) Y (k 2 ) 1 − x 2 c (1) X (ky)c (1) Y 
(ky) 13 k · k 1 k 1 2 2 c (1) X (k 2 )c (1) Y (k 2 ) x 2 c (1) X (ky)c (1) Y (ky) 14 k · k 1 k 1 2 c (1) X (k 2 )c (2) Y (k 1 , k 2 ) rx c (1) X (ky)c (2) Y (kr, ky; µ) 15 c (2) X (k 1 , k 2 )c (2) Y (k 1 , k 2 ) r 2 c (2) X (kr, ky; µ) × c (2) Y (kr, ky; µ)F (k 1 , k 2 ) k 12 =k F (k 1 , k 2 )B L (k, k 1 , k 2 ) Diagram L (2) (k 1 , k 2 ) 3 7 k k 2 S 1 (k) L 1i (k 1 )L 1 j (k 2 ) 1 2 k i k j − k 2 δ i j k 4 S 1 (k) + k i k j k 4 S 2 (k) L (1) (k 1 )c (1) X (k 2 ) k k 2 S X 3 (k) c (2) X (k 1 , k 2 ) S X 4 (k) TABLE VI: Integral formulas for one-loop corrections, which are related to convolving three-point propagators with the linear bispectrum. We denote S 1 (k) = S X 1 (k) and S 2 (k) = S X 2 (k), as these functions are independent on the bias. n S X n (k 1 , k 2 ) [k = k 1 + k 2 ]S X n (r, x) y = (1 + r 2 − 2rx) 1/2 , µ = (x − r)/y 1 1 − k 1 · k 2 k 1 k 2 2 r 2 (1 − x 2 ) y 2 2 (k · k 1 )(k · k 2 ) k 1 2 k 2 2 rx(1 − rx) y 2 3 k · k 1 k 1 2 c (1) X (k 2 ) rx c (1) X (ky) 4 c (2) X (k 1 , k 2 ) r 2 c (2) X (kr, ky; µ)q XY 24 = 3 16 Q 1 ,(66)q XY 33 = 3 7 Q 1 + 6 7 Q 2 − 1 2 Q 3 + 2Q 4 − Q X 6 + 2Q X 7 ,(67)q XY 34 = − 3 8 Q 1 + 1 4 Q 3 ,(68)q XY 44 = 3 16 Q 1 − 1 4 Q 3 + 1 2 Q 4 ,(69) and other q XY nm (k)'s which are not listed above all vanish. The quantities Q n , Q X n , Q XY n are functions of k, although the arguments are omitted. The Q n functions of n = 1, . . . , 4, 10, . . . , 13, 15 are symmetric with respect to X ↔ Y, while those of n = 5, . . . , 9, 14 are not. In calculating cross power spectra, X Y, the symmetrization with respect to XY in Eq. (60) is necessary. In calculating auto power spectra, X = Y, two terms in the square bracket in Eq. (60) are the same, and can be replaced by 2q XX nm (k). In calculating the mass power spectrum, X = Y = m, we only need terms with Q 1 (k), . . . , Q 4 (k) and other terms Q X 5 (k), . . . , Q XY 15 (k) all vanish since c (n) X = 0 for unbiased mass density field. The component S XY (k, µ) of Eq. 
(34) is similarly evaluated. The result is given by S XY (k, µ) = 1 2Γ (1) X (k, µ) 3 7 S 1 + S 2 + 2S Y 3 + S Y 4 + 6 7 S 1 + 2S 2 + 2S Y 3 f µ 2 − 1 2 S 1 f 2 µ 2 + 1 2 S 1 + S 2 f 2 µ 4 + (X ↔ Y).(70) The quantities S n , S X n and S Y n are functions of k, although the arguments are omitted. The normalized two-point propagator Γ (1) X (k, µ) in Eq. (70) can be replaced by the tree-level term, 1 + c (1) X + f µ 2 , because the rest of the factor is already of oneloop order. All the necessary components to calculate the power spectrum of Eq. (31) in redshift space, P XY (k, µ) = Π 2 (k, µ) R XY (k, µ) + Q XY (k, µ) + S XY (k, µ) ,(71) are provided above, i.e., Eqs. (57), (59), (60) and (70). Numerical integrations of Eqs. (49), (51) and (53) are not difficult, once the model of renormalized bias functions c (n) X and primordial spectra P L (k), B L (k 1 , k 2 , k 3 ) are given. The last term S XY (k) is absent in the case of Gaussian initial conditions. D. Evaluating correlation functions We have derived full expressions of power spectra of biased tracers in the one-loop approximation. The correlation functions are obtained by Fourier transforming the power spectrum. In real space, the relation between the correlation function ξ XY (r) and the power spectrum P XY (k) is standard: ξ XY (r) = ∞ 0 k 2 dk 2π 2 j 0 (kr)P XY (k),(72) where j l (z) is the spherical Bessel function. For a numerical evaluation, it is convenient to first tabulate the values of power spectrum P XY (k) of Eq. (47) in performing the onedimensional integration of Eq. (72). In redshift space, multipole expansions of the correlation function are useful [58][59][60]. For reader's convenience, we summarize here the set of equations which is useful to numerically evaluate the correlation functions in redshift space from the iPT formulas of power spectra derived above. 
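Before turning to redshift space, note that the real-space transform of Eq. (72) is a single quadrature once P_XY(k) is tabulated. The sketch below uses a Gaussian toy spectrum P(k) = e^{−k²} (an assumption for illustration only; its transform is known in closed form) to check the accuracy of a simple quadrature.

```python
import math

# Numerical sketch of Eq. (72):
#   xi(r) = Int_0^inf k^2 dk/(2 pi^2) j_0(kr) P(k),  j_0(x) = sin(x)/x.
# The Gaussian toy spectrum below is an assumption for illustration;
# a real calculation would first tabulate P_XY(k) from Eq. (47).

def j0(x):
    return 1.0 if abs(x) < 1e-8 else math.sin(x) / x

def xi_from_pk(r, P, kmax=20.0, n=4000):
    h = kmax / n
    s = 0.0
    for i in range(1, n + 1):   # integrand vanishes at k = 0
        k = i * h
        s += k * k * j0(k * r) * P(k)
    return s * h / (2.0 * math.pi ** 2)

xi0 = xi_from_pk(0.0, lambda k: math.exp(-k * k))
# analytic transform of the Gaussian: xi(r) = exp(-r^2/4) / (8 pi^(3/2))
```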
The multipole expansion of the power spectrum in redshift space, P_XY(k,μ), with respect to the direction cosine relative to the line of sight has the form
\[ P_{XY}(k,\mu) = \sum_{l=0}^{\infty} p^{XY}_l(k)\, P_l(\mu), \tag{73} \]
where P_l(μ) is the Legendre polynomial. Inverting the above equation by the orthogonality relation of the Legendre polynomials, the coefficient p^{XY}_l(k) is given by
\[ p^{XY}_l(k) = \frac{2l+1}{2} \int_{-1}^{1} d\mu\, P_l(\mu)\, P_{XY}(k,\mu). \tag{74} \]
Because of the distant-observer approximation, the index l only takes even integers. The dependence on the direction μ of our power spectrum, P_XY(k,μ) of Eq. (71), appears in the form μ^{2n} e^{−αμ²}, where n = 0, 1, 2, … are non-negative integers. It is possible to reduce the integral of Eq. (74) analytically by using the identity
\[ \int_{-1}^{1} d\mu\, \mu^{2n} e^{-\alpha\mu^2} = \alpha^{-n-1/2}\, \gamma\!\left(n+\frac{1}{2}, \alpha\right), \tag{75} \]
where γ(z,p) is the lower incomplete gamma function defined by
\[ \gamma(z,p) = \int_0^p e^{-t}\, t^{z-1}\, dt. \tag{76} \]
Although the number of terms is large, it is straightforward to obtain the analytic expression of p^{XY}_l(k) of Eq. (74) in terms of Q_n(k), R_n(k), S_n(k), c^{(1)}_X(k), and the lower incomplete gamma function. Computer algebra such as Mathematica should be useful for that purpose. Alternatively, it is feasible to integrate the one-dimensional integral of Eq. (74) numerically for each k, once the functions Q_n(k), R_n(k), S_n(k), c^{(1)}_X(k) are precomputed and tabulated. The latter method is much simpler than the former. The multipole expansion of the correlation function in redshift space, ξ_XY(r,μ), with respect to the direction cosine relative to the line of sight is given by
\[ \xi_{XY}(r,\mu) = \sum_{l=0}^{\infty} \xi^{XY}_l(r)\, P_l(\mu), \tag{77} \]
\[ \xi^{XY}_l(r) = \frac{2l+1}{2} \int_{-1}^{1} d\mu\, P_l(\mu)\, \xi_{XY}(r,\mu). \tag{78} \]
Since the power spectrum P_XY(k,μ) and the correlation function ξ_XY(r,μ) are related by a three-dimensional Fourier transform, the corresponding multipoles are related by [58]
\[ \xi^{XY}_l(r) = i^{-l} \int_0^\infty \frac{k^2\,dk}{2\pi^2}\, j_l(kr)\, p^{XY}_l(k). \tag{79} \]
Since l is an even integer, the right-hand side of the above equation is real.
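The ingredients above are all elementary to check numerically. The sketch below (illustrative only; the Kaiser-like test spectrum and all parameter values are arbitrary choices, not from this paper) verifies the multipole projection of Eq. (74) against the classical Kaiser multipoles, the incomplete-gamma identity of Eq. (75), and the plane-wave expansion lemma behind Eq. (79):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre, gamma, gammainc, spherical_jn

def multipole(P_of_mu, ell, nquad=64):
    """Eq. (74): p_l = (2l+1)/2 * int_{-1}^{1} dmu P_l(mu) P(mu), by Gauss-Legendre."""
    mu, w = np.polynomial.legendre.leggauss(nquad)
    return 0.5 * (2 * ell + 1) * np.sum(w * eval_legendre(ell, mu) * P_of_mu(mu))

# Test case: Kaiser-like angular dependence (b + f mu^2)^2, whose multipoles
# are classical closed forms (not taken from this paper).
b, f = 2.0, 0.5
P = lambda mu: (b + f * mu**2) ** 2
assert np.isclose(multipole(P, 0), b**2 + 2 * b * f / 3 + f**2 / 5)
assert np.isclose(multipole(P, 2), 4 * b * f / 3 + 4 * f**2 / 7)
assert np.isclose(multipole(P, 4), 8 * f**2 / 35)

# Identity (75).  scipy's gammainc is the *regularized* lower incomplete gamma,
# so the unregularized gamma(z, p) is gammainc(z, p) * Gamma(z).
n, alpha = 3, 1.7
lhs = quad(lambda mu: mu ** (2 * n) * np.exp(-alpha * mu**2), -1.0, 1.0)[0]
rhs = alpha ** (-n - 0.5) * gammainc(n + 0.5, alpha) * gamma(n + 0.5)
assert np.isclose(lhs, rhs)

# Lemma behind Eq. (79): int_{-1}^{1} e^{i x mu} P_l(mu) dmu = 2 i^l j_l(x);
# for even l the coefficient is real, which is why the RHS of Eq. (79) is real.
l, x = 2, 1.3
re = quad(lambda mu: np.cos(x * mu) * eval_legendre(l, mu), -1, 1)[0]
im = quad(lambda mu: np.sin(x * mu) * eval_legendre(l, mu), -1, 1)[0]
assert np.isclose(re, -2 * spherical_jn(l, x))  # 2 i^2 j_2(x) = -2 j_2(x)
assert abs(im) < 1e-12
```

As the text notes, for production use one would tabulate the precomputed Q_n, R_n, S_n functions and apply either route; the closed forms above only validate the machinery.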
Once the multipoles of the power spectrum p^{XY}_l(k) are evaluated by either method described above and tabulated as a function of k, we obtain the multipoles of the correlation function ξ^{XY}_l(r) by a simple numerical integration of Eq. (79). Because the vertex resummation factor is exponentially damped at high k, the numerical integration of Eq. (79) is stable enough.

E. A sample comparison with numerical simulations

The purpose of this paper is to analytically derive explicit formulas for the one-loop power spectra in iPT, and a detailed analysis of the numerical consequences of the derived formulas is beyond the scope of this paper. In this subsection, we only present a sample comparison with halos in N-body simulations. In Fig. 4, correlation functions in real space are presented. The numerical halo catalogs in this figure are the same as the ones used in Sato & Matsubara (2011) [25, 61]. The N-body simulations are performed by a publicly available tree-particle mesh code, Gadget2 [62], with cosmological parameters Ω_M = 0.265, Ω_Λ = 0.735, Ω_b = 0.0448, h = 0.71, n_s = 0.963, σ₈ = 0.80. Other simulation parameters are given by the box size L_box = 1000 h⁻¹Mpc, the number of particles N_p = 1024³, the initial redshift z_ini = 36, the softening length r_s = 50 h⁻¹kpc, and the number of realizations N_run = 30. Initial conditions are generated by a code based on 2nd-order Lagrangian perturbation theory (2LPT) [63,64], and the initial spectrum is calculated by CAMB [65]. The halos are identified by a Friends-of-Friends algorithm [66] with a linking length of 0.2 times the mean separation. The output redshift of the halo catalog is z = 1.0, and the mass range of the selected halos is 4.11 × 10¹² h⁻¹M_⊙ ≤ M ≤ 12.32 × 10¹² h⁻¹M_⊙. In the upper panel, the auto- and cross-correlation functions of mass and halos, ξ_hh, ξ_mh, ξ_mm, are plotted.
Since the amplitude of the linear halo bias, b^L_1, predicted by the peak-background split in the simple halo model does not accurately reproduce the value of the halo bias in numerical simulations, we consider the value of the smoothing radius R (or mass M) in the simple model of the renormalized bias function as a free parameter. We approximately treat this freely fitted radius as a representative value, and ignore the finiteness of the mass range, e.g., Eq. (29). The same value of the radius is used in both the auto- and cross-correlations, ξ_hh and ξ_mh. We use a Gaussian window function W(kR) = e^{−k²R²/2}, although the shape of the window function does not change the predictions on large scales. There is no fitting parameter for the mass auto-correlation function ξ_mm. In the lower panel, scale-dependent bias parameters are plotted. Two definitions of the linear bias factor, ξ_hh/ξ_mm and ξ_mh/ξ_mm, are presented. The iPT predicts very similar curves for both definitions, and a slight scale dependence of the linear bias on BAO scales is suggested. Such scale dependence has already been predicted in models of Lagrangian local bias [23]. Unfortunately, the N-body simulations used in this comparison are not sufficiently large to quantitatively confirm the prediction for the scale-dependent bias. However, a recent N-body analysis of the MICE Grand Challenge run [67] shows qualitatively the same scale dependence. This observation exemplifies unique potentials of the method of iPT.

IV. RELATION TO PREVIOUS WORK

A. Lagrangian resummation theory

It is worth mentioning here the relation between the above formulas and the previous results of Ref. [23], in which the Lagrangian resummation theory (LRT) with local Lagrangian bias is developed. The iPT is a superset of LRT. The results of Ref.
[23] can be derived from the formulas in this paper by restricting to local Lagrangian bias and neglecting contributions from primordial non-Gaussianity, although the way the same results are derived is apparently different. The definitions of the Q_n and R_n functions in Ref. [23] are somewhat different from those in this paper. The notational correspondences are summarized in Table VIII. In Ref. [23], the linear density field δ_L and the biased density field in Lagrangian space δ^L_X are related by a local relation δ^L_X(q) = F(δ_L(q)) in Lagrangian configuration space. Fourier transforming this relation, the renormalized bias functions of Eq. (3) in models of local Lagrangian bias reduce to scale-independent parameters,
\[ c^{(n)}_X = F^{(n)}, \tag{80} \]
where F^{(n)} = ∂ⁿF/∂δ_Lⁿ is the nth derivative of the function F(δ_L). Thus the renormalized bias functions are independent of wavevectors in the case of local bias, and we have F′ = c^{(1)}_X and F″ = c^{(2)}_X, etc. It is explicitly shown that the results of Ref. [23] are exactly reproduced by setting X = Y and S_XY = 0, expanding the product (Γ̂^{(1)}_X)² in R_XX and adopting the replacement of variables according to Table VIII. In making such a comparison, the product Γ̂^{(1)}_X Γ̂^{(1)}_Y should be expanded up to the second-order terms in P_L(k) (i.e., one-loop terms). Thus, Eq. (31) is considered as a nontrivial generalization of the previous formula of Ref. [23]. Another previous formula, that of Ref. [22], is a special case of Ref. [23] without biasing. As a consequence, setting c^{(n)}_X = 0, S_XY = 0 in Eq. (31) reproduces the results of Ref. [22].

B. Scale-dependent bias and primordial non-Gaussianity

Contributions from the primordial bispectrum, if any, are included in S_XY. In the cases X = Y and X ≠ Y = m, the relations between the primordial bispectrum and the scale-dependent bias are already analyzed in Ref. [36] with generally nonlocal Lagrangian bias. In the presence of a primordial bispectrum, the scale-dependent bias emerges on very large scales [4,5]. The iPT generalizes the previous formulas of the scale-dependent bias with fewer approximations. The previous formulas of the scale-dependent bias [4,68–70], which are derived in the approximation of peak-background split for the halo bias, are exactly reproduced as limiting cases of the formula derived by iPT [36]. It should be noted that the formula of the scale-dependent bias in the framework of iPT is not restricted to a particular model of halo bias. Therefore the iPT provides the most general formula of the scale-dependent bias among previous work. The correspondence between the functions defined in Ref. [36] and those in this paper is summarized in Table VIII.

TABLE VIII: When the local Lagrangian bias is employed and primordial non-Gaussianity is not considered, the expression of the auto power spectrum (X = Y) in this paper reproduces the result of Ref. [23]. When contributions from the primordial non-Gaussianity are extracted, the results of Ref. [36] are reproduced. Correspondences between the functions defined in this paper and those defined in Refs. [23, 36] are provided in this table. The renormalized bias functions are constants in local bias models, and are denoted by F′ = c^{(1)}_X and F″ = c^{(2)}_X in Ref. [23].

This paper | Ref. [23] | Ref. [36]
R_1(k) | R_1(k)/P_L(k) | —
R_2(k) | R_2(k)/P_L(k) | —
R^X_3(k) | F′[R_1(k) + R_2(k)]/P_L(k) | —
R^X_4(k) | 0 | —
Q_1(k) | Q_1(k) | —
Q_2(k) | Q_2(k) | —
Q_3(k) | Q_4(k) − 6Q_2(k) | —
Q_4(k) | Q_3(k) | —
Q^X_6(k) | F′Q_6(k) | —
Q^X_7(k) | F′Q_7(k) | —
Q^X_8(k) | F″Q_8(k) | —
Q^X_9(k) | F″Q_9(k) | —
Q^{XX}_{10}(k) | F′²Q_8(k) | —
Q^{XX}_{11}(k) | F′²Q_9(k) | —
Q^{XX}_{12}(k) | F′²Q_{10}(k) | —
Q^{XX}_{13}(k) | F′²Q_{11}(k) | —
Q^{XX}_{14}(k) | F′F″Q_{12}(k) | —
Q^{XX}_{15}(k) | F″²Q_{13}(k) | —
S_1(k) | — | R_2(k)
S_2(k) | — | 2R_1(k) − R_2(k)
S^X_3(k) | — | Q_1(k)/2
S^X_4(k) | — | Q_2(k)
In this paper, the cross power spectrum of two differently biased objects, X and Y, is considered in general. One can derive the scale-dependent bias of the cross power spectrum P_XY(k) as illustrated below. In the following argument, the redshift-space distortions are neglected for simplicity, although it is straightforward to include them. We define the scale-dependent bias Δb_XY of the cross power spectrum by
\[ P_{XY}(k) = \bigl[b_{XY}(k) + \Delta b_{XY}(k)\bigr]^2 P_m(k), \tag{81} \]
where P_m(k) is the matter power spectrum, and b_XY(k) is the linear bias factor of the cross power spectrum without contributions from primordial non-Gaussianity. In the lowest-order approximation, b_XY(k) = [b_X(k)b_Y(k)]^{1/2}, where b_X(k) and b_Y(k) are the linear bias factors of the objects X and Y, respectively. When higher orders of Δb_XY are neglected, we have
\[ \Delta b_{XY} = \frac{1}{2}\, b_{XY}(k) \left[ \frac{\Delta P_{XY}(k)}{P^G_{XY}(k)} - \frac{\Delta P_m(k)}{P^G_m(k)} \right], \tag{82} \]
where P^G_XY(k) and P^G_m(k) are the Gaussian parts of the cross power spectrum and the auto power spectrum of mass, respectively, and ΔP_XY(k) and ΔP_m(k) are the corresponding contributions from primordial non-Gaussianity, so that the full spectra are given by P_XY(k) = P^G_XY(k) + ΔP_XY(k) and P_m(k) = P^G_m(k) + ΔP_m(k). On sufficiently large scales, nonlinear gravitational evolutions are not important, and the dominant contributions to the multi-point propagators are asymptotically given by [36]
\[ \Gamma^{(1)}_X(k) \approx b_X(k), \tag{83} \]
\[ \Gamma^{(2)}_X(k_1,k_2) \approx c^{(2)}_X(k_1,k_2), \tag{84} \]
where b_X(k) = 1 + c^{(1)}_X(k) is the linear bias factor of the object X. In this limit, Eq. (34) reduces to
\[ S_{XY}(k) \approx b_X(k) \int_{k_{12}=k} c^{(2)}_Y(k_1,k_2)\, B_L(k,k_1,k_2). \tag{85} \]
In the lowest-order approximation with a large-scale limit, the predictions of iPT are given by
\[ P^G_m(k) \approx P_L(k), \qquad P^G_{XY}(k) \approx b_X(k)\,b_Y(k)\,P_L(k), \tag{86} \]
\[ \Delta P_m(k) \approx 0, \qquad \Delta P_{XY}(k) \approx S_{XY}(k), \tag{87} \]
and we have b_XY(k) = [b_X(k)b_Y(k)]^{1/2} as previously noted. Substituting these equations into Eq.
(82), we have
\[ \Delta b_{XY}(k) \approx \frac{S_{XY}(k)}{2\sqrt{b_X(k)\, b_Y(k)}\; P_L(k)}. \tag{88} \]
This equation gives the formula of the scale-dependent bias for cross power spectra in general. In the case of the auto power spectrum with X = Y, the above equation reduces to a known result [36], Δb_X ≈ S_XX(k)/[2b_X(k)P_L(k)]. Previous formulas of the scale-dependent bias in the approximation of peak-background split are reproduced in limiting cases of this result, adopting the renormalized bias functions c^{(n)}_X in the nonlocal model of halo bias described in Sec. II B. The integral of Eq. (85) is scale-dependent according to the squeezed limit of the primordial bispectrum, B_L(k,k₁,k₂) with k ≪ k₁, k₂. Thus, the scale dependencies of the bias in cross power spectra are similar to those in auto power spectra; the amplitudes of the scale-dependent bias are different. When primordial non-Gaussianity is actually detected, the scale-dependent biases of cross power spectra of multiple kinds of objects would be useful to cross-check the detection.

C. Convolution Lagrangian perturbation theory

Recently, a further resummation method, called the convolution Lagrangian perturbation theory (CLPT) [53], was proposed on the basis of LRT. The implementation of the CLPT actually improves the nonlinear behavior on small scales, where the original LRT breaks down. The proposed CLPT is based on the LRT, in which only local Lagrangian bias can be incorporated. In light of the iPT, the resummation scheme of CLPT corresponds to resumming the diagrams depicted in Fig. 5. The shaded ellipse with the symbol 'C' represents a summation of all the possible connected diagrams. The actual ingredients are shown in Fig. 6 up to the one-loop approximation.
The corresponding function of this figure is given by
\[
\tilde\Lambda_{ij}(k) = -L^{(1)}_i(k)L^{(1)}_j(k)\,P_L(k) - \int \frac{d^3p}{(2\pi)^3}\, L^{(1)}_{(i}(k)\, L^{(3)}_{j)}(k,p,-p)\, P_L(p)\,P_L(k) - \frac{1}{2}\int_{k_{12}=k} L^{(2)}_i(k_1,k_2)\, L^{(2)}_j(k_1,k_2)\, P_L(k_1)\,P_L(k_2) - L^{(1)}_{(i}(k) \int_{k_{12}=k} L^{(2)}_{j)}(k_1,k_2)\, B_L(k,k_1,k_2). \tag{89}
\]
The indices i, j are symmetrized on the RHS of the above equation. This function is the same as C_ij(k) in Ref. [23], and −C_ij(k) in Ref. [22]. We refer to the graph of Fig. 6 and Eq. (89) as the "displacement correlator" below. To full orders, the displacement correlator Λ̃_ij(k) is given by
\[ \langle \Psi_i(k)\,\Psi_j(k') \rangle_c = -(2\pi)^3\, \delta^3_D(k+k')\, \tilde\Lambda_{ij}(k). \tag{90} \]
The expression of Eq. (89) is also obtained from this equation, adopting the one-loop approximation in the perturbative expansion of Eq. (9). Using the displacement correlator, the diagrams of Fig. 5 can be represented by a convolution integral of the form
\[
\sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, k_{i_1}\cdots k_{i_n}\, k_{j_1}\cdots k_{j_n} \int_{k_{1\cdots n}=k'} \tilde\Lambda_{i_1 j_1}(k_1)\cdots\tilde\Lambda_{i_n j_n}(k_n) = \int d^3q\; e^{-ik'\cdot q}\, \exp\bigl[-k_i k_j \Lambda_{ij}(q)\bigr], \tag{91}
\]
where k is the wave vector of the nonlinear power spectrum P_XY(k) to evaluate, k_{1⋯n} = k₁ + ⋯ + k_n is the total wave vector that flows through the resummed part of Fig. 5, and
\[ \Lambda_{ij}(q) = \int \frac{d^3k}{(2\pi)^3}\; e^{ik\cdot q}\, \tilde\Lambda_{ij}(k) \tag{92} \]
is the displacement correlator in configuration space. The convolution integral of Eq. (91) contributes multiplicatively to the evaluation of the power spectrum P_XY(k). The displacement correlator in configuration space, Eq. (92), is given in terms of the full-order displacement field Ψ(q) as
\[ \Lambda_{ij}(q) = -\langle \Psi_i(q_2)\,\Psi_j(q_1) \rangle_c, \tag{93} \]
where q = q₂ − q₁. This function is denoted C_ij(q)/2 in Ref. [53], and thus we have the correspondence
\[ C^{\rm CLPT}_{ij}(q) = 2\Lambda_{ij}(q). \tag{94} \]
In the CLPT, the vertex resummation factor is included in a function A_ij(q) = B_ij + C_ij(q) of their notation, where B_ij = 2σ²_η δ_ij and σ²_η = ⟨|Ψ|²⟩/3.
Thus we have the correspondence
\[ A^{\rm CLPT}_{ij}(q) = 2\sigma^2_\eta\, \delta_{ij} + 2\Lambda_{ij}(q). \tag{95} \]
The first term on the right-hand side corresponds to the vertex resummation in iPT, and is kept exponentiated in both the original LRT and the CLPT. The second term is kept exponentiated in CLPT and expanded in the original LRT formalism. When the Lagrangian local bias is assumed (c^{(1)}_X = F′, c^{(2)}_X = F″, …), and the convolution resummation of Fig. 5 is taken into account in the iPT, the formalism of CLPT is exactly reproduced. When nonlocal Lagrangian bias is allowed in the iPT with the convolution resummation, we obtain a natural extension of the CLPT that is not restricted to models of local Lagrangian bias. Extending this diagrammatic understanding of CLPT in the framework of iPT, it is possible to consider further convolution resummations that are not included in the formulation of CLPT. In the CLPT, only connected diagrams with two wavy lines (i.e., Fig. 6) are resummed. We define the three-point correlator of displacement, Λ̃_{ijk}(k₁,k₂,k₃) with k₁ + k₂ + k₃ = 0, by the connected diagrams with three wavy lines as shown in Fig. 7. This function is given by
\[ \langle \Psi_i(k_1)\,\Psi_j(k_2)\,\Psi_k(k_3) \rangle_c = -i\,(2\pi)^3\, \delta^3_D(k_1+k_2+k_3)\, \tilde\Lambda_{ijk}(k_1,k_2,k_3) \tag{96} \]
to the full order. This three-point correlator Λ̃_{ijk} is the same as −C_{ijk} in Ref. [23] and −iC_{ijk} in Ref. [22]. In a similar way to Fig. 5 and Eq. (91), including resummations of the three-point correlator modifies the convolution integral of Eq. (91) to
\[ \int d^3q\; e^{-ik'\cdot q}\, \exp\bigl[-k_i k_j \Lambda_{ij}(q) + k_i k_j k_k \Lambda_{ijk}(q)\bigr], \tag{97} \]
where k′ is the total wave vector that flows through the resummed part, and
\[ \Lambda_{ijk}(q) = \int \frac{d^3k}{(2\pi)^3}\; e^{ik\cdot q} \int \frac{d^3p}{(2\pi)^3}\; \tilde\Lambda_{ijk}(k,-p,p-k). \tag{98} \]
One can similarly consider four- and higher-point convolution resummations, which naturally arise in two- or higher-loop approximations.
However, it is not obvious whether or not progressively including such kinds of convolution resummations actually improves the description of the strongly nonlinear regime; comparisons with numerical simulations are necessary to check this. A detailed analysis of this type of extension of the iPT is beyond the scope of this paper, and can be considered as an interesting subject for future work.

D. Renormalized perturbation theory

Recent progress in improving the standard perturbation theory (SPT) was triggered by the proposal of the renormalized perturbation theory (RPT) [45,46]. Although this theory is formulated in Eulerian space, it has many features in common with iPT, in which resummations in terms of the Lagrangian picture play an important role. Below, we briefly discuss these common features. However, one should note that the purposes of developing RPT and iPT are not the same. The RPT formalism mainly focuses on describing nonlinear evolutions of the density and velocity fields of matter, extrapolating the perturbation theory in Eulerian space. The iPT formalism mainly focuses on consistently including biasing and redshift-space distortions in the perturbation theory from first principles as far as possible. The RPT (and its variants) is properly applicable only to unbiased matter clustering in real space (even though there are phenomenological approaches with freely fitted parameters, such as the model of Ref. [52], for example). Thus, the resummation methods in RPT can be compared only with a degraded version of iPT without biasing and redshift-space distortions.

Propagators in the high-k limit

An important ingredient of RPT is an interpolation scheme between the low-k and high-k limits of the multi-point propagator of mass, Γ^{(n)}_m(k₁,…,k_n) with k = k_{1⋯n}. Based on the Eulerian picture of perturbation theory, the high-k limit of the propagator is analytically evaluated as [46,47]
\[ \Gamma^{(n)}_m(k_1,\ldots,k_n) \approx \exp\left(-\frac{1}{2}k^2\sigma_d^2\right) F_n(k_1,\ldots,k_n) \tag{99} \]
in the fastest growing mode of the density field, where
\[ \sigma_d^2 = \frac{1}{6\pi^2}\int dk'\, P_L(k'), \tag{100} \]
F_n is the nth-order kernel function of SPT, and k = |k_{1⋯n}|. Although decaying modes and the velocity sector are also included in the original RPT formalism [45,46], we neglect them for our purpose of comparing RPT and iPT. The multi-point propagator in the iPT has the form of Eqs. (5) and (6) to full orders of perturbations. In the unbiased case, X = m, we have
\[ \Gamma^{(n)}_m(k_1,\ldots,k_n) = \Pi(k)\,\hat\Gamma^{(n)}_m(k_1,\ldots,k_n), \tag{101} \]
where Π(k) = ⟨e^{−ik·Ψ}⟩ is the vertex resummation factor. Eq. (101) should also have the same high-k limit as Eq. (99), since we are dealing with the same quantities. Although explicitly proving this property in the framework of iPT is beyond the scope of this paper, a natural expectation arises that the high-k limit of the resummation factor Π(k) is given by the exponential prefactor of Eq. (99), as discussed below. In the high-k limit, the factor e^{−ik·Ψ}, which is averaged over in the resummation factor, oscillates strongly as a function of the displacement field Ψ. Consequently, large values of the displacement field do not contribute to the statistical average, and the dominant contributions come from the regime |Ψ| ≲ k⁻¹. In the high-k limit, this condition corresponds to a weak-field limit of the displacement field, which is well described by the Zel'dovich approximation, Ψ̃(k) ≈ (ik/k²)δ_L(k). Assuming a Gaussian initial condition, higher-order cumulants of the displacement field in the Zel'dovich approximation are absent in Eq. (6). Since ⟨Ψ_iΨ_j⟩_c = δ_ij⟨|Ψ|²⟩/3 from rotational symmetry in real space, we have ⟨(k·Ψ)²⟩_c = k²⟨|Ψ|²⟩/3 = k²σ_d² in the Zel'dovich approximation. Thus we naturally expect
\[ \Pi(k) = \langle e^{-ik\cdot\Psi} \rangle \approx \exp\left(-\frac{1}{2}k^2\sigma_d^2\right) \tag{102} \]
in the high-k limit, which agrees with the exponential prefactor of Eq. (99). Assuming that the above expectation is correct, Eqs.
(99) and (101) suggest that the high-k limit of the normalized propagator is given by
\[ \hat\Gamma^{(n)}_m(k_1,\ldots,k_n) \approx F_n(k_1,\ldots,k_n), \tag{103} \]
i.e., the high-k limit of the normalized propagator is given by tree diagrams, and contributions from all loop corrections are subdominant. This is a nontrivial statement, since the normalized propagator contains non-zero loop corrections in each order. For example, taking the limit k → ∞ in Eq. (37) of the one-loop approximation, we have
\[ \hat\Gamma^{(1)}_m(k) \approx 1 + \frac{58}{315}\int_0^\infty \frac{p^2\,dp}{2\pi^2}\, P_L(p), \tag{104} \]
which is apparently different from F₁ = 1. Actually, the integral on the RHS is logarithmically divergent for a spectrum of cold-dark-matter type, which has the asymptote P_L(k) ∝ k⁻³ for k → ∞. Thus Eq. (103) apparently does not hold when the loop corrections are truncated at any order; Eq. (103) has a highly non-perturbative nature. This situation is natural, because the high-k limit of Eq. (102) is also highly non-perturbative. When that equation is truncated at any order, the high-k limit gives divergent terms, while the whole factor approaches zero. The same is true for the high-k limit in the RPT formalism, Eq. (99). Provided that Eq. (99) is true, Eqs. (102) and (103) are the same statement because of Eq. (101), which is the definition of the normalized propagator. The above argument is readily generalized to the case of non-Gaussian initial conditions. In the high-k limit of the RPT formalism, the exponential factor in Eq. (99) is replaced by [48]
\[ \exp\left(-\frac{1}{2}k^2\sigma_d^2\right) \rightarrow \left\langle e^{i\alpha(k)} \right\rangle = \exp\left[\sum_{n=2}^{\infty}\frac{i^n}{n!}\left\langle[\alpha(k)]^n\right\rangle_c\right], \tag{105} \]
where
\[ \alpha(k) \equiv -i\int \frac{d^3p}{(2\pi)^3}\; \frac{k\cdot p}{p^2}\; \delta_L(p). \tag{106} \]
It is clear that the RegPT prescription of Eq. (110) is equivalent to evaluating the unbiased propagator Γ^{(n)}_m by Eq. (101) in the framework of iPT, keeping only the lowest-order term in the vertex resummation factor Π(k) and expanding all the other higher-order terms from the exponent.
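These limiting statements can be probed with elementary numerics. The sketch below (illustrative only; the Gaussian toy spectrum, cutoffs, and all parameter values are arbitrary assumptions, not from this paper) checks three things: the Gaussian-displacement expectation behind Eq. (102), the logarithmic divergence noted below Eq. (104), and the quoted high-k and low-k limits of the one-loop kernel of Eq. (115) below:

```python
import numpy as np
from scipy.integrate import quad

# (i) Eq. (102): for a Gaussian displacement field, <exp(-i k.Psi)> is exactly
# exp(-k^2 <|Psi|^2>/6) = exp(-k^2 sigma_d^2/2).  Monte Carlo check, toy numbers:
rng = np.random.default_rng(12345)
s = 0.8                                     # per-component dispersion: <|Psi|^2> = 3 s^2
Psi = rng.normal(0.0, s, size=(2_000_000, 3))
k_vec = np.array([0.9, 0.0, 0.0])
Pi_mc = np.mean(np.exp(-1j * (Psi @ k_vec)))
Pi_exact = np.exp(-0.5 * (k_vec @ k_vec) * s**2)   # sigma_d^2 = <|Psi|^2>/3 = s^2
assert abs(Pi_mc.imag) < 2e-3 and abs(Pi_mc.real - Pi_exact) < 2e-3

# (ii) The loop integral of Eq. (104) for P_L(p) = p^-3: with a UV cutoff L it
# grows as ln(L)/(2 pi^2), i.e., it is logarithmically divergent.
I = lambda L: quad(lambda p: p**2 / (2 * np.pi**2) * p**-3, 1.0, L)[0]
for L1, L2 in [(10.0, 100.0), (100.0, 1000.0)]:
    assert np.isclose(I(L2) - I(L1), np.log(L2 / L1) / (2 * np.pi**2))

# (iii) The same damping scale appears in the MPTbreeze kernel, Eq. (115) below:
# delta Gamma^(1)_1-loop -> -k^2 sigma_d^2/2 at high k and -> 0 at low k.
P_L = lambda q: np.exp(-q * q)              # toy linear spectrum, arbitrary units

def dGamma_1loop(k):
    # the angle-averaged kernel depends only on |q|: d^3q/(2 pi)^3 -> q^2 dq/(2 pi^2)
    def kern(q):
        poly = 6*k**7*q - 79*k**5*q**3 + 50*k**3*q**5 - 21*k*q**7
        logt = 0.75 * (k**2 - q**2)**3 * (2*k**2 + 7*q**2) * np.log((k - q)**2 / (k + q)**2)
        return q**2 / (2 * np.pi**2) * P_L(q) * (poly + logt) / (504 * k**3 * q**5)
    pts = [k] if k < 6.0 else None          # kernel is non-smooth (but finite) at q = k
    return quad(kern, 1e-4, 6.0, points=pts, limit=400)[0]

sigma_d2 = quad(P_L, 0, np.inf)[0] / (6 * np.pi**2)   # Eq. (100)
assert abs(dGamma_1loop(0.05)) < 1e-3
assert abs(dGamma_1loop(20.0) / (-0.5 * 20.0**2 * sigma_d2) - 1.0) < 0.05
```

The toy spectrum's Gaussian cutoff makes both integrals convergent, which is what isolates the k⁻³ tail in part (ii) as the source of the divergence discussed in the text.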
In other words, the RegPT prescription is equivalent to the restricted iPT formalism in which the vertex resummations are truncated at the one-loop level (without biasing and redshift-space distortions).

Nonlinear interpolation II: MPTbreeze

There is another scheme for interpolating the nonlinear propagators, called MPTbreeze [50], which was originally employed for the two-point propagator in the RPT formalism [46]. This method is simpler than RegPT, in the sense that calculations of the interpolated propagators require only one-loop integrals. In the MPTbreeze prescription, the interpolated propagators are given by
\[ \Gamma^{(n)}_{\rm MPTbreeze}(k_1,\ldots,k_n) = F_n(k_1,\ldots,k_n)\, \exp\Bigl[\delta\Gamma^{(1)}_{\text{1-loop}}(k)\Bigr], \tag{114} \]
where the one-loop correction term of the two-point propagator in the growing mode is explicitly given by
\[
\delta\Gamma^{(1)}_{\text{1-loop}}(k) = \int \frac{d^3q}{(2\pi)^3}\, \frac{P_L(q)}{504\,k^3 q^5} \left[ 6k^7 q - 79k^5 q^3 + 50 k^3 q^5 - 21 k q^7 + \frac{3}{4}(k^2-q^2)^3 (2k^2+7q^2) \ln\frac{|k-q|^2}{|k+q|^2} \right]. \tag{115}
\]
The notations P₀(k), f(k), Γ^{(n)}_δ in Ref. [50] are related to our notations by P_L(k) = (2π)³D₊²(z)P₀(k), δΓ^{(1)}_{1-loop}(k) = D₊²(z)f(k) and Γ^{(n)}_{MPTbreeze} = Γ^{(n)}_δ/D₊ⁿ(z), where D₊(z) is the linear growth factor. Since δΓ^{(1)}_{1-loop}(k) → −k²σ_d²/2 in the high-k limit and δΓ^{(1)}_{1-loop}(k) → 0 in the low-k limit, Eq. (114) has the correct limits. It is worth noting that the prescription of Eq. (114) corresponds to replacing all the loop-correction terms of the propagators as
\[ \delta\Gamma^{(n)}_{N\text{-loop}}(k_1,\ldots,k_n) \rightarrow \frac{1}{N!}\Bigl[\delta\Gamma^{(1)}_{\text{1-loop}}(k)\Bigr]^N F_n(k_1,\ldots,k_n). \tag{116} \]
Both prescriptions of MPTbreeze and RegPT give similar results, and they agree with numerical simulations fairly well in the mildly nonlinear regime [50,51]. Thus the approximation of Eq. (116) turns out to be empirically good, although the physical origin of the goodness of this prescription is somewhat unclear. According to Eq. (37) or Eq.
(38), the iPT-normalized two-point propagator of mass is related to the function δΓ^{(1)}_{1-loop}(k) by
\[ \delta\Gamma^{(1)}_{\text{1-loop}}(k) = \delta\hat\Gamma^{(1)}_{\text{1-loop}}(k) - \frac{1}{2}k^2\sigma_d^2, \tag{117} \]
where δΓ̂^{(1)}_{1-loop} is the one-loop correction term that corresponds to the integral in Eq. (38) without bias, c^{(n)}_X = 0, i.e.,
\[ \delta\hat\Gamma^{(1)}_{\text{1-loop}}(k) = \frac{5}{21}R_1(k) + \frac{3}{7}R_2(k), \tag{118} \]
as seen in Eq. (58). Substituting Eq. (117) into Eq. (114), we have
\[ \Gamma^{(n)}_{\rm MPTbreeze} = F_n\, \exp\Bigl[\delta\hat\Gamma^{(1)}_{\text{1-loop}}(k)\Bigr] \exp\left(-\frac{1}{2}k^2\sigma_d^2\right). \tag{119} \]
Comparing this form with Eq. (112), the relation between the prescriptions of RegPT and MPTbreeze is explicit. The two prescriptions differ in the prefactor preceding the exponential damping factor: a truncation scheme is employed in RegPT, while a simple model of the higher-loop corrections is employed in MPTbreeze.

V. CONCLUSIONS

The iPT is a unique theory of cosmological perturbations to predict the observable spectra of biased tracers both in real space and in redshift space. This theory does not have any phenomenological free parameters once the bias model is fixed. In other words, all the uncertainties regarding biasing are packed into the renormalized bias functions c^{(n)}_X, and the weakly nonlinear gravitational evolutions of the spatial clustering of biased tracers are described by iPT without any ambiguity. In this way, the iPT separates the bias uncertainties from the weakly nonlinear evolutions of spatial clustering. The renormalized bias functions are evaluated for a given model of bias. Most physical models of bias, such as halo bias and peaks bias, fall into the category of Lagrangian bias. Redshift-space distortions are simpler to describe in the Lagrangian picture than in the Eulerian picture. The iPT is primarily based on the Lagrangian picture of perturbations, and therefore the effects of Lagrangian bias and redshift-space distortions are naturally incorporated in the framework of iPT.
In this paper, general expressions for the one-loop power spectra calculated from the iPT are presented for the first time. The cross power spectra of differently biased objects, P_XY(k), both in real space and in redshift space, are explicitly given in terms of at most two-dimensional integrals up to one-loop order. The final result in real space is given by Eq. (47) with Eqs. (35), (42), (43), (44), and that in redshift space is given by Eq. (71) with Eqs. (57), (59), (60) and (70). When the vertex resummation is not preferred, one can alternatively use Eq. (36) instead of Eq. (35). An example of the renormalized bias functions is given by Eq. (19) for a simple model of halo bias. The iPT is a nontrivial generalization of the method of Ref. [23], which is applicable only in the case that the Lagrangian bias is local and the initial condition is Gaussian. Although the derivations are quite different from each other, it is explicitly shown that the general iPT expression of the power spectrum exactly reduces to the expression of Ref. [23] in models of local Lagrangian bias and a Gaussian initial condition. The effects of primordial non-Gaussianity are included as well. The consequent results are consistent with those derived by the popular method of peak-background split. In fact, the iPT provides more accurate evaluations of the scale-dependent bias due to primordial non-Gaussianity [36]. In the present paper, both effects of gravitational nonlinearity and primordial non-Gaussianity are simultaneously included in a single expression of the biased power spectrum. Thus, the most general expressions of the power spectrum with leading-order (one-loop) nonlinearity and non-Gaussianity are newly obtained in this paper. In this paper, comparisons of the analytic expressions with numerical N-body simulations are quite limited.
In an accompanying paper [61], the results in the present paper are used in calculating the nonlinear auto- and cross-correlation functions of halos and mass, and are compared with numerical simulations, focusing on stochastic properties of bias. We have confirmed that the effects of nonlocal bias are small in the weakly nonlinear regime for Gaussian initial conditions. That is not surprising, because nonlocality in the halo bias is effective on scales of the halo mass indicated by Eq. (22); for example, R ≃ 0.7, 1.4, 3.1, 6.6 h⁻¹Mpc for M = 10¹¹, 10¹², 10¹³, 10¹⁴ h⁻¹M_⊙, respectively, while one-loop perturbation theory is applicable on scales ≫ 5–10 h⁻¹Mpc for z ≲ 3. Therefore, the predictions of iPT for Gaussian initial conditions in the one-loop approximation are almost the same as those of LRT with local Lagrangian bias [22], which have been compared in detail [25] with numerical simulations of halos both in real space and in redshift space. The nonlocality of halo bias should be important on small scales, and further investigations of the renormalized bias functions are an interesting extension of the present work. In the framework of iPT, the vertex resummation is naturally defined, resulting in the resummation factor Π(k) of Eq. (35) in real space or Eq. (57) in redshift space. The vertex resummation of iPT is closely related to other resummation methods, like RPT, which are formulated in Eulerian space. When the vertex resummation is truncated at one-loop order, the iPT without bias and redshift-space distortions gives a formalism equivalent to RegPT, a version of RPT with regularized multi-point propagators. Beyond the vertex resummations, the scheme of CLPT is readily applied to the framework of iPT, as discussed in Sec. IV C. Further resummation schemes of convolution can also be considered. It might be an interesting application of iPT to include those types of further resummations in the presence of nonlocal bias and redshift-space distortions.
Although the resummation technique has proven to be useful in the one-loop approximation, it is not trivial whether the same is true at arbitrary orders. The vertex resummation is not compulsory in iPT; rather, it is optional. The general form of the vertex resummation factor in iPT is given by Eq. (6). When this exponential function is expanded into polynomials, we obtain a perturbative expression of the power spectrum without resummation, which is an analogue of SPT. However, for evaluations of the correlation function, the exponential damping of the resummation factor stabilizes the numerical integrations of the Fourier transform, and the vertex resummation is therefore preferred.

[Fragment of a figure legend: "… the displacement field, a black dot represents nonlinear evolutions of the displacement field, a crossed circle represents the primordial spectra."]

The procedures for obtaining the cross polyspectra P^{(N)}_{X₁⋯X_N}(k₁,…,k_N) of different types of objects X₁,…,X_N are listed below. Auto polyspectra are obtained by just setting X₁ = ⋯ = X_N. The power spectrum is a special case of the polyspectra with N = 2.

1. Draw N square boxes with labels X_i (i = 1,…,N), each of which has a double solid line. Label each double solid line with an outgoing wavevector that corresponds to an argument of the polyspectra P^{(N)}_{X₁⋯X_N}.

2. Consider possible ways to connect all the square boxes by using wavy lines, solid lines, black dots and crossed circles, satisfying the following constraints:
(a) One end of a wavy line should be connected to a square box, and the other end should be connected to a black dot.
(b) One end of a solid line should be connected to a crossed circle, and the other end should be connected to either a square box or a black dot.
(c) Only one wavy line can be attached to a black dot, while an arbitrary number of solid lines can be attached to a black dot.
(d) A piece of a graph which is connected to a single square box with only wavy lines or with only solid lines is not allowed.

3.
Label each (solid and wavy) line with a wavevector and its direction. The wavevectors should be conserved at each vertex of square box, black dot, and crossed circle. Label each wavy line with spatial index together with a wavevector. 4. Apply the diagrammatic rules of Figs. 8 and 9 to every distinct graphs. Integrate over wavevectors as d 3 k ′ i /(2π) 3 , where k ′ i are not determined by constraints of wavevector conservation at vertices. 6. When there are m equivalent pieces in a graph, put a statistical factor 1/m! for each set of equivalent pieces. 7. Sum up all the contributions from every distinct graphs up to necessary orders of perturbations. The rule 2.-(d) is due to partial resummations of the square box. For example, the left diagram of Fig. 10 is allowed. There is a piece of graph that is connected to a single square box with both wavy and solid lines. However, the right diagram of Fig. 10 is not allowed, because of double reasons. One is that the upper piece of graph is connected to a single square box with only wavy lines. The other is that the lower piece of graph with only solid lines connected to a single square box. Each reason itself prohibits this diagram from counted. FIG. 1 : 1The diagrammatic representation of the two-point propagator with partially resummed vertex up to one-loop contributions. Fig. 2 FIG. 2 :FIG. 3 : 223in the same order. When the mapping of Eq. (15) is applied to every displacement kernels in Eq.(16), the expression of three-point propagator in redshift space is obtained. The three-point propagator of mass, Γ(2) m , is given by just substituting c (n) X = 0 in Eq.(16). The diagrammatic representation of the three-point propagator with partially resummed vertex at the tree-level contribution. The diagrammatic representation of the power spectrum up to one-loop approximation. where n(M) is the halo mass function of Eq. (26), φ(M) is a selection function of mass. 
For a simple example, when the mass of halos are selected by a finite range [M 1 , M 2 ], we have FIG. 4 : 4The correlation functions in real space. The prediction of one-loop iPT is compared with numerical simulations. The results of mass auto-correlation, ξ mm , halo auto-correlation, ξ hh , and mass-halo cross-correlation, ξ mh are compared in the above panel.Dashed lines represent the predictions of linear theory, solid lines represent those of one-loop iPT, and symbols with error bars represent the results of numerical simulations. In the bottom panel, scale-dependent bias parameters, which are defined by ξ hh /ξ mm for auto-correlations and ξ mh /ξ mm for cross-correlations, are plotted. Predictions of iPT are given by solid line for auto-correlations and by dotted line for crosscorrelations. These two lines are almost overlapped and indistinguishable. The horizontal dashed line corresponds to the prediction of linear theory with a constant bias factor. the predictions of one-loop iPT agree well with N-body simulations on scales 30 h −1 Mpc where the perturbation theory is applicable. FIG. 5 :FIG. 6 : 56Diagrammatic representation of the resummation scheme of CLPT[53]. The original CLPT does not include the effects of nonlocal bias, and can be easily extended to include them by applying the formalism of iPT and the resummation of this type of diagrams. Ingredients of the displacement correlator. All the diagrams up to one-loop approximation are shown. These diagrams are resummed in CLPT. FIG. 7 : 7Connected diagrams with three wavy lines up to tree-level approximation. These diagrams are not resummed in CLPT. L (k 1 , k 2 , . . . , k n ) FIG. 9: Diagrammatic rules of iPT: primordial spectra. of Fourier transform, and therefore the vertex resummation is preferred. The nonlocal model of halo bias [36] explained in Sec. II B is still primitive. There are plenty of rooms to improve the model of nonlocal bias in future work. 
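Rule 6 above is simple bookkeeping; as a toy illustration (my own, not code from the paper), the combined statistical factor of a diagram follows from the multiplicities of its equivalent pieces:

```python
import math
from collections import Counter


def statistical_factor(pieces):
    """Product of 1/m! over each set of m equivalent pieces in a diagram.
    `pieces` is any iterable of hashable labels; equal labels are treated
    as equivalent pieces."""
    factor = 1.0
    for m in Counter(pieces).values():
        factor /= math.factorial(m)
    return factor


# A diagram with two equivalent loop pieces and one distinct tree piece:
print(statistical_factor(["loop", "loop", "tree"]))  # 1/2! * 1/1! = 0.5
```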
The iPT provides a natural framework to separate tractable problems of weakly nonlinear evolutions of biased tracers from difficult problems of fully nonlinear phenomena of biasing.

FIG. 10: Examples of an external vertex which is allowed (left) and not allowed (right) in iPT.

TABLE I: Functions b^L_n(M), A_n(M) derived from several models of the mass function.

... is obtained by analytically integrating the variable x in the first expression of Eq. (37). Both expressions are suitable for numerical evaluations. With the expression of Eq. (37) or (38), we have ...

TABLE II: Integral formulas for one-loop corrections, which are related to the two-point propagator. We denote R_1 [...]

TABLE III: Integrands for the functions R^X_n(k) of Eq. (49).

TABLE IV: Integral formulas for one-loop corrections, which are related to convolving three-point propagators.

TABLE V: Integrands for the functions Q^{XY}_n(k) of Eq. (51).

TABLE VII: Integrands for the functions S^X_n(k) of Eq. (53).

Acknowledgments

I thank Masanori Sato for providing numerical data of power spectra and correlation functions from the N-body simulations used in this paper. I acknowledge support from the Ministry of Education, Culture, Sports, Science, and Technology, Grant-in-Aid for Scientific Research (C), 21540267, 2012.

Comparing these equations of RPT with Eqs. (6) and (9) of iPT, there are correspondences, [...] where Ψ^(1) is the linear displacement field in configuration space at the origin. Since Eq. (101) holds in non-Gaussian initial conditions as well, the high-k limit of iPT, Eq. (102), is replaced by [...] which agrees with the replacement of RPT, Eq. (105). Since only the exponential factor is replaced in Eqs. (99) and (102), the high-k limit of Eq. (103) does not change even in the case of non-Gaussian initial conditions.

Nonlinear interpolation I: RegPT

In the RPT formalism, the nonlinear propagator is approximated by analytically interpolating between the behaviors in the high-k limit and the low-k limit [46,47,50]. There are at least two prescriptions for the interpolation. The interpolation scheme of Refs. [49,51,52], called RegPT, uses a prescription for the multi-point propagator truncated at the N-loop order as [Eq. (110)], where δΓ^(n)_{M-loop} is the M-loop correction term of the propagator and C.T. is a counterterm chosen so that the N-loop expression is exact in both limits. The tree-level multi-point propagators are the same as the kernel functions in SPT, i.e., Γ^(n)_tree = F_n. It is apparent that Eq. (110) has the correct low-k limit. In the high-k limit, we have δΓ^(n)_{M-loop} ≈ (−k²σ_d²/2)^M F_n/M! according to Eq. (99). In this limit, it can be shown by induction that all the loop corrections in the first parentheses of Eq. (110), including the counterterm, remarkably cancel each other, leaving only the tree-level contribution F_n. Thus Eq. (110) also has the correct high-k limit, Γ^(n)_RegPT → F_n exp(−k²σ_d²/2) for k → ∞. The RegPT prescription of Eq. (110) can be re-expressed in a more compact form including the counterterm as [Eq. (111)], where δΓ^(n)_{0-loop} ≡ F_n, and [···]|_truncated indicates a truncation up to a given order after completely expanding the exponential factor.

The RegPT prescription of Eq. (110) can be compared with Eq. (101) in the iPT formalism. On one hand, applying a Taylor expansion of the resummation factor Π and truncating at the n-loop order gives the same result as the n-loop SPT. On the other hand, the lowest-order approximation of the resummation factor is given by Eq. (35) in real space, i.e., Π(k) ≈ exp(−k²σ_d²/2), which is accidentally the same as the exponential factor in the high-k limit of Eq. (99).
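At one loop, the RegPT prescription of Eq. (111) for the two-point propagator reduces to Γ_reg(k) = e^{−k²σ_d²/2} [Γ_tree + δΓ_1-loop(k) + (k²σ_d²/2) Γ_tree]. The two limits discussed above can be checked numerically (a sketch: the mock one-loop correction below is an invented interpolating function that merely satisfies δΓ_1-loop → −(k²σ_d²/2)Γ_tree at high k; it is not the physical loop integral, and σ_d is an arbitrary toy value):

```python
import math

SIGMA_D = 2.0  # toy displacement dispersion (arbitrary units)


def mock_one_loop(k):
    """Toy stand-in for the one-loop propagator correction: it vanishes at
    low k and approaches -k^2 sigma_d^2 / 2 at high k, mimicking Eq. (99)."""
    x = 0.5 * k**2 * SIGMA_D**2
    return -x * (1.0 - math.exp(-(k**2)))


def gamma_regpt_1loop(k):
    """One-loop RegPT prescription: exp(-x) * (tree + 1-loop + x * tree),
    with the tree-level propagator normalized to 1 and x = k^2 sigma_d^2 / 2."""
    x = 0.5 * k**2 * SIGMA_D**2
    return math.exp(-x) * (1.0 + mock_one_loop(k) + x)


# Low-k limit: reduces to standard perturbation theory, 1 + delta_Gamma_1loop.
k = 0.01
print(gamma_regpt_1loop(k), 1.0 + mock_one_loop(k))

# High-k limit: pure exponential damping, Gamma -> exp(-k^2 sigma_d^2 / 2).
k = 3.0
print(gamma_regpt_1loop(k) / math.exp(-0.5 * k**2 * SIGMA_D**2))  # close to 1
```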
From these observations, it is now [...]

Appendix A: Diagrammatic rules

A set of diagrammatic rules of iPT used in this paper is summarized in this Appendix. The full set of rules and their derivations are found in Ref. [21]. The relevant diagrammatic rules are shown in Figs. 8 and 9. The physical meanings of the graphs are as follows: a double solid line corresponds to the number density field δ_X(k), a square box represents partial resummations of dynamics and biasing, a wavy line represents the displacement field, a black dot represents nonlinear evolutions of the displacement field, and a crossed circle represents the primordial spectra.

References

D. J. Eisenstein, W. Hu, and M. Tegmark, Astrophys. J. Letters, 504, L57 (1998).
T. Matsubara, Astrophys. J., 615, 573 (2004).
D. J. Eisenstein et al., Astrophys. J., 633, 560 (2005).
N. Dalal, O. Doré, D. Huterer and A. Shirokov, Phys. Rev. D 77, 123514 (2008).
S. Matarrese and L. Verde, Astrophys. J. Letters, 677, L77 (2008).
A. Slosar, C. Hirata, U. Seljak, S. Ho and N. Padmanabhan, J. Cosmol. Astropart. Phys., 8, 31 (2008).
A. Taruya, K. Koyama and T. Matsubara, Phys. Rev. D 78, 123534 (2008).
V. Desjacques, U. Seljak and I. T. Iliev, Mon. Not. R. Astron. Soc., 396, 85 (2009).
K. S. Dawson et al., Astron. J., 145, 10 (2013).
D. Schlegel et al., arXiv:1106.1706 (2011).
LSST Science Collaborations: P. A. Abell et al., arXiv:0912.0201 (2009).
R. Ellis et al., arXiv:1206.0737 (2012).
R. Laureijs et al., arXiv:1110.3193 (2011).
A. F. Heavens, S. Matarrese, and L. Verde, Mon. Not. R. Astron. Soc., 301, 797 (1998).
R. Scoccimarro, H. M. P. Couchman, and J. A. Frieman, Astrophys. J., 517, 531 (1999).
A. Taruya, Astrophys. J., 537, 37 (2000).
P. McDonald, Phys. Rev. D 74, 103512 (2006); 74, 129901(E) (2006).
D. Jeong and E. Komatsu, Astrophys. J., 691, 569 (2009).
T. Matsubara, Phys. Rev. D 83, 083518 (2011).
T. Matsubara, Phys. Rev. D 77, 063530 (2008).
T. Matsubara, Phys. Rev. D 78, 083519 (2008); 78, 109901(E) (2008).
T. Okamura, A. Taruya and T. Matsubara, arXiv:1105.1491.
M. Sato and T. Matsubara, Phys. Rev. D 84, 043501 (2011).
W. H. Press and P. Schechter, Astrophys. J., 187, 425 (1974).
J. R. Bond, S. Cole, G. Efstathiou, and N. Kaiser, Astrophys. J., 379, 440 (1991).
H. J. Mo and S. D. M. White, Mon. Not. R. Astron. Soc., 282, 347 (1996).
H. J. Mo, Y. P. Jing, and S. D. M. White, Mon. Not. R. Astron. Soc., 284, 189 (1997).
R. K. Sheth and G. Tormen, Mon. Not. R. Astron. Soc., 308, 119 (1999).
R. Scoccimarro, R. K. Sheth, L. Hui, and B. Jain, Astrophys. J., 546, 20 (2001).
A. Cooray and R. Sheth, Phys. Rep., 372, 1 (2002).
K. C. Chan, R. Scoccimarro and R. K. Sheth, Phys. Rev. D 85, 083509 (2012).
K. C. Chan and R. Scoccimarro, Phys. Rev. D 86, 103519 (2012).
R. K. Sheth, K. C. Chan and R. Scoccimarro, arXiv:1207.7117 (2012).
T. Matsubara, Phys. Rev. D 86, 063518 (2012).
T. Buchert, Astron. Astrophys., 223, 9 (1989).
F. Moutarde, J.-M. Alimi, F. R. Bouchet, R. Pellat, and A. Ramani, Astrophys. J., 382, 377 (1991).
T. Buchert, Mon. Not. R. Astron. Soc., 254, 729 (1992).
P. Catelan, Mon. Not. R. Astron. Soc., 276, 115 (1995).
R. Juszkiewicz, Astron. Astrophys., 298, 643 (1995).
C. Rampf and T. Buchert, J. Cosmol. Astropart. Phys., 6, 21 (2012).
T. Tatekawa, Progress of Theoretical and Experimental Physics, 2013, id.013E03 (2013).
T. Nishimichi, T. Matsubara, A. Taruya, in prep.
M. B. Wise, Astrophys. J., 311, 6 (1986).
F. Bernardeau, S. Colombi, E. Gaztañaga, and R. Scoccimarro, Phys. Rep., 367, 1 (2002).
M. Crocce and R. Scoccimarro, Phys. Rev. D 73, 063519 (2006).
M. Crocce and R. Scoccimarro, Phys. Rev. D 73, 063520 (2006).
F. Bernardeau, M. Crocce and R. Scoccimarro, Phys. Rev. D 78, 103521 (2008).
F. Bernardeau, M. Crocce and E. Sefusatti, Phys. Rev. D 82, 083507 (2010).
F. Bernardeau, M. Crocce and R. Scoccimarro, Phys. Rev. D 85, 123519 (2012).
M. Crocce, R. Scoccimarro and F. Bernardeau, Mon. Not. R. Astron. Soc., 427, 2537 (2012).
A. Taruya, F. Bernardeau, T. Nishimichi, S. Codis, Phys. Rev. D 86, 103528 (2012).
A. Taruya, T. Nishimichi and F. Bernardeau, arXiv:1301.3624 (2013).
J. Carlson, B. Reid and M. White, Mon. Not. R. Astron. Soc., 429, 1674 (2013).
T. Matsubara, Astrophys. J. Suppl. Ser., 101, 1 (1995).
M. S. Warren, K. Abazajian, D. E. Holz, L. Teodoro, Astrophys. J., 646, 881 (2006).
M. Crocce, P. Fosalba, F. J. Castander and E. Gaztañaga, Mon. Not. R. Astron. Soc., 403, 1353 (2010).
N. Kaiser, Mon. Not. R. Astron. Soc., 227, 1 (1987).
A. J. S. Hamilton, The Evolving Universe, 231, 185 (1998) (astro-ph/9708102).
A. J. S. Hamilton, Astrophys. J. Letters, 385, L5 (1992).
S. Cole, K. B. Fisher and D. H. Weinberg, Mon. Not. R. Astron. Soc., 267, 785 (1994).
M. Sato and T. Matsubara, Phys. Rev. D 87, 123523 (2013).
V. Springel, Mon. Not. R. Astron. Soc., 364, 1105 (2005).
M. Crocce, S. Pueblas and R. Scoccimarro, Mon. Not. R. Astron. Soc., 373, 369 (2006).
P. Valageas and T. Nishimichi, Astron. Astrophys., 527, A87 (2011).
A. Lewis, A. Challinor and A. Lasenby, Astrophys. J., 538, 473 (2000).
M. Davis, G. Efstathiou, C. S. Frenk and S. D. M. White, Astrophys. J., 292, 371 (1985).
M. Crocce, F. J. Castander, E. Gaztanaga, P. Fosalba and J. Carretero, arXiv:1312.2013 (2013).
F. Schmidt and M. Kamionkowski, Phys. Rev. D 82, 103002 (2010).
V. Desjacques, D. Jeong and F. Schmidt, Phys. Rev. D 84, 061301 (2011).
V. Desjacques, D. Jeong and F. Schmidt, Phys. Rev. D 84, 063512 (2011).
Uncloneable Decryptors from Quantum Copy-Protection
Or Sattath (Computer Science Department, Ben-Gurion University)
Shai Wyborski (Computer Science Department, Ben-Gurion University; School of Computer Science and Engineering, The Hebrew University of Jerusalem)
Uncloneable decryptors are encryption schemes (with classical plaintexts and ciphertexts) with the added functionality of deriving uncloneable quantum states, called decryptors, which could be used to decrypt ciphers without knowledge of the secret key [GZ20]. We study uncloneable decryptors in the computational setting and provide increasingly strong security notions which extend the various indistinguishability security notions of symmetric encryption. We show that CPA secure uncloneable bit decryptors could be instantiated from a copy protection scheme [Aar09] for any balanced binary function. We introduce a new notion of flip detection security for copy protection schemes, inspired by the notions of left or right security for encryption schemes, and show that it could be used to instantiate CPA secure uncloneable decryptors for messages of unrestricted length. We then show how to strengthen the CPA security of uncloneable decryptors to CCA2 security using strong EUF-CMA secure digital signatures. We show that our constructions could be instantiated relative to either the quantum oracle used in [Aar09] or the classical oracle used in [ALL+21] to instantiate copy protection schemes. Our constructions are the first to achieve CPA or CCA2 security in the symmetric setting.
DOI: 10.48550/arXiv.2203.05866
PDF: https://arxiv.org/pdf/2203.05866v2.pdf
Corpus ID: 247411099
arXiv: 2203.05866
SHA: 65a7796d26adce8aba4bffe3885b396f1c31095a
Uncloneable Decryptors from Quantum Copy-Protection
Or Sattath and Shai Wyborski
March 16, 2022

Proposition 31. If QCP_BBF is WEAK-QCP^{n,k} secure (see Definition 10) then the scheme 1UD_cpa (see Construction 1) is UD1-qCPA^{n,k} secure (see Definition 21).

Proof. By Lemma 17, QCP_BBF is WEAK-QCP-RIA^{n,k} secure. Assume the QPT adversary A = (P, D_1, ..., D_{n+k}) satisfies that [...]; we construct QPT procedures A′ = (P′, F_1, ..., F_{n+k}) for which E[WEAK-QCP-RIA^{n,k}_{QCP_BBF}(A′, λ)] = µ − negl(λ).

Introduction

Consider a content provider who wishes to broadcast her content over satellite. How could she only allow paying customers to access their content? Classically, the broadcast data is encrypted, and each customer is furnished with a set-top box that contains the decryption key. This approach protects the data from eavesdroppers, but it has an obvious flaw: a hacker with access to just one set-top box could extract the secret key and create as many set-top boxes as they desire. Indeed, current solutions rely on either security by obscurity or authenticating the user, which requires two-way communication (and is therefore unsuitable for satellite broadcast communication). Uncloneable decryptors encryption schemes (or simply uncloneable decryptors) solve this problem. Rather than storing a secret key, the set-top box stores a quantum decryptor. The uncloneability of the decryptors prevents a pirate, even one with access to many set-top boxes, from creating a larger number of boxes that could decode the broadcast content. The notion of uncloneable decryptors makes sense for symmetric and asymmetric encryption schemes. In this work we concentrate on the former. A central component in our constructions is quantum copy protection [Aar09, ALL+21]. Quantum copy protection is the practice of compiling a function into a quantum state, called the copy protected program, which is useful to evaluate the function. The security of such schemes is defined to prohibit any pirate with access to n copies of the program from creating n + 1 programs of their own, such that each program could be used to evaluate the function on a given input with a non-negligible advantage over an adversary which outputs a uniformly random guess (the security is defined with respect to the distribution the function and input are sampled from). We formally introduce and discuss this notion in Section 3.
Results

In this work, we study uncloneable decryptors, a primitive that augments symmetric encryption with the functionality of deriving uncloneable quantum states, which we call decryptors, that can be used to decrypt encrypted messages without the secret key. The study of uncloneable decryption was initiated in [GZ20] (which we survey in more detail in Section 1.2), where the notion of single decryptor schemes was first defined. We extend the syntax to allow deriving arbitrarily many decryptors, which justifies the choice to change the name. We mostly consider the computational setting, where the adversary can access various decryptors and may make encryption and decryption calls. We define notions of security which extend the notions of IND-qCPA, IND-qCCA1 and IND-qCCA2 of [BZ13] (which we formally introduce in Section 2.3). These notions extend indistinguishability security of encryption schemes to the post-quantum setting by allowing the adversary to make oracle calls in superposition (as we precisely define in Section 2.2). However, the challenge phase (in which the adversary provides two plaintexts m_0, m_1 and has to distinguish their ciphertexts) is done classically. We introduce our security notions of UD, UD-qCPA, UD-qCCA1 and UD-qCCA2 in Definitions 19 and 20, and in Definition 21 we present a variant of these definitions appropriate for single-bit encryption schemes, which we denote UD1, UD1-qCPA, UD1-qCCA1 and UD1-qCCA2 respectively. In Section 4.2 we prove that these security notions extend the corresponding notions of [BZ13]. We also study the implications of quantum copy protection [Aar09] towards uncloneable decryptors. We introduce and discuss the relevant aspects of quantum copy protection in Section 3.
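For orientation, the classical skeleton of the indistinguishability challenge phase that these notions extend can be sketched in a few lines (a toy illustration, not code from the paper: the "identity" scheme and the one-time-pad stand-in below are invented for the demo, and superposition oracle access, which the quantum notions allow, is not modeled):

```python
import random

rng = random.Random(0)  # seeded for reproducibility


def otp_enc(m: bytes) -> bytes:
    """One-time pad with a fresh key per call: ciphertexts leak nothing."""
    key = rng.randbytes(len(m))
    return bytes(a ^ b for a, b in zip(key, m))


def identity_enc(m: bytes) -> bytes:
    """Deliberately broken scheme: the cipher is the plaintext."""
    return m


def ind_win_rate(enc, trials=2000):
    """Win rate of a naive distinguisher in the IND challenge phase:
    it submits m0 = 0x00, m1 = 0xff and guesses b = 1 iff the challenge
    cipher matches a fresh encryption of m1."""
    wins = 0
    for _ in range(trials):
        b = rng.getrandbits(1)
        challenge = enc(b"\xff" if b else b"\x00")
        guess = 1 if challenge == enc(b"\xff") else 0
        wins += guess == b
    return wins / trials


print(ind_win_rate(identity_enc))  # 1.0: plaintexts fully distinguishable
print(ind_win_rate(otp_enc))       # ~0.5: no advantage over random guessing
```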
In particular, we show that the security notions of [Aar09] are not sufficiently strong for our construction, as they do not prohibit splitting attacks, that is, splitting the copy protected program into two partial programs, where each program is useful to evaluate the function on a large fraction of the inputs. In Section 3.4 we present an explicit attack that splits the program into two partial programs, each able to evaluate the underlying function in half of the domain. As we show in Proposition 36, such attacks are detrimental to our constructions. In order to overcome this difficulty, we present in Section 3.6 the notion of flip detection security -a novel security notion for copy protection schemes, inspired by notions of "left or right" security for encryption schemes (as we more thoroughly discuss in Appendix B), which is secure against such attacks. We use WEAK-QCP and FLIP-QCP to indicate the security notions of [Aar09] and Section 3.6, respectively. All of the security notions above are first defined in the context where n decryptors (or copy-protected programs) are given to an adversary who attempts to create k additional copies. We indicate this by putting n, k in the superscript, e.g. UD-qCCA1 n,k or WEAK-QCP 1,1 . With the syntax and security of uncloneable decryption and copy protection established, we prove the following results: • In Construction 1 (Section 5.1) we instantiate an uncloneable bit decryptors scheme from a quantum copy protection scheme for any balanced binary function. In Proposition 31 we prove this construction is UD1-qCPA n,k as long as it is instantiated from a WEAK-QCP n,k secure copy protection scheme. • In Section 4.2 we show that UD1-qX security implies IND-qX security for the underlying symmetric encryption scheme. Together with the construction of the previous item, this allows us to infer two interesting properties of WEAK-QCP security: 1. 
the existence of a WEAK-QCP 1,1 secure copy protection scheme for a balanced binary function implies the existence of post-quantum one-way functions (Corollary 32), and 2. any WEAK-QCP copy-protectable BBF must be a weak PRF (Corollary 33). • In Construction 2 (Section 5.3) we transform a UD1-qCPA n,k secure scheme to a UD1-qCCA2 n,k secure scheme by means of post-quantum sEUF-CMA secure digital signatures (which are discussed in Section 2.4). We prove the security of this transformation in Proposition 35. • In Section 5.4 we discuss the security of extending an uncloneable decryptors bit encryption scheme to a scheme that supports plaintexts of arbitrary length by a simple "bit by bit extension" transformation (which we formally define in Definition 28). We show in Proposition 36 an example of a bit encryption scheme which is UD1-qCCA2 n,k secure (assuming the existence of a WEAK-QCP n,k secure copy protection scheme), whose bit by bit extension to support two bit long messages fails to admit even UD 1,1 security. In contrast, in Proposition 37 we show that if we instantiate Construction 1 with a FLIP-QCP n,k secure copy protection scheme, then extending this scheme bit by bit results in a UD-qCPA n,k secure uncloneable decryptors scheme. • In Construction 3 we transform any uncloneable bit decryptor scheme whose bit by bit extension is UD-qCPA n,k secure to a UD-qCCA2 n,k secure scheme by means of post-quantum sEUF-CMA secure digital signatures. This construction is a generalization of Construction 2. We prove the security of this construction in Proposition 38. • We prove that unconditional UD security is impossible against an adversary with access to arbitrarily polynomially many decryptors, even if they are not given access to any amount of ciphertexts. This result is incomparable to the impossibility result of [GZ20] as our adversary has access to more decryptors but no ciphers. For more details see Section 4.3. 
• In Appendix B we show that an IND-qCCA2 secure scheme could be instantiated with respect to a quantum oracle, and an IND-qCCA2^{1,1} secure scheme could be instantiated relative to a classical oracle. This construction improves upon the previously known constructions, which achieve at most IND-qCPA^{1,1} security.

Related Works

Uncloneability in encryption

Uncloneability first appeared in the context of encryption schemes in [Got03], which presents a scheme for encrypting classical data into a quantum cipher. In their setting, it is possible to split the cipher into two states from which the original plaintext could be recovered upon learning the key, but the fact that the cipher was split can always be detected by the intended receiver. An encryption scheme with uncloneable ciphers is presented in [BL20]. In this setting, the receiver is given a quantum ciphertext for a classical message that could be decrypted later, given the classical secret key. The security requirement of this primitive is that an adversary given the ciphertext could not split it into two states, both of which could be used independently to recover the plaintext given the secret key. They provide a construction based on conjugate coding and prove that it is unconditionally secure in the QROM against adversaries with access to a single cipher, provided that the holders of the two parts of the split cipher do not share entanglement. The study of uncloneable decryptors was initiated in [GZ20] and was studied therein in the context of a single decryptor. The authors formalize the notion of single decryptor schemes in private and public key settings, and the security thereof. They then proceed to show a black box transformation of the uncloneable cipher scheme of [BL20] to a single decryptor scheme that is selectively secure (that is, the adversary has to pick the messages before seeing the decryption key).
The black box properties of this transformation imply that the resulting scheme inherits the security properties of [BL20]'s scheme; namely, it is also secure in the QROM against unentangled adversaries with access to a single cipher. The authors of [GZ20] also construct computationally secure public-key single decryptor schemes based on one-shot signatures and extractable witness encryption. This construction is secure against adversaries with access to a single cipher in the common reference string model. Note that these assumptions are quite strong; in particular, the only known candidate for one-shot signatures is instantiated with respect to a classical oracle [AGKZ20]. The authors of [CLLZ21] use hidden coset states to present two constructions for public key single decryptor schemes which are secure in the plain model under some assumptions: the first construction requires extractable witness encryption, and the second removes this requirement assuming post-quantum compute-and-compare obfuscation for the class of unpredictable functions.

Quantum copy protection

Quantum copy protection was first presented in [Aar09], which constructs a copy protection scheme for any quantum-unlearnable function relative to the existence of a quantum oracle. This construction is secure against QPT adversaries with access to arbitrarily many copies of the copy protected program. The author also provides several candidate constructions for copy protection schemes of delta functions. The oracle construction of [Aar09] is the only one that is known to be secure in the presence of multiple copy-protected programs. The authors of [ALL+21] point out some weaknesses in the original security definition of [Aar09] and propose a strictly stronger definition. In particular, they briefly discuss the possibility of splitting attacks such as the one we formalize in Section 3.4.
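The splitting attack just mentioned has a simple classical caricature (an invented toy, not the construction of Section 3.4): hand each freeloader half of the truth table of f and let them guess uniformly elsewhere. Each partial program then evaluates f correctly on 3/4 of a uniform domain, well above the 1/2 baseline of random guessing, which is why a per-input success test alone cannot rule such adversaries out:

```python
def split_program(truth_table):
    """Split a truth table for f: {0, ..., n-1} -> {0, 1} into two partial
    programs: each knows one half exactly and guesses uniformly elsewhere."""
    items = list(truth_table.items())
    half = len(items) // 2
    return dict(items[:half]), dict(items[half:])


def expected_success(known, domain_size):
    """Expected probability of evaluating f correctly on a uniform input:
    1 on known inputs, 1/2 (a uniform guess) on unknown ones."""
    return (len(known) * 1.0 + (domain_size - len(known)) * 0.5) / domain_size


f = {x: x % 2 for x in range(16)}  # any balanced toy function
p1, p2 = split_program(f)
print(expected_success(p1, 16), expected_success(p2, 16))  # 0.75 0.75
```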
Their definition roughly relies on the idea that instead of testing a program against a sampled function and input pair, one can quantify the "quality" of the program against the entire distribution. They also replace the quantum oracle of [Aar09] with a classical oracle and prove that relative to this oracle all quantum-unlearnable functions are securely copy protectable, even with respect to their stronger notion of security. However, this construction is only known to be secure in the presence of a single copy. The authors of [BJL + 21] construct a single copy "semi-secure" copy protection scheme for compute-and-compare programs, where it is assumed that at least one of the freeloaders is honest. The authors of [CMP20] construct a copy protection scheme for compute-and-compare circuits without any assumptions which provably admits non-trivial single copy security (that is, it does not completely satisfy the security definition of copy protection, but it does satisfy that the probability that both freeloaders evaluate the function correctly is non-negligibly smaller than 1, which is classically impossible) in the quantum random oracle model. In [CLLZ21] it is shown how to construct a copy protection scheme for PRFs from the two constructions for uncloneable decryptors mentioned earlier. Their constructions are single-copy secure relying on the same assumptions (either extractable witness encryption or post-quantum compute-and-compare obfuscation for the class of unpredictable functions) along with post-quantum one-way functions with subexponential security and indistinguishability obfuscation. This is the only example of a provably secure copy protection scheme for a non-evasive class of functions. The authors of [AP21] initiate the study of a weaker form of copy protection called secure software leasing. In this notion, it is assumed that the end users of the program are honest. That is, piracy is not preventable, but it is always detectable.
They construct such a scheme for evasive functions based on indistinguishability obfuscators and LWE. Assuming LWE, they further construct a class of quantum unlearnable functions for which no SSL scheme can exist. This implies that a general scheme for quantum copy protection of quantum unlearnable functions is also impossible. More recently, [KNY20] managed to construct SSL schemes for a subclass of the class of evasive functions. They also consider two weaker variants of SSL, finite-term SSL and SSL with classical communications, and show constructions of such schemes for pseudo-random functions. Their constructions are single copy secure in the common reference string model.

Post-quantum indistinguishability of ciphers

Security notions for classical encryption and digital signature schemes against quantum adversaries were first considered in [BZ13], where the notions of IND-qCCA1 and IND-qCCA2 security were introduced. These notions allow the adversaries to query the encryption/decryption oracles in superposition but do not allow making the challenge query in superposition. The authors show that two seemingly natural security definitions that allow superimposed challenge queries are actually impossible to obtain and leave open the problem of meaningfully defining qIND-qCPA security. The authors of [GHS16] propose a definition for qIND-qCPA and prove that it extends the IND-qCPA notion of [BZ13]. They show that this notion is not achievable by any quasi-length-preserving encryption scheme but provide a length-increasing scheme that is qIND-qCPA secure. In [CEV20], the authors propose a definition for qIND-qCCA2 and prove that it is stronger than the qIND-qCPA notion of [GHS16]. They provide constructions for qIND-qCCA1 secure symmetric encryption schemes and prove that the encrypt-then-MAC paradigm affords a generic transformation of qIND-qCCA1 secure schemes to qIND-qCCA2 secure schemes.
Scientific Contribution

We improve upon existing work in the following senses:
• our constructions are the first to exhibit security in the presence of multiple decryptors,
• our work is the first to obtain CPA and CCA2 security in the symmetric setting, and
• to the best of our knowledge, this is the first work to exhibit an application of copy protection to cryptography.

Drawbacks

The main drawback of our constructions is that they currently cannot be instantiated in any standard model to obtain an uncloneable decryptors scheme which is not a single decryptor scheme. Our constructions require quantum copy protection of balanced binary functions. The only known construction of such a scheme is the one presented in [CLLZ21], which relies on assumptions that [CLLZ21] also use to construct single decryptor schemes. However, as we discuss in Section 3.6.1, our schemes can be instantiated relative to a quantum oracle. Furthermore, this oracle could be replaced with a classical oracle at the cost of the resulting scheme only being secure as a single decryptor scheme. In order to obtain security for messages of arbitrary length, our construction assumes flip detection security. It is yet unclear if flip detection security is obtainable from weak security [Aar09], or if it is implied by strong copy protection [ALL + 21]. (However, we show in Appendix C that the oracle constructions of [Aar09] and [ALL + 21] satisfy flip detection security.) The security notions presented below are given as indistinguishability experiments. Our notions would be better established had we provided an equivalent semantic (simulation-based) security definition. At this point, we do not even have a candidate notion of semantic security. We leave this as an open problem.

Overview of the Constructions

We give a short informal description of our constructions.
We use the syntax for uncloneable decryptors, which we formally define in Definition 18 (as well as syntaxes for copy protection and digital signatures). However, we hope that the function of the procedures we refer to is sufficiently clear from context for the purpose of this exposition. Let QCP be a copy protection scheme for a balanced binary function BBF with input length ℓ. We transform QCP to an uncloneable bit decryptor scheme using a transformation similar to the standard transformation of pseudo-random functions to symmetric encryption schemes (see e.g. [Gol04, Construction 5.3.9]):
• The secret key sk is an (efficient description of) a function sampled from BBF, that is, sk ← BBF.Sample(1 λ ).
• To generate a decryptor, copy protect the function: DecGen(sk) ≡ QCP.Protect(sk).
• To encrypt a bit b, sample r ← {0, 1} ℓ and output the cipher (r, b ⊕ BBF.Eval sk (r)).
• To decrypt a cipher (r, β) using a decryptor ρ calculate b = β ⊕ QCP.Eval(ρ, r).
To obtain UD1-qCCA2 security from a UD1-qCPA secure scheme UD, we wrap it with a digital signature scheme DS:
• Sample a key pair (sk DS , pk DS ) ← DS.KeyGen(1 λ ) and attach sk DS to the secret key and pk DS to any generated decryptor.
• To encrypt a bit b generate c ← UD.Enc(b) and output (c, s = DS.Sign sk DS (c)).
• To decrypt a cipher (c, s) first verify that DS.Ver pk DS (c, s) = 1 and if the verification passes use UD to decrypt the message.
In order to extend the first scheme to support arbitrary length plaintexts, we encrypt each bit separately. This construction remains secure provided that the underlying copy protection scheme QCP has sufficiently strong security. To obtain UD-qCCA2 security we adjust the transformation we used for bit decryptors by:
• for each cipher, sampling a random serial number r ← {0, 1} λ , and
• signing the cipher of each bit along with the serial number, the plaintext length, and the index of the bit (that is, if the cipher of the UD-qCPA scheme is c 1 , . . .
, c ℓ we attach to c i the signature s i = DS.Sign sk DS (c i , i, ℓ, r)). Signing ℓ prevents an adversary from truncating ciphers, signing i prevents rearranging the bits of a cipher, and signing r prevents splicing ciphers.

Acknowledgements

We wish to thank Dominique Unruh for valuable discussions. This work was supported by the Israel Science Foundation (ISF) grant No. 682/18 and 2137/19, and by the Cyber Security Research Center at Ben-Gurion University.

Preliminaries

Basic Notions, Notations and Conventions

We call a function f : R → R negligible if for every c < 0 it holds for any sufficiently large λ that |f (λ)| < λ c . Equivalently, f is negligible if |f (λ)| = λ −ω(1) . We often use the shorthand f (λ) ≤ negl(λ) to state that f is negligible, and the shorthand f (λ) ≤ g(λ) + negl(λ) to state that |f − g| is negligible. We say that f : N → N is polynomial in λ and denote f = poly(λ) if there exists a polynomial p such that f (λ) ≤ p(λ) for all λ ∈ N. Note that we do not require that f is a polynomial, only that it is upper bounded by a polynomial. A quantum polynomial time (QPT) procedure C is a uniform family of circuits C λ such that |C λ | = poly(λ), where we use |C λ | to denote the number of elementary gates in C λ . We always assume that the elementary gates are chosen from a fixed finite universal set. Equivalently, C is QPT if there exists a polynomial-time Turing machine whose output on the input 1 λ is a classical description of the circuit C λ . We use sans serif typeface to denote circuits whose input and output are classical, such as Enc, and classical data such as sk. We use calligraphic typeface to denote quantum algorithms and data such as Enc and sk. We use Greek letters ρ, σ, . . . to represent (possibly mixed) quantum states given as density operators. When ρ is a state and s is a classical string we often abuse notation by writing ρ ⊗ s as shorthand for ρ ⊗ |s⟩⟨s|.
Sans serif typeface does not indicate that the circuits are not quantum, only that they map computational basis states to computational basis states. When the circuits are random, we always assume that the random bits are uniformly sampled from the computational basis. This is further elaborated upon in Section 2.2. We use sans serif uppercase letters to denote instances of (either classical or quantum) schemes, such as SE to denote symmetric encryption schemes. When there are several schemes in play, we will use notations such as SE.KeyGen to clarify to which scheme we refer. When sk denotes a key, we will often use currying notation such as Enc sk rather than Enc(sk, ·) for clarity of exposition. We emphasize that sk is still considered part of the input to Enc. Given a function f : D → R and S ⊂ D, we define the function f \ S by (f \ S)(x) = f (x) if x ∉ S and (f \ S)(x) = ⊥ if x ∈ S; if c ∈ D we slightly abuse notation by writing f \ c instead of f \ {c}. For any string c ∈ {0, 1} * we use |c| to denote its length and c̄ to denote its bitwise flip. For any distribution D we use the notation x ← D to indicate that x is sampled from the distribution D. For any finite set A we use the notation x ← A to indicate that x is sampled from A uniformly.

Classical Oracles with Quantum Access

Some of the circuits we discuss below are implicitly oracle machines. This means that they are given access to query an oracle (that is, a black box that evaluates some function). Given an oracle machine A and a classical function f , we use the notation A f to denote an invocation of A when it is given oracle access to f . In our context, we wish to model a quantum adversary which can perform a classical circuit on an input which is in superposition. Even though the underlying function is classical, the adversary can access it quantumly. We now precisely define what this means.
For deterministic oracles, quantum access means that the adversaries may query the oracle in superposition as follows: the standard way to represent a function f as a quantum operation is as the unitary U f defined via |x, y⟩ → |x, y ⊕ f (x)⟩. Given an oracle machine A and a classical function f we use A |f to indicate that A may perform quantum queries to the unitary U f . We call this quantum access to the (classical) function f . Some of the oracles we consider are randomized (e.g. encryption oracles). In case C is some randomized circuit, we use C(·; r) to denote the (deterministic) invocation of C with random bits r. We define the quantum oracle of C as the oracle which, upon receiving a query (possibly in superposition), samples a uniformly random r and applies to the state the unitary U C;r defined by |m, x⟩ → |m, C(m; r) ⊕ x⟩. Given an oracle machine A and a randomized circuit C we use A |C to indicate that A may perform quantum queries to the oracle which uniformly samples r and then applies U C;r to the input. Note that the randomness is sampled once per query.

Remark 1. This oracle could be equivalently defined as the oracle which, on input ρ, applies the unitary |m, x, r⟩ → |m, C(m; r) ⊕ x, r⟩ to ρ ⊗ 2 −|r| ∑ r |r⟩⟨r|.

Symmetric Encryption Schemes

Recall the syntax of symmetric encryption schemes:

Definition 2 (Symmetric Encryption Scheme). A symmetric encryption (or private key encryption) scheme SE is a triplet of QPT circuits:
• sk ← KeyGen(1 λ ),
• c ← Enc sk (m), and
• m ← Dec sk (c).
SE is correct if for all messages m ∈ {0, 1} * it holds that Pr[sk ← KeyGen(1 λ ) : Dec sk (Enc sk (m)) = m] = 1.

Security of encryption schemes is commonly defined in terms of indistinguishability games between a trusted challenger C and an arbitrary adversary A which take the following form:
1. (Initialization) C invokes KeyGen(1 λ ) to obtain a secret key sk and samples a uniform bit b; oracle access to Enc sk and Dec sk is provided to A according to the attack model.
2. (Challenge) A submits two equal-length messages m 0 , m 1 and receives c ← Enc sk (m b ).
3. (Guess) A outputs a bit b' and wins the game if b' = b.
The scheme is considered secure if any QPT A can win the game with probability at most 1/2 + negl(λ). We define increasingly strong security notions by considering increasingly strong adversaries, where we model the adversary's strength by the oracle access they are allowed to have. The most common notions are:
• Passive Attack (IND): A is not given access to any oracle.
• Chosen Plaintext Attack (CPA): A is given access to Enc sk throughout the game.
• Non-adaptive Chosen Ciphertext Attack (CCA1): A is additionally given access to Dec sk , but only before the challenge query.
• Adaptive Chosen Ciphertext Attack (CCA2): A is additionally given access to Dec sk both before and after the challenge query, except that the challenge cipher itself may not be queried.
Another way to increase the adversary's strength is by allowing her to make oracle queries in superposition (as described in Section 2.2). We denote the result of modifying the IND-X game this way by IND-qX. Note that in the IND-qX indistinguishability games the challenge query is still completely classical. This notion of security was first explored in [BZ13]. For completeness, we provide the full definition of IND-qCCA1 security; IND-qCPA and IND-qCCA2 security are defined by appropriately modifying the oracle access of A 1 and A 2 .

Definition 3 (IND-qCCA1 security). Let SE be a symmetric encryption scheme. For a procedure A = (A 1 , A 2 ) define the game IND-qCCA1(A, λ) as follows: C generates sk ← KeyGen(1 λ ) and samples a uniform bit b; A 1 , given quantum access to Enc sk and Dec sk , outputs (σ, m 0 , m 1 ); C computes c ← Enc sk (m b ); the output of the game is 1 if A |Enc sk 2 (σ, m 0 , m 1 , c) = b. SE is IND-qCCA1 secure if for any QPT A it holds that P[IND-qCCA1(A, λ) = 1] ≤ 1/2 + negl(λ).

Remark 4. One can also attempt allowing superimposed challenge queries, arriving at notions of the form qIND-qX. However, it seems that the most straightforward ways to try to do so lead to security notions that are impossible to obtain, see e.g. [BZ13, Theorem 4.2 and Theorem 4.4]. Recent works such as [GHS16, CEV20] provide meaningful definitions for such security notions, as further discussed in Section 1.2.

Digital Signature Schemes

Definition 5 (Digital Signature Scheme). A digital signature scheme DS is a triplet of QPT circuits:
• sk, pk ← KeyGen(1 λ ),
• s ← Sign sk (m), and
• b ← Ver pk (m, s),
where Ver is deterministic. DS is correct if for all messages m it holds that P[b = 1 : (sk, pk) ← KeyGen(1 λ ), s ← Sign sk (m), b ← Ver pk (m, s)] = 1. We say that DS is deterministic if DS.Sign is deterministic.
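Definition 5 only fixes the syntax. As a concrete toy illustration (not a scheme from this work), the classic Lamport one-time signature fits the syntax with a deterministic Sign: the secret key is a pair of random preimages per digest bit and the public key is their hashes. The sketch below assumes SHA-256 as the hash; it is secure for signing a single message only.

```python
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()
N = 256  # digest bits; each key pair may sign only ONE message

def keygen():
    # sk: two random preimages per digest bit; pk: their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(m):
    d = int.from_bytes(H(m), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, m):
    # deterministic: reveal the preimage selected by each bit of H(m)
    return [sk[i][b] for i, b in enumerate(digest_bits(m))]

def verify(pk, m, sig):
    return all(H(s) == pk[i][b] for (i, b), s in zip(enumerate(digest_bits(m)), sig))
```

For example, `sk, pk = keygen(); sig = sign(sk, b"msg")` gives a signature accepted by `verify(pk, b"msg", sig)`. Practical hash-based schemes extend this one-time primitive (e.g. via Merkle trees); the sketch is only meant to make the KeyGen/Sign/Ver interface concrete.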
The notion of security we require for digital signatures is that of strong existential unforgeability under chosen message attacks, which we abbreviate as sEUF-CMA. Under this notion, we furnish an adversary with access to a signing oracle as well as the public verification key and expect him to create a signed document (m, s) such that DS.Ver pk (m, s) = 1 though s was not output as a response to a signature query on m. If this holds for any QPT adversary, we say that DS is (post-quantum) sEUF-CMA secure.

Definition 6 (sEUF-CMA security (adapted from [GMR88])). Let DS be a digital signature scheme as in Definition 5. For a procedure A let the sEUF-CMA DS (A, λ) game be defined as follows:
1. C generates pk, sk ← KeyGen and gives pk to A.
2. A Sign sk produces a signed document (m, s).
3. The result of the game is 1 if Ver pk (m, s) = 1 and s was not given as output to a query with input m during the previous phases.
DS is sEUF-CMA secure if for any QPT A it holds that P[sEUF-CMA DS (A, λ) = 1] ≤ negl(λ).

In the weak variant EUF-CMA, we only require that the signing oracle is never queried on m. The resulting security notion does not prohibit attacks where a signed document (m, s) could be used to create s' ≠ s such that DS.Ver pk (m, s') = 1. Stated differently, this security prevents an adversary from signing previously unsigned messages but not from creating new valid signatures for a previously signed message. As we elaborate in Section 5.3, we use signature schemes to make it unfeasible to create new valid ciphers from a list of ciphers for chosen plaintexts. If the adversary can modify the signature without abrogating its validity, they can transform a known cipher into a new valid cipher, which is exactly what we expect the signature scheme to prevent. Hence, EUF-CMA is unsuitable for our application. The notion of EUF-CMA security first appeared in [GMR88], though the authors thereof only required security against PPT adversaries.
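The gap between the two notions can be made concrete with a contrived toy (our own illustration, not from this work): take any unforgeable base scheme and append one byte that verification ignores. Below we use HMAC-SHA256 as a stand-in deterministic "signature" (really a MAC, a simplification); the wrapper still prevents signing new messages, yet any signed document yields further valid signatures, which is exactly the mauling that sEUF-CMA rules out and EUF-CMA does not.

```python
import hashlib
import hmac

key = b"toy-secret-key"  # stand-in secret; a real signature scheme uses a key pair

# Base scheme: HMAC-SHA256 as a toy deterministic "signature" (really a MAC).
def base_sign(m):
    return hmac.new(key, m, hashlib.sha256).digest()

def base_verify(m, s):
    return hmac.compare_digest(s, base_sign(m))

# Contrived wrapper: append one byte that verification ignores.
def sign(m):
    return base_sign(m) + b"\x00"

def verify(m, s):
    return len(s) == 33 and base_verify(m, s[:-1])

m = b"cipher-bits"
s = sign(m)
mauled = s[:-1] + b"\x01"  # flip the ignored trailing byte
assert mauled != s and verify(m, mauled)  # a new valid signature: a strong forgery
```

In the encryption application, `m` plays the role of a cipher: the mauled pair `(m, mauled)` is a "new" valid signed cipher the adversary never queried, which is why the constructions below insist on strong unforgeability.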
By post-quantum security we mean the same security game, but where the adversary is QPT rather than PPT. However, we still require that the adversary only has classical access to the signing oracle.

Remark 7. Extending this notion to an adversary who is allowed to make signature queries in superposition is not at all straightforward, since in this case it is impossible to record the queries made by the adversary, and it is unclear even what it means for a signed document to be "different" than the responses to the queries they made. This has been addressed e.g. in [BZ13], where they do not record the queries but rather require that an adversary which makes q queries cannot create q + 1 distinct signed messages that all pass the verification procedure.

Quantum Copy Protection

In this section we overview the notion of quantum copy protection: the practice of compiling an arbitrary functionality into a quantum program in a manner that makes the functionality uncloneable. Copy protection was first introduced and discussed in [Aar09], which furnishes a first attempt at a security definition. For reasons which will become clear shortly, we refer to this security notion as weak copy protection, which we discuss at some length in Section 3.2. While this security notion is far from trivial, it also exhibits some vulnerabilities which make it unsuitable for many cryptographic applications. One such vulnerability is that it does not prohibit splitting the program into two "partial" programs, each able to evaluate the protected function on a different portion of the domain. We exhibit an explicit splitting attack in Section 3.4. In [ALL + 21] a much stronger security notion is proposed, which does prohibit splitting attacks. We shortly and informally discuss this notion in Section 3.5. We further discuss the state of the art of quantum copy protection in Section 1.2. Our constructions only require the copy protection of a balanced binary function.
That is, an efficiently sampleable distribution of binary functions such that applying a sampled function to a uniform input yields an output distributed negligibly close to a uniformly random bit. In Section 3.6 we introduce flip detection security, a strengthening of weak copy protection which prohibits splitting attacks.

The Syntax of Quantum Copy Protection

Definition 8 (Admissible class). A class of functions F = {F λ } λ , where each f ∈ F λ maps {0, 1} n to {0, 1} m , is admissible if:
• there exists d = poly(λ) such that for any f ∈ F λ there exists a string d f with |d f | = d; this string is called the description of f , and
• there exists a circuit Eval F such that for any f ∈ F λ and x ∈ {0, 1} n , Eval F (d f , x) outputs f (x) with running time poly(λ).
When considering a class F λ we usually suppress the security parameter λ; we also use the notation Eval f as shorthand for Eval F (d f , ·). We often (e.g. in the definition below) abuse notation and refer to f instead of to d f , and to F as the collection of descriptions rather than the collection of functions themselves.

Definition 9 (Copy Protection Scheme, adapted from [Aar09]). Let F be an admissible class; a copy protection scheme for F is a pair of procedures QCP F = (Protect, Eval) with the property that if σ f ← Protect(f ) then it holds for any x ∈ {0, 1} n that Eval(σ f , x) = f (x). We often refer to σ f as the copy protected version of f . When F is clear from context, we will suppress it and refer to the scheme by QCP.

Weak Copy Protection

We present the original notion of quantum copy protection presented in [Aar09]. This definition was subsequently strengthened by [ALL + 21], whose authors also name the strengthened definition therein quantum copy protection. We hence refer to the original definition of [Aar09] as weak copy protection to avoid confusion. Intuitively, copy protection security is defined in terms of a game between a trusted challenger C and several arbitrary QPT algorithms, namely a pirate P and n + k non-communicating freeloaders F 1 , . . . , F n+k .
The pirate is given n copies of the copy protected function σ f , from which they create n + k pirated copies ρ 1 , . . . , ρ n+k , affording ρ j to F j . Finally, C asks each freeloader to evaluate f on some point, and the output of the game is the number of freeloaders which evaluated f correctly. The scheme is considered secure if for any adversary A = (P, F 1 , . . . , F n+k ) the expected output of the game is at most negligibly higher than n + k/2^m. Note that an expected output of n + k/2^m could be reached easily: for j = 1, . . . , n the pirate gives a copy of σ f to F j , so that the first n freeloaders can answer correctly with certainty; the remaining k freeloaders output a uniformly random value from {0, 1} m , giving each a winning probability of 2 −m . Formalizing this idea requires specifying the distribution from which the function f and the inputs given to the freeloaders are sampled. Hence, copy protection (in all its forms) is defined with respect to a distribution D on the class of functions and on the set of inputs, i.e. over F × {0, 1} n .

Definition 10 (WEAK-QCP security, adapted from [Aar09]). Let QCP be a copy protection scheme for some admissible class of functions, and let D λ be an efficiently samplable distribution on F λ × {0, 1} n . For any natural numbers n, k define the game WEAK-QCP n,k,D QCP (A, λ) between a trusted challenger C and an arbitrary adversary A = (P, F 1 , . . . , F n+k ):
• C samples (f, x) ← D λ and invokes QCP.Protect(f ) n times to obtain ρ = ρ f ⊗n ,
• P(ρ) generates n + k states σ 1 , . . . , σ n+k ,
• F i (σ i , x) outputs a string y i ,
• the output of the game is the number of indices i such that f (x) = y i .
QCP is WEAK-QCP n,k,D secure if for any QPT adversary A it holds that E[WEAK-QCP n,k,D (A, λ)] ≤ n + k/2^m + negl(λ); QCP is WEAK-QCP D secure if it is WEAK-QCP n,k,D secure for any n, k = poly(λ). When D is clear from context we suppress it and write WEAK-QCP n,k and WEAK-QCP.
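The bookkeeping of the WEAK-QCP game can be sketched classically. The caveat is essential: quantum states cannot be represented this way, so below the "protected" program is just the function itself and the pirate can trivially win; the harness (all names are our own) only illustrates the scoring rule and the trivial baseline achieving expectation n + k/2^m.

```python
import random

def weak_qcp_game(n, k, sample_fx, protect, pirate, freeloaders):
    # One run of the WEAK-QCP^{n,k} game (Definition 10), classically modeled.
    f, x = sample_fx()
    copies = [protect(f) for _ in range(n)]   # stands in for rho_f^{tensor n}
    progs = pirate(copies)                    # pirate emits n + k programs
    assert len(progs) == n + k == len(freeloaders)
    # Score: how many freeloaders evaluate f correctly on the challenge x.
    return sum(int(F(p, x) == f(x)) for F, p in zip(freeloaders, progs))

n, k = 3, 2
f = lambda x: x & 1                           # toy binary function (m = 1)
sample_fx = lambda: (f, random.randrange(1 << 10))
protect = lambda g: g                         # classical model: no protection at all
honest = lambda p, x: p(x)                    # uses a genuine copy
guess = lambda p, x: random.randrange(2)      # blind guess, wins w.p. 1/2

# Trivial baseline: hand the n copies to the first n freeloaders, let k guess.
score = weak_qcp_game(n, k, sample_fx, protect,
                      pirate=lambda cs: cs + [None] * k,
                      freeloaders=[honest] * n + [guess] * k)
assert n <= score <= n + k                    # expected score: n + k / 2**m
```

The first n freeloaders always score, so the score lies between n and n + k in every run, with expectation n + k/2 here (m = 1), matching the bound that Definition 10 declares unavoidable.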
As explained in [Aar09], a copy protection scheme could not be (weakly) secure against arbitrary adversaries, as an unbounded P could use ρ f to learn f . A necessary condition for the existence of a copy protection scheme for f is that f is quantum-unlearnable, which roughly means that a polynomial quantum adversary with oracle access to f cannot evaluate f on a sampled x once the oracle access is revoked (unlearnability is defined with respect to the distribution f and x are sampled from). We expand more on unlearnability in Appendix C. However, quantum unlearnability is not a sufficient condition: [AP21] construct a family of functions which is (under some standard cryptographic assumptions) quantum unlearnable and yet is not copy-protectable.

Balanced Binary Functions

A BBF F is comprised of an admissible class (see Definition 8) of binary functions (that is, m = 1) and an efficient procedure for sampling f ← F such that if x is uniformly random then f (x) is very close to uniform (where the distribution is taken over both f and x). More formally:

Definition 11 (balanced binary function (BBF)). A Balanced Binary Function BBF is comprised of two QPT procedures (Sample, Eval) such that:
• Sample(1 λ ) samples f (not necessarily uniformly) from a set F λ ⊂ {0, 1} poly(λ) . We do not require that all strings in F λ will be of the same length, only that their length is bounded by some polynomial in λ,
• there exists a polynomial ℓ such that if f ← Sample(1 λ ) then Eval f implements a deterministic function from {0, 1} ℓ(λ) to {0, 1}, and
• if f ← Sample(1 λ ) and x ← {0, 1} ℓ(λ) then |Pr[Eval f (x) = 0] − Pr[Eval f (x) = 1]| = negl(λ).

Remark 12. When considering copy-protection of BBFs, the distribution D is always implicitly assumed to be BBF.Sample(1 λ ) × U({0, 1} ℓ(λ) ). That is, the function is sampled from the (randomized) circuit BBF.Sample, and the input is uniformly random.
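A hypothetical toy instance of Definition 11 (our own illustration, not a construction from this work) takes Sample to draw a random key and Eval_f to output the first bit of SHA-256(key ‖ x); the snippet checks the balance condition empirically on random (f, x) pairs.

```python
import hashlib
import os
import random

ELL = 64  # input length ell(lambda), fixed here for concreteness

def sample(lam=128):
    # the description d_f is just a random key of lam bits
    return os.urandom(lam // 8)

def eval_f(f, x):
    # first bit of SHA-256(key || x); deterministic in (f, x)
    h = hashlib.sha256(f + x.to_bytes(ELL // 8, "big")).digest()
    return h[0] & 1

# Empirical balance check: over random (f, x), Eval_f(x) should be ~uniform.
rng = random.Random(0)
ones = sum(eval_f(sample(), rng.getrandbits(ELL)) for _ in range(10_000))
assert abs(ones / 10_000 - 0.5) < 0.05
```

Such a keyed-hash family is heuristically also a weak PRF, which is consistent with the necessary condition discussed below (a WEAK-QCP 1,1 protectable BBF must be a weak PRF); the sketch only certifies balance, not unlearnability.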
This definition has the nice property that the function is sampled independently from the input, making it easy to extend this definition in various ways that require sampling more than one input. We propose one such way in Section 3.6. Being a BBF in itself is not a strong property. A trivial example of a BBF is the one containing the two constant functions, which is obviously not unlearnable and in particular not copy protectable. It turns out that a necessary condition for a BBF to be WEAK-QCP 1,1 copy protectable is that BBF is a weak PRF. We state and discuss this corollary in Section 5.1.1 and provide a proof in Appendix E.

A Splitting Attack on Weak Copy Protection

As mentioned earlier, the security definition in Definition 10 has a weakness that is detrimental to our (and arguably other) applications. Namely, it does not prohibit splitting attacks. Intuitively, a splitting attack is an efficient way to transform a copy-protected program ρ f into two states σ 0 and σ 1 such that each state σ b is useful to evaluate a non-negligible fraction of inputs. For example, σ b could be used to evaluate any input whose first bit is b. We informally discuss such a splitting attack on any WEAK-QCP 1,1 secure scheme and defer formal treatment of this attack to Appendix A. Let QCP be a copy protection scheme for a BBF with input length ℓ. We can consider a new class BBF′ of input length ℓ + 1 such that every function in BBF′ is given by a pair of (efficient descriptions of) functions f 0 , f 1 sampled from BBF. The function described by (f 0 , f 1 ) maps b‖x to f b (x). Copy protect BBF′ by sampling two functions f 0 , f 1 , copy protecting each to obtain ρ 0 , ρ 1 and providing ρ 0 ⊗ ρ 1 as the protected function. Hence, to evaluate on the input b‖x one invokes QCP.Eval(ρ b , x) and returns the output. The copy-protected program could be trivially split into the two programs ρ 0 , ρ 1 , where ρ b could be used to evaluate the function on any input starting with b.
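The combiner and the split can be sketched classically (all names our own; copy protection is modeled as the identity, which suffices to exhibit the routing-by-first-bit structure, not the security claim):

```python
import hashlib
import os

def eval_f(key, x):
    # toy member of the underlying BBF: first bit of SHA-256(key || x)
    return hashlib.sha256(key + x).digest()[0] & 1

def sample_combined():
    # BBF': a pair (f0, f1); the input b||x is routed to f_b
    return (os.urandom(16), os.urandom(16))

def protect_combined(f):
    # "copy protection" of each half; classically modeled as rho_0 (x) rho_1
    f0, f1 = f
    return (f0, f1)

def eval_combined(prog, b, x):
    return eval_f(prog[b], x)

def split(prog):
    # the splitting attack: simply take the two halves apart
    return prog[0], prog[1]

f = sample_combined()
prog = protect_combined(f)
sigma0, sigma1 = split(prog)
x = b"some-input"
# each half correctly evaluates its own half of the domain
assert eval_f(sigma0, x) == eval_combined(prog, 0, x)
assert eval_f(sigma1, x) == eval_combined(prog, 1, x)
```

Each split state handles exactly the inputs whose first bit selects its half, so both "partial" programs are fully functional on a half of the domain, which is precisely what WEAK-QCP fails to penalize.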
Strong Copy Protection

Splitting attacks were noticed by [ALL + 21], who devised a stronger definition of quantum copy protection that prohibits such attacks. The definition thereof is highly involved and requires introducing several notions it relies upon. Since we never use this definition directly, we settle for an overview of the intuition behind it. The idea is not to test the pirated programs produced by the pirate on a sampled input, but rather to design a binary measurement whose success probability is the same as the probability that a given program (comprised of a quantum state along with "instructions" for using it to evaluate the copy-protected function in the form of a quantum circuit) correctly evaluates a function-input pair sampled from the respective distribution D. The scheme is then considered QCP 1,1,D secure if it is impossible to transform a copy-protected function ρ into two programs such that each program passes the measurement with non-negligible probability. The extension to QCP n,k,D and QCP D security is similar to that at the end of Definition 10. The authors explain how to efficiently approximate a measurement such as the above with respect to any efficiently samplable distribution D. This implies that the game described above could be approximately simulated by an efficient challenger, which is essential for using this definition in security reductions. A tenet of this security definition is that the measurement is performed on each program independently (though the measurement outcomes may not be independent due to entanglement), which prohibits splitting attacks. Note that while it is possible to implement this measurement efficiently, it is impossible to efficiently determine that it has a negligibly small success probability. This is in analogy to the fact that it is efficient to simulate the WEAK-QCP game against arbitrary QPT adversaries, but it is infeasible to determine whether an arbitrary adversary has a non-negligible advantage.
Flip Detection Security

In this section we define the notion of flip detection security. The intuition is that it should be hard for many freeloaders to distinguish a black box that always evaluates the function correctly from a black box that always evaluates it incorrectly. As we discuss in Appendix B, this notion is a natural adaptation of the notions of left and right security used to model the security of encryption schemes in the presence of multiple encryptions. The notion of FLIP-QCP security is only defined for copy-protection schemes for BBFs; recall that when considering such a scheme we assume that D = BBF.Sample ⊗ U({0, 1} ℓ ) (see Remark 12).

Definition 13 (FLIP-QCP Security). Let QCP BBF be a copy protection scheme for a balanced binary function BBF. For any binary function f with input length ℓ and any bit b let O f,b be the oracle which takes no input and outputs (r, f (r) ⊕ b) with r ← {0, 1} ℓ . For any n, k = poly(λ) define the game FLIP-QCP n,k QCP (A, λ) between a trusted challenger C and an arbitrary adversary A = (P, F 1 , . . . , F n+k ):
• C samples f ← BBF.Sample(1 λ ) and b 1 , . . . , b n+k ← {0, 1} and invokes QCP.Protect(f ) n times to obtain ρ = ρ f ⊗n ,
• P(ρ) creates n + k states σ 1 , . . . , σ n+k ,
• F i (σ i ), given access to O f,b i , outputs a bit b' i ,
• the output of the game is the number of indices i for which b' i = b i .
QCP is FLIP-QCP n,k secure if for any QPT adversary A it holds that E[FLIP-QCP n,k QCP (A, λ)] ≤ n + k/2 + negl(λ); QCP is FLIP-QCP secure if it is FLIP-QCP n,k secure for any n, k = poly(λ).

Lemma 14. Let QCP be a FLIP-QCP n,k secure scheme, then it is also WEAK-QCP n,k secure.

Proof. This follows by noting that if we modify the FLIP-QCP n,k game such that each freeloader makes exactly one query to O f,b , we obtain a notion which is equivalent to WEAK-QCP n,k . The only difference is that instead of just getting x, F i gets a pair of the form (x, f (x) ⊕ b i ) where b i is uniformly random.
But that b i is uniformly random implies that f (x) ⊕ b i is also uniformly random, so it could be omitted. The splitting attack elaborated in Appendix A implies that the converse of Lemma 14 is false; we prove this in Lemma 39. Proposition 36 shows that this property carries over to uncloneable decryptors: there could exist uncloneable bit decryptors which are secure, but trying to extend them to arbitrary message lengths by encrypting bit by bit is not secure. Fortunately, strengthening the security of copy protection to resemble LoR security rather than IND security alleviates this problem, as we shall see in Proposition 37. It is unclear whether strong copy protection implies flip detection or whether it is possible to generically transform a WEAK-QCP secure scheme to a FLIP-QCP secure scheme. We leave this as an open question.

Oracle Instantiation of FLIP-QCP Secure Schemes

In Proposition 51 we establish that FLIP-QCP (resp. FLIP-QCP 1,1 ) secure copy protection schemes exist for any unlearnable BBF relative to a quantum (resp. classical) oracle. The author of [Aar09] presents a weakly secure copy protection scheme instantiated relative to a quantum oracle. This scheme is unique in the sense that it supports an arbitrary polynomial amount of copies. The authors of [ALL + 21] manage to replace this oracle with a classical oracle. However, the resulting construction is not known to support more than a single copy. In Appendix C we prove that the [Aar09] and [ALL + 21] schemes are in fact FLIP-QCP and FLIP-QCP 1,1 secure respectively, which follows from the following observations:
• both schemes satisfy flip detection security given that the protected function class exhibits a property we call flip unlearnability, and
• flip unlearnability is actually equivalent to unlearnability.
In Section 5 we use FLIP-QCP n,k secure copy-protection schemes to obtain UD-qCCA2 n,k secure uncloneable decryptors.
Combined with the oracle instantiation this implies that UD-qCCA2 secure uncloneable decryptors exist relative to a quantum oracle, and UD-qCCA2 1,1 secure uncloneable decryptors exist relative to a classical oracle. In contrast, the best security achieved by previously known constructions is UD-qCPA 1,1 . Copy Protection with Random Input Oracles In the security proofs in Section 5 it is often convenient to modify Definition 10 and Definition 13 to allow the adversary to evaluate the copy protected function at random points. In this section, we argue that this modification does not imply a stronger notion of security. Intuitively, this holds since the pirate can sample sufficiently many random strings and use her copy protected program to evaluate the function at these points. We now formalize this intuition. Definition 15 (Random Input Oracle (RIA)). For any function f with domain D let R(f ) be the oracle which takes no input and outputs (r, f (r)) where r ← D. Definition 16. The WEAK-QCP-RIA n,k (resp. FLIP-QCP-RIA n,k ) game is defined as the WEAK-QCP n,k (resp. FLIP-QCP n,k ) game with the modification that each freeloader F i is given access to R(f ) (where f is the function sampled by C). Lemma 17. If QCP is WEAK-QCP n,k (resp. FLIP-QCP n,k ) secure then it is also WEAK-QCP-RIA n,k (resp. FLIP-QCP-RIA n,k ) secure. Proof. We prove the lemma for WEAK-QCP, though the proof for FLIP-QCP is identical. Let A = (P, F 1 , . . . , F n+k ) satisfy that E[WEAK-QCP-RIA n,k QCP (A, λ)] = µ; we construct A' = (P', F' 1 , . . . , F' n+k ) such that E[WEAK-QCP n,k QCP (A', λ)] = µ. Since the freeloaders F i are polynomial, there is a polynomial bound q on the accumulated number of queries they make to R(f ). The pirate P' samples r 1 , . . . , r q uniformly at random and creates the list L = ((r i , QCP.Eval(ρ, r i ))) q i=1 . She then simulates the pirate P(ρ ⊗n ) to obtain the states σ 1 , . . . , σ n+k . Finally, she gives each freeloader F' i the state L ⊗ σ i .
The freeloader F' i simulates F i (σ i ). Whenever F i queries R(f ), F' i responds with a previously unused pair from L. She resumes the simulation until obtaining an output b which she outputs herself. The view of A is exactly the same in the WEAK-QCP-RIA n,k game and in the simulation above, so the output of both interactions is identically distributed. Since the output of F' i is simply the same as the output of F i , it follows that WEAK-QCP-RIA n,k QCP BBF (A, λ) and WEAK-QCP n,k QCP BBF (A', λ) distribute identically. Syntax and Security of Uncloneable Decryptors An uncloneable decryptors encryption scheme is a symmetric encryption scheme that allows the owner of the secret key to derive quantum states we call decryptors. A decryptor could be used to decrypt messages but is infeasible to clone, even given access to polynomially many decryptors derived from the same secret key. The syntax of this primitive is a slight generalization of the notion of single decryptor schemes introduced in [GZ20] and further discussed in [CLLZ21]. Definition 18 (Uncloneable Decryptors Scheme). An uncloneable decryptors scheme UD is comprised of the following five QPT procedures: • sk ← KeyGen(1 λ ), • ρ ← DecGen(sk), • c ← Enc sk (m), • m ← Dec sk (c), and • m ← Dec(ρ, c), where Dec is deterministic, and Enc and KeyGen implement classical functions. UD is correct if for any m it holds that Pr[ sk ← KeyGen(1 λ ), ρ ← DecGen(sk), c ← Enc(sk, m) : Dec(ρ, c) = Dec sk (c) = m ] = 1. Our definition slightly differs from [GZ20]'s as it has an explicit procedure for producing decryptors. It is also different from [CLLZ21]'s definition in two ways: it explicates the classical decryption circuit (which uses the secret key rather than the quantum decryptor) and assumes that the underlying encryption scheme is symmetric (whereas [CLLZ21] assume the underlying encryption is asymmetric).
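The five-procedure syntax of Definition 18 can be illustrated with a purely classical mock-up. In the sketch below (Python; HMAC-SHA256 stands in for a keyed function, and the "decryptor" is simply a copy of the secret key, so it is trivially cloneable), only the syntax and the correctness condition Dec(ρ, c) = Dec sk (c) = m are demonstrated; none of the quantum uncloneability is captured:

```python
import os, hmac, hashlib

# Classical mock-up of the five procedures of Definition 18.
# The "decryptor" is a copy of the key, so it illustrates syntax
# and correctness only, not uncloneability.

def keygen(lam=16):
    return os.urandom(lam)              # sk <- KeyGen(1^lam)

def decgen(sk):
    return sk                           # rho <- DecGen(sk): stand-in for a quantum state

def _mask(sk, r, n):
    # keyed mask derived from sk and randomness r (messages up to 32 bytes)
    return hmac.new(sk, r, hashlib.sha256).digest()[:n]

def enc(sk, m):
    r = os.urandom(16)
    return (r, bytes(a ^ b for a, b in zip(m, _mask(sk, r, len(m)))))

def dec_sk(sk, c):                      # classical decryption Dec_sk(c)
    r, body = c
    return bytes(a ^ b for a, b in zip(body, _mask(sk, r, len(body))))

def dec(rho, c):                        # decryption using the "decryptor": Dec(rho, c)
    return dec_sk(rho, c)

sk = keygen()
rho = decgen(sk)
c = enc(sk, b"msg")
assert dec(rho, c) == dec_sk(sk, c) == b"msg"   # correctness condition
```

Removing decgen and dec from this mock-up leaves exactly the underlying symmetric scheme discussed in Section 4.2.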
In Section 4.1 we introduce notions of security suitable for uncloneable decryptors, which generalize the notions of security introduced in previous works. By removing the DecGen and Dec procedures from the syntax of an uncloneable decryptors scheme, one obtains the underlying symmetric encryption scheme. In Section 4.2 we formally introduce the underlying scheme and show that the security of the uncloneable decryptors scheme implies the security of the underlying encryption scheme. In Section 4.3 we show that even the weakest form of security is unattainable against an unbounded adversary with access to an arbitrary polynomial number of decryptors. Security Notions Here we provide the notions of UD-qCPA, UD-qCCA1 and UD-qCCA2 security for uncloneable decryptors. These are adaptations of the respective notions for symmetric encryption schemes (recall Definition 3). These definitions combine the security notions of quantum copy protection (see Definition 10) and symmetric encryption (see Definition 3). We retain the form of the security game of quantum copy protection, where a pirate P receives n copies of a program from the challenger C and creates n + k quantum states. These states are then given to distinguishers D 1 , . . . , D n+k (which replace the freeloaders F 1 , . . . , F n+k ). The challenger then plays against each distinguisher a game similar to the indistinguishability game for symmetric encryption schemes. Namely, D i needs to distinguish between ciphertexts of two plaintexts of her choosing. Like in symmetric encryption, we make our security notions progressively stronger by affording the adversary more forms of oracle access. Definition 19 (UD security). Let UD be an uncloneable decryptors scheme, and let A = (P, D 1 , . . . , D n+k ) be procedures; the UD n,k UD (A, λ) game is defined as follows: 1. C generates sk ← KeyGen(1 λ ) and n decryptors ρ 1 ← DecGen(sk), . . . , ρ n ← DecGen(sk) and samples b 1 , . . . , b n+k ← {0, 1}, 2. P(ρ 1 , . . .
, ρ n ) creates states σ 1 , . . . , σ n+k , and n + k pairs of plaintexts (m i 0 , m i 1 ) with |m i 0 | = |m i 1 |, 3. For i = 1, . . . , n + k, C calculates c i ← Enc sk (m i b i ), 4. D i (σ i , c i ) outputs a bit β i , and the output of the game is the number of indices i for which b i = β i . We say that UD is UD n,k secure if for any QPT A = (P, D 1 , . . . , D n+k ) it holds that E[UD n,k UD (A, λ)] ≤ n + k/2 + negl(λ). We say that UD is UD secure if it is UD n,k secure for any n, k ∈ poly(λ). This notion is extended by augmenting the distinguishers with oracle access. For convenience, we marked the oracles added to Definition 20 with a red underline. Removing all underlined expressions exactly recovers Definition 19. Definition 20 (UD-qX security). Let UD be an uncloneable decryptors scheme, and let A = (P, D 1 , . . . , D n+k ) be procedures; for any two oracles O 1 , O 2 define the UD-qX n,k O 1 ,O 2 (A, λ) game: 1. C generates sk ← KeyGen(1 λ ) and n decryptors ρ 1 ← DecGen(sk), . . . , ρ n ← DecGen(sk) and samples b 1 , . . . , b n+k ← {0, 1}, 2. P |Enc sk ,|O 1 (ρ 1 , . . . , ρ n ) creates states σ 1 , . . . , σ n+k , and n + k pairs of plaintexts (m i 0 , m i 1 ) with |m i 0 | = |m i 1 |, 3. For i = 1, . . . , n + k, C calculates c i ← Enc sk (m i b i ), 4. D |Enc sk ,|O 2 i (σ i , c i ) outputs a bit β i , and the output of the game is the number of indices i for which b i = β i . By explicating the oracles O 1 , O 2 we define the games which will define our security notions. Let ⊥ designate a trivial oracle which always responds with ⊥. • UD-qCPA n,k = UD-qX n,k ⊥,⊥ , • UD-qCCA1 n,k = UD-qX n,k Dec sk ,⊥ , • UD-qCCA2 n,k = UD-qX n,k Dec sk ,Dec sk \c i where c i is the output of the challenge query given to D i in step 3. We say that UD is UD-qCPA n,k secure if for any QPT A = (P, D 1 , . . . , D n+k ) it holds that E[UD-qCPA n,k UD (A, λ)] ≤ n + k/2 + negl(λ). We say that UD is UD-qCPA secure if it is UD-qCPA n,k secure for any n, k ∈ poly(λ).
UD-qCCA1 n,k , UD-qCCA1, UD-qCCA2 n,k and UD-qCCA2 security are defined similarly. Our constructions below are formed by first creating a scheme that only supports encrypting a single bit (namely uncloneable bit decryptors) and then extending it to messages of unrestricted (polynomial) length. The security game takes on a simpler form in the bit encryption setting, as there are only two possible plaintexts. Definition 21 (UD1-qX security). The UD1-qX n,k games are defined almost exactly like the UD-qX n,k games of Definition 19 and Definition 20, but with the following modifications: • in step 2, P does not create any plaintexts, and • in step 3, c i ← Enc sk (b i ). We say that UD is UD1-qX n,k secure if for any QPT A = (P, D 1 , . . . , D n+k ) it holds that E[UD1-qX n,k UD (A, λ)] ≤ n + k/2 + negl(λ). We say that UD is UD1-qX secure if it is UD1-qX n,k secure for any n, k ∈ poly(λ). The Underlying Symmetric Encryption Scheme An uncloneable decryptors scheme is a symmetric encryption scheme with added functionality. When removing the extra functions, we are left with a run-of-the-mill encryption scheme which we call the underlying scheme. Definition 22 (Underlying Scheme). Let UD be an uncloneable decryptors encryption scheme; the underlying encryption scheme is SE UD = (UD.KeyGen, UD.Enc, UD.Dec). Notably, UD-qX 1,k security for any k implies that the underlying scheme admits IND-qX security. Proposition 23. If UD is UD 1,k (resp. UD-qCPA 1,k , UD-qCCA1 1,k and UD-qCCA2 1,k ) secure (see Definitions 19 and 20) for any k ≥ 1 then SE UD is IND (resp. IND-qCPA, IND-qCCA1 and IND-qCCA2) secure (see Definition 3). Proof. We prove the result for k = 1; our argument generalizes straightforwardly to any k. The proof idea is straightforward: we need to win two distinguishing games; we win one with certainty using the decryptor, and we use the IND-qX adversary for the second one.
If the IND-qX adversary wins with probability 1/2 + ε, it follows that the expected number of correct distinguishers is 1 + 1/2 + ε, and it follows from the UD-qX security that ε = negl(λ). Let A = (A 1 , A 2 ) be a QPT adversary for the IND-qX SE UD game; we describe an adversary A' = (P, D 1 , D 2 ) for the UD-qX 1,1 UD game. We note that for any security notion IND-qX, the corresponding notion UD-qX satisfies that P has exactly the same oracle access as A 1 and D 2 has exactly the same oracle access as A 2 . After obtaining the decryptor ρ from C, P simulates A 1 , using her own oracle access to answer oracle calls, until obtaining the plaintexts m 0 , m 1 and auxiliary data σ. She gives ρ to D 1 and σ to D 2 . She then sends the challenger the following pairs of messages (m 1 0 = 0, m 1 1 = 1), (m 2 0 = m 0 , m 2 1 = m 1 ) (that is, D 1 needs to distinguish the ciphers of 0 and 1 while D 2 needs to distinguish the ciphers of m 0 and m 1 ). The distinguisher D 1 outputs UD.Dec(ρ, c 1 ). The distinguisher D 2 simulates A 2 (σ), using her own oracle access to answer any queries, and outputs the result. Let β 1 , β 2 be the bits sampled by C in the challenge query, and b 1 , b 2 be the outputs of D 1 , D 2 respectively; then Pr[b 1 = β 1 ] + Pr[b 2 = β 2 ] = E[UD-qX 1,1 UD (A', λ)] ≤ 1 + 1/2 + negl(λ) where the inequality is due to the UD-qX 1,1 security of UD. From the correctness of UD we have that Pr[b 1 = β 1 ] = 1. From the construction of D 2 , we have that Pr[b 2 = β 2 ] = Pr[IND-qX SE UD (A, λ) = 1]. Plugging these probabilities into the inequality above we obtain (after some rearrangement) that Pr[IND-qX SE UD (A, λ) = 1] ≤ 1/2 + negl(λ) as needed. Remark 24. The same argument can be used almost verbatim to show for any k ≥ 1 that UD1-qX 1,k security implies the corresponding IND-qX security for bit encryption schemes.
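The counting step at the heart of the proof of Proposition 23 can be checked mechanically. In the sketch below (Python; the function name is ours, not the paper's), D 1 decrypts the challenge with the genuine decryptor and is always correct, while D 2 inherits the IND-qX adversary's success probability 1/2 + ε, so any constant ε would violate the bound n + k/2 = 3/2:

```python
from fractions import Fraction

# Expected number of correct distinguishers in the UD-qX^{1,1} game
# built from an IND-qX adversary with advantage eps (Proposition 23).
def expected_correct(eps):
    p1 = Fraction(1)             # D1: wins with certainty (correctness of UD)
    p2 = Fraction(1, 2) + eps    # D2: success probability of the IND-qX adversary
    return p1 + p2

# With eps = 0 the bound n + k/2 = 3/2 is met exactly:
assert expected_correct(Fraction(0)) == Fraction(3, 2)
# Any constant advantage exceeds the bound, contradicting UD-qX security:
assert expected_correct(Fraction(1, 10)) > Fraction(3, 2)
```

Exact rational arithmetic is used so the comparison against the 3/2 bound is not subject to floating-point error.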
Impossibility of Unconditional Security In this section, we discuss the impossibility of unconditional security against an adversary that can request polynomially many decryptors. The authors of [GZ20] consider a scenario where the adversary is given access to polynomially many ciphers of random plaintexts (without even being given the plaintexts themselves). They prove that unconditional security is not obtainable against such adversaries. This implies, in particular, that no scheme is unconditionally UD-qCPA 1,k secure for any k. In this section, we prove that if we allow an arbitrary polynomial number of decryptors, UD security is also impossible. That is, for large enough polynomial n and any polynomial k there exists an unbounded UD n,k adversary with a non-negligible advantage. Our result is incomparable with the impossibility result of [GZ20]: the adversary we consider is stronger in the sense that she has access to many decryptors but is weaker in the sense that she does not have access to any ciphers. The proof is a straightforward application of a technique called shadow tomography, first considered in [Aar18]. Consider a family E of two-outcome measurements acting on D-dimensional states. Let ε > 0 be some error tolerance. Suppose one has access to an unrestricted number of copies of some state ρ. The shadow tomography task is to compute for each E ∈ E an estimation s E such that ∀E ∈ E, |s E − Tr(Eρ)| ≤ ε. The following theorem bounds the number of copies of ρ required to achieve this task. Theorem 25 ([Aar18], Theorem 2). The shadow tomography task could be solved with success probability 1 − δ using Õ(log(1/δ) · log 4 |E| · log D · ε −4 ) copies of ρ. Theorem 25 is used in [Aar18, Theorem 7] to prove the impossibility of unconditionally secure quantum money. Our proof is an adaptation of their argument to uncloneable decryptors. Theorem 26. For any uncloneable decryptors scheme UD there exists n = poly(λ) such that for any k = poly(λ) there exists an unbounded adversary A with E[UD n,k UD (A, λ)] = n + k − negl(λ). Proof. Let D be the dimension of the decryptors ρ produced by DecGen(sk).
Note that since all procedures of UD are QPT, it follows that ρ is a state on poly(λ) many qubits. That is, D = 2 poly(λ) . Fix the plaintexts m 0 = 0 and m 1 = 1, and for any cipher c let E c be the two-outcome measurement which applies Dec(·, c) and accepts if and only if the outcome is m 0 ; let E = {E c } c . It also follows that |c| = poly(λ), whereby |E| is exponential in λ. It follows from Theorem 25 that there exists n = poly(λ) such that given ρ ⊗n it is possible to calculate estimates s Ec such that with probability 1 − 2 −λ = 1 − negl(λ), ∀c ∈ {0, 1} p(λ+|m 0 |) , |s Ec − Pr[Dec(ρ, c) = m 0 ]| ≤ 1/4. The pirate P performs shadow tomography on the set E of circuits defined above, using the state ρ ⊗n obtained from the challenger to obtain estimations s Ec , which they transmit to the distinguishers D 1 , . . . , D n+k . Given the challenge cipher c i , the distinguisher D i outputs m 0 if s Ec i ≥ 1/2 and m 1 otherwise; by the correctness of UD, Pr[Dec(ρ, c i ) = m 0 ] is 1 if c i encrypts m 0 and 0 if it encrypts m 1 , so whenever the tomography succeeds every distinguisher answers correctly. It follows that E[UD n,k UD (A, λ)] = (1 − negl(λ))(n + k) = n + k − negl(λ) as needed. Remark 27. Note that the proof above uses the plaintext m b = b. This implies that Theorem 26 holds also for UD1 security. Extendability In order to construct uncloneable decryptors, we first construct uncloneable bit decryptors and then extend them. One way to do so is by the following "bit-by-bit" transformation. Definition 28 (Extended Scheme). Let UD be an uncloneable decryptors scheme which supports messages of length 1; define the extension of UD to be the following scheme UD ext : • UD ext .KeyGen ≡ UD.KeyGen, • UD ext .DecGen ≡ UD.DecGen, • UD ext .Enc sk (m) outputs (c 1 , . . . , c |m| ) where c i ← UD.Enc sk (m i ), • UD ext .Dec sk ((c 1 , . . . , c ℓ )) outputs m 1 . . . m ℓ where m i ← UD.Dec sk (c i ), • UD ext .Dec(ρ, (c 1 , . . . , c ℓ )) outputs m 1 . . . m ℓ where m i ← UD.Dec(ρ, c i ). Definition 29 (UD-qX n,k extendability). Let UD be an uncloneable decryptors encryption scheme; we say that UD is UD-qX n,k extendable if UD ext is UD-qX n,k secure (see Definition 19 and Definition 20). We say that UD is UD-qX extendable if it is UD-qX n,k extendable for any n, k = poly(λ). It is trivial to check that UD-qX n,k extendability implies UD1-qX n,k security. Unfortunately, the converse is not generally true.
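The bit-by-bit transformation of Definition 28 can be sketched in classical miniature. In the sketch below (Python; HMAC-SHA256 stands in for a keyed function, so the mock bit scheme has the shape of Construction 1's classical core but captures no uncloneability), each bit of the plaintext is encrypted independently under the same key:

```python
import os, hmac, hashlib

# Mock bit scheme: a bit b is encrypted as (r, b XOR f_sk(r)) for
# uniformly random r, where f_sk is a keyed function (HMAC stand-in).
def f(sk, r):
    return hmac.new(sk, r, hashlib.sha256).digest()[0] & 1

def enc_bit(sk, b):
    r = os.urandom(16)
    return (r, b ^ f(sk, r))

def dec_bit(sk, c):
    r, masked = c
    return masked ^ f(sk, r)

# The extension of Definition 28: encrypt and decrypt bit by bit.
def enc_ext(sk, bits):
    return [enc_bit(sk, b) for b in bits]

def dec_ext(sk, ciphers):
    return [dec_bit(sk, c) for c in ciphers]

sk = os.urandom(16)
m = [1, 0, 1, 1]
assert dec_ext(sk, enc_ext(sk, m)) == m   # correctness of the extension
```

Correctness of the extension is immediate; Proposition 36 shows that security of the bit scheme need not survive this transformation.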
Indeed, we will see in Construction 2 that given a WEAK-QCP secure copy protection scheme for any BBF, one can construct an uncloneable decryptors scheme which is UD1-qCCA2 secure but which is not even UD 1,1 extendable. In other words, the transformation described by Definition 28 does not afford a generic method to extend a length restricted scheme to an unrestricted scheme. Nevertheless, the notion of extendability will be helpful in the following constructions. Constructions Having established the relevant definitions and security notions in Section 4, we now turn to present several constructions, as well as transformations that strengthen their security. UD1-qCPA Security We first explain how to obtain UD1-qCPA n,k security from a WEAK-QCP n,k secure copy protection scheme for any BBF. This construction is inspired by the standard construction of symmetric bit encryption from pseudo-random functions (see e.g. [Gol04, Construction 5.3.9]). In this construction, the key of the PRF is used as the key of the encryption scheme, and a bit b is encrypted by sampling a uniformly random string r and outputting the pair (r, PRF sk (r) ⊕ b). We follow a similar approach, but replace the PRF with a WEAK-QCP secure copy protection for a BBF. Construction 1 (UD1-qCCA1 Secure Scheme from WEAK-QCP secure BBF). Let BBF be a balanced binary function (see Definition 11) with input length ℓ. Let QCP BBF be a copy protection scheme for BBF. Define the scheme 1UD cpa : • 1UD cpa .KeyGen ≡ BBF.Sample. • 1UD cpa .DecGen ≡ QCP.Protect. • 1UD cpa .Enc sk (b) → (r, b ⊕ BBF.Eval sk (r)) where r ← {0, 1} ℓ . • 1UD cpa .Dec sk ((r, b̃)) → b̃ ⊕ BBF.Eval sk (r). • 1UD cpa .Dec(ρ, (r, b̃)) → b̃ ⊕ QCP BBF .Eval(ρ, r). The correctness of Construction 1 follows from the correctness of BBF and QCP BBF . Remark 30.
Note that this construction is manifestly not UD-qCCA2 secure: given the challenge cipher (r, m ⊕ f sk (r)), a distinguisher could make a decryption query on (r, 1 ⊕ m ⊕ f sk (r)) to obtain 1 ⊕ m. Proposition 31. If QCP BBF is WEAK-QCP n,k secure (see Definition 10) then 1UD cpa (see Construction 1) is UD1-qCPA n,k secure (see Definition 21). Proof. Let A = (P, D 1 , . . . , D n+k ) satisfy that E[UD1-qCPA n,k 1UD cpa (A, λ)] = µ; we construct an adversary A' = (P', F' 1 , . . . , F' n+k ) such that E[WEAK-QCP-RIA n,k QCP BBF (A', λ)] = µ. That QCP BBF is WEAK-QCP-RIA secure implies that E[WEAK-QCP-RIA n,k QCP BBF (A', λ)] ≤ n + k/2 + negl(λ), whereby the equality above would imply that µ ≤ n + k/2 + negl(λ). Upon getting ρ 1 , . . . , ρ n , the pirate P' simulates P(ρ 1 , . . . , ρ n ). When P queries Enc sk , P' queries R(BBF.Eval sk ) to obtain a pair (r, f sk (r)) and applies the unitary |m, x⟩ → |m, x ⊕ (r, f sk (r) ⊕ m)⟩ to the input of the query. When the simulation is concluded, P' obtains σ 1 , . . . , σ n+k which she gives to the freeloaders F' 1 , . . . , F' n+k . Once the freeloader F' i is given σ i from the pirate and x i from the challenger, she samples a random bit β i and invokes D i (σ i ) with the cipher (x i , β i ) to obtain output b i . She returns y i = b i ⊕ β i . Note that (x i , β i ) is a valid cipher for β i ⊕ BBF.Eval sk (x i ). Hence, if the output of D i is correct then b i = β i ⊕ BBF.Eval sk (x i ), whereby y i = b i ⊕ β i = BBF.Eval sk (x i ) as needed. Thus, the number of distinguishers who answer correctly is exactly the number of freeloaders who answer correctly. The balancedness of BBF implies that if x is uniformly random then m ⊕ BBF.Eval sk (x) is 0 with probability negligibly close to 1/2. It follows that the statistical difference between the view of D i when simulated by F' i and its view in the UD1-qCPA n,k 1UD cpa (A, λ) game is negligible, whereby the expected number of distinguishers who answer correctly is negligibly close to µ. Implications Proposition 31 and Proposition 23 together imply that the existence of a WEAK-QCP 1,1 copy protectable BBF implies the existence of a post-quantum IND-CPA secure bit encryption scheme. Such schemes are known to imply the existence of post-quantum one-way functions. This line of argument boils down to: Corollary 32.
The existence of a WEAK-QCP 1,1 secure copy protection scheme for a BBF implies the existence of post-quantum one-way functions. Consider the bit encryption scheme described at the top of Section 5.1. Examination of the analysis of its IND-CPA security (e.g. [Gol04, Proposition 5.4.12]) reveals that for the construction to be secure, it suffices that the underlying function is a weak PRF. That is, it is infeasible to distinguish it from a truly random function for an adversary with access to its value on polynomially many uniformly random points. Furthermore, it is possible to show that being a weak PRF is also a necessary condition. This leads to the following corollary, the proof thereof is deferred to Appendix E: Corollary 33. If there exists a WEAK-QCP 1,1 secure copy protection scheme for a balanced binary function BBF, then BBF is a weak PRF. UD1-qCCA1 Security The construction [Gol04, Construction 5.3.9] which we discussed before is actually IND-CCA1 secure [Gol04, Proposition 5.4.18]. Intuitively, the proof of this claim follows by noting that, to computationally bounded adversaries, the value of the function at the point used to mask the plaintext seems independent of the value of the function on all points which were required to answer previous encryption queries, as mandated by the PRF property. Hence, having access to a decryption oracle before seeing the challenge ciphertext does not benefit the adversary. In trying to carry this idea over to the context of uncloneable decryptors, one runs into a difficulty: our ability to answer decryption queries relies on our ability to evaluate the underlying BBF, which we are only able to do with the help of the copy-protected programs given by the challenger. However, simulating the pirate might modify the copy-protected programs in a way that makes them unusable.
It is tempting to try to sidestep this by means of rewinding: every time the pirate makes a query, apply the pirate in reverse to recover the copy-protected programs, use them to respond to the query, and apply the pirate forward to get back to the querying point. The problem is that the inputs to decryption queries might depend on measurement outcomes (or equivalently, if we simulate the pirate coherently, the queries might become entangled with the auxiliary qubits in which we store the measurement outcomes). This issue could be completely circumvented if we allow the WEAK-QCP adversary to have one additional program they could use to respond to queries. Following this line of argument, one can prove: Proposition 34. If QCP BBF is WEAK-QCP n+1,k secure (see Definition 10) then the scheme 1UD cpa (see Construction 1) is UD1-qCCA1 n,k secure (see Definition 21). We do not provide a formal proof as (assuming secure digital signatures) this result is superseded by the construction of the next section. UD1-qCCA2 Security Here we employ digital signatures to generically transform UD1-qCPA n,k secure decryptors into UD1-qCCA2 n,k secure decryptors. The transformation is conceptually similar to the standard transformation of IND-CPA secure symmetric encryption schemes into IND-CCA2 secure symmetric encryption schemes by means of message authentication codes (see e.g. [Gol04, Proposition 5.4.20]). Informally, message authentication codes are a way to produce tags for strings such that anyone holding a secret key can tag messages as well as verify that other messages have been tagged with the same key, but such that it is infeasible to create valid tags without knowing the secret key. By modifying the scheme to tag ciphers at encryption, and only decrypting properly tagged messages (returning ⊥ otherwise), we render the decryption oracles useless, since creating valid ciphers other than ones obtained from encryption queries becomes infeasible.
The encrypt-then-MAC paradigm described above is unsuitable for our needs since the adversary has access to a decryptor. Since the decryptor should allow decrypting messages (and in particular, verifying tags), it should somehow contain the authentication key. However, simply affording this key in the clear would allow the adversary to tag messages themselves. We circumvent this by using a digital signature (see Section 2.4) rather than a message authentication code, which allows us to separate tagging (henceforth called signing) from verification. In order to answer decryption queries, we need to record encryption queries and their results. This is impossible to do for queries in superposition in general but becomes possible when we assume that the scheme is decoupled. That is, we assume that the ciphers of 0 and 1 created using the same randomness appear independent (at least to a computationally bounded adversary). Fortunately, it is easy to transform any scheme to a decoupled scheme while retaining its security, which allows us to assume without loss that the scheme we wish to transform is already decoupled. In Appendix D we formally define and show how to decouple uncloneable bit decryptors (which is sufficient for our applications) and sketch a decoupling procedure for uncloneable decryptors in general. Construction 2 (UD1-qCCA2 Uncloneable Decryptors from UD1-qCPA Uncloneable Decryptors and sEUF-CMA Signatures). Let UD dec be a decoupled (see Definition 52) uncloneable decryptors scheme, and let DS be a deterministic digital signature scheme (see Definition 5). Define the scheme 1UD cca2 as follows: • 1UD cca2 .KeyGen(1 λ ) outputs (sk UD , sk DS , pk DS ) where: – sk UD ← UD dec .KeyGen(1 λ ), and – (sk DS , pk DS ) ← DS.KeyGen(1 λ ). • 1UD cca2 .DecGen (sk UD ,sk DS ,pk DS ) outputs ρ ⊗ pk DS where ρ ← UD dec .DecGen(sk UD ). • 1UD cca2 .Enc (sk UD ,sk DS ,pk DS ) (b) outputs (c, s) where c ← UD dec .Enc sk UD (b) and s ← DS.Sign sk DS (c).
• 1UD cca2 .Dec (sk UD ,sk DS ,pk DS ) ((c, s)) outputs UD dec .Dec sk UD (c) if DS.Ver pk DS (c, s) = 1 and ⊥ otherwise, • 1UD cca2 .Dec(ρ ⊗ pk DS , (c, s)) outputs UD dec .Dec(ρ, c) if DS.Ver pk DS (c, s) = 1 and ⊥ otherwise. The correctness of 1UD cca2 of Construction 2 follows from the correctness of UD dec and DS. Proposition 35. If UD dec is UD1-qCPA n,k secure (see Definition 20) and DS is sEUF-CMA secure (see Definition 6) then 1UD cca2 from Construction 2 is UD1-qCCA2 n,k secure (see Definition 20). Proof. Let A = (P, D 1 , . . . , D n+k ) be an adversary for the UD1-qCCA2 n,k 1UD cca2 game. We construct an adversary A' = (P', D' 1 , . . . , D' n+k ) for the UD1-qCPA n,k UD dec game such that |E[UD1-qCCA2 n,k 1UD cca2 (A, λ)] − E[UD1-qCPA n,k UD dec (A', λ)]| < negl(λ); the UD1-qCCA2 n,k security of 1UD cca2 will then follow from the UD1-qCPA n,k security of UD dec . In the following, we describe how A' simulates answers to decryption oracle calls made by A. We often refer to the decryption oracle as the actual oracle and to the responses made by A' to such calls as the simulated oracle. After being given ρ ⊗n from C, the pirate P' : • Generates (sk DS , pk DS ) ← DS.KeyGen(1 λ ). • Simulates P((ρ ⊗ pk DS ) ⊗n ), responding to encryption queries by querying her own encryption oracle, signing the resulting cipher with sk DS , and recording the resulting signature-cipher-plaintext triplets in a list L P . Each distinguisher D' i simulates D i (σ i ⊗ pk DS ) on the cipher c i given to her by C. D' i answers encryption calls the same way P' did, storing signature-cipher-plaintext triplets into a list L i which we assume is initially a copy of L P . She answers decryption calls by applying the unitary |(c, s), x⟩ → |(c, s), x ⊕ f (c, s)⟩ where f (c, s) = b if (c, s, b) ∈ L i and f (c, s) = ⊥ otherwise. Note that f is well defined since it is impossible that c is a cipher of both 0 and 1. When the simulation ends, D' i outputs the output of D i . We argue that the statistical difference between the views of A in the simulation described above and in the original UD1-qCCA2 n,k 1UD cca2 game is negl(λ).
Intuitively, any statistical difference between the actual game and the simulation must result from differences in the outputs of decryption queries (since the actual and simulated encryption oracles are identical). Before the first decryption oracle call, everything distributes identically. If all queries made by A to decryption oracles satisfy that the response of the simulated oracle is negligibly close to the response of the actual oracle, then the state at the end of the simulation is negligibly close to the state of the actual game. In other words, if the actual and simulated views of A by the end of the game are not statistically close, then at some point A must have made a decryption query on an input whose result on the actual oracle is significantly different than on the simulated oracle. This can only happen if the input to the decryption query is significantly supported on pairs of the form (c, s) such that UD dec .Dec sk UD (c) = b and DS.Ver pk DS (c, s) = 1 yet (c, s, b) / ∈ L i . We use this fact to extract a signed document that was not a result of a signature query, thereby violating the sEUF-CMA security of DS and arriving at a contradiction. More precisely, for any state ρ which is of the dimension of an input to the decryption oracle, let S j (ρ) be the probability that, upon measuring the first register of ρ, the outcome would be of the form (c, s) where (c, s, * ) / ∈ L i at the time the jth query was made, yet DS.Ver pk DS (c, s) = 1. Let S j be the expected value of S j (ρ j ) where ρ j is the input to the jth decryption query made by A, and let S = max{S 1 , . . . , S q } where q is the maximal number of decryption queries. If S < negl(λ) then the output of any decryption call made by A in the simulation is negligibly close to the output expected from an actual decryption oracle. Since the output of D' i is exactly the output of D i , it follows that |E[UD1-qCCA2 n,k 1UD cca2 (A, λ)] − E[UD1-qCPA n,k UD dec (A', λ)]| < negl(λ).
We show that S < negl(λ) by constructing an adversary B for the sEUF-CMA DS game (recall Definition 6) such that Pr[sEUF-CMA DS (B, λ) = 1] ≥ S/poly(λ), whereby it will follow from the sEUF-CMA security of DS that S < negl(λ). Recall that the adversary B has access to the oracle Sign sk DS as well as to the public key pk DS . The adversary B first chooses two random positive integers j ≤ n + k and u ≤ max{Q i } i=1,...,n+k where Q i is the maximal number of decryption queries to be made by D i . B then simulates the game played by the adversary A' defined above, with the following modifications: • P' does not generate sk DS , pk DS , but is rather given pk DS from B (and does not know sk DS ). • All invocations of DS.Sign sk DS are replaced with oracle calls. B runs the simulation, responding to signature calls made by A' by querying the signing oracle and recording the message-signature pair, until D j makes her uth decryption query. Instead of answering the query, B measures the first register of the input state and transmits to C the input-output list of all queries it has made, and the output of the last measurement (if the simulation of D j finishes before u queries are made, B concedes the game and the outcome is 0). By hypothesis, at least one of the decryption queries made by the distinguishers during the simulation has the property that measuring its first register will yield, with probability S, a valid signed message which was not the result of a signature query made by B. The probability that B measured such a state is at least 1/(ju). Recall that j ≤ n + k = poly(λ), and that u ≤ q, where q is bounded by the number of oracle queries made by a QPT procedure, whereby q ≤ poly(λ). It follows that Pr[sEUF-CMA DS (B, λ) = 1] ≥ S/poly(λ) as needed. UD-qCPA Security In this section, we show how to transform a UD-qCPA n,k extendable scheme into a UD-qCCA1 n,k secure scheme using digital signatures.
In order to obtain a UD-qCPA n,k secure uncloneable decryptors scheme, it suffices to provide UD-qCPA n,k extendable uncloneable bit decryptors (recall Definition 28). One might hope that the scheme 1UD cpa from Construction 1 is already UD-qCPA n,k extendable. Unfortunately, this is not the case. As we will soon see, a poor choice of a copy protection scheme, even a WEAK-QCP secure one, could result in a scheme that is not even UD 1,1 extendable. Worse yet, by applying the transformation of Construction 2 to this scheme, we obtain a scheme which is UD1-qCCA2 secure but not UD 1,1 extendable. To make things even worse, the scheme actually fails to be secure even when limited to plaintexts of length 2! Proposition 36. Assume there exists a WEAK-QCP n,k (resp. WEAK-QCP) secure copy protection scheme for some BBF. Then there exists a UD1-qCPA n,k (resp. UD1-qCPA) secure uncloneable decryption scheme which is not UD 1,1 extendable even when limited to plaintexts of length 2. Proof. The splitting attack described in Section 3.4 shows that the existence of a WEAK-QCP n,k secure copy protection scheme for some BBF implies the existence of a scheme which is also WEAK-QCP n,k secure, but with the property that each protected copy ρ could be split into two states ρ 0 , ρ 1 such that ρ b could be used to evaluate the underlying BBF on inputs starting with b. Call that scheme QCP. By using this scheme to instantiate Construction 1 and then applying the transformation of Construction 2 to the result we obtain a scheme which is UD1-qCCA2 n,k secure due to Proposition 31 and Proposition 35. Let UD be the scheme obtained by extending this scheme to two-bit messages as described in Definition 28. We claim that this scheme is not UD 1,1 secure.
To see this, consider the following adversary A = (P, D 0 , D 1 ) to a version of the UD 1,1 UD game, modified so that the challenge plaintexts must be of length 2 (note that we unusually named the distinguishers D 0 and D 1 rather than D 1 and D 2 for ease of notation):
• After being given ρ from C, P splits it into ρ 0 and ρ 1 , and gives ρ b to D b .
• D b makes a challenge query on the plaintexts m β = ββ for β = 0, 1. Recall that the ciphertext given to D b is of the form (c 1 , c 2 ) where c i = (r i , β i ⊕ BBF.Eval sk (r i ), σ), though σ is currently irrelevant. Divide into cases:
– If r i starts with b for some i, then D b uses ρ b to calculate BBF.Eval sk (r i ), XORs it with the second component of c i to recover β i , and outputs β i .
– Else, D b outputs a uniformly random bit.
Note that since r i is uniformly random, the first case happens with probability 3/4, and in this case D b outputs the correct answer with certainty. The second case has probability 1/4, and then D b responds correctly with probability 1/2. All in all, the probability that D b answers correctly is 3/4 + 1/4 · 1/2 = 7/8, so the expected number of distinguishers which answer correctly is 2 · 7/8 = 1 + 1/2 + 1/4, so that A has a constant advantage of 1/4.
The good news, however, is that for 1UD cpa to be UD-qCPA n,k extendable it suffices to require that the underlying copy protection scheme is FLIP-QCP n,k secure (recall Definition 13).
Proposition 37. Let 1UD cpa be obtained from instantiating Construction 1 with a FLIP-QCP n,k (resp. FLIP-QCP) secure copy protection scheme (see Definition 13); then 1UD cpa is UD-qCPA n,k (resp. UD-qCPA) extendable (see Definition 28).
Proof. Let QCP be the copy protection scheme underlying 1UD cpa , and let UD cpa be the result of applying the transformation of Definition 28 to 1UD cpa . Recall that by Lemma 17 and the FLIP-QCP n,k security of QCP it follows that QCP is also FLIP-QCP-RIA n,k secure (see Definition 16). Let A = (P, D 1 , . . .
, D n+k ) satisfy that E UD-qCPA n,k UD cpa (A) = n + k/2 + ε; we construct A' = (P', F 1 , . . . , F n+k ) such that E FLIP-QCP-RIA n,k QCP (A') = n + k/2 + ε. The FLIP-QCP-RIA n,k security of QCP BBF will then imply that ε = negl(λ), as needed.
We first note that we can assume without loss of generality that m i 1 is the bitwise complement of m i 0 for i = 1, . . . , n + k (recall that (m i 0 , m i 1 ) is the ith pair of plaintexts sent to C in Item 3 of Definition 20). This holds because, by the very definition of UD cpa , each bit is encrypted independently of the rest of the bits. Assume for example that the first bit of both messages is 0, and let m̃ b be obtained from m b by removing the first bit. Instead of sending m 0 , m 1 in the challenge query, D i could:
• query the encryption oracle on input 0 to obtain some cipher c 1 ,
• make a challenge query on m̃ 0 , m̃ 1 to obtain a cipher (c 2 , . . . , c ℓ ),
• run as before on the cipher c = (c 1 , . . . , c ℓ ).
The cipher c obtained as above is distributed exactly the same as the output of a challenge query on m 0 , m 1 . This process could be carried out in parallel for any bit that is the same in both messages, proving that an adversary restricted to complementary pairs can achieve the same advantage as a general adversary. We can therefore assume that P outputs for each distinguisher a single message m i , and that D i is given c b where c 0 is the encryption of m i and c 1 is the encryption of the bitwise complement of m i ; D i then has to guess b.
Assume A = (P, D 1 , . . . , D n+k ) is an adversary for the UD-qCPA n,k UD cpa game modified as described in the previous paragraph. We construct an adversary A' = (P', F 1 , . . . , F n+k ) for the FLIP-QCP-RIA n,k QCP game:
• after being given ρ ⊗n from C, P':
– simulates P(ρ ⊗n ),
– responds to encryption calls on plaintexts of length ℓ by querying the random input oracle ℓ times to obtain (r i , b i ) for i = 1, . . . , ℓ and applying the unitary |m, y⟩ → |m, y ⊕ ((r 1 , m 1 ⊕ b 1 ), . . .
, (r ℓ , m ℓ ⊕ b ℓ ))⟩ to the input,
– resumes the simulation until obtaining the states σ i and plaintexts m i ,
– provides σ i ⊗ m i to F i .
• after being given σ i ⊗ m i , the freeloader F i :
– queries C's oracle O f,b i ℓ times to obtain pairs (r j , β j ) for j = 1, . . . , ℓ,
– runs D i on the challenge cipher ((r 1 , β 1 ⊕ m 1 ), . . . , (r ℓ , β ℓ ⊕ m ℓ )) and outputs the bit b that D i outputs.
The view of D i is exactly the same in the modified UD-qCPA n,k game and in the simulation above, so D i (and thereby F i ) outputs b with the same probability. Hence it suffices to show that b is the correct output for D i iff it is the correct output for F i . Indeed, if C outputs pairs of the form (r j , BBF.Eval sk (r j )) then the correct output for F i is 0. In this case
((r 1 , β 1 ⊕ m 1 ), . . . , (r ℓ , β ℓ ⊕ m ℓ )) = ((r 1 , BBF.Eval sk (r 1 ) ⊕ m 1 ), . . . , (r ℓ , BBF.Eval sk (r ℓ ) ⊕ m ℓ )) = UD cpa .Enc sk (m),
so the correct output for D i is also 0. The argument for the second case is identical.
UD-qCCA2 Security
Obtaining UD-qCCA2 security requires a bit more care. The bit-by-bit approach does not prohibit basic manipulations of the ciphertext such as truncating, rearranging, or combining several ciphers. This allows the adversary to slightly manipulate the challenge cipher such that the decryption of the new cipher completely reveals which message was encrypted in the challenge phase. Such attacks demonstrate that no scheme could be UD-qCCA2 1,1 extendable. This is overcome by signing a document containing the cipher and some metadata: a unique serial number sampled uniformly at random, the plaintext length, and the encrypted bit's location within the plaintext. These data make it infeasible to truncate, rearrange, or combine ciphers to generate new ciphers. This allows us to generically transform a UD-qCPA n,k extendable scheme (such as the scheme described in Proposition 37) into a UD-qCCA2 n,k secure scheme.
Construction 3 (UD-qCCA2 Uncloneable Decryptors from UD-qCPA extendable Uncloneable Decryptors and sEUF-CMA Secure Signatures). Let 1UD cpa be an uncloneable decryptors bit encryption scheme, and let DS be a deterministic digital signature scheme.
Let UD cca2 be the following scheme:
• UD cca2 .KeyGen(1 λ ) outputs (sk UD , sk DS , pk DS ) where sk UD ← 1UD cpa .KeyGen(1 λ ) and (sk DS , pk DS ) ← DS.KeyGen(1 λ ).
• UD cca2 .DecGen(sk UD , sk DS , pk DS ) ≡ 1UD cpa .DecGen(sk UD ).
• UD cca2 .Enc (sk UD ,sk DS ,pk DS ) (m) → (r, (c 1 , s 1 ), . . . , (c |m| , s |m| )) where:
– r ← {0, 1} λ ,
– c i ← 1UD cpa .Enc sk UD (m i ), and
– s i ← DS.Sign sk DS (c i , |m|, i, r).
• UD cca2 .Dec (sk UD ,sk DS ,pk DS ) ((r, (c 1 , s 1 ), . . . , (c ℓ , s ℓ ))) outputs 1UD cpa .Dec sk UD (c 1 ) . . . 1UD cpa .Dec sk UD (c ℓ ) if DS.Ver pk DS ((c i , ℓ, i, r), s i ) = 1 for all i = 1, . . . , ℓ, and ⊥ otherwise.
Proposition 38. If 1UD cpa is UD-qCPA n,k (resp. UD-qCPA) extendable (see Definition 28) and DS is sEUF-CMA secure (see Definition 6), then the scheme UD cca2 of Construction 3 is UD-qCCA2 n,k (resp. UD-qCCA2) secure (see Definition 20).
Proof. Let UD cpa be the bit-by-bit extension of 1UD cpa to an unrestricted scheme as described in Definition 28. By hypothesis, this scheme is UD-qCPA n,k secure. Let A = (P, D 1 , . . . , D n+k ) be an adversary to the UD-qCCA2 n,k UD cca2 game. We construct an adversary A' = (P', D' 1 , . . . , D' n+k ) such that E UD-qCCA2 n,k UD cca2 (A, λ) − E UD-qCPA n,k UD cpa (A', λ) < negl(λ). The UD-qCPA n,k security of UD cpa will then imply the UD-qCCA2 n,k security of UD cca2 .
After being given ρ ⊗n from C, the pirate P':
• generates (sk DS , pk DS ) ← DS.KeyGen(1 λ ),
• simulates P((ρ ⊗ pk DS ) ⊗n ), responding to encryption queries of length ℓ by:
– using her encryption oracle ℓ times on input 0 and ℓ times on input 1 to obtain {c j,0 , c j,1 } ℓ j=1 ,
– sampling r ← {0, 1} λ ,
– computing s j,b ← DS.Sign sk DS (c j,b , ℓ, j, r),
– storing (s j,b , c j,b , b, ℓ, r) in a list L P , and
– applying the unitary |m, y⟩ → |m, y ⊕ (r, (c 1,m 1 , s 1,m 1 ), . . . , (c ℓ,m ℓ , s ℓ,m ℓ ))⟩ to the input,
and decryption queries by applying the unitary |c, t⟩ → |c, t ⊕ f (c)⟩ where f ((r, (c 1 , s 1 ), . . . , (c ℓ , s ℓ ))) = b 1 . . . b ℓ if (s i , c i , b i , ℓ, r) ∈ L P for all i = 1, . . . , ℓ, and f (c) = ⊥ otherwise (we implicitly assume that f also returns ⊥ on strings which are not of the required form.)
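Construction 3 is entirely classical apart from the underlying bit encryption. The following toy sketch is our own: HMAC stands in for the deterministic signature DS, and a key-bit XOR stands in for 1UD cpa .Enc; it only illustrates how the signed metadata (|m|, i, r) blocks truncation and rearrangement.

```python
import hmac, hashlib, secrets

def sign(sk_ds: bytes, doc: bytes) -> bytes:
    # stand-in for the deterministic DS.Sign
    return hmac.new(sk_ds, doc, hashlib.sha256).digest()

def doc_for(c_i: bytes, length: int, i: int, r: bytes) -> bytes:
    # the signed document binds the cipher bit to (|m|, i, r)
    return c_i + length.to_bytes(4, "big") + i.to_bytes(4, "big") + r

def encrypt(sk_ud: bytes, sk_ds: bytes, m: str) -> tuple:
    r = secrets.token_bytes(16)  # unique serial number / nonce
    parts = []
    for i, bit in enumerate(m):
        c_i = bytes([int(bit) ^ (sk_ud[i % len(sk_ud)] & 1)])  # toy bit encryption
        parts.append((c_i, sign(sk_ds, doc_for(c_i, len(m), i, r))))
    return (r, parts)

def decrypt(sk_ud: bytes, sk_ds: bytes, cipher: tuple):
    r, parts = cipher
    ell = len(parts)
    bits = []
    for i, (c_i, s_i) in enumerate(parts):
        if not hmac.compare_digest(s_i, sign(sk_ds, doc_for(c_i, ell, i, r))):
            return None  # ⊥: some signature does not verify
        bits.append(str(c_i[0] ^ (sk_ud[i % len(sk_ud)] & 1)))
    return "".join(bits)
```

Truncating the cipher changes the length field in every recomputed document, and moving a component changes its position field, so verification fails and decryption returns ⊥.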
Note that f is well defined since it is impossible that c j is a cipher of both 0 and 1.
• When the simulation has ended, P' outputs for each i the state σ i and a pair (m i 0 , m i 1 ) of plaintexts of length ℓ i . P' gives each D' i the state σ i ⊗ pk DS ⊗ sk DS ⊗ L P , and transmits the plaintext pairs to C.
Each distinguisher D' i simulates D i (σ i ⊗ pk DS ). D' i answers encryption and decryption calls the same way P' did, storing the signature-cipher-plaintext-length-nonce quintuplets into a list L i which we assume is initially a copy of L P (and using L i rather than L P in the definition of f ). When D i makes a challenge query on plaintexts m 0 , m 1 of length ℓ (that is, the simulation arrives at Item 3 in Definition 20), D' i forwards m 0 , m 1 to C as a challenge query to obtain a cipher of the form (c 1 , . . . , c ℓ ). She then samples r ← {0, 1} λ and responds to the challenge query made by D i with (r, (c 1 , s 1 ), . . . , (c ℓ , s ℓ )) where s j = DS.Sign sk DS (c j , ℓ, j, r).
We argue that if the inequality E UD-qCCA2 n,k UD cca2 (A, λ) − E UD-qCPA n,k UD cpa (A', λ) < negl(λ) does not hold, then it is possible to create an adversary which wins the sEUF-CMA DS game (recall Definition 6) with non-negligible probability, which is a contradiction. As explained in the proof of Proposition 35, if the actual and simulated views of A by the end of the game are not statistically close, then at some point A must have made a decryption query on an input whose result on the actual oracle is significantly different than on the simulated oracle.
This can only happen if the input to the decryption query is significantly supported on legitimate ciphers which were not generated as a response to encryption calls and are not the challenge cipher. As a consequence, by measuring a random query in the computational basis, we obtain such a cipher with non-negligible probability. The analysis is identical to the proof of Proposition 35, so we do not repeat it here. It remains to explain why such a cipher necessarily contains a fresh signed document. Assume c = (r, (c 1 , s 1 ), . . . , (c ℓ , s ℓ )) is such a ciphertext. Then for any i = 1, . . . , ℓ it holds that DS.Ver pk DS ((c i , ℓ, i, r), s i ) = 1. We assume that A' never responds to two encryption queries with ciphers having the same nonce r. In practice, this only holds up to negligible probability (if such a collision does happen, and the two ciphers happen to be encryptions of plaintexts of the same length, then A can indeed create a legitimate cipher which was not the output of any encryption query without creating any fresh signatures). Consider the following cases:
• One of the ciphers which A' output as a response to an encryption query is of the form c' = (r, (c' 1 , s' 1 ), . . . , (c' ℓ' , s' ℓ' )) (that is, c and c' have the same nonce r). Then:
– If ℓ ≠ ℓ', then it holds for any i that the message (c i , ℓ, i, r) was never signed by A', so ((c i , ℓ, i, r), s i ) is a fresh signed document.
– Else, there exists some i such that c i ≠ c' i or s i ≠ s' i , and then ((c i , ℓ, i, r), s i ) is a fresh signed document (note that here we use the fact that DS is sEUF-CMA secure and not just EUF-CMA secure: EUF-CMA security does not prevent the adversary from creating a different signature for the same message, thereby modifying query outputs to create legitimate ciphertexts).
• Else, each pair of the form ((c i , ℓ, i, r), s i ) is a fresh signed document.
Open Questions
Our treatment raises several questions. We list some of them here:
Can FLIP-QCP security be obtained from WEAK-QCP security?
We have shown that WEAK-QCP security does not imply FLIP-QCP security. Is it possible to generically transform a WEAK-QCP secure scheme into a FLIP-QCP secure scheme? Such a transformation would show that our unrestricted-length schemes could be instantiated from WEAK-QCP security.
Can our construction be made more generic?
Is there a generic transformation of UD-qCPA secure uncloneable decryptors into UD-qCCA1 secure uncloneable decryptors, perhaps utilizing additional primitives? In particular, does UD-qX security imply UD-qX extendability? A positive answer to the latter would imply that UD-qCCA2 security can be generically obtained from UD-qCPA security and sEUF-CMA digital signatures.
Semantic security notions?
Our security notions extend the indistinguishability-of-ciphertexts notions for encryption schemes. It is known that in the setting of encryption schemes, indistinguishability of ciphertexts is equivalent to a more natural notion known as semantic security, which better captures the adversary's limited ability to learn anything about a plaintext given its encryption. Our definitions would be better established if we could provide a semantic security definition and prove its equivalence to the current one.
There are also several natural directions in which the current work could be extended:
• Public key encryption: could the notion of UD-qCCA2 be extended to the public key setting? The authors of [ALL + 21] consider single decryptors in the public key setting where the adversary has no oracle access, which amounts to an asymmetric notion of UD-qCPA.
• Quantum challenge queries: our security definitions extend the IND-qX security notions of symmetric encryption afforded by [BZ13]. In these definitions, oracle queries might be in superposition, but the challenge query is classical. The works of [GHS16] and [CEV20] afford notions of qIND-qCPA and qIND-qCCA2 where the challenge phase is also in superposition.
It is interesting whether their definitions could be extended to stronger security notions for uncloneable decryptors and whether such decryptors could be constructed.
A Splitting Attack on Weak QCP
In Section 3.4 we informally described a splitting attack against WEAK-QCP 1,1 secure copy-protection schemes. In this appendix we formalize this attack. That is, we show how to transform a WEAK-QCP 1,1 secure copy-protection scheme for BBF into a new scheme which is WEAK-QCP 1,1 secure (though for a slightly different BBF) but splittable. Our attack is easily extendable to WEAK-QCP n,k security.
Let QCP BBF be a WEAK-QCP 1,1 secure copy-protection scheme for the balanced binary function BBF. We define a new binary function BBF' as follows:
• BBF'.Sample(1 λ ): invoke BBF.Sample(1 λ ) twice to obtain two functions f 0 , f 1 ; output (f 0 , f 1 ),
• BBF'.Eval((f 0 , f 1 ), b‖x): output BBF.Eval(f b , x).
It is trivial to see that BBF' is also balanced. We define the following copy-protection scheme QCP' BBF' :
• QCP'.Protect(f 0 , f 1 ): output ρ (f 0 ,f 1 ) = ρ 0 ⊗ ρ 1 where ρ b ← QCP.Protect(f b ),
• QCP'.Eval(ρ (f 0 ,f 1 ) , b‖x): output QCP.Eval(ρ b , x).
Note that this scheme affords a very simple splitting attack. The state ρ (f 0 ,f 1 ) is already given as a tensor product of the states ρ f b , each of which could be used to evaluate the function on points whose first bit is b. It remains to show that the new scheme QCP' is also WEAK-QCP 1,1 secure, from which it will follow that weak copy-protection security does not prohibit splitting attacks.
Claim 1. QCP' is a WEAK-QCP 1,1 secure copy-protection scheme for BBF'.
Proof. Let A' = (P', F' 1 , F' 2 ) be an adversary for the WEAK-QCP 1,1 QCP' game with advantage ε. We need to show that ε = negl(λ). To do so, we construct an adversary A = (P, F 1 , F 2 ) for the WEAK-QCP QCP game with the same advantage ε. It will then follow by hypothesis that ε = negl(λ).
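A toy classical rendering of BBF' and QCP' may help fix ideas (entirely our own illustration: dictionaries stand in for protected programs, which are of course quantum states in the actual scheme):

```python
import random

def sample_bbf(ell: int):
    """Toy 'BBF.Sample': a random balanced function on ell-bit strings,
    represented as an explicit truth table (dict)."""
    points = [format(i, f"0{ell}b") for i in range(2 ** ell)]
    ones = set(random.sample(points, 2 ** (ell - 1)))  # balanced: half map to 1
    return {x: int(x in ones) for x in points}

def protect(f0, f1):
    # QCP'.Protect: the 'protected state' is literally a pair of copies,
    # so it splits trivially into (rho_0, rho_1)
    return (dict(f0), dict(f1))

def eval_protected(rho, bx: str):
    # QCP'.Eval: route on the first bit, evaluate the rest
    b, x = int(bx[0]), bx[1:]
    return rho[b][x]
```

The splitting attack is visible directly: the two halves of `protect(f0, f1)` can be handed to two parties, each of which evaluates the combined function on all points starting with its bit.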
The pirate P:
• samples a random bit b, and lets ρ b be the state afforded by C,
• samples f ← BBF.Sample(1 λ ) and ρ 1−b ← QCP.Protect(f ),
• starts simulating the WEAK-QCP QCP' (A') game by affording the pirate P' the state ρ 0 ⊗ ρ 1 , until obtaining from P' the states σ 1 and σ 2 ,
• gives σ i ⊗ b to F i .
The freeloader F i (σ i ⊗ b, x) simulates F' i (σ i , b‖x) until obtaining an output y i , which she outputs herself. Note that y i = f b (x) if and only if y i = BBF'.Eval((f 0 , f 1 ), b‖x), so the probability that F i evaluates x correctly is exactly the same as the probability that F' i evaluates b‖x correctly. Moreover, by the definition of BBF' and A it follows that the view of A' in the simulation above is identical to her view in the WEAK-QCP QCP' (A') game; hence the advantage of A for the WEAK-QCP QCP game is also ε.
Finally, we use the splitting attack above to prove that the converse of Lemma 14 is false:
Lemma 39. If there exists a WEAK-QCP 1,1 secure copy-protection scheme, then there exists a copy-protection scheme which is WEAK-QCP 1,1 secure but not FLIP-QCP 1,1 secure.
Proof. Let QCP be WEAK-QCP 1,1 secure, and let QCP' be the scheme described in Section 3.4. According to Appendix A, QCP' is also WEAK-QCP 1,1 secure. However, it is not FLIP-QCP 1,1 secure, as we now show by directly constructing an adversary A = (P, F 0 , F 1 ) for the FLIP-QCP 1,1 QCP' (A, λ) game (note that we unusually index the freeloaders with 0, 1, as it is more convenient in our particular context). The pirate P, upon being given ρ (f 0 ,f 1 ) = ρ f 0 ⊗ ρ f 1 , gives ρ f β to F β respectively. The freeloader F β queries O f,b β twice to get two pairs (x 1 , f (x 1 ) ⊕ b β ) and (x 2 , f (x 2 ) ⊕ b β ). If the string x 1 starts with β, let x̃ 1 be x 1 without its first bit; then F β calculates f (x 1 ) ← QCP.Eval(ρ f β , x̃ 1 ) and outputs f (x 1 ) ⊕ (f (x 1 ) ⊕ b β ) = b β . Else, if the string x 2 starts with β, she does the same with x 2 . In both cases she outputs b β with certainty.
In case both strings x 1 and x 2 start with 1 − β, F β outputs a uniformly random bit. Since x 1 , x 2 are uniformly random, the probability that both start with 1 − β is 1/4. It follows that F β outputs the correct answer with probability 3/4 + 1/4 · 1/2 = 7/8. Hence, E FLIP-QCP 1,1 QCP' (A, λ) = 14/8 = 1 + 1/2 + 1/4, so that A has a non-negligible advantage of 1/4. Note that by doing the same but with more queries, the advantage becomes exponentially close to the maximal advantage of 1/2.
B FLIP-QCP as a Copy-protection Version of LoR-CPA
The purpose of this section is to better motivate the notion of flip detection security for quantum copy-protection (see Definition 13) introduced and considered in Section 3.6. We do so by arguing that it is a natural adaptation of the notions used to model multi-encryption security for encryption schemes. In this appendix, we overview the left-or-right notions of security for encryption schemes and present an adaptation of this definition to copy-protection schemes which we call LoR-QCP security. We show that LoR-QCP security implies FLIP-QCP security for copy-protecting BBFs. We also show that LoR-QCP implies WEAK-QCP in general.
The motivation for Definition 13 arose through attempts to extend uncloneable bit decryptors to uncloneable decryptors which support messages of arbitrary length. A standard method to extend bit encryption to arbitrary messages is to encrypt each bit separately. In order to establish the security of the resulting scheme, it is required to show that no new attacks arise from the fact that a cipher of a long plaintext could be parsed as many independently encrypted bits. Differently stated, it should be established that the original bit encryption scheme remains secure against an adversary with access to multiple encryptions. Such adversaries are usually modeled with notions of left-or-right (LoR) security (for example, in Definition 57 we introduce the notion of LoR-CPA, though in a different context).
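Before moving on, the amplification remark in the proof of Lemma 39 above can be made exact: with q queries, F β fails to see a usable sample only when all q points start with 1 − β, which happens with probability 2^{-q}, giving total advantage 1/2 − 2^{-q} (a small calculation of ours):

```python
from fractions import Fraction

def freeloader_success(q: int) -> Fraction:
    """Success probability of F_beta with q oracle queries: she answers
    with certainty unless all q sampled points start with 1 - beta
    (probability 2^-q), in which case she guesses uniformly."""
    p_all_bad = Fraction(1, 2 ** q)
    return (1 - p_all_bad) + p_all_bad * Fraction(1, 2)

def total_advantage(q: int) -> Fraction:
    # two freeloaders against the baseline n + k/2 = 3/2;
    # simplifies to 1/2 - 2^{-q}
    return 2 * freeloader_success(q) - Fraction(3, 2)
```

For q = 2 this recovers the 7/8 success probability and the advantage of 1/4 computed above, and the advantage approaches the maximal 1/2 exponentially fast in q.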
Informally, the difference between IND (see Section 2.3) and LoR security lies in the form of the challenge phase. In the IND notions, the adversary is aware of two plaintexts, which we can call the left plaintext and the right plaintext. She is given a ciphertext and has to distinguish whether it is an encryption of the left or the right plaintext. The LoR notions of security allow the presence of many pairs of left and right plaintexts. The challenger uniformly chooses a side, and for each pair affords the adversary a ciphertext of the plaintext of that side (the crucial point being that the challenger chooses a side only once, and consistently encrypts only the plaintexts of that side throughout the game). The adversary then has to distinguish a challenger who chose the left side from a challenger who chose the right side. It is easy to see that LoR security is at least as strong as IND security with the same oracle access (as IND security could be described as LoR security where the adversary is limited to a single pair of left and right plaintexts). Basic results in encryption theory show that the converse implication is also true (albeit at the cost of increasing the adversary's advantage by a polynomial multiplicative factor) in contexts where the adversary has access to an encryption oracle. These statements are formally defined, proved, and discussed in [BDJR97]. Unfortunately, this property of encryption schemes does not hold for uncloneable decryptors, as is demonstrated in Proposition 36. However, Proposition 37 shows that it is possible to obtain an uncloneable bit decryptor scheme that remains secure under multiple encryptions if we require the underlying copy-protection scheme to satisfy the flip detection security notion defined in Definition 13.
In this appendix, we present a natural adaptation of left-or-right security to copy-protection schemes which we call LoR-QCP; we then show that LoR-QCP implies WEAK-QCP security in general, and FLIP-QCP security for balanced binary functions. A subtlety with this definition is the distribution D with respect to which it is defined. We restrict our attention to the scenario where all inputs are sampled independently, though it is possible to define LoR-QCP in more general settings.
Definition (LoR-QCP). Let QCP be a copy-protection scheme, and let D = (D F , D x ) be a distribution. For a function f and a bit b, let Õ f,b be the oracle which, when queried, outputs (x 0 , x 1 , f (x b )) where x 0 ← D x (f ), and x 1 is sampled from D x (f ) conditioned on the event that f (x 0 ) ≠ f (x 1 ). For any n, k = poly(λ) and for any adversary A = (P, F 1 , . . . , F n+k ), define the LoR-QCP n,k,D QCP (A, λ) game between A and a trusted challenger C:
• C samples f ← D F , invokes QCP.Protect(f ) n times to obtain ρ = ρ ⊗n f , and gives ρ to P,
• P generates n + k states σ 1 , . . . , σ n+k and gives σ i to F i ,
• C samples b i ← {0, 1} for i = 1, . . . , n + k,
• F i (σ i ), given oracle access to Õ f,b i , outputs a bit β i ,
• the output of the game is the number of indices i such that b i = β i .
QCP is LoR-QCP n,k,D secure if for any A it holds that E LoR-QCP n,k,D QCP (A, λ) ≤ n + k/2 + negl(λ), and QCP is LoR-QCP D secure if it is LoR-QCP n,k,D secure for any n, k = poly(λ).
We soon prove that a LoR-QCP secure copy-protection scheme for a BBF is also FLIP-QCP secure. In particular, the LoR-QCP security of a scheme QCP BBF implies that Construction 1 is UD-qCPA extendable when instantiated with QCP BBF . This implication can be interpreted as an indication that the difficulty in extending uncloneable bit decryptors is actually underwritten by a broader difficulty: that of extending the uncloneability of evaluating a function to the uncloneability of identifying to which of two sets of arbitrarily many random points this functionality was applied.
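The conditioned sampling in the oracle Õ f,b above is easy to realize by rejection sampling when f is balanced (a sketch of ours, with D x taken uniform; for a BBF each rejection round succeeds with probability 1/2):

```python
import random

def lor_oracle(f, b: int, ell: int, rng=random):
    """One answer of the LoR oracle: sample x0, then x1 conditioned on
    f(x0) != f(x1), and reveal f only at the side-b point."""
    sample = lambda: tuple(rng.randint(0, 1) for _ in range(ell))
    x0 = sample()
    x1 = sample()
    while f(x1) == f(x0):  # rejection sampling implements the conditioning
        x1 = sample()
    return (x0, x1, f(x0 if b == 0 else x1))
```

Note that the loop terminates (in expected two rounds for a balanced f) precisely because f is not constant.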
This extension, which is commonplace in encryption (see the discussions in [BDJR97]), does not carry over to copy-protection. It is easy to construct an example where IND security does not imply LoR security (for example, one could choose a random string r of the same length as the secret key, and then append r to encryptions of 0 and sk ⊕ r to encryptions of 1). One could argue that since copy-protection adversaries are not given oracle access, the lack of equivalence of WEAK-QCP and FLIP-QCP is similar to the lack of equivalence of IND and LoR and has nothing to do with uncloneability. There are three responses to this argument:
1. We also see this phenomenon in the context of uncloneable decryptors, where the security game is more similar to those used to define IND-CPA security, and in particular affords encryption oracle access to the adversary. The failure of a UD1-qCPA secure scheme to be UD-qCPA extendable seen in Proposition 36 stands in contrast to the equivalence between IND-CPA and LoR-CPA security.
2. In both FLIP-QCP and WEAK-QCP, the adversary is given access to the copy-protected program, which allows them to evaluate the function at arbitrary points and is akin to having oracle access to the function in the first phase of the FLIP-QCP and WEAK-QCP games. As we establish in Lemma 17, this property is strong enough that allowing the adversary to evaluate the function at uniformly random points throughout the entirety of the game does not increase the security. The randomness of the evaluated points is a manifestation of the non-determinism of the encryption procedure (which is a necessary condition for IND-CPA security).
3. Some may find the previous point objectionable, since the structure of the security games still seems quite different. This objection could be addressed by investigating the security notions for encryption obtained by allowing the adversary access to encryptions of random plaintexts.
Such security notions have not been researched in the literature to the best of our knowledge, as they do not seem particularly useful for modeling any forms of attack not already addressed by chosen plaintext security. However, it is straightforward to define IND and LoR security against random plaintext attacks and show that these notions are indeed equivalent.
Proposition 42. Let BBF be a balanced binary function with input length ℓ (see Definition 11), let QCP be a copy-protection scheme for BBF (see Definition 9), and let D = (BBF.Sample, U ({0, 1} ℓ )). For any n, k = poly(λ), if QCP is LoR-QCP n,k,D secure, then QCP is also FLIP-QCP n,k secure.
Proof. Given an adversary A = (P, F 1 , . . . , F n+k ) for FLIP-QCP n,k , consider an adversary A' = (P', F' 1 , . . . , F' n+k ) for LoR-QCP n,k,D which simulates A and responds to queries as follows: whenever F i queries for a pair of the form (x, f (x) ⊕ b i ), F' i makes a query to Õ f,b i to obtain a triplet of the form (m 0 , m 1 , f (m b i )) and responds to the query made by F i with (m 0 , f (m b i )). Finally, F' i outputs the output of F i . By the fact that f (m 0 ) ≠ f (m 1 ) and that f is binary, it follows that (m 0 , f (m b i )) = (m 0 , f (m 0 ) ⊕ b i ), so that F' i perfectly simulates the oracle O f,b i , and the advantage of A' in the LoR-QCP n,k,D game equals that of A in the FLIP-QCP n,k game.
We conclude by showing that LoR-QCP security with respect to any distribution D implies WEAK-QCP security with respect to the same distribution.
Lemma 43. Let QCP be a quantum copy-protection scheme that is LoR-QCP n,k,D secure with respect to some distribution D. Then it is also WEAK-QCP n,k,D secure.
Proof. Let A = (P, F 1 , . . . , F n+k ) be an adversary whose expected output in the WEAK-QCP n,k,D game is µ. It is easy to construct an adversary A' = (P, F' 1 , . . . , F' n+k ) whose expected output in the LoR-QCP n,k,D game is at least µ. The freeloader F' i :
• makes a single query to Õ f,b to obtain ((x 0 , x 1 ), y),
• invokes F i with input x 0 ,
• outputs 0 if and only if F i outputs y.
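The key identity in the proof of Proposition 42, namely (m 0 , f (m b )) = (m 0 , f (m 0 ) ⊕ b) whenever f is binary and f (m 0 ) ≠ f (m 1 ), can be checked over all cases:

```python
from itertools import product

def identity_holds() -> bool:
    # Enumerate all binary values of f(m0) and all bits b; the conditioning
    # f(m0) != f(m1) forces f(m1) = 1 - f(m0).
    for f_m0, b in product((0, 1), repeat=2):
        f_m1 = 1 - f_m0
        lor_answer = (f_m0, f_m1)[b]  # f(m_b), as revealed by the LoR oracle
        flip_answer = f_m0 ^ b        # f(m_0) XOR b, as the FLIP oracle would answer
        if lor_answer != flip_answer:
            return False
    return True
```

This is exactly why the simulated oracle response (m 0 , f (m b i )) is distributed identically to a genuine FLIP oracle response.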
It is straightforward to check that if F i evaluates f (x 0 ) correctly, then F' i outputs b (here we rely on the fact that f (x 0 ) ≠ f (x 1 )). So the probability that F' i is correct is at least the probability that F i is correct. Note that if b = 1 then F' i outputs b even if F i evaluated f (x 0 ) wrong, as long as she did not happen to output the point y, which means that the expected output of A' in the LoR-QCP n,k,D game might actually be larger (albeit this scenario is impossible if f is a binary function).
C Oracle Instantiation of FLIP-QCP
Aaronson [Aar09] introduces a quantum oracle relative to which there exists a WEAK-QCP secure copy-protection scheme for any unlearnable class of functions. In a subsequent work, Aaronson et al. [ALL + 21] manage to replace the quantum oracle with a classical oracle at the cost of only obtaining WEAK-QCP 1,1 security.
Remark 44. The reason that [Aar09]'s construction supports unlimited copies whereas [ALL + 21]'s only supports a single copy follows from the information-theoretic no-go theorem to which they eventually appeal. The [Aar09] construction encodes programs into Haar-random quantum states, which are hard to clone approximately even when given (polynomially) many copies. The [ALL + 21] construction replaces the Haar-random state with so-called hidden subspaces, first introduced in [AC12] to construct quantum money. These states have the property that they can be verified given access to an appropriate classical oracle. The [ALL + 21] reduction appeals to a theorem by [BS16] which shows that a particular functionality of hidden subspaces (namely, producing either a primal or a dual vector of the hidden subspace) cannot be cloned. Unlike random states, this theorem no longer holds when an adversary is given linearly many copies of the hidden subspace. The [ALL + 21] scheme uses fresh randomness for each copy of the program, and thereby their scheme might still be secure in the presence of many copies.
However, their proof is highly specialized to the single-copy setting, and they leave the more general setting as an open problem. In this section, we sketch a proof that the [Aar09] and [ALL + 21] schemes are actually FLIP-QCP and FLIP-QCP 1,1 secure, respectively. We focus on [ALL + 21] since the authors thereof provide the notion of predicates, which can naturally encapsulate more general functionalities we wish to copy-protect, including flip detection security. Our arguments also apply to the construction of [Aar09] since they are mostly agnostic to the proof of [ALL + 21, Theorem 4], and only appeal to the fact that it generalizes to arbitrary predicates (see the discussion concluding Section 5 therein, which is equally applicable to [Aar09, Theorem 9]). As we shortly discuss, the only detail in our argument which requires addressing the content of the proof is justifying that it still holds even though the predicate we define requires superpolynomially many random bits (note that the proof does not follow through for such predicates in general). To extend this result to FLIP-QCP security, we note two simple observations:
• the oracle-instantiated scheme of [ALL + 21] is FLIP-QCP secure, given that the underlying function satisfies an ostensibly stronger form of unlearnability we shortly introduce, called flip unlearnability, and
• flip unlearnability is actually equivalent to unlearnability.
In order to consider more generalized notions of copy-protection, which seek to prevent the freeloaders from performing functionalities that are not as strong as evaluating the underlying function at a random point, [ALL + 21, Definition 15] introduces the formalism of predicates. A predicate allows us to describe a variety of functionalities by encapsulating both the information given to the freeloader and the outcomes which are considered "correct".
Definition 45 (Predicate, [ALL + 21]).
A binary predicate E = E(P, y, r) is a binary-outcome function comprised of the following information:
• a deterministic circuit X(y, r), and
• a relation R which contains triplets of the form (z, y, r).
Given a circuit P , the predicate E(P, y, r) evaluates to 0 if and only if (P (X(y, r)), y, r) ∈ R. Here we assume that P is a circuit with classical input and output, though it can be a quantum circuit with auxiliary qubits instantiated to an arbitrary quantum state. In the definitions below, P represents a freeloader, and the auxiliary state is the input given to P by the pirate. While not implied by the definition of a predicate, the values y and r should be considered as representing arbitrary auxiliary information and uniform randomness, respectively.
Given a predicate, we can redefine both unlearnability and copy-protection in terms of that predicate to obtain the notions of generalized copy-protection and generalized unlearnability, which are formally defined in Definitions 19 and 21 of [ALL + 21]. The idea is that instead of testing whether the adversary manages to evaluate the function at a random point, we test whether, given input x = X(y, r) (where y is arbitrary auxiliary information which may depend on the underlying function f ), they produce z such that (z, y, r) ∈ R. This definition is rather involved, so we do not reproduce it here. Note that by setting y = f , r ← {0, 1} ℓ , and (z, f, r) ∈ R ⇐⇒ f (r) = z, we recover the standard notions of unlearnability and copy-protection.
Remark 46. The definitions of generalized unlearnability and generalized copy-protection afforded in [ALL + 21] are more general still. They are defined with respect to a cryptographic application which comprises a predicate as well as a sampling protocol between the challenger C and the adversary A for choosing the function f . Moreover, their sampling protocol allows passing arbitrary auxiliary information to the adversary.
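The predicate formalism is mechanical enough to render directly; in the sketch below (our own, with the relation R represented as a membership function) we also instantiate the standard-unlearnability predicate recovered at the end of the paragraph above:

```python
def make_predicate(X, in_R):
    """Definition 45: E(P, y, r) evaluates to 0 iff (P(X(y, r)), y, r)
    is in the relation R (here given as a membership test in_R)."""
    def E(P, y, r):
        z = P(X(y, r))
        return 0 if in_R(z, y, r) else 1
    return E

def standard_instance():
    # Recovering standard unlearnability: y = f, input X(f, r) = r,
    # and (z, f, r) in R iff f(r) = z.
    return make_predicate(lambda y, r: r, lambda z, y, r: y(r) == z)
```

A freeloader circuit P that evaluates f correctly makes the predicate 0; a wrong evaluator makes it 1.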
We do not require this level of generality and always implicitly assume that in the sampling "protocol" C simply samples f ← BBF.Sample(λ) without any communication with A.
A minor difficulty in defining flip detection as a predicate is that the amount of randomness in the definition of a predicate may not depend on the adversary. Since an adversary can make any polynomial number of queries, each requiring a constant number of random bits to respond, this forces the randomness to be superpolynomially large. To sidestep this, we choose some fixed superpolynomial function q and let r be long enough to contain q pairs of random inputs. When considering a circuit P which makes q′ = poly(λ) queries, we redefine it as a circuit that gets q responses in advance, uses the first q′ as query responses, and discards the rest. The resulting predicate accounts for all possible QPT adversaries. Allowing superpolynomial randomness seems to break the security reductions in the proof of [ALL + 21, Theorem 4], as they require a computationally bounded adversary to evaluate the predicate E. However, this is easily fixed by noting that for a fixed adversary, the result of evaluating E is independent of all but a polynomial prefix of the randomness, so it can be simulated efficiently.
Let q be a superpolynomial function. We define the flip detection predicate as follows: the randomness is of the form r = (b, r_1, …, r_q), the input is X(f, r) = ((r_1, f(r_1) ⊕ b), …, (r_q, f(r_q) ⊕ b)), and (z, f, r) ∈ R ⇔ z = b. We say that a BBF is flip unlearnable if it satisfies [ALL + 21, Definition 21] with respect to the flip detection predicate and the trivial sampler (we do not fully specify this definition as it would require us to introduce several notions from [ALL + 21] which are too far removed from the discussion.
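To make the flip-detection predicate concrete, here is a small classical sketch (our own toy model and naming, not code from [ALL + 21]): the oracle O_{f,b} is a sampler returning pairs (r, f(r) ⊕ b), and an adversary that already knows f recovers b from a single sample. The function `parity` stands in for BBF.Eval_sk; it is balanced but of course trivially learnable.

```python
import secrets

def parity(r):
    """Toy stand-in for BBF.Eval_sk: the parity of r's bits (balanced, but learnable)."""
    return bin(r).count("1") % 2

def make_flip_oracle(f, n_bits, b):
    """Classical model of O_{f,b}: each call returns (r, f(r) XOR b) for fresh random r."""
    def oracle():
        r = secrets.randbits(n_bits)
        return (r, f(r) ^ b)
    return oracle

def detect_flip(oracle, f):
    """An adversary that knows f recovers the hidden bit b from a single sample."""
    r, y = oracle()   # y = f(r) XOR b
    return y ^ f(r)   # equals b
```

Flip unlearnability asserts exactly that without (quantum) access to f, no efficient adversary can do noticeably better than guessing b.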
Fortunately, the fact that the functions under consideration are BBFs allows for much simpler equivalent definitions of both unlearnability and flip unlearnability, which we introduce shortly). Applying the generalized form of [ALL + 21, Theorem 4] gives us:
Lemma 47. If BBF is flip unlearnable, then applying the [Aar09] construction (resp. [ALL + 21] construction) to BBF results in a FLIP-QCP secure (resp. FLIP-QCP_{1,1} secure) copy-protection scheme.
Lemma 47 already implies that uncloneable decryptors could be instantiated from a quantum oracle assuming the existence of a flip unlearnable BBF. The remaining question is how quantum unlearnability and flip unlearnability compare. We prove that these are equivalent notions. In order to do so, we first note that the definitions of unlearnability and flip unlearnability take on a simpler form when the underlying function is a BBF.
Definition 48 (unlearnability for BBF). Let BBF be a balanced binary function with input length ℓ, and let A = (A_1, A_2) be a procedure. Define the UL_BBF(A, λ) game between A and a trusted challenger C as follows:
1. C samples f ← BBF.Sample(1^λ),
2. A_1^{|f⟩} produces a quantum state σ,
3. C samples x ← {0,1}^ℓ,
4. A_2(σ, x) outputs a bit b,
5. the output of the game is 1 if and only if b = f(x) (in that case we say that A won the game).
We say that BBF is unlearnable if it holds for any QPT A that E[UL_BBF(A, λ)] = 1/2 + negl(λ).
Definition 49 (flip unlearnability). For any binary function f with input length ℓ and any bit b, let O_{f,b} be an oracle with no input which outputs a pair of the form (r, f(r) ⊕ b) where r ← {0,1}^ℓ. Let BBF be a balanced binary function with input length ℓ, and let A = (A_1, A_2) be a procedure. Define the FLIP-UL_BBF(A, λ) game between A and a trusted challenger C as follows:
1. C samples f ← BBF.Sample(1^λ) and b ← {0,1},
2. A_1^{|f⟩} produces a quantum state σ,
3. A_2^{O_{f,b}}(σ) outputs a bit b′,
4. the output of the game is 1 if and only if b′ = b (in that case we say that A won the game).
We say that BBF is flip unlearnable if it holds for any QPT A that E[FLIP-UL_BBF(A, λ)] = 1/2 + negl(λ).
Lemma 50. Let BBF be a balanced binary function; then BBF is unlearnable if and only if it is flip unlearnable.
Proof. The easy direction is showing that flip unlearnability implies unlearnability. Say A = (A_1, A_2) wins the UL game with probability p; we construct a procedure A′_2 such that A′ = (A_1, A′_2) wins the FLIP-UL_BBF game with the same probability: A′_2(σ) uses her oracle access to obtain a single pair of the form (r, f(r) ⊕ b); she then invokes A_2(σ, r) to obtain an output b′. Finally, she outputs (f(r) ⊕ b) ⊕ b′. Note that A′ wins the game if and only if f(r) = b′. Since the view of A_2 is identical in the UL_BBF game and in the simulation, it follows by hypothesis that f(r) = b′ with probability p. From the assumption that BBF is flip unlearnable it follows that p = 1/2 + negl(λ), as needed.
For the other direction, assume that A = (A_1, A_2) is an adversary to the FLIP-UL game which makes q queries to O_{f,b}.
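The easy direction of the equivalence can be sketched classically (a toy model with our own naming): given A_2's guess at f(r), the FLIP-UL adversary XORs it against the oracle's single answer y = f(r) ⊕ b, so she is right exactly when the guess is right.

```python
import secrets

def parity(r):
    """Toy stand-in for the sampled function f (balanced but learnable)."""
    return bin(r).count("1") % 2

def make_flip_oracle(f, n_bits, b):
    """Model of O_{f,b}: returns (r, f(r) XOR b) for fresh random r."""
    def oracle():
        r = secrets.randbits(n_bits)
        return (r, f(r) ^ b)
    return oracle

def flip_from_ul(flip_oracle, ul_guess, sigma):
    """A'_2 from the proof: one oracle sample plus A_2's guess recovers b
    exactly when the guess equals f(r)."""
    r, y = flip_oracle()          # y = f(r) XOR b
    b_prime = ul_guess(sigma, r)  # A_2's guess at f(r)
    return y ^ b_prime            # equals b iff b_prime == f(r)
```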
Let H_i be the hybrid where the first i queries are answered by the oracle O_{f,0} and the remaining queries by the oracle O_{f,1}, and let p_i be the probability that A outputs 0 in the hybrid H_i. Note that H_q is exactly the FLIP-UL_BBF game conditioned on b = 0, and H_0 is exactly the FLIP-UL_BBF game conditioned on b = 1, so to prove that BBF is flip unlearnable it suffices to show that |p_0 − p_q| = negl(λ). Consider the following adversary A′ = (A′_1, A′_2) to the UL_BBF game:
1. A′_1 simulates A_1 to obtain a quantum state σ; she also uses her oracle to create q pairs of the form (r_i, f(r_i)) with r_i ← {0,1}^ℓ. She creates the state σ′ = σ ⊗ ((r_1, f(r_1)), …, (r_q, f(r_q))).
2. A′_2(σ′, x) samples ι ← {1, …, q}; she then simulates A_2 and responds to the i-th query the following way:
• for i < ι she answers (r_i, f(r_i)),
• for i = ι she answers (x, 0),
• for i > ι she answers (r_i, f(r_i) ⊕ 1).
A′_2 resumes the simulation until A_2 outputs a bit b, which A′_2 outputs herself.
Note that if f(x) = 0 then the answer (x, 0) is distributed like a response of O_{f,0}, so the view of A is exactly as in H_ι; otherwise it is distributed like a response of O_{f,1}, so the view of A is exactly as in H_{ι−1}. That is, if f(x) = 0 then A′ outputs 0 with probability (1/q) Σ_{ι=1}^{q} p_ι, and if f(x) = 1 then A′ outputs 0 with probability (1/q) Σ_{ι=1}^{q} p_{ι−1} = (1/q) Σ_{i=0}^{q−1} p_i. Since BBF is balanced, f(x) takes each value with probability 1/2, so a direct computation gives Pr[A′ wins] = 1/2 + (p_q − p_0)/(2q). From the hypothesis that BBF is unlearnable this probability is 1/2 + negl(λ), whence
|p_0 − p_q| ≤ 2q · negl(λ) = negl(λ),
since q = poly(λ). Combining Lemma 47 and Lemma 50 gives us the desired result:
Proposition 51. Let BBF be an unlearnable balanced binary function (see Definition 48); then:
• there exists a FLIP-QCP_{1,1} secure copy-protection scheme for BBF relative to a classical oracle, and
• there exists a FLIP-QCP secure copy-protection scheme for BBF relative to a quantum oracle.

D Decoupling

As we discuss in Section 5.3, a desirable property of uncloneable decryptor schemes is that they are decoupled.
That is, the encryptions of two distinct plaintexts are independent (or at least computationally indistinguishable from independent). In particular, we use this property to sidestep the problem of recording a quantum query, which is required for simulating CCA2 adversaries while only having access to an encryption oracle. In this appendix, we show a generic transformation that decouples uncloneable bit decryptors and prove that this transformation preserves UD1-qX security. For completeness, we also suggest a transformation for decoupling general uncloneable decryptor schemes and sketch a security proof, even though such a transformation is not required for our constructions.
In the context of bit encryption, it is tempting to exploit the fact that there are only two possible plaintexts and handle an encryption query on input ρ the following way:
• query the encryption oracle on b = 0, 1 to obtain ciphertexts c_b and record them, and
• apply the unitary |b, y⟩ → |b, y ⊕ c_b⟩ to ρ.
This allows us to keep a list of all ciphertexts we have generated (it still leaves open the problem of handling decryption calls on ciphertexts which were not generated by encryption queries, which we treat in Section 5.5). The problem with this approach is that it generates a different distribution than directly querying the oracle in superposition. Recall that the randomness of the encryption oracle is sampled once per query. Consider an encryption query on a pure state of the form |ψ⟩ = (α|0⟩ + β|1⟩) ⊗ |0⟩. An actual oracle query distributes like
α|0, Enc_sk(0; r)⟩ + β|1, Enc_sk(1; r)⟩, r ← {0,1}^t,
whereas the output of our simulated oracle distributes like
α|0, Enc_sk(0; r_0)⟩ + β|1, Enc_sk(1; r_1)⟩, r_0, r_1 ← {0,1}^t,
and these distributions might be very different. We overcome this by decoupling the randomness used for each cipher. That is, we make sure that the bits used to generate c_0 are disjoint from the bits used to generate c_1.
This implies that encrypting 0 and 1 using the same randomness distributes identically to encrypting them using fresh randomness.
Definition 52. An uncloneable bit decryptors scheme 1UD is called decoupled if the encryption procedure is of the form Enc_sk(·; r_0, r_1), where the output of Enc_sk(b) depends only on r_b.
Fortunately, it is very simple to generically transform a UD1-qX secure scheme into a decoupled scheme with the same security.
Definition 53. Let 1UD be an uncloneable bit decryptors scheme. Define the decoupling of 1UD to be the scheme 1UD_dec:
• All procedures but Enc_sk are defined identically for 1UD and 1UD_dec.
• 1UD_dec.Enc_sk(b) samples r_0, r_1 ← {0,1}^t, where t is the number of random bits required by 1UD.Enc, and outputs 1UD.Enc_sk(b; r_b).
Lemma 54 (Decoupling does not Affect Security). Let 1UD be an uncloneable bit decryptors scheme. If 1UD is UD1-qX_{n,k} secure (see Definition 21) then 1UD_dec (see Definition 53) is also UD1-qX_{n,k} secure.
Proof. Let A be an adversary to the UD1-qX_{n,k} game for 1UD_dec; we construct an adversary A′ to the UD1-qX_{n,k} game for 1UD such that
E[UD1-qX_{n,k}^{1UD_dec}(A, λ)] = E[UD1-qX_{n,k}^{1UD}(A′, λ)].
The adversary A′ simulates A using her own oracle access. She responds to decryption queries using her own decryption oracle. When getting an encryption query on some input ρ, A′ queries her own encryption oracle on both 0 and 1 to obtain c_0 and c_1, respectively. She applies the unitary |b, x⟩ → |b, x ⊕ c_b⟩ to ρ and replies to the query with the result. The distinguishers of A′ output the output of the distinguishers of A. This affords a perfect simulation of the view of A in the actual UD1-qX_{n,k} game, whereby the output of A′ distributes identically to the output of A.
It is possible to decouple schemes with arbitrary message lengths. However, doing so does not solve the problem of recording queries since, in general, the number of possible queries grows exponentially with the length of the query.
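For the one-bit cipher (r, b ⊕ f(r)) used throughout this paper, the decoupling transformation of Definition 53 is a one-liner (a toy classical sketch with our own naming): the cipher for bit b reads only r_b, so it is independent of the other random string.

```python
def parity(r):
    """Toy stand-in for the keyed function f = BBF.Eval_sk."""
    return bin(r).count("1") % 2

def enc_bit(f, b, r):
    """Toy one-bit cipher (r, b XOR f(r)); stands in for 1UD.Enc_sk(b; r)."""
    return (r, b ^ f(r))

def enc_decoupled(f, b, r0, r1):
    """1UD_dec.Enc_sk(b; r0, r1) = 1UD.Enc_sk(b; r_b): the output for bit b
    depends only on r_b, as required by Definition 52."""
    return enc_bit(f, b, (r0, r1)[b])
```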
For completeness, we informally describe how this could be achieved if we assume that the number of random bits used by the encryption procedure does not grow with the message length (which is not true for our constructions). These ideas can be extended to polynomially many random bits by standard techniques. In Definition 52 we required that the ciphers of two different messages be completely independent, even conditioned on the event that they were generated using the same randomness (in fact, our requirement is strictly stronger). In the context of arbitrary message lengths, we only require them to be computationally indistinguishable. That is, it should be infeasible for a QPT adversary to distinguish an encryption oracle that reuses the same randomness for all queries from an encryption oracle that samples fresh randomness for each new plaintext (but keeps a record of already queried plaintexts to respond consistently).
Assume for simplicity that if sk ← UD.KeyGen(1^λ) then UD.Enc_sk requires λ random bits. Let PRF be a post-quantum pseudo-random function with the property that if k ← PRF.KeyGen(1^λ) then PRF.Eval_k implements a function from {0,1}^* to {0,1}^λ. Modify UD.KeyGen so that in addition to sk it also samples and outputs k ← PRF.KeyGen(1^λ). Replace UD.Enc with the circuit Enc′_{(sk,k)}(m; r) which outputs c ← UD.Enc_sk(m; PRF.Eval_k(m, r)). The security property of PRF then implies that PRF.Eval_k(m, r) is indistinguishable from a truly random string. In particular, it is computationally infeasible to distinguish the unitary
|m, x⟩ → |m, x ⊕ Enc′_{(sk,k)}(m; r)⟩
from an oracle which, for each possible message m, chooses a random string r_m and applies the unitary
|m, x⟩ → |m, x ⊕ Enc_sk(m; r_m)⟩.

E Weakly Copy-Protectable BBFs are Weak PRFs

In this appendix, we provide a full proof of Corollary 33. Namely, we show that if it is possible to construct a WEAK-QCP_{1,1} secure copy-protection scheme for BBF, then BBF must be a weak PRF.
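The derandomization idea can be sketched with HMAC-SHA256 as a stand-in PRF (all names here are ours; this illustrates the shape of the transformation, not a secure instantiation): even when the same r is reused for two distinct plaintexts, the derived randomness PRF_k(m, r) differs with overwhelming probability.

```python
import hmac, hashlib

def prf_eval(k, m, r):
    """Stand-in for PRF.Eval_k: derives the cipher's randomness from (m, r)."""
    d = hmac.new(k, m + b"|" + r, hashlib.sha256).digest()
    return int.from_bytes(d[:4], "big")

def toy_enc(sk, m, rnd):
    """Placeholder randomized cipher; only its (m, rnd) -> cipher shape matters."""
    return (rnd, bytes(x ^ (rnd & 0xFF) for x in m))

def enc_derandomized(sk, k, m, r):
    """Enc'_{(sk,k)}(m; r) = Enc_sk(m; PRF_k(m, r)): reusing r across distinct
    messages still yields per-message independent-looking randomness."""
    return toy_enc(sk, m, prf_eval(k, m, r))
```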
To do so, we first recall the definition of a weak PRF for binary functions. For any function f : D → R, let R(f) denote an oracle which takes no input and outputs a pair of the form (r, f(r)) where r ← D.
Definition 55. Let BBF be a binary function. We say that BBF is a post-quantum weak pseudorandom function (or weak PRF) if for any QPT oracle machine A whose output is one bit it holds that
Pr[A^{R(O_b)} = b] = 1/2 + negl(λ),
where sk ← BBF.Sample(λ), O_0 = BBF.Eval_sk, O_1 is a uniformly random function with the same input and output lengths as BBF.Eval_sk, and b ← {0,1}.
The choice of the notation BBF in Definition 11 is suggestive of the fact that any weak PRF must be balanced. Indeed, suppose the underlying function outputs b with probability significantly greater than 1/2. In that case, an adversary which makes a single query and outputs 0 iff the output is b contradicts Definition 55.
Remark 56. The key difference between weak and standard PRFs is that the former only allows evaluating the function on random inputs, hence the random-input oracles which appear in Definition 55. If instead we had granted A oracle access to O_0 and O_1, we would have recovered the definition of a post-quantum PRF, and had we granted access to |O_0⟩ and |O_1⟩, we would have recovered the definition of a fully quantum PRF. For a discussion of these two notions and their differences (including an explicit example of a post-quantum PRF which is not fully quantum) see [Zha21].
The second ingredient we require is a notion of security for symmetric encryption called left-or-right (LoR) security, first introduced in [BDJR97]. Under this notion, the challenger C first samples a random coin b. The adversary may make several challenge queries of the form (m_0, m_1), each answered with an encryption of m_b, and her goal is to guess b. We emphasize that b is chosen once for the entire game.
Definition 57 (LoR-CPA security).
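The random-input oracle R(f) and one run of the weak-PRF game can be modeled classically as follows (a toy sketch with our own naming; the random function O_1 is sampled lazily). The adversary wins the run if its output equals `coin`; weak-PRF security says no efficient adversary wins noticeably more than half the time.

```python
import secrets

def sample_oracle(f, n_bits):
    """R(f): takes no input and returns (r, f(r)) with r uniform."""
    def R():
        r = secrets.randbits(n_bits)
        return (r, f(r))
    return R

def weak_prf_game(eval_sk, n_bits, adversary, coin):
    """One run of the Definition 55 game: coin = 0 gives R(BBF.Eval_sk),
    coin = 1 gives R(O_1) for a lazily sampled random function O_1."""
    table = {}
    def random_fn(r):
        return table.setdefault(r, secrets.randbits(1))
    f = eval_sk if coin == 0 else random_fn
    return adversary(sample_oracle(f, n_bits))
```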
Let SE be a symmetric encryption scheme, and let A be a procedure. The LoR-CPA_SE(A, λ) game is defined as follows:
• C samples sk ← SE.KeyGen(1^λ) and b ← {0,1},
• A transmits to C pairs of the form (m_0, m_1) with |m_0| = |m_1|,
• C responds to each pair with Enc_sk(m_b),
• A outputs a bit β,
• A wins the game if b = β.
The scheme SE is considered post-quantum LoR-CPA secure if for any QPT A the winning probability is at most 1/2 + negl(λ).
It might come off as curious that the adversary in the LoR-CPA game enjoys no oracle access, despite the CPA in the name indicating that this game should simulate a chosen plaintext attack. However, providing A with access to Enc_sk does not increase her strength in any way, as she can simulate a query on plaintext m by sending the pair (m, m) to C. It might also seem that LoR-CPA security is stronger than IND-CPA security, as the adversary is more powerful. However, [BDJR97, Theorem 4] shows that these added capabilities increase the advantage by at most a polynomial multiplicative factor, so IND-CPA security does imply LoR-CPA security.
We relate balanced binary functions to symmetric encryption through the following construction:
Construction 4. For any BBF with input length ℓ define the bit encryption scheme SE_BBF:
• SE_BBF.KeyGen ≡ BBF.Sample,
• SE_BBF.Enc_sk(b) outputs (r, b ⊕ BBF.Eval_sk(r)) where r ← {0,1}^ℓ, and
• SE_BBF.Dec_sk((r, β)) outputs β ⊕ BBF.Eval_sk(r).
The correctness of SE_BBF follows from the correctness of BBF. Note that SE_BBF = SE_{1UD_cpa} (recall the definition of the underlying scheme, Definition 22), where 1UD_cpa is the uncloneable decryptor scheme of Construction 1 instantiated with BBF.
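Construction 4 is a classical one-time-pad-style scheme and is easy to sketch (toy code with our own naming; `parity` stands in for BBF.Eval_sk and is balanced but trivially learnable, so this sketch only illustrates correctness, not security):

```python
import secrets

def parity(r):
    """Toy stand-in for BBF.Eval_sk."""
    return bin(r).count("1") % 2

def se_enc(eval_sk, b, n_bits=16):
    """SE_BBF.Enc_sk(b) = (r, b XOR f(r)) for uniform r."""
    r = secrets.randbits(n_bits)
    return (r, b ^ eval_sk(r))

def se_dec(eval_sk, cipher):
    """SE_BBF.Dec_sk((r, beta)) = beta XOR f(r)."""
    r, beta = cipher
    return beta ^ eval_sk(r)
```

Correctness is immediate: decryption XORs out the same mask f(r) that encryption applied.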
This gives us the following series of implications:
• say there exists a copy-protection scheme QCP for BBF which is WEAK-QCP_{1,1} secure;
• it follows from Proposition 31 that using QCP to instantiate Construction 1 yields a UD1-qCPA_{1,1} secure scheme;
• it follows from Proposition 23 that the underlying scheme of this construction is IND-qCPA secure, and in particular IND-CPA secure;
• from the discussion above, the underlying scheme is also LoR-CPA secure; and
• also from the discussion above, this underlying scheme is exactly SE_BBF.
Hence, if BBF could be copy-protected with WEAK-QCP_{1,1} security then SE_BBF is post-quantum LoR-CPA secure. So in order to prove Corollary 33 it suffices to prove:
Lemma 58. If SE_BBF is post-quantum LoR-CPA secure then BBF is a post-quantum weak PRF.
Remark 59. The converse also holds. The analysis of [Gol04, Proposition 5.3.19] shows that SE_BBF is classically IND-CPA secure whenever BBF is a PRF. By examining the proof one notes that:
1. the reduction therein only queries BBF on uniformly random points, whereby the proof holds verbatim for weak PRFs as well, and
2. if we assume that BBF is a post-quantum weak PRF, then the same argument without modification shows that SE_BBF is post-quantum IND-CPA secure.
Proof. Let A be a QPT oracle machine such that Pr[A^{R(O_b)} = b] = 1/2 + ε, where the oracles O_0, O_1 are as described in Definition 55. We show that ε = negl(λ) by constructing an adversary for the LoR-CPA_{SE_BBF} game with advantage ε − negl(λ). It will then follow from the LoR-CPA security of SE_BBF that ε = negl(λ). The adversary A′:
• simulates A,
• whenever A makes a query, A′ sends the pair (0, z) with z ← {0,1} to C and replies to A with the response she got from C,
• resumes the simulation until A outputs a bit β, and outputs β.
Let b be the bit sampled by C at the beginning of the game; then A′ wins iff b = β.
Note that if b = 0 then the queries of A are always answered with pairs of the form (r, BBF_sk(r) ⊕ 0) = (r, BBF_sk(r)), so her view distributes exactly as if she had access to the oracle R(O_0). If b = 1 then the queries are always answered with pairs of the form (r, BBF_sk(r) ⊕ z) where z ← {0,1}; hence the second component distributes uniformly. The view of A in this scenario is identical to her view given access to the R(O_1) oracle, conditioned on the event that C never responded to two different queries with the same r, which holds with overwhelming probability. It follows that the view of A in the simulation above is negligibly close to her view given access to the actual oracles, and therefore the probability that A′ wins is negligibly close to Pr[A^{R(O_b)} = b] = 1/2 + ε. That is, the advantage of A′ is ε − negl(λ).

F Fully Specified Constructions

We presented our constructions for UD-qCPA and UD-qCCA2 security as transformations of given schemes with properties obtained by previous constructions. This appendix presents full specifications of these constructions, instantiated from a FLIP-QCP secure copy-protection scheme and a deterministic sEUF-CMA secure digital signature scheme.

F.1 UD-qCPA secure construction

Let QCP_BBF be a quantum copy-protection scheme for a balanced binary function BBF with input length ℓ_BBF. We define an uncloneable decryptors scheme UD_cpa:
• UD_cpa.KeyGen ≡ BBF.Sample.
• UD_cpa.DecGen ≡ QCP_BBF.Protect.
• UD_cpa.Enc_sk(m) outputs (c_1, …, c_|m|) where r_i ← {0,1}^{ℓ_BBF} and c_i = (r_i, m_i ⊕ BBF.Eval_sk(r_i)).
• UD_cpa.Dec_sk((r_1, β_1), …, (r_ℓ, β_ℓ)) outputs (β_1 ⊕ BBF.Eval_sk(r_1)) … (β_ℓ ⊕ BBF.Eval_sk(r_ℓ)).
• UD_cpa.Dec(ρ, (r_1, β_1), …, (r_ℓ, β_ℓ)) outputs (β_1 ⊕ QCP_BBF.Eval(ρ, r_1)) … (β_ℓ ⊕ QCP_BBF.Eval(ρ, r_ℓ)).
Proposition 37 implies that if QCP_BBF is FLIP-QCP_{n,k} (resp. FLIP-QCP) secure then UD_cpa is UD-qCPA_{n,k} (resp. UD-qCPA) secure.
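The classical part of UD_cpa (everything except the copy-protected decryptor state) is just the bitwise version of SE_BBF; a toy sketch with our own naming, using `parity` as a balanced but learnable stand-in for BBF.Eval_sk:

```python
import secrets

def parity(r):
    """Toy stand-in for BBF.Eval_sk."""
    return bin(r).count("1") % 2

def ud_enc(eval_sk, m_bits, n_bits=16):
    """UD_cpa.Enc_sk: each bit m_i becomes c_i = (r_i, m_i XOR f(r_i)) with fresh r_i."""
    cipher = []
    for mi in m_bits:
        r = secrets.randbits(n_bits)
        cipher.append((r, mi ^ eval_sk(r)))
    return cipher

def ud_dec(eval_sk, cipher):
    """UD_cpa.Dec_sk: recover each bit as beta_i XOR f(r_i)."""
    return [beta ^ eval_sk(r) for (r, beta) in cipher]
```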
F.2 UD-qCCA2 secure construction

Let QCP_BBF be a quantum copy-protection scheme for a balanced binary function BBF with input length ℓ_BBF. Let DS be a deterministic digital signature scheme. We define an uncloneable decryptors scheme UD_cca2:
• UD_cca2.KeyGen(1^λ) outputs (sk_BBF, sk_DS, pk_DS) where sk_BBF ← BBF.Sample(1^λ) and (sk_DS, pk_DS) ← DS.KeyGen(1^λ).
• UD_cca2.DecGen(sk) parses sk as (sk_BBF, sk_DS, pk_DS) and outputs ρ ⊗ pk_DS where ρ ← QCP_BBF.Protect(sk_BBF).
• UD_cca2.Enc_sk(m) parses sk as (sk_BBF, sk_DS, pk_DS) and outputs (r, (c_1, s_1), …, (c_|m|, s_|m|)) where r ← {0,1}^λ, r_i ← {0,1}^{ℓ_BBF}, c_i = (r_i, m_i ⊕ BBF.Eval_{sk_BBF}(r_i)), and s_i = DS.Sign_{sk_DS}(c_i, |m|, i, r).
• UD_cca2.Dec_sk((r, (c_1, s_1), …, (c_ℓ, s_ℓ))) parses sk as (sk_BBF, sk_DS, pk_DS) and outputs b_1 … b_ℓ if DS.Ver_{pk_DS}((c_i, ℓ, i, r), s_i) = 1 for all i = 1, …, ℓ, and ⊥ otherwise, where c_i = (r_i, β_i) and b_i = β_i ⊕ BBF.Eval_{sk_BBF}(r_i).
• UD_cca2.Dec(ρ ⊗ pk_DS, (r, (c_1, s_1), …, (c_ℓ, s_ℓ))) outputs b_1 … b_ℓ if DS.Ver_{pk_DS}((c_i, ℓ, i, r), s_i) = 1 for all i = 1, …, ℓ, and ⊥ otherwise, where c_i = (r_i, β_i) and b_i = β_i ⊕ QCP_BBF.Eval(ρ, r_i).
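The classical part of UD_cca2 can be sketched as follows (a toy model with our own naming; note the hedge: the actual scheme uses a public-key signature verified under pk_DS, whereas this sketch substitutes a deterministic SHA-256 "MAC" checked with the signing key, which only illustrates the tag-and-reject structure of the ⊥ branch):

```python
import hashlib, secrets

def parity(r):
    """Toy stand-in for BBF.Eval_{sk_BBF}."""
    return bin(r).count("1") % 2

def toy_sign(sk_ds, msg):
    """Deterministic toy 'signature' (really a MAC); a stand-in for DS.Sign."""
    return hashlib.sha256(sk_ds + msg).hexdigest()

def cca2_enc(eval_sk, sk_ds, m_bits, n_bits=16):
    """UD_cca2.Enc_sk: tags every per-bit cipher with (c_i, |m|, i, r)."""
    r = secrets.randbits(64)
    pairs = []
    for i, mi in enumerate(m_bits, start=1):
        ri = secrets.randbits(n_bits)
        ci = (ri, mi ^ eval_sk(ri))
        pairs.append((ci, toy_sign(sk_ds, repr((ci, len(m_bits), i, r)).encode())))
    return (r, pairs)

def cca2_dec(eval_sk, sk_ds, cipher):
    """UD_cca2.Dec_sk: reject (None models the ⊥ branch) unless every tag verifies."""
    r, pairs = cipher
    n = len(pairs)
    bits = []
    for i, (ci, si) in enumerate(pairs, start=1):
        if toy_sign(sk_ds, repr((ci, n, i, r)).encode()) != si:
            return None
        ri, beta = ci
        bits.append(beta ^ eval_sk(ri))
    return bits
```

Binding each tag to the position i, the length, and the shared nonce r is what rules out mix-and-match of components from different ciphertexts.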
Proposition 38 implies that if QCP_BBF is FLIP-QCP_{n,k} (resp. FLIP-QCP) secure and DS is sEUF-CMA secure then UD_cca2 is UD-qCCA2_{n,k} (resp. UD-qCCA2) secure.

References

S. Aaronson. Quantum Copy-Protection and Quantum Money. In Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pages 229-242. IEEE Computer Society, 2009. arXiv:1110.5353.
S. Aaronson. Shadow Tomography of Quantum States. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018), pages 325-338. ACM, 2018. arXiv:1711.01053.
G. Alagic, A. Broadbent, B. Fefferman, T. Gagliardoni, C. Schaffner, and M. St. Jules. Computational Security of Quantum Encryption. In Information Theoretic Security (ICITS 2016), LNCS 10015, pages 47-71. Springer, 2016. arXiv:1602.01441.
S. Aaronson and P. Christiano. Quantum Money from Hidden Subspaces. In Proceedings of the 44th Symposium on Theory of Computing (STOC 2012), pages 41-60. ACM, 2012. arXiv:1203.4740.
R. Amos, M. Georgiou, A. Kiayias, and M. Zhandry. One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication. In Proceedings of STOC 2020. ACM, 2020. Cryptology ePrint Archive: 2020/107.
S. Aaronson, J. Liu, Q. Liu, M. Zhandry, and R. Zhang. New Approaches for Quantum Copy-Protection. In Advances in Cryptology (CRYPTO 2021), Part I, LNCS 12825, pages 526-555. Springer, 2021. arXiv:2004.09674.
P. Ananth and R. L. La Placa. Secure Software Leasing. In Advances in Cryptology (EUROCRYPT 2021), Part II, LNCS 12697, pages 501-530. Springer, 2021. arXiv:2005.05289.
M. Bellare, A. Desai, E. Jokipii, and P. Rogaway. A Concrete Security Treatment of Symmetric Encryption. In 38th Annual Symposium on Foundations of Computer Science (FOCS 1997), pages 394-403. IEEE Computer Society, 1997.
A. Broadbent, S. Jeffery, S. Lord, S. Podder, and A. Sundaram. Secure Software Leasing Without Assumptions. In Theory of Cryptography (TCC 2021), Part I, LNCS 13042, pages 90-120. Springer, 2021. arXiv:2101.12739.
A. Broadbent and S. Lord. Uncloneable Quantum Encryption via Oracles. In 15th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2020), LIPIcs 158, pages 4:1-4:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. arXiv:1903.00130.
S. Ben-David and O. Sattath. Quantum Tokens for Digital Signatures, 2016. arXiv:1609.09047.
D. Boneh and M. Zhandry. Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World. In Advances in Cryptology (CRYPTO 2013), Part II, LNCS 8043, pages 361-379. Springer, 2013. Cryptology ePrint Archive: 2013/088.
C. Chevalier, E. Ebrahimi, and Q. H. Vu. On the Security Notions for Encryption in a Quantum World. IACR Cryptology ePrint Archive, 2020:237, 2020.
A. Coladangelo, J. Liu, Q. Liu, and M. Zhandry. Hidden Cosets and Applications to Unclonable Cryptography. In Advances in Cryptology (CRYPTO 2021), Part I, LNCS 12825, pages 556-584. Springer, 2021. arXiv:2107.05692.
A. Coladangelo, C. Majenz, and A. Poremba. Quantum copy-protection of compute-and-compare programs in the quantum random oracle model, 2020. arXiv:2009.13865.
T. Gagliardoni, A. Hülsing, and C. Schaffner. Semantic Security and Indistinguishability in the Quantum World. In Advances in Cryptology (CRYPTO 2016), Part III, LNCS 9816, pages 60-89. Springer, 2016. arXiv:1504.05255.
Mesoscopic eigenvalue statistics of Wigner matrices

Yukun He, Antti Knowles

August 16, 2016

Abstract. We prove that the linear statistics of the eigenvalues of a Wigner matrix converge to a universal Gaussian process on all mesoscopic spectral scales, i.e. scales larger than the typical eigenvalue spacing and smaller than the global extent of the spectrum.

DOI: 10.1214/16-aap1237. arXiv: 1603.01499.
1. Introduction

Let H be an N × N Wigner matrix, i.e. a Hermitian random matrix with independent upper-triangular entries with zero expectation and constant variance. We normalize H so that as N → ∞ its spectrum converges to the interval [−2, 2], and therefore its typical eigenvalue spacing is of order N^{−1}. In this paper we study linear eigenvalue statistics of H of the form

Tr f((H − E)/η) ,    (1.1)

where f is a test function, E ∈ (−2, 2) a fixed reference energy inside the bulk spectrum, and η an N-dependent spectral scale. We distinguish the macroscopic regime η ≍ 1, the microscopic regime η ≍ N^{−1}, and the mesoscopic regime N^{−1} ≪ η ≪ 1. The limiting distribution of (1.1) in the macroscopic regime is by now well understood; see [2,20]. Conversely, in the microscopic regime the limiting distribution of (1.1) is governed by the distribution of individual eigenvalues of H.
This question has recently been the focus of much attention, and the universality of the emerging Wigner-Dyson-Mehta (WDM) microscopic eigenvalue statistics for Wigner matrices has been established in great generality; we refer to the surveys [11,16] for further details. In this paper we focus on the mesoscopic regime. The study of linear eigenvalue statistics of Wigner matrices on mesoscopic scales was initiated in [5,6]. In [5], the authors consider the case of Gaussian H (the Gaussian Orthogonal Ensemble) and take f (x) = (x − i) −1 , in which case (1.1) is η times the trace of the resolvent of H at E + iη. Under these assumptions, it is proved in [5] that, after a centring, the linear statistic (1.1) converges in distribution to a Gaussian random variable on all mesoscopic scales N −1 η 1. In [6] this result was extended to a class of Wigner matrices for the range of mesoscopic scales N −1/8 η 1. Recently, the results of [6] were extended in [19] to arbitrary Wigner matrices, mesoscopic scales N −1/3 η 1, and general test functions f subject to mild regularity and decay conditions. Apart from the works [6,19] on Wigner matrices, mesoscopic eigenvalue statistics have also been analysed for invariant ensembles; see [7,10] and the references therein. Let Z = (Z(f )) f denote the Gaussian process obtained as the mesoscopic limit of a centring of (1.1). From the works cited above, it is known that the variance of Z(f ) is the square of the Sobolev H 1/2 -norm of f : EZ(f ) 2 = 1 2π 2 f (x) − f (y) x − y 2 dx dy = 1 π |ξ| |f (ξ)| 2 dξ ,(1.2) wheref (ξ) . .= (2π) −1/2 f (x)e −iξx dx. Hence, a remarkable property of Z is scale invariance: Z d = Z λ , where Z λ (f ) . .= Z(f λ ) and f λ (x) . .= f (λx). It may be shown that Z is obtained by extrapolating the microscopic WDM statistics to mesoscopic scales, and we therefore refer to its behaviour as the WDM mesoscopic statistics. 
In light of the microscopic universality results for Wigner matrices mentioned above, the emergence of WDM statistics on mesoscopic scales is therefore not surprising. All of the models described above, including Wigner matrices, correspond to mean-field models without spatial structure. In [12,13], linear eigenvalue statistics were analysed for band matrices, where matrix entries are set to be zero beyond a certain distance W N from the diagonal. Band matrices are a commonly used model of quantum transport in disordered media. Wigner matrices can be regarded as a special case W = N of band matrices. Unlike the mean-field Wigner matrices, band matrices possess a nontrivial spatial structure. An important motivation for the study of mesoscopic eigenvalue statistics of band matrices arises from the theory of conductance fluctuations; we refer to [12] for more details. The results of [12,13] hold in the regime W −1/3 η 1, and hence for the special case of Wigner matrices they hold for N −1/3 η 1. A key conclusion of [12,13] is that for band matrices there is a sharp transition in the mesoscopic spectral statistics, predicted in the physics literature [1]: above a certain critical spectral scale η c the mesoscopic spectral statistics are no longer governed by the Wigner-Dyson-Mehta mesoscopic statistics (1.2), but by new limiting statistics, referred to as Altshuler-Shklovskii (AS) statistics in [12,13], which are not scale invariant like (1.2). For instance for the d-dimensional (d = 1, 2, 3) AS statistics, the variance of the limiting Gaussian process Z(f ) is 1 π |ξ| 1−d/2 |f (ξ)| 2 dξ instead of the right-hand side of (1.2); see [12,13]. In particular, there is a range of mesoscopic scales such that, although the microscopic eigenvalue statistics are expected to satisfy the WDM statistics, the mesoscopic statistics do not, and instead satisfy the AS statistics. Hence, the WDM statistics on microscopic and mesoscopic scales do in general not come hand in hand. 
In this paper we establish the WDM mesoscopic statistics for Wigner matrices in full generality. Our results hold on all mesoscopic scales N −1 η 1 and all Wigner matrices whose entries have finite moments of order 4 + o (1). We require our test functions to have 1 + o(1) continuous derivatives and be subject to mild decay assumptions, as in [19]. The precise statements are given in Section 2 below. Our proof is based on two main ingredients: families of self-consistent equations for moments of linear statistics inspired by [6], and the local semicircle law for Wigner matrices from [14,18]. Our analysis of the self-consistent equations departs significantly from that of [6], since repeating the steps there, even using the optimal bounds provided by the local semicircle law, requires the lower bound η N −1/2 on the spectral scale. In addition, dealing with general test functions f instead of f (x) = (x − i) −1 requires a new family of self-consistent equations that is combined with the Helffer-Sjöstrand representation for general functions of H. We perform the proof in two major steps. In the first step, performed in Section 4, we consider traces of resolvents G ≡ G(z) = (H − z) −1 , corresponding to taking f (x) = (x − i) −1 in (1.1). Denoting by G the normalized trace of G and X . .= X − EX, we derive a family of self-consistent equations (see (4.22) below) for the moments E G * n G m following [6], obtained by expanding one factor inside the expectation using the resolvent identity and then applying a standard cumulant expansion (see Lemma 3.1 below) to the resulting expression of the form Ef (h)h. The main work of the first step is to estimate the error terms of the self-consistent equation. 
An important ingredient is a careful estimate of the remainder term in the cumulant expansion (see Lemma 4.6 (i) below), which allows us to remove the condition η N −1/2 on the spectral scale that would be required if one merely combined the local semicircle law with the approach of [6]. Other important tools behind these estimates are new precise high-probability bounds on the entries of the powers G k of the resolvent (see Lemma 4.4 below) and a further family of self-consistent equations for EG k (see Lemma 4.8 below). In the second step, performed in Section 5, we consider general test functions f . The starting point is the well-known Helffer-Sjöstrand respresentation of (1.1) as an integral of traces of resolvents. An important ingredient of the proof is a self-consistent equation (see (5.21) below) that is used on the integrand of the Helffer-Sjöstrand representation. Compared to the first step, we face the additional difficulty that the arguments z of the resolvents are now integrated over, and may in particular have very small imaginary parts. Handling such integrals for arbitrary mesoscopic scales η and comparatively rough test functions in C 1+o(1) requires some care, and we use two different truncation scales N −1 ω σ η in the imaginary part of z, which allow us to extract the leading term. See Section 5 for a more detailed explanation of the truncation scales. The error terms are estimated by a generalization of the estimates established in the first step. Finally, in Section 6 we give a simple truncation and comparison argument that allows us to consider without loss of generality Wigner matrices whose entries have finite moments of all order, instead of finite moments of order 4 + o(1). Conventions. We regard N as our fundamental large parameter. Any quantities that are not explicitly constant or fixed may depend on N ; we almost always omit the argument N from our notation. 
We use C to denote a generic large positive constant, which may depend on some fixed parameters and whose value may change from one expression to the next. Similarly, we use c to denote a generic small positive constant.

2. Results

We begin this section by defining the class of random matrices that we consider.

Definition 2.1 (Wigner matrix). A Wigner matrix is a Hermitian N × N matrix H = H* ∈ C^{N×N} whose entries H_ij satisfy the following conditions.

(i) The upper-triangular entries (H_ij : 1 ≤ i ≤ j ≤ N) are independent.

(ii) We have EH_ij = 0 for all i, j, and E|√N H_ij|² = 1 for i ≠ j.

(iii) There exist constants c, C > 0 such that E|√N H_ij|^{4+c−2δ_ij} ≤ C for all i, j.

We distinguish the real symmetric case, where H_ij ∈ R for all i, j, and the complex Hermitian case, where EH²_ij = 0 for i ≠ j. For conciseness, we state our results for the real symmetric case. Analogous results hold for the complex Hermitian case; see Remark 2.4 below.

Our first result is on the convergence of the trace of the resolvent G(z) := (H − z)^{−1}, where Im z ≠ 0. The Stieltjes transform of the empirical spectral measure of H is

⟨G⟩(z) := (1/N) Tr G(z) .    (2.1)

For x ∈ R, z ∈ C, and Im z ≠ 0, the Wigner semicircle law ϱ and its Stieltjes transform m are defined by

ϱ(x) := (1/(2π)) √((4 − x²)₊) ,  m(z) := ∫ ϱ(x)/(x − z) dx .    (2.2)

Denote by H := {z ∈ C : Im z > 0} the complex upper half-plane. Let (Y(b))_{b∈H} denote the complex-valued Gaussian process with mean zero and covariance

E[Y(b₁) conj(Y(b₂))] = −2/(b₁ − conj(b₂))² ,  E[Y(b₁) Y(b₂)] = 0    (2.3)

for all b₁, b₂ ∈ H. For instance, we can set

Y(b) = (1/√2) (2/(b + i))² Σ_{k=0}^∞ √(k+1) ((b − i)/(b + i))^k ϑ_k ,    (2.4)

where (ϑ_k)_{k=0}^∞ is a family of independent standard complex Gaussians; by definition, a standard complex Gaussian is a mean-zero Gaussian random variable X satisfying EX² = 0 and E|X|² = 1. Finally, for E ∈ R and η > 0, we define the process (Ŷ(b))_{b∈H} through

Ŷ(b) := Nη (⟨G⟩(E + bη) − m(E + bη))

for all b ∈ H. We may now state our first result.
Theorem 2.2 (Convergence of the resolvent). Let H be a real symmetric Wigner matrix. Fix α ∈ (0, 1) and set η . .= N −α . Fix E ∈ (−2, 2). Then the process (Ŷ (b)) b∈H converges in the sense of finite-dimensional distributions to (Y (b)) b∈H as N → ∞. That is, for any fixed p and b 1 , b 2 , . . . , b p ∈ H, we have (Ŷ (b 1 ), . . . ,Ŷ (b p )) d −→ (Y (b 1 ), . . . , Y (b p )) (2.5) as N → ∞. Our second result is on the convergence of the trace of general functions of H. For fixed r, s > 0, denote by C 1,r,s (R) the space of all real-valued C 1 -functions f such that f is r-Hölder continuous uniformly in x, and |f (x)| + |f (x)| = O((1 + |x|) −1−s ). Let (Z(f )) f ∈C 1,r,s (R) denote the real-valued Gaussian process with mean zero and covariance E(Z(f 1 )Z(f 2 )) = 1 2π 2 (f 1 (x) − f 1 (y))(f 2 (x) − f 2 (y)) (x − y) 2 dx dy (2.6) for all f 1 , f 2 ∈ C 1,r,s (R) (see also (1.2)). Our next result is the weak convergence of the procesŝ Z(f ) . .= Tr f H − E η − N 2 −2 (x)f x − E η dx ,(2.7) where f ∈ C 1,r,s (R). We may now state our second result. Theorem 2.3 (Convergence of general test functions). Let H be a real symmetric Wigner matrix. Fix α ∈ (0, 1) and set η . .= N −α . Fix E ∈ (−2, 2). Then the process (Ẑ(f )) f ∈C 1,r,s (R) converges in the sense of finite-dimensional distributions to (Z(f )) f ∈C 1,r,s (R) as N → ∞. That is, for any fixed p and f 1 , f 2 , . . . , f p ∈ C 1,r,s (R), we have (Ẑ(f 1 ), . . . ,Ẑ(f p )) d −→ (Z(f 1 ), . . . , Z(f p )) (2.8) as N → ∞. Remark 2.4. In the complex Hermitian case, Theorems 2.2 and 2.3 remain true up to an additional factor 1/2 in the covariances. More precisely, if H is a complex Wigner matrix then (2.5) is replaced by (Ŷ (b 1 ), . . . ,Ŷ (b p )) d −→ 1 √ 2 (Y (b 1 ), . . . , Y (b p )) (2.9) and (2.8) by (Ẑ(f 1 ), . . . ,Ẑ(f p )) d −→ 1 √ 2 (Z(f 1 ), . . . , Z(f p )) . (2.10) The minor modifications to the proof in the complex Hermitian case are given in Section 7 below. 
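As a sanity check that is not part of the paper, the series representation (2.4) reproduces the covariance (2.3), understood in the Hermitian sense E[Y(b₁) conj(Y(b₂))] = −2/(b₁ − conj(b₂))². Since E[ϑ_j conj(ϑ_k)] = δ_jk and E[ϑ_j ϑ_k] = 0 for standard complex Gaussians, the covariance reduces to a convergent series that can be summed numerically; the chosen points b and the truncation K are arbitrary.

```python
# Summing the series coming from (2.4):
#   E[Y(b1) conj(Y(b2))]
#   = (1/2) * (2/(b1+i))^2 * conj((2/(b2+i))^2) * sum_k (k+1) * w^k ,
# with w = ((b1-i)/(b1+i)) * conj((b2-i)/(b2+i)), |w| < 1 for b1, b2 in the
# upper half-plane. This should equal -2/(b1 - conj(b2))^2 from (2.3).

def cov_from_series(b1, b2, K=4000):
    pref = 0.5 * (2 / (b1 + 1j)) ** 2 * ((2 / (b2 + 1j)) ** 2).conjugate()
    w = ((b1 - 1j) / (b1 + 1j)) * (((b2 - 1j) / (b2 + 1j)).conjugate())
    return pref * sum((k + 1) * w ** k for k in range(K))

def cov_target(b1, b2):
    return -2 / (b1 - b2.conjugate()) ** 2

for b1, b2 in [(1j, 1j), (0.4 + 0.8j, -0.3 + 1.5j), (2 + 0.5j, 2 + 0.5j)]:
    s, t = cov_from_series(b1, b2), cov_target(b1, b2)
    assert abs(s - t) < 1e-6, (b1, b2, s, t)

# Consistency with Proposition 4.2 below: E|Y(i)|^2 = -2/(2i)^2 = 1/2,
# the variance of N_C(0, 1/2).
assert abs(cov_target(1j, 1j) - 0.5) < 1e-15
print("series representation (2.4) matches the covariance (2.3)")
```

In closed form the series is pref/(1 − w)², and a short computation shows this equals −2/(b₁ − conj(b₂))² identically; the numeric check above confirms the algebra.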
Tools The rest of this paper is devoted to the proofs of Theorems 2.2 and 2.3. In this section we collect notations and tools that are used throughout the paper. Let M be an N × N matrix. We use the notations M ij n ≡ (M ij ) n , M * n ≡ (M * ) n , M * ij ≡ (M * ) ij = M ji . We denote by M the operator norm of M , and abbreviate M . .= 1 N Tr M . It is easy to see that G(E + iη) |η| −1 . For σ > 0, we use N (0, σ 2 ) to denote the real Gaussian random variable with mean zero and variance σ 2 , and N C (0, σ 2 ) d = σN C (0, 1) the complex Gaussian random variable with mean zero and variance σ 2 . We abbreviate X . .= X − EX for any random variable X with finite expectation. Finally, if h is a real-valued random variable with finite moments of all order, we denote by C k (h) the kth cumulant of h, i.e. C k (h) . .= (−i) k · ∂ k λ log Ee iλh λ=0 . (3.1) We now state the cumulant expansion formula that is a central ingredient of the proof. The formula is analogous to the corresponding formula in [6], and its proof is obtained as a minor modification whose details we omit. Ef (h)h = l k=0 1 k! C k+1 (h)Ef (k) (h) + R l+1 ,(3. 2) provided all expectations in (3.2) exist. For any fixed τ > 0, the remainder term R l+1 satisfies R l+1 = O(1) · E h l+2 · 1 {|h|>N τ −1/2 } · f (l+1) ∞ + O(1) · E|h| l+2 · sup |x| N τ −1/2 f (l+1) (x) . (3.3) The bulk of the proof is performed on Wigner matrices satisfying a stronger condition than Definition 2.1 (iii) by having entries with finite moments of all order. Definition 3.2. We consider the subset of Wigner matrices obtained from Definition 2.1 by replacing (iii) with (iii)' For each p ∈ N there exists a constant C p such that E| √ N H ij | p C p for all N, i, j. We focus on Wigner matrices satisfying Definition 3.2 until Section 6, where we explain how to relax the condition (iii)' to (iii) using a Green function comparison argument; see Section 6 for more details. 
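As an aside not taken from the paper, the cumulant expansion (3.2) can be verified exactly in a simple case: for polynomial f and l ≥ deg f the remainder R_{l+1} vanishes. We take h to be a centred Poisson variable (cumulants C₁ = 0 and C_k = λ for k ≥ 2) and recover its moments from the cumulants via the standard moment-cumulant recursion; the value of λ and the choice f(x) = x³ are arbitrary.

```python
from math import comb, factorial

# Exact check of (3.2) for f(x) = x^3 and h = X - lam with X ~ Poisson(lam):
# C_1(h) = 0 and C_k(h) = lam for k >= 2. Moments from cumulants via
#   m_n = sum_{k=0}^{n-1} C(n-1, k) * c_{k+1} * m_{n-1-k}.

lam = 1.7
L = 12
c = [0.0, 0.0] + [lam] * L      # c[k] = C_k(h)
m = [1.0]                        # m[n] = E h^n
for n in range(1, L + 1):
    m.append(sum(comb(n - 1, k) * c[k + 1] * m[n - 1 - k] for k in range(n)))

# Left-hand side of (3.2): E f(h) h = E h^4 (known to be lam + 3*lam^2).
lhs = m[4]

# Right-hand side: sum_k C_{k+1}(h)/k! * E f^(k)(h), with
# f' = 3x^2, f'' = 6x, f''' = 6, f'''' = 0, so R_5 = 0.
deriv_moments = [m[3], 3 * m[2], 6 * m[1], 6 * m[0], 0.0]
rhs = sum(c[k + 1] / factorial(k) * deriv_moments[k] for k in range(5))

assert abs(lhs - rhs) < 1e-10, (lhs, rhs)
assert abs(m[4] - (lam + 3 * lam * lam)) < 1e-10
print("cumulant expansion (3.2) is exact here:", lhs, rhs)
```

Both sides equal λ + 3λ², illustrating how the non-Gaussian cumulants C₃, C₄, ... contribute the terms beyond Stein's lemma.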
We shall deduce Theorem 2.3 from Theorem 2.2 using the Helffer-Sjöstrand formula [9], which is summarized in the following result whose standard proof we omit.

Lemma 3.3 (Helffer-Sjöstrand formula). Let f ∈ C^1(R) and define its quasi-analytic extension

f̃(x + iy) := f(x) + i(f(x + y) − f(x)) .    (3.4)

If f is further in C²(R), we can also set

f̃(x + iy) := f(x) + iy f'(x) .    (3.5)

Let χ ∈ C_c^∞(R) be a cutoff function satisfying χ(0) = 1, and by a slight abuse of notation write χ(z) ≡ χ(Im z). Then for any λ ∈ R we have

f(λ) = (1/π) ∫_C ∂z̄(f̃(z)χ(z)) / (λ − z) d²z ,    (3.6)

where ∂z̄ := (1/2)(∂_x + i∂_y) is the antiholomorphic derivative and d²z the Lebesgue measure on C.

The following definition introduces a notion of a high-probability bound that is suited for our purposes. It was introduced (in a more general form) in [14].

Definition 3.4 (Stochastic domination). Let

X = (X^{(N)}(u) : N ∈ N, u ∈ U^{(N)}) ,  Y = (Y^{(N)}(u) : N ∈ N, u ∈ U^{(N)})

be two families of nonnegative random variables, where U^{(N)} is a possibly N-dependent parameter set. We say that X is stochastically dominated by Y, uniformly in u, if for all (small) ε > 0 and (large) D > 0 we have

sup_{u ∈ U^{(N)}} P[ X^{(N)}(u) > N^ε Y^{(N)}(u) ] ≤ N^{−D}    (3.7)

for large enough N ≥ N₀(ε, D). If X is stochastically dominated by Y, we use the notation X ≺ Y. The stochastic domination will always be uniform in all parameters, such as z and matrix indices, that are not explicitly constant.

We conclude this section with the local semicircle law for Wigner matrices from [14,18]. For a recent survey of the local semicircle law, see [3], where the following version of the local semicircle law is stated.

Theorem 3.5 (Local semicircle law). Fix τ > 0 and define the spectral domain S := {E + iη : |E| ≤ τ^{−1}, N^{−1+τ} ≤ η ≤ τ^{−1}}. Then we have the bounds

max_{i,j} |G_ij(z) − δ_ij m(z)| ≺ √(Im m(z)/(Nη)) + 1/(Nη)    (3.8)

and

|⟨G⟩(z) − m(z)| ≺ 1/(Nη) ,    (3.9)

uniformly in z = E + iη ∈ S. Moreover, outside the spectral domain we have the stronger estimates

max_{i,j} |G_ij(z) − δ_ij m(z)| ≺ 1/√N    (3.10)

and

|⟨G⟩(z) − m(z)| ≺ 1/N ,    (3.11)

uniformly in z ∈ H \ S.

4. Convergence of the resolvent

In this section we prove the following weaker form of Theorem 2.2.

Proposition 4.2. Fix α ∈ (0, 1), set η := N^{−α}, and fix E ∈ (−2, 2). Then, writing [G] := ⟨G⟩(E + iη) − m(E + iη), we have

Nη [G] →d N_C(0, 1/2)    (4.1)

as N → ∞.
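As a numerical aside (not from the paper), the Helffer-Sjöstrand formula (3.6) with the C² extension (3.5) can be tested directly for a concrete function. Since χ depends only on y, one has ∂z̄(f̃χ) = (iy/2) f''(x) χ(y) + (i/2) f̃(x+iy) χ'(y). The choices f(x) = 1/(x²+1), the bump cutoff χ, the point λ, and the grid are all ours.

```python
import math

# Midpoint-rule evaluation of (1/pi) * iint dzbar(ftilde*chi)/(lam - z) dx dy
# for f(x) = 1/(x^2+1), ftilde as in (3.5), chi a smooth bump on |y| < 2.

def f(x):   return 1 / (x * x + 1)
def f1(x):  return -2 * x / (x * x + 1) ** 2
def f2(x):  return (6 * x * x - 2) / (x * x + 1) ** 3

def chi(y):
    u = y / 2
    return math.exp(1 - 1 / (1 - u * u)) if abs(u) < 1 else 0.0

def chi1(y):
    u = y / 2
    return -chi(y) * u / (1 - u * u) ** 2 if abs(u) < 1 else 0.0

lam = 0.6
hx, hy = 0.05, 0.01
xs = [-15 + hx * (i + 0.5) for i in range(int(30 / hx))]
ys = [-2 + hy * (j + 0.5) for j in range(int(4 / hy))]
fx, f1x, f2x = [f(x) for x in xs], [f1(x) for x in xs], [f2(x) for x in xs]

total = 0j
for y in ys:
    cy, cy1 = chi(y), chi1(y)
    for i, x in enumerate(xs):
        dzbar = 0.5j * y * f2x[i] * cy + 0.5j * (fx[i] + 1j * y * f1x[i]) * cy1
        total += dzbar / (lam - x - 1j * y)
integral = total * hx * hy / math.pi

assert abs(integral - f(lam)) < 2e-2, (integral, f(lam))
print(integral.real, f(lam))
```

The integral reproduces f(λ) = 1/(λ²+1) up to quadrature and domain-truncation error, illustrating why (3.6) lets one trade a general test function for an integral over resolvents.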
The convergence also holds in the sense of moments. The main work in this section is to show the one-dimensional case from Proposition 4.2, whose proof can easily be extended to the general case of Theorem 4.1 (see Section 4.4 below). Recall the notation X̊ := X − EX for the centring of X. Proposition 4.2 is a direct consequence of the following lemma.

Lemma 4.3. (i) For fixed m, n ∈ N satisfying m + n ≥ 2 we have

E[⟨G̊*⟩ⁿ⟨G̊⟩ᵐ] = (n!/2ⁿ) N^{2n(α−1)} + O(N^{2n(α−1)−c₀}) if m = n, and E[⟨G̊*⟩ⁿ⟨G̊⟩ᵐ] = O(N^{(m+n)(α−1)−c₀}) if m ≠ n,    (4.2)

where

c₀ ≡ c₀(α) := (1/3) min{α, 1 − α} .    (4.3)

(ii) We have

[G] − ⟨G̊⟩ = E⟨G⟩ − m(E + iη) = O(N^{α−1−c₀}) ,    (4.4)

with c₀ defined in (4.3).

As advertised, Proposition 4.2 follows immediately from Lemma 4.3. Indeed, suppose that Lemma 4.3 holds. From (4.2) we find that N^{1−α}⟨G̊⟩ converges to N_C(0, 1/2) in the sense of moments, and hence also in distribution. Proposition 4.2 therefore follows from (4.4).

The bulk of this section, Sections 4.1-4.3, is devoted to the proof of Lemma 4.3.

4.1. Preliminary estimates on G. We begin with estimates on the entries of G^k. For k ≥ 2, the bounds provided by the following lemma are significantly better than those obtained for G^k by applying the estimate (3.8) to each entry of the matrix product. For instance, a straightforward application of (3.8) yields |(G^k)_ij| ≺ N^{k(1+α)/2−1}, which is not enough to conclude the proof of Lemma 4.3.

Lemma 4.4. For any fixed k ∈ N₊ we have

⟨G̊^k⟩ ≺ N^{kα−1}    (4.5)

as well as

|(G^k)_ij| ≺ N^{(k−1)α} if i = j, and |(G^k)_ij| ≺ N^{(k−1/2)α−1/2} if i ≠ j,    (4.6)

uniformly in i, j.

Proof. We first prove (4.5). The case k = 1 is easy. Indeed, from (3.9) we get

|⟨G̊⟩| ≤ |[G]| + |E[G]| ≺ N^{α−1}    (4.7)

as desired, where we used the definition of ≺ combined with the trivial bound |⟨G⟩| ≤ N^α to estimate E[G]. Next, for k ≥ 2 we write

G^k = [ ((H − E)/η + i) ((H − E)²/η² + 1)^{−1} ]^k · η^{−k} = f((H − E)/η) · η^{−k} ,    (4.8)

where we defined f(x) := ((x + i)/(x² + 1))^k. Note that f : R → C is smooth, and for any n ∈ N, |f^{(n)}(x)| = O((1 + |x|)^{−2}).
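As an aside on Lemma 4.3 (not part of the paper): the leading coefficient n!/2ⁿ in (4.2) is exactly the 2n-th absolute moment of a complex Gaussian N_C(0, 1/2), since for X ~ N_C(0, σ²) one has E|X|^{2n} = n! σ^{2n} and E[Xᵐ conj(X)ⁿ] = 0 for m ≠ n. A seeded Monte Carlo check with stdlib tools (sample size and tolerances are ours):

```python
import math, random

# X = g1 + i*g2 with g1, g2 ~ N(0, sigma2/2) i.i.d. gives E|X|^2 = sigma2.
# Targets: E|X|^4 = 2*sigma2^2 = 0.5, E|X|^6 = 6*sigma2^3 = 0.75,
# and the "off-diagonal" moment E[X^2 conj(X)] = 0, matching (4.2).

random.seed(7)
sigma2 = 0.5
samples = 200000
m_abs4 = m_abs6 = 0.0
m_21 = 0j
s = math.sqrt(sigma2 / 2)
for _ in range(samples):
    x = complex(random.gauss(0, s), random.gauss(0, s))
    a2 = abs(x) ** 2
    m_abs4 += a2 * a2
    m_abs6 += a2 * a2 * a2
    m_21 += x * x * x.conjugate()
m_abs4 /= samples
m_abs6 /= samples
m_21 /= samples

assert abs(m_abs4 - 2 * sigma2 ** 2) < 0.02
assert abs(m_abs6 - 6 * sigma2 ** 3) < 0.06
assert abs(m_21) < 0.02
print(m_abs4, m_abs6, m_21)
```

This is the moment structure that lets (4.2) upgrade to the distributional convergence N^{1−α}⟨G̊⟩ → N_C(0, 1/2).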
We definef as in (3.5) and let χ be as in Lemma 3.3 and satisfy χ(y) = 1 for |y| 1. Writing f η (x) . .= f x−E η , we obtain from Lemma 3.3 that f η (H) = 1 π C ∂z(f η (z)χ(z/η)) H − z d 2 z , so that G k = 1 2πη k R 2 iyf η (x)χ(y/η) + i η f η (x)χ (y/η) − y η f η (x)χ (y/η) G(x + iy) dx dy . (4.9) In order to estimate the right-hand side, we use (3.9) and (3.11) to obtain G(x + iy) ≺ 1 N |y| (4.10) uniformly for |y| ∈ (0, 1) and x. Hence, 1 2πη k R 2 iyf η (x)χ(y/η) G(x + iy) dx dy ≺ 1 2πη k R 2 1 N f η (x)χ(y/η) dx dy = O(1) N η k . (4.11) (Note that the use of stochastic domination inside the integral requires some justification. In fact, we use that a high-probability bound of the form (4.10) holds simultaneously for all x ∈ R and |y| ∈ (0, 1). We refer to [3, Remark 2.7 and Lemma 10.2] for further details.) Similarly, by our choice of χ, we find 1 2πη k R 2 i η f η (x)χ (y/η) G(x + iy) dx dy ≺ 1 2πη k |y| η 1 N η 2 f η (x)χ (y/η) dx dy = O(1) N η k . An analogous estimate yields 1 2πη k R 2 y η f η (x)χ (y/η) G(x + iy) dx dy ≺ 1 N η k . Altogether we have | G k | ≺ N kα−1 , which is (4.5). Next, we prove (4.6). For k = 1, we use the well-known bound |m(z)| < 1, which follows using an elementary estimate from the fact that m is the unique solution of m(z) + 1 m(z) + z = 0 (4.12) satisfying Im m(z) Im z > 0. Thus by (3.8) we have |(G) ij | ≺ 1 if i = j N (α−1)/2 if i = j , (4.13) which is (4.6) for k = 1. The extension to k 2 follows again using Lemma 3.1, and we omit the details. 4.5. (i) If X 1 ≺ Y 1 and X 2 ≺ Y 2 then X 1 X 2 ≺ Y 1 Y 2 . (ii) Suppose that X is a nonnegative random variable satisfying X N C and X ≺ Φ for some deterministic Φ N −C . Then EX ≺ Φ. Proof of Lemma 4.3 (i). Abbreviating ζ i . .= E| √ N H ii | 2 , we find from Definition 3.2 (iii)' that ζ i = O(1) . We write z . .= E + iη and often omit the argument z from our notation. Note that G = (H − z) −1 and G * = (H −z) −1 . 
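As an aside not taken from the paper, the function f in (4.8) above is simply f(x) = (x − i)^{−k}, since x² + 1 = (x + i)(x − i); hence f((H−E)/η) η^{−k} = (H − E − iη)^{−k} = G(E + iη)^k, which is the identity underlying the proof of Lemma 4.4. A scalar check (the values of k, E, η, and the eigenvalue stand-ins are arbitrary):

```python
# Scalar verification of the algebraic identity behind (4.8):
#   ((x+i)/(x^2+1))^k = (x-i)^(-k),  and with x = (ev-E)/eta,
#   f(x) * eta^(-k) = (ev - E - i*eta)^(-k).

k, E, eta = 3, 0.4, 0.02
for ev in [-1.7, 0.1, 2.3]:          # stand-ins for eigenvalues of H
    x = (ev - E) / eta
    f_val = ((x + 1j) / (x * x + 1)) ** k
    assert abs(f_val - (x - 1j) ** (-k)) < 1e-12
    assert abs(f_val * eta ** (-k) - (ev - E - 1j * eta) ** (-k)) < 1e-6
print("identity (4.8) verified on scalars")
```

Applied to each eigenvalue of H, the scalar identity gives the matrix identity used in (4.8).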
In particular, Theorem 3.5 also holds for G * with obvious modifications accounting for the different sign of η. For m, n 1, we need to compute E G * n G m = E G m−1 G * n G . (4.14) By the resolvent identity we have G = 1 z GH − 1 z I , so that E G * n G m = 1 z E G * n G m−1 GH = 1 zN i,j E G * n G m−1 G ij H ji . (4.15) Since H is symmetric, for any differentiable f = f (H) we set ∂ ∂H ij f (H) = ∂ ∂H ji f (H) . .= d dt t=0 f H + t ∆ (ij) ,(4.16) where ∆ (ij) denotes the matrix whose entries are zero everywhere except at the sites (i, j) and (j, i) where they are one: ∆ (ij) kl = (δ ik δ jl + δ jk δ il )(1 + δ ij ) −1 . We then compute the last averaging in (4.15) using the formula (3. 2) with f = f ij (H) . .= G * n G m−1 G ij , h = H ji , and obtain zE G * n G m = 1 N 2 i,j E ∂( G * n G m−1 G ij ) ∂H ji (1 + δ ji (ζ i − 1)) + L = 1 N 2 i,j E G * n G m−1 ∂G ij ∂H ji (1 + δ ji ) + 1 N 2 i,j E ∂( G * n G m−1 ) ∂H ji G ij (1 + δ ji ) + K + L = . . (a) + (b) + K + L , (4.17) where K = N −2 i E ∂( G * n G m−1 G ii ) ∂H ii (ζ i − 2) (4.18) and L = N −1 · i,j l k=2 1 k! C k+1 (H ji )E ∂ k ( G * n G m−1 G ij ) ∂H ji k + R (ji) l+1 . (4.19) Here l is a fixed positive integer to be chosen later, and R (ji) l+1 is a remainder term defined analogously to R l+1 in (3.2). More precisely, we have the bound R (ji) l+1 = O(1) · E H ji l+2 1 {|H ji |>N τ −1/2 } · ∂ l+1 ji f ij H ∞ + O(1) · E H ji l+2 · E sup |x| N τ −1/2 ∂ l+1 ji f ij H (ij) + x∆ (ij) ,(4.20) where we define H (ij) . .= H − H ij ∆ (ij) , so that the matrix H (ij) has zero entries at the positions (i, j) and (j, i), and abbreviate ∂ ij . .= ∂ ∂H ij . Note that for G = (H − z) −1 we have ∂G ij ∂H kl = −(G ik G lj + G il G kj )(1 + δ kl ) −1 , (4.21) which gives (a) = N −2 i,j E G * n G m−1 (−G ij G ij − G ii G jj ) = −N −1 E G * n G m−1 G 2 − E G * n G m−1 G 2 = −N −1 E G * m G m−1 G 2 − E G * n G m G − 2E G * n G m EG + E G * m G m−1 E G 2 . 
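The derivative formula (4.21), with the symmetrised perturbation convention (4.16), can be checked by finite differences on a small matrix. This sketch is not from the paper; the 4×4 matrix H, the point z, and the hand-rolled Gauss-Jordan inverse (used to keep everything stdlib-only) are our choices.

```python
# Finite-difference check of (4.21):
#   dG_ij/dH_kl = -(G_ik G_lj + G_il G_kj) * (1 + delta_kl)^(-1),
# where for k != l the perturbation moves both entries (k,l) and (l,k).

def inverse(a):
    n = len(a)
    aug = [[complex(v) for v in row] + [1 + 0j if i == j else 0j for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                fac = aug[r][col]
                aug[r] = [v - fac * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def resolvent(h, z):
    return inverse([[h[i][j] - (z if i == j else 0) for j in range(len(h))]
                    for i in range(len(h))])

H = [[0.3, -0.2, 0.5, 0.1],
     [-0.2, 0.1, 0.4, -0.3],
     [0.5, 0.4, -0.6, 0.2],
     [0.1, -0.3, 0.2, 0.0]]
z = 0.25 + 0.7j
G = resolvent(H, z)
t = 1e-6

for k, l in [(1, 3), (2, 2)]:
    Hp = [row[:] for row in H]
    Hm = [row[:] for row in H]
    Hp[k][l] += t; Hm[k][l] -= t
    if k != l:
        Hp[l][k] += t; Hm[l][k] -= t
    Gp, Gm = resolvent(Hp, z), resolvent(Hm, z)
    for i in range(4):
        for j in range(4):
            fd = (Gp[i][j] - Gm[i][j]) / (2 * t)
            formula = -(G[i][k] * G[l][j] + G[i][l] * G[k][j]) / (1 + (k == l))
            assert abs(fd - formula) < 1e-5, (i, j, k, l)
print("derivative formula (4.21) verified by finite differences")
```

The factor (1 + δ_kl)^{−1} is what reconciles the symmetrised direction Δ^{(kl)} with the naive entrywise derivative, as the diagonal case (k, l) = (2, 2) confirms.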
Similarly, a straightforward calculation gives

(b) = −(2/N²) [ n E⟨G̊*⟩^{n−1}⟨G̊⟩^{m−1}⟨GG*²⟩ + (m − 1) E⟨G̊*⟩ⁿ⟨G̊⟩^{m−2}⟨G³⟩ ] .

Altogether we obtain

E⟨G̊*⟩ⁿ⟨G̊⟩ᵐ = (1/T) E⟨G̊*⟩ⁿ⟨G̊⟩^{m+1} − (1/T) E⟨G̊*⟩ⁿ⟨G̊⟩^{m−1} E⟨G̊⟩² + (1/(TN)) E⟨G̊*⟩ⁿ⟨G̊⟩^{m−1}⟨G²⟩ + ((2m − 2)/(N²T)) E⟨G̊*⟩ⁿ⟨G̊⟩^{m−2}⟨G³⟩ − K/T − L/T + (2n/(N²T)) E⟨G̊*⟩^{n−1}⟨G̊⟩^{m−1}⟨GG*²⟩ ,    (4.22)

In (4.22), the last term is the leading term. The calculation of (4.22) consists of computing the leading term and estimating the subleading terms. We aim to show that the subleading terms are of order N^{(m+n)(α−1)−c₀}. We begin with L. For k ≥ 2 define

J_k := N^{−(k+3)/2} Σ_{i,j} E[ ∂^k(⟨G̊*⟩ⁿ⟨G̊⟩^{m−1} G_ij) / ∂H_ji^k ] .    (4.25)

Lemma 4.6. Let R^{(ji)}_{l+1} be as in (4.19).

(i) For any fixed D₀ > 0 there exists some l₀ = l₀(D₀) ≥ 2 such that

Σ_{i,j} R^{(ji)}_{l₀+1} = O(N^{−D₀}) .    (4.26)

(ii) For all fixed k ≥ 2 we have

J_k = O(N^{(m+n)(α−1)−c₀}) ,    (4.27)

where c₀ is defined in (4.3).

Before proving Lemma 4.6, we show how to use it to estimate L. By setting D₀ = (m+n)(1−α) in (4.26), we obtain

Σ_{i,j} R^{(ji)}_{l₀+1} = O(N^{(m+n)(α−1)})    (4.28)

for some l₀ ≥ 2. From Definition 3.2 (iii)' we get max_{i,j} C_k(H_ji) = O(N^{−k/2}) for all k ≥ 2. Thus (4.27) and (4.28) together imply

L = O(N^{(m+n)(α−1)−c₀}) ,    (4.29)

as desired.

Proof of Lemma 4.6 (i). Let D₀ > 0 be given. Fix i, j, and choose τ = min{α/2, (1 − α)/4} in (4.20). Define W := H_ij ∆^{(ij)} and Ĥ := H^{(ij)} = H − W. Let Ĝ := (Ĥ − E − iη)^{−1}. We have the resolvent expansions

Ĝ = G + (GW)G + (GW)²Ĝ    (4.30)

and

G = Ĝ − (ĜW)Ĝ + (ĜW)²G .    (4.31)

Using these, the trivial bound |G_ab| ≤ N^α, and the fact that Ĝ is independent of W, we have

max_{a≠b} sup_{|H_ji| ≤ N^{τ−1/2}} |G_ab| ≺ N^{−(1−α)/2} ,    (4.32)

and

max_a sup_{|H_ji| ≤ N^{τ−1/2}} |G_aa| ≺ 1 .    (4.33)

Now let us estimate the last term in (4.20). We have the derivatives

∂_ji G_ab = −(G_aj G_ib + G_ai G_jb)(1 + δ_ji)^{−1} ,  ∂_ji ⟨G⟩ = −(2/N)(G²)_ji (1 + δ_ji)^{−1} ,

and

∂_ji (G²)_ab = −[ (G²)_aj G_ib + (G²)_ai G_jb + (G²)_bj G_ia + (G²)_bi G_ja ] (1 + δ_ji)^{−1} ,

where ∂_ij := ∂/∂H_ij.
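The resolvent expansions (4.30)-(4.31) just used are exact algebraic identities, not approximations. This is easiest to see in the scalar (1×1) case, where they reduce to identities between complex numbers; the following check (not from the paper, with arbitrary values) confirms both:

```python
# Scalar sanity check of (4.30)-(4.31): with Hhat = H - W,
#   Ghat = G + (G W) G + (G W)^2 Ghat,
#   G    = Ghat - (Ghat W) Ghat + (Ghat W)^2 G.

z = 0.3 + 0.05j
h, w = 1.2, 0.37                 # 1x1 "matrix" H and perturbation W
g = 1 / (h - z)                  # G
ghat = 1 / (h - w - z)           # Ghat

assert abs(ghat - (g + g * w * g + (g * w) ** 2 * ghat)) < 1e-12
assert abs(g - (ghat - ghat * w * ghat + (ghat * w) ** 2 * g)) < 1e-12
print("resolvent expansions (4.30)-(4.31) verified")
```

Algebraically, the first identity is equivalent to ghat = g/(1 − gw), i.e. to the geometric-series resummation of the perturbation, which is exactly how the matrix versions are derived.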
Hence for any fixed l ∈ N, ∂^{l+1}_{ji} f_ij is a polynomial in the variables ⟨G̊⟩, ⟨G̊*⟩, (1/N)(G²)_ab, (1/N)(G*²)_ab, G_ab, and G*_ab, with a, b ∈ {i, j}. Note that in each term of the polynomial, the sum of the degrees of ⟨G̊⟩, ⟨G̊*⟩, (1/N)(G²)_ab, and (1/N)(G*²)_ab is m + n − 1, so that the product of the factors other than G_ab and G*_ab is trivially bounded by O(N^{(m+n−1)α}) for all H. Together with (4.32) and (4.33) we know, for any fixed l ∈ N,

sup_{|x| ≤ N^{τ−1/2}} |∂^{l+1}_{ji} f_ij(Ĥ + x∆^{(ij)})| ≺ N^{(m+n−1)α} .

Note that E|H_ji|^{l+2} = O(N^{−(l+2)/2}), and we can find l₀ = l₀(D₀, m, n) ≥ 2 such that

E|H_ji|^{l₀+2} · E sup_{|x| ≤ N^{τ−1/2}} |∂^{l₀+1}_{ji} f_ij(Ĥ + x∆^{(ij)})| = O(N^{−(D₀+2)}) .    (4.34)

Finally, we estimate the first term of (4.20). Note that by the trivial bound |G_ij| ≤ N^α, we have ‖∂^{l₀+1}_{ji} f_ij(H)‖_∞ = O(N^{C(m,n,l₀)}). From Definition 3.2 (iii)' we find max_{i,j} |H_ij| ≺ 1/√N, so by Hölder's inequality we have

E[ |H_ji|^{l+2} 1_{|H_ji| > N^{τ−1/2}} ] · ‖∂^{l+1}_{ji} f_ij(H)‖_∞ = O(N^{−(D₀+2)}) .

This concludes the proof of (i).

Proof of Lemma 4.6 (ii). We begin with the case k = 2, which gives rise to terms of three types depending on how many derivatives act on G_ij. We deal with each type separately.

Step 1. The first type is

J_{2,1} := N^{−5/2} Σ_{i,j} E[ ⟨G̊*⟩ⁿ⟨G̊⟩^{m−1} ∂²G_ij/∂H_ji² ] .

Note that

∂²G_ij/∂H_ji² = a₁ · G_ii G_jj G_ij + a₂ · G_ij³ ,

where a₁, a₂ are some constants depending on the value of δ_ji. Together with (4.6) and Lemma 4.5, we find

J_{2,1} ≤ N^{−5/2} · N² · E|⟨G̊*⟩ⁿ⟨G̊⟩^{m−1}| · O(N^{(α−1)/2+ε}) = O(N^{α/2+ε−1}) · E|⟨G̊*⟩ⁿ⟨G̊⟩^{m−1}|

for any fixed ε > 0. Together with (4.5) and Lemma 4.5, we find

J_{2,1} = O(N^{α/2+ε−1+(m+n−1)(α−1)}) = O(N^{(m+n)(α−1)−c₀}) ,    (4.36)

where in the last inequality we chose ε small enough depending on α.

Step 2. The second type is

J_{2,2} := N^{−5/2} Σ_{i,j} E[ ∂²(⟨G̊*⟩ⁿ⟨G̊⟩^{m−1})/∂H_ji² · G_ij ] .
(4.37)

Since

∂⟨G̊*⟩/∂H_ji = (a₃/N)(G*²)_ij  and  ∂²⟨G̊*⟩/∂H_ji² = (a₄/N)(G*²)_ii G*_jj + (a₅/N)(G*²)_jj G*_ii + (a₆/N)(G*²)_ij G*_ij

for some constants a₃, a₄, a₅, a₆, we see that the most dangerous term of J_{2,2} is of the form

P_{2,2} := N^{−7/2} Σ_{i,j} E[ ⟨G̊*⟩^{n−1}⟨G̊⟩^{m−1} (G*²)_ii G*_jj G_ij ] .    (4.38)

By (4.5), (4.6), and Lemma 4.5 we have

P_{2,2} = O(N^{−7/2+2+(m+n−2)(α−1)+α+(α−1)/2+ε}) = O(N^{(m+n)(α−1)−α/2+ε})

for any fixed ε > 0. The other terms of J_{2,2} are estimated similarly. By choosing ε small enough, we obtain

J_{2,2} = O(N^{(m+n)(α−1)−c₀}) .    (4.39)

Step 3. The third type is

J_{2,3} := N^{−5/2} Σ_{i,j} E[ ∂(⟨G̊*⟩ⁿ⟨G̊⟩^{m−1})/∂H_ji · ∂G_ij/∂H_ji ] .    (4.40)

The most dangerous term in J_{2,3} is of the form P_{2,3} := N^{−7/2} Σ_{i,j} E[· · ·]. Again by (4.5), (4.6), and Lemma 4.5 we obtain, for any fixed ε > 0, an estimate of the same form as for P_{2,2}. The other terms of J_{2,3} are estimated similarly. Thus we get J_{2,3} = O(N^{(m+n)(α−1)−c₀}).

Step 4. Putting the estimates of the three types in Steps 1-3 together, we find J₂ = O(N^{(m+n)(α−1)−c₀}), which concludes the proof of (4.27) for k = 2.

For k ≥ 3, the estimates are easier than those for k = 2 because of the small prefactor N^{−(k+3)/2} in the definition of J_k. Analogously to the case k = 2, we obtain (4.27) for any fixed k ≥ 3 and ε > 0, and we omit further details.

Now we look at the term K defined in (4.18), whose estimate is contained in the next lemma.

Lemma 4.7. We have

K = O(N^{(m+n)(α−1)−α/2}) .    (4.42)

Proof. Let us first consider

B := N^{−2} Σ_i E[ ∂(⟨G̊*⟩ⁿ⟨G̊⟩^{m−1} G_ii)/∂H_ii ] .    (4.43)

The estimate of B is similar to that of J₂, namely we will have terms of two types depending on whether the derivative acts on G_ii or not. We then estimate the terms by Lemmas 4.4 and 4.5, which easily yields

B = O(N^{(m+n)(α−1)−α+ε}) = O(N^{(m+n)(α−1)−α/2}) .

From Definition 3.2 (iii)' we get max_i |ζ_i − 2| = O(1), hence K = O(B). This finishes the proof.

In order to conclude the proof, we need to use that the expectation of ⟨G^k⟩ is typically much smaller than ⟨G^k⟩ itself. Lemma 4.4 implies that E|⟨G^k⟩| ≺ N^{(k−1)α}, which is not enough to conclude the proof. We need some extra decay from the expectation, which is provided by the following result.

Lemma 4.8. We have

E⟨G^k⟩ = O(N^{(k−1)α−c₀})    (4.44)

for k = 2, 3.

Proof. The proof is analogous to that of Lemma 4.6. Let us first consider E⟨G²⟩. Again by the resolvent identity and the cumulant expansion, we arrive at

E⟨G²⟩ = (1/T) E⟨G⟩ + (2/T) E⟨G̊⟩⟨G²⟩ + (1/(TN)) E⟨G³⟩ − K^{(2)}/T − L^{(2)}/T ,    (4.45)

where

K^{(2)} = N^{−2} Σ_i E[ ∂(G²)_ii/∂H_ii · (ζ_i − 2) ] ,    (4.46)

and L^{(2)} = N^{−1} Σ_{i,j} [ Σ_{k=2}^{l} (1/k!)
C k+1 (H ji )E ∂ k (G 2 ) ij ∂H ji k + R (2,ji) l+1 ,(4.(2,ji) l 0 +1 = O(N −1 ) for some l 0 ∈ N. Thus we have |L (2) | l 0 k=2 O(J (2) k ) + O(1), where J (2) k . .= N −(k+3)/2 i,j E ∂ k (G 2 ) ij ∂H ji k . (4.48) Analogously to the proof of Lemma 4.6 (ii), we find J (2) for any fixed ε > 0, and J E G G 2 = O(N (α−1)+(2α−1)+ε ) = O(N 3α−2+ε ) , 1 N EG 3 = O(N −1+2α+ε ) , (4.49) for any fixed ε > 0. Hence by using (4.23) and choosing ε small enough, we obtain EG 2 O(N 2α−1+ε ) + O(N α/2 ) + O(N α+ε−1/2 ) = O(N α−c 0 ) . The proof of the case k = 3 is similar, and we omit the details. Armed with Lemmas 4.6 and 4.8, we may now conclude the proof of Lemma 4.3 (i). We still have to estimate the subleading terms on the right-hand side of (4.22). From (4.5), Lemma 4.5, and Lemma 4.8 we have E G * n G m−2 G 3 = E G * n G m−2 EG 3 + G 3 = O N (m+n)(α−1)+2−c 0 . (4.50) Moreover, (4.5) and Lemma 4.5 imply E G * n G m−1 G 2 = O N (m+n)(α−1)+1−c 0 , (4.51) E G * n G m+1 = O N (m+n)(α−1)−c 0 , (4.52) as well as E G * n G m−1 E G 2 = O N (m+n)(α−1)−c 0 .E G * n G m = 2n N 2 T E G * n−1 G m−1 GG * 2 + O N (m+n)(α−1)−c 0 . By the resolvent identity, GG * 2 = 1 z −z (GG * − G * 2 ) = − N 2α 4 (G − G * ) − N α 2i G * 2 . (4.54) Moreover, (4.5) and Lemmas 4.5 and 4.8 give E G * n−1 G m−1 G * 2 = O(N (m+n−2)(α−1)+α−c 0 ). We therefore conclude that The preceding argument can also be used to show E G * n G m = − n 2T N 2α−2 E G * n−1 G m−1 (G − G * ) + O N (m+n)(α−1)−c 0 , which yields E G * n G m = − n 2T N 2α−2 E G * n−1 G m−1 (EG − EG * ) + O N (E G m = O N m(α−1)−c 0 (4.56) for all m 2. In fact, one can start with E G m = 1 zN i,j E G m−1 G ij H ji and apply Lemma 3.1 to get an analogue of (4.22), which is E G m = 1 T E G m+1 − 1 T E G m−1 E G 2 + 1 T N E G m−1 G 2 + 2m − 2 N 2 T E G m−2 G 3 − K (3) T − L (3) T . Proof of Lemma 4.3 (ii). Again by the resolvent identity and the cumulant expansion, we have EG = 1 U 1 + E G 2 + 1 N EG 2 − K (4) − L (4) ,(4.59) where U . 
.= −z − EG, z = E + iη, K (4) = N −2 i E ∂G ii ∂H ii (ζ i − 2) ,(4.60) and L (4) = N −1 i,j l k=2 1 k! C k+1 (H ji )E ∂ k G ij ∂H ji k + R (4,ji) l+1 . (4.61) Here R (4,ji) l+1 is a remainder term defined analogously to R (ji) l+1 in (4.20). We can argue similarly as in the proof of Lemma 4.6 (i) and show that R (4,ji) l 0 +1 = O(N −2 ) for some l 0 ∈ N. Thus we have |L (4) | l 0 k=2 O(J (4) k ) + O(N −1 ), where J (4) k . .= N −(k+3)/2 i,j E ∂ k G ij ∂H ji k . (4.62) Analogously to the proof of Lemma 4.6 (ii), we find J (4) for any fixed ε > 0, and J (4) k = O(N α/2−1 ) for k 3. This shows that |L (2) | = O(N α/2−1+ε ) for any fixed ε > 0. As in Lemma 4.7, one can show that K (4) = O(N −1 ). By (4.56) and Lemma 4.8, we have E G 2 = O(N 2α−2−c 0 ) and 1 N EG 2 = O(N α−1−c 0 ) . Altogether we have EG(z + EG) + 1 = O(N α−1−c 0 ) . (4.63) Recall that m(z) is the unique solution of x 2 + zx + 1 = 0 satisfying sgn(Im m(z)) = sgn(Im z) = sgn(η) = 1. Letm(z) be the other solution of x 2 + zx + 1 = 0. An application of Lemma 5.5 in [3] gives (i) For fixed m, n 1 and i 1 , . . . , i m , j 1 , . . . , j n ∈ {1, 2, . . . , p}, we have min{|EG − m(z)|, |EG −m(z)|} = O(N α−1−c 0 ) √ κ = O(N α−1−c 0 ) ,(4.E Ỹ (b i 1 ) · · ·Ỹ (b im )Ỹ b j 1 · · ·Ỹ b jn = −2 (b i l −b j k ) 2 + O(N −c 0 ) if m = n O(N −c 0 ) if m = n ,(4. 65) where the notation means summing over all distinct ways of partitioning b i 1 , . . . , b im , b j 1 , . . . , b jn into pairs b i l , b j k , and each summand is the product of the n pairs. (ii) For any fixed b ∈ H, we haveŶ Proof of Lemma 4.9. The proof is similar to that of Lemma 4.3. Indeed, we see that (b) −Ỹ (b) = EŶ (b) = O(N −c 0 ) .E Ỹ (b i 1 ) · · ·Ỹ (b im )Ỹ b j 1 · · ·Ỹ b jn = N (m+n)(1−α) E G(E + b i 1 η) · · · G(E + b im η) G(E + b j 1 η) · · · G(E + b jn η) ,(4.68) which can be computed in the same way as E G m G * n = E G(E + iη) m G(E − iη) n in Section 4.2. 
Most of our previous techniques and estimates can be applied to the new computation; the only difference is that when using the resolvent identity (for example in (4.54)), we now have

G(E + b_{i_k}η) G(E + b_{j_l}η) = (1/(b_{i_k} − b_{j_l})) · (G(E + b_{i_k}η) − G(E + b_{j_l}η))

instead of GG* = (1/2i)(G − G*). This gives different constants in the leading terms, and leads to the induction step

E[ Ỹ(b_{i_1}) ··· Ỹ(b_{i_m}) Ỹ(b_{j_1}) ··· Ỹ(b_{j_n}) ] = Σ_{k=1}^{n} (−2/(b_{i_m} − b_{j_k})²) E[ Ỹ(b_{i_1}) ··· Ỹ(b_{i_{m−1}}) Ỹ(b_{j_1}) ··· Ỹ(b_{j_n}) / Ỹ(b_{j_k}) ] + O(N^{−c₀})

for m, n ⩾ 1. One can also show that E[Ỹ(b_{i_1}) ··· Ỹ(b_{i_m})] = O(N^{−c₀}) and E[Ỹ(b_{j_1}) ··· Ỹ(b_{j_n})] = O(N^{−c₀}) for m, n ⩾ 2. Together, these results imply (4.65). Moreover, (4.66) says nothing but N^{1−α} E[G(E + bη) − m(E + bη)] = O(N^{−c₀}), and this can be shown using the steps in Section 4.3, in which we proved N^{1−α} E[G(E + iη) − m(E + iη)] = O(N^{−c₀}).

5. Convergence of general functions

Similarly to the resolvent case, in this section we prove the following analogue of Theorem 2.3:

[Tr f_η(H)] →d N( 0 , (1/2π²) ∬ ((f(x) − f(y))/(x − y))² dx dy )   (5.1)

as N → ∞. The convergence also holds in the sense of moments. Our main work in this section will be to prove the above one-dimensional case, since the proof can easily be extended to the general case (see Section 5.5). Recall that for a random variable X, we write X̊ := X − EX for its centering. Proposition 5.2 is a direct consequence of the following lemma:

E[ (Tr f_η(H) − E Tr f_η(H))^n ] = ((n − 1)/(2π²)) ∬ ((f(x) − f(y))/(x − y))² dx dy · E[ (Tr f_η(H) − E Tr f_η(H))^{n−2} ] + O(N^{−c/n²}) ,   (5.2)

where c = c(r, s, α) > 0, and

Tr f_η(H) − E Tr f_η(H) →d N( 0 , (1/2π²) ∬ ((f(x) − f(y))/(x − y))² dx dy )   (5.4)

as N → ∞. Note that the above result is proved in the stronger sense of convergence in moments. Proposition 5.2 then follows from (5.3). Sections 5.1 to 5.3 are devoted to proving Lemma 5.3 (i). Before starting the proof, we give some explanations of the ideas, especially the choice of truncations in the proof.
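As a numerical aside (an illustration added here, not part of the argument), the limiting variance in (5.1) can be sanity-checked by simulation. In the sketch below the matrix size, sample count, energy E, and test function f(x) = 1/(1 + x²) are arbitrary choices: it samples GOE-type Wigner matrices, forms the mesoscopic linear statistic Tr f_η(H) at scale η = N^{−1/2}, and compares its empirical variance with the double integral in (5.1) evaluated on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)


def wigner(n):
    """Sample an n x n GOE-type Wigner matrix (off-diagonal variance 1/n)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)


def f(x):
    return 1.0 / (1.0 + x ** 2)


N, E = 400, 0.2          # matrix size; bulk energy |E| < 2
eta = N ** (-0.5)        # mesoscopic scale: 1/N << eta << 1

# Empirical variance of the linear statistic Tr f((H - E)/eta).
stats = []
for _ in range(300):
    lam = np.linalg.eigvalsh(wigner(N))
    stats.append(f((lam - E) / eta).sum())
emp_var = np.var(stats)

# Predicted variance (1/(2 pi^2)) * double integral of ((f(x)-f(y))/(x-y))^2.
x = np.linspace(-30, 30, 1201)
X, Y = np.meshgrid(x, x)
D = np.where(np.abs(X - Y) < 1e-12, 1.0, X - Y)          # guard the diagonal
K = ((f(X) - f(Y)) / D) ** 2
np.fill_diagonal(K, (-2 * x / (1 + x ** 2) ** 2) ** 2)   # diagonal limit f'(x)^2
h = x[1] - x[0]
pred_var = K.sum() * h * h / (2 * np.pi ** 2)            # close to 1/4 for this f

print(emp_var, pred_var)
```

For this f the exact value of the integral is 1/4 (the grid slightly underestimates the tails), and the empirical variance should match it up to statistical error and finite-size corrections at this scale.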
We use Lemma 3.3 to write f η (H) in the form (5.6) below, where we scale the cutoff function χ to be supported in an interval of size O(σ), with N −1 σ η. This scaling ensures that when we integrate ϕ f , the integral of the last term in (5.7) below dominates over the others. We then write E Tr f η (H) n as an integral over C n , written F in (5.8) below. The leading contribution to F arises from the region {|y 1 |, . . . , |y n | ω}, where ω N −1 is a second truncation scale. In order to ensure that F is small in the complementary region, we require that ω σ. Then, when estimating |y 1 |<ω F , the integral over z 1 yields a factor that is small enough to compensate the integrals from the other variables. We use the notations σ = N −(α+β) and ω = N −(α+γ) , so that 0 < β < γ. In addition, for all steps of the analysis to work, we have further requirements on the exponents γ and β; for instance, the last step in (5.18) below requires nβ rsγ/4. Combining all requirements, we are led to set β as in (5.5) below. Transformation by Helffer-Sjöstrand formula. Let f ∈ C 1,r,s (R) with r, s > 0, and without loss of generality we assume s 1. Fix n 2, and define σ . .= N −(α+β) , where β . .= rs 2 c 0 24n 2 ,(5.5) and c 0 is defined in (4.3). We definef as in (3.4). Let χ be as in Lemma 3.3 satisfying χ(y) = 1 for |y| 1, and χ(y) = 0 for |y| 2. An application of Lemma 3.3 gives f η (H) = 1 π C ∂z(f η (z)χ(z/σ)) H − z d 2 z = C ϕ f (z)G(z) d 2 z , (5.6) where ϕ f (x + iy) = 1 2π (i − 1) f η (x + y) − f η (x) χ(y/σ) − 1 σ f η (x + y) − f η (x) χ (y/σ) + i σ f η (x)χ (y/σ) . (5.7) Thus E Tr f η (H) n = N n ϕ f (z 1 ) · · · ϕ f (z n )E G 1 · · · G n d 2 z 1 · · · d 2 z n = . . F , (5.8) where G k . .= (H − z k ) −1 for i ∈ {1, 2, . . . , n}. Note that χ(y/σ) ≡ 0 for |y| 2σ, and we only need to consider the integral for |y 1 |, . . . , |y n | 2σ. The subleading terms. Let ω . .= N −(α+γ) with γ . .= 4nβ/rs, and by (5.5) we have α + β < α + γ < 1. We define X . .= {|x 1 |, . . . 
, |x n | 2 − κ 2 } and Y . .= {|y 1 |, . . . , |y n | ∈ [ω, 2σ]}, where we recall the definition (4.24) of κ. We have a lemma about F outside the region X × Y . Lemma 5.4. For F as in (5.8) we have (X×Y ) c F = O(N −β/2 ) . (5.9) Proof. We first estimate R n ×Y c F . By the estimates (3.9) and (3.11) we know G 1 · · · G n ≺ 1 |y 1 · · · y n |N n (5.10) uniformly in {|y 1 |, . . . , |y n | 2σ}. Since χ (y/σ) = 0 for |y| < σ, we have By the Hölder continuity and decay of the function f , we know |y|<ω ϕ f (z) · 1 y d 2 z = O(1) · |y|<ω (f η (x + y) − f η (x)) · 1 y dx dy = O(1) · |b|<N β−γ (f (a + bN −β ) − f (a)) · 1 b da db ,f (a + bN −β ) − f (a) C min (|b|N −β ) r , 1 1 + |a| 1+s C(|b|N −β ) rq 1 1 + |a| 1+s 1−q (5.13) for all q ∈ [0, 1]. Choose q = q 0 (s) . .= s 2(1+s) s 4 , so that (1 + s)(1 − q 0 ) = 1 + s 2 > 1. Thus we have |y|<ω ϕ f (z) · 1 y d 2 z = O(N −rq 0 β ) · |b|<N β−γ |b| rq 0 −1 1 1 + |a| 1+s/2 da db = O(N −rq 0 γ ) . (5.14) Similarly, one can show that |y|∈[ω,σ) ϕ f (z) · 1 y d 2 z = O(N −rq 0 β ) . (5.15) We also have 16) where we used the change of variables (5.12), and abbreviate |y| σ ϕ f (z) · 1 y d 2 z = N β 1 |b| 2 ψ f (a, b) · 1 b da db = O(N β ) ,(5.ψ f (a, b) . .= 1 2π N −β (i − 1) f (a + bN −β ) − f (a) χ(b) − f (a + bN −β ) − f (a) χ (b) + if (a)χ (b) . (5.17) Using lemma 4.5 we have R n ×Y c F n |y 1 |<ω F ≺ |y 1 |<ω ϕ f (z 1 ) · 1 y 1 · · · ϕ f (z n ) · 1 y n d 2 z 1 · · · d 2 z n = |y 1 |<ω ϕ f (z 1 ) · 1 y 1 dz 1 ϕ f (z 2 ) · 1 y 1 · · · ϕ f (z n ) · 1 y n d 2 z 2 · · · d 2 z n = O N −rq 0 γ · N (n−1)β O N −rsγ/4 · N (n−1)β O N −β . (5.18) Next, we estimate X c ×Y F . By the decay of the functions f and f , we have |x|>2− κ 2 ϕ f (z) · 1 y d 2 z N β |a| κ 2η ψ f (a, b) · 1 b da db = O(N β ) · |a| κ 2η 1 1 + |a| 1+s/2 da + |a| κ 2η 1 1 + |a| 1+s da + |a| κ 2η |f (a)| da = O(N β−sα/2 ) . (5.19) where a, b are defined as in (5.12). Hence 5.3. The main computation. Now let us focus on the integral X×Y F . 
Note that now we are in the "good" region where it is effective to apply the cumulant expansion to the resolvent. In X ∩ Y , we want to compute the quantity E G 1 · · · G n . Note that this is very close to the expression we had in (4.14). Let us abbreviate X c ×Y F ≺ {|x 1 |>2− κ 2 }×Y ϕ f (z 1 ) · 1 y 1 · · · ϕ f (z n ) · 1 y n d 2 z 1 · · · d 2 z n = O N (n−1)β · {|x 1 |>2− κ 2 , |y 1 | ω} ϕ f (z 1 ) · 1 y 1 d 2 z 1 = O N nβ−sα/2 ) O(N −β .Q m . .= G 1 · · · G m and Q (k) m . .= Q m / G k for all 1 k m n, and ζ i . .= E| √ N H ii | 2 . We proceed the computation as in Section 4.2, and get an analogue of (4.22): EQ n = 1 T n EQ n G n − 1 T n EQ n−1 E G n 2 + 1 N T n EQ n−1 G n 2 −K T n −L T n + 2 N 2 T n n−1 k=1 EQ (k) n−1 G k 2 · G n ,(5.21) where T n . .= −z n − 2E G n , K = N −2 i E ∂ G 1 · · · G n−1 · G n ii ∂H ii (ζ i − 2) , andL = N −1 i,j l k=2 1 k! C l+1 (H ji )E ∂ k G 1 · · · G n−1 · G n ij ∂H ji k +R (ji) l+1 . HereR (ji) l+1 is a remainder term defined analogously to R (ji) l+1 in (4.20). Note in Y , we have |y 1 |, . . . , |y n | ω, and we have estimates analogue to those in Section 4.2. We state these estimates in the next lemma and omit the proof. The following results hold uniformly in X × Y . (i) Analogously to Lemma 4.4, for any m ∈ N + and k = 1, 2, . . . , n, we have uniformly in X × Y . Note that by the definition of β and γ we have γ = sc 0 (α)/6n < c 0 (α)/2, which gives c 0 (α + γ) > 5c 0 (α)/6 > 0. Since we have the simple estimate G k m ≺ N m(α+γ)−1 (5.23) as well as G k m ij ≺ N (m−1)(α+γ) if i = j N (m−1/2)(α+γ)−1/2 if i = j .|ϕ f (z)| d 2 z = O(η) , (5.30) we know X×Y F = N n X×Y ϕ f (z 1 ) · · · ϕ f (z n ) EQ n d 2 z 1 · · · d 2 z n = 2N n−2 T n n−1 k=1 X×Y ϕ f (z 1 ) · · · ϕ f (z n )EQ (k) n−1 G k 2 · G n d 2 z 1 · · · d 2 z n + O(N −2c 0 (α)/3 ) ,(5.31) where in the estimate of the error term we implicitly used nγ = sc 0 (α)/6 c 0 (α)/6. By symmetry, it suffices to fix k ∈ {1, 2, . . . 
, n − 1}, and consider the integral over X × Y of F kn . .= 2N n−2 T n ϕ f (z 1 ) · · · ϕ f (z n )EQ (k) n−1 G k 2 · G n . (5.32) As before, we summarize the necessary estimates into a lemma. Lemma 5.6. Let F kn be as in (5.32). Then we have the following estimates. (i) Let A 1 . .= {(x, y) ∈ X × Y : y k y n > 0, |x k − x n | ηN −(n+1)γ }. Then A 1 F kn ≺ N −γ . (5.33) (ii) Let A 2 . .= {(x, y) ∈ X × Y : y k y n > 0, |x k − x n | ∈ (ηN −(n+1)γ , ηN (n+1)γ/s ] }. Then A 2 F kn = O(N −c 0 (α)/4 ) , (5.34) where the function c 0 (·) is defined in (5.22). (iii) For A 3 . .= {(x, y) ∈ X × Y : |x k − x n | > ηN (n+1)γ/s }, we have A 3 F kn ≺ N −γ . (5.35) Proof. (i) By Lemma 5.5 (i)-(ii) we have N n−2 T n EQ (k) n−1 G k 2 · G n ≺ ω −n uniformly in A 1 . Thus by (5.30) and the decay of f and f we know and ψ f is defined as in (5.17). A 1F kn ≺ ω −n · η n−2 · ϕ f (z k ) · ϕ f (z n ) 1 {|x k −xn| η·N −(n+1)γ } d 2 z k d 2 z n = O(ω −n · η n ) ψ f (a k , b k ) · ψ f (a n , b n ) 1 {|a k −an| N −(n+1)γ } da k db k da n db n = O(N nγ · N −(n+1)γ ) = O(N −γ ) ,(5. (ii) Note that our assumption (5.5) on β shows ηN (n+1)γ/s = O(N −α/2 ). By the resolvent identity, the semicircle law (3.9), and Lemma 5.5 we know N n−2 T n EQ (k) n−1 G k 2 · G n = N n−2 EQ (k) n−1 G k 2 + E G k 2 T n (z k − z n ) + G n − G k + E( G n − G k ) T n (z k − z n ) 2 ≺ ω −(n−2) · ω −1 N −c 0 (α+γ) ηN −(n+1)γ + (N ω) −1 + (N ω) −1 + N −c 0 (α+γ) η 2 N −2(n+1)γ = O(η −n · N 3nγ−c 0 (α+γ) ) O(η −n · N −c 0 (α)/3 ) (5.38) uniformly in A 2 . Hence (5.30) yields A 2 F kn = O(N −c 0 (α)/4 ) . (5.39) (iii) Similar as in (5.36), we know A 3 F kn ≺ ω −n · η n−2 · ϕ f (z k ) · ϕ f (z n ) 1 {|x k −xn|>ηN (n+1)γ/s } d 2 z k d 2 z n = O(ω −n · η n ) ψ f (a k , b k ) · ψ f (a n , b n ) 1 {|a k −an|>N (n+1)γ/s } da k db k da n db n . (5.40) Note that in {|a k − a n | > N (n+1)γ/s }, either |a k | > 1 2 N (n+1)γ/s or |a n | > 1 2 N (n+1)γ/s . 
Hence by the decay conditions of f and f , we have A 3 F kn ≺ ω −n · η n · N −(n+1)γ = N −γ . Let A 4 . .= {(x, y) ∈ (X × Y ) : y k y n < 0, |x k − x n | ηN (n+1)γ/s }. Note that C n is the disjoint union of A 1 , . . . , A 4 . The next result is about the integral of F kn in A 4 , which gives the leading contribution. Lemma 5.7. We have A 4 F kn = 1 2π 2 f (x) − f (y) x − y 2 dx dy · E Tr f η (H) n−2 + O(N −rsβ/9 ) , (5.41) where β is defined in (5.5). Proof. Step 1. By symmetry, let us consider A 5 . .= {(x, y) ∈ A 4 : y k ω, y n −ω, |x k − x n | ηN (n+1)γ/s }. Similar as in (5.38), we have N n−2 T n EQ (k) n−1 G k 2 · G n = N n−2 EQ (k) n−1 E G n − E G k T n (z k − z n ) 2 + O(η −n · N −c 0 (α)/3 ) (5.42) uniformly in A 5 . Note the semicircle law (3.9) now gives E G n − E G k T n = E G n − E G k −z n − 2E G n = −1 + O(N −c 0 (α+γ) ) (5.43) uniformly in A 5 . By (5.30) we know A 5 F kn = N n−2 A 5 ϕ f (z 1 ) · · · ϕ f (z n )EQ (k) n−1 −2 (z k − z n ) 2 d 2 z 1 · · · d 2 z n + O(N −c 0 (α)/3 ) = − 2 A 5 1 (z k − z n ) 2 ϕ f (z k )ϕ f (z n ) d 2 z k d 2 z n · A 5F kn + O(N −c 0 (α)/3 ) , where we decompose A 5 = A 5 × A 5 , with A 5 depends on (x k , y k , x n , y n ). HereF kn is defined aŝ F kn . .= N n−2 ϕ f (z 1 ) · · · ϕ f (z n−1 )/ϕ f (z k ) EQ (k) n−1 . Let X (k,n) . .= {|x 1 |, . . . , |x k−1 |, |x k+1 |, . . . , |x n−1 | 2− κ 2 }, and Y (k,n) . .= {|y 1 |, . . . , |y k−1 |, |y k+1 |, . . . , |y n−1 | ω}, and note that A 5 = X (k,n) × Y (k,n) . Applying Lemma 5.4 with n replaced by n − 2, we get A 5F kn = E Tr f η (H) n−2 + O(N −β/2 ) . (5.44) By the decay conditions of f and f , A 5 1 (z k − z n ) 2 ϕ f (z k )ϕ f (z n ) d 2 z k d 2 z n = 1 (z k − z n ) 2 ϕ f (z k )ϕ f (z n )1 {y k ω, yn −ω} d 2 z k d 2 z n + O(N −γ ) = ψ f (a k , b k )ψ f (a n , b n ) (a k − a n + i (b k − b n )N −β ) 2 1 {b k N β−γ , bn −N β−γ } d 2 z k d 2 z n + O(N −γ ) = . . Ψ + O(N −γ ) , where in the second last step we use the change of variables in (5.37), and ψ f is as in (5.17). 
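As a concrete illustration (a worked example added here, not part of the original argument), the variance kernel appearing in (5.41) can be evaluated in closed form for the test function f(x) = 1/(1 + x²), via the standard Fourier representation of this H^{1/2}-type seminorm:

```latex
% Plancherel together with \int_{\mathbb{R}} \sin^2(au)/u^2 \, \mathrm{d}u = \pi|a| gives
\iint \left( \frac{f(x)-f(y)}{x-y} \right)^{2} \mathrm{d}x \, \mathrm{d}y
  = \int_{\mathbb{R}} |\xi| \, |\hat f(\xi)|^{2} \, \mathrm{d}\xi ,
\qquad
\hat f(\xi) := \int_{\mathbb{R}} f(x) \, e^{-\mathrm{i}\xi x} \, \mathrm{d}x .
% For f(x) = (1+x^2)^{-1} one has \hat f(\xi) = \pi e^{-|\xi|}, so
\int_{\mathbb{R}} |\xi| \, \pi^{2} e^{-2|\xi|} \, \mathrm{d}\xi
  = 2\pi^{2} \int_{0}^{\infty} \xi e^{-2\xi} \, \mathrm{d}\xi
  = \frac{\pi^{2}}{2} ,
\qquad \text{hence} \qquad
\frac{1}{2\pi^{2}} \iint \left( \frac{f(x)-f(y)}{x-y} \right)^{2} \mathrm{d}x \, \mathrm{d}y
  = \frac{1}{4} .
```

So for this particular f the limiting variance in (5.1) equals 1/4, independently of the energy E and of the mesoscopic scale, in line with the universality of the formula.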
Note that one can repeat the steps in the proof of Lemma 4.4 for any f ∈ C 1,r,s (R) instead of f (x) = ( x+i x 2 +1 ) k , and get f η (H) ≺ 1. (5.45) Together with Lemma 4.5 we know A 5 F kn = −2 E Tr f η (H) n−2 Ψ + Ψ · O N −β/2 + O(N −γ/2 ) . (5.46) Step 2. We now compute Ψ. Let us set and ψ f,3 (a, b) . .= i 2π f (a)χ (b) , ψ f,1 (a, b) . .= i − 1 2π N −β f (a + bN −β ) − f (a) χ(b) , ψ f,2 (a, b) . .= − 1 2π f (a + bN −β ) − f (a) χ (b) ,(5.47) which gives ψ f (a, b) = ψ f,1 (a, b) + ψ f,2 (a, b) + ψ f,3 (a, b). Let Ψ i,j . .= ψ f,i (a k , b k )ψ f,j (a n , b n ) (a k − a n + i (b k − b n )N −β ) 2 1 {b k N β−γ , bn −N β−γ } da k db k da n db n , with 1 i, j 3. We will calculate Ψ by calculating 6 different integrals Ψ i,j , subject to symmetry. Let us first consider Ψ 1,1 . Note that by (5.13), f (a + bN −β ) − f (a) C(|b|N −β ) rq 0 1 1 + |a| 1+s/2 ,(5.48) where q 0 = q 0 (s) = s 2(s+1) s 4 > 0. Thus Ψ 1,1 C 1 |b k b n |N −2β N −2β−2rq 0 β |b k b n | rq 0 χ(b k )χ(b n )1 {b k N β−γ , bn −N β−γ } db k db n = O(N −2rq 0 β ) . Now we consider Ψ 2,1 . Note that χ (b) = 0 for |b| < 1. Using integration by parts on the variable a k , we know Now we move to Ψ 2,2 . Using integration by parts on the variables a k and a n , we know Ψ 2,1 = ψ f,2 (a k , b k )ψ f,1 (a n , b n ) (a k − a n + i (b k − b n )N −β ) 2 1 {b k 1, bn −N β−γ } da k db k da n db n = ψ f,2 (a k , b k )ψ f,1 (a n , b n ) a k − a n + i (b k − b n )N −β 1 {b k 1, bn −N β−γ } da k db k da n db n ,Ψ 2,2 = log a k − a n + i (b k − b n )N −β ψ f,2 (a k , b k )ψ f,2 (a n , b n )1 {b k 1, bn −1} da k db k da n db n = O log N · N −2rq 0 β = O(N −rq 0 β ) . Similarly, Ψ 3,2 = O(N −rq 0 β/2 ) O(N −rsβ/8 ). The leading contribution comes from Ψ 3,3 . 
Note that Ψ 3,3 = − 1 4π 2 f (a k )f (a n ) (a k − a n + i (b k − b n )N −β ) 2 χ (b k )χ (b n )1 {b k 1, bn −1} da k db k da n db n = 1 8π 2 f (a k ) − f (a n ) (a k − a n + i (b k − b n )N −β ) 2 χ (b k )χ (b n )1 {b k 1, bn −1} da k db k da n db n = 1 8π 2 f (a k ) − f (a n ) a k − a n 2 χ (b k )χ (b n )1 {b k 1, bn −1} da k db k da n db n + O(N −β/3 ) = − 1 8π 2 f (a k ) − f (a n ) a k − a n 2 da k da n + O(N −β/3 ) , where the second last step is an elementary estimate whose details we omit. Hence by (5.45) and Lemma 4.5 we have A 5 F kn = 1 4π 2 f (x) − f (y) x − y 2 dx dy · E Tr f η (H) n−2 + O(N −rsβ/9 ) . (5.50) Similarly, let A 6 . .= {(x, y) ∈ A 4 : y k −ω, y n ω, |x k − x n | ηN (n+1)γ/s }, and we have A 6 F kn = 1 4π 2 f (x) − f (y) x − y 2 dx dy · E Tr f η (H) n−2 + O(N −rsβ/9 ) . (5.51) Thus by (5.50) and (5.51) we conclude proof. Note that Lemma 5.6 and 5.7 imply F kn = n − 1 2π 2 f (x) − f (y) x − y 2 dx dy · E Tr f η (H) n−2 + O(N −rsβ/9 ) . Together with Lemma 5.4 and (5.31), we have where ϕ f is defined as in (5.7). Note that (3.9) and (3.11) imply E Tr f η (H) n = n − 1 2π 2 f (x) − f (y) x − y 2 dx dy · E Tr f η (H) n−2 + O(N −rsβ/9 ) ,(5.G(x + iy) − m(x + iy)) ≺ 1 N |y| uniformly in x ∈ R, |y| 1. Then we have |y| σF ≺ |y| σ ϕ f (z) · 1 y d 2 z = O(N −rq 0 β ) ,(5.54) where q 0 = q 0 (s) = s 2(1+s) s 4 . Also, we have |y| σ, |x|>2− κ 2F ≺ |y| σ, |x|>2− κ 2 ϕ f (z) · 1 y d 2 z = O(N β−sα/2 ) = O(N −sc 0 /2 ) .N 1−α [G] d −→ N C 0, 1 2 (6.1) as N → ∞. In this section we only sketch a proof of Proposition 6.1, and the other results can be proved analogously. We begin with the following lemma. Lemma 6.2. Fix m > 2 and let X be a real random variable, with absolutely continuous law, satisfying EX = 0 , EX 2 = σ 2 , E|X| m C m (6.2) for some constant C m > 0. Let λ > 2σ. Then there exists a real random variable Y that satisfies EY = 0, EY 2 = σ 2 , |Y | λ , P(X = Y ) 2C m λ −m . (6.3) In particular, E|Y | m 3C m . 
Moreover, if m > 4 and σ = 1, then there exists a real random variable Z matching the first four moments of Y , and satisfies |Z| 6C m . The existence of Y is a slight modification of Lemma 7.6 in [15], and the construction of Z is contained in the proof of Theorem 2.5 in [15]; we omit further details. The next lemma is an easy application of Lemma 6.2. We can then construct H (1) and H (2) from H, as in Lemma 6.3. ij = H ij ) = O(N −2−c/4+δ ij ) , max i,j |H (1) ij | N −ε ,(6.ij = 0 , E H (1) ij 2 = EH ij 2 , |H (1) ij | N −ε , P(H ij = H (1) ij ) 2CN −2−c/H . .= (1 − e −2N ) 1/2 · H + e −N V , Let z . .= E + iη. We have already obtained from Proposition 4.2 that N 1−α 1 N Tr 1 H (2) − z − m(z) d −→ N C 0, 1 2 ,(6.6) and we need to show (6.6) holds with H (2) replaced by H. We first compare the local spectral statistics of H (2) and H (1) using the Green function comparison method from [17]; see also Section 4 of [11] for an overview. Fix a bijective ordering map on the index set of the independent matrix entries, φ : {(i, j) : 1 i j N } −→ {1, . . . , γ(N )} , γ(N ) . .= N (N + 1) 2 , and we assume φ(i, i) = i for i = 1, 2, . . . , N . Denote by H γ the Wigner matrix whose matrix entries h ij = H Now we focus on the term EF N 1−α 1 N Tr 1 H γ−1 − z − m(z) − EF N 1−α 1 N Tr 1 H γ − z − m(z) = . . ε γ . For γ > N , let (i, j) = φ −1 (γ). Note that i < j, and we definẽ V = H SinceṼ has only at most two nonzero entries, when computing the (k, l) matrix entry of this matrix identity, each term is a finite sum involving matrix entries of S or R and H (2) ij , e.g. (SṼ S) kl = S ki H (2) ij S jl +S kj H (2) ji S il . LetS . .= N 1−α (S−m(z)), andR,T are defined analogously. Set ξ . .=S−R, and note that one can easily obtain ξ from (6.7). Similarly, µ . .=T −R, and we have an explicit expansion for ε γ = EF (S) − EF (T ) = EF (R + ξ) − EF (R + µ) . (6.8) Now we expand F (R + ξ) and F (R + µ) aroundR using Taylor expansion. 
The detailed formulas of the expansion can be found in Section 4.1 of [11], and we omit them here. Since the first four moments of the entries of H (1) and H (2) coincide, the error is bounded by the terms with factors (H The complex Hermitian case We conclude the paper with a remark on the complex Hermitian case. As mentioned in Remark 2.4, in the complex case we now have (2.9) and (2.10) instead of Theorem 2.2 and 2.3. We omit the complete statements of the results here. The proof in complex Hermitian case replies on the complex cumulant expansion, which we state in the lemma below, whose proof is omitted. Lemma 7.1. (Complex cumulant expansion) Let h be a complex random variable with all its moments exist. The (p, q)-cumulant of h is defined as C (p,q) (h) . .= (−i) p+q · ∂ p+q ∂s p ∂t q log Ee ish+ith s=t=0 . Let f : C 2 → C be a smooth function, and we denote its holomorphic derivatives by f (p,q) (z 1 , z 2 ) . .= ∂ p+q ∂z 1 p ∂z 2 q f (z 1 , z 2 ) . Then for any fixed l ∈ N, we have Ef (h,h)h = l p+q=0 1 p! q! C (p,q+1) (h)Ef (p,q) (h,h) + R l+1 ,(7.1) given all integrals in (7.1) exists. Here R l+1 is the remainder term depending on f and h, and for any τ > 0, we have the estimate R l+1 = O(1) · E h l+2 · 1 {|h|>N τ −1/2 } · max p+q=l+1 f (p,q) (z,z) ∞ + O(1) · E|h| l+2 · max p+q=l+1 f (p,q) (z,z) · 1 {|z| N τ −1/2 } ∞ . Using Lemma 7.1 it is not hard to extend the argument of Sections 4-5 to the complex case. We sketch the required modifications. Let H be a complex Wigner matrix. An argument analogous to Section 6 shows that it suffices to consider H satisfying Definition 3.2. Let G . .= G(E + iη) = (H − E − iη) −1 with E, η defined in Theorem 2.2. Let m, n 1. Since H is complex hermitian, for any differentiable f = f (H) we set ∂ ∂H ij f (H) . .= d dt t=0 f H + t∆ (ij) ,(7.2) where∆ (ij) denotes the matrix whose entries are zero everywhere except at the site (i, j) where it is one:∆ (ij) kl = δ ik δ jl . 
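Both the real and the complex arguments rest on the cumulant expansion (Lemma 3.1, Lemma 7.1). For a centered Gaussian h all cumulants of order three and higher vanish, so the expansion truncates exactly to Stein's identity E[h f(h)] = σ² E[f′(h)]. The Monte Carlo sketch below (an illustration added here; the test function sin is an arbitrary choice) checks this special case numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.7
h = sigma * rng.standard_normal(2_000_000)

# Cumulant expansion for Gaussian h: only C_2 = sigma^2 survives, so
# E[h f(h)] = sigma^2 E[f'(h)] holds exactly (Stein's identity).
lhs = np.mean(h * np.sin(h))           # E[h f(h)] with f = sin
rhs = sigma ** 2 * np.mean(np.cos(h))  # sigma^2 E[f'(h)]

print(lhs, rhs)
```

For a non-Gaussian h the higher cumulants contribute the correction terms of order k ⩾ 2 in the expansion, which is exactly what the lemmas quantify.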
Then by using Lemma 7.1 with h = H ij we have zE G * n G m = 1 N i,j E G * n G m−1 G ij H ji = 1 N 2 i,j E ∂( G * n G m−1 G ij ) ∂H ij +K +L ,(7.3) whereK andL are defined analogously to K and L in (4.17). Note that ∂G ij ∂H kl = −G ik G lj ,(7.4) and by (7.3) we have E G * n G m = 1 T E G * n G m+1 − 1 T E G * n G m−1 E G 2 + m − 1 N 2 T E G * n G m−2 G 3 −K T −L T + n N 2 T E G * n−1 G m−1 GG * 2 ,(7.5) where T = −z −2EG. By a comparison of (7.5) and its real analogue (4.22), we see that the leading term is now halved. By estimating the subleading terms in a similar fashion, one can show that instead of (4.55), we have E G * n G m = n 4 N 2α−2 E G * n−1 G m−1 + O N (m+n)(α−1)−c 0 , which agrees with our statement that we have an additional factor of 1/2 in the covariances. (i) The upper-triangular entries (H ij . . 1 i j N ) are independent. Lemma 3. 1 ( 1Cumulant expansion). Let h be a real-valued random variable with finite moments of all order, and f a complex-valued smooth function on R. Then for any fixed l ∈ N we have Lemma 3 . 3 ( 33Helffer-Sjöstrand formula). Let f ∈ C 1,r,s (R) with some r, s > 0. Letf be the almost analytic extension of f defined bỹ Theorem 3 . 5 ( 35Local semicircle law). Let H be a Wigner matrix satisfying Definition 3.2, and define the spectral domain S . .= {E + iη : |E| 10, 0 < η 10} . Theorem 4. 1 . 1Theorem 2.2 holds for Wigner matrices H satisfying Definition 3.2, and the convergence also holds in the sense of moments.For the statements of the following results we abbreviate G ≡ G(E+iη) and [G] . .= G−m(E+iη). The following result is a special case of Theorem 4.1. Proposition 4 . 2 . 42Under the assumptions of Theorem 4.1 we have Lemma 4. 3 . 3Under the assumptions of Theorem 4.1 the following holds. Lemma 4 . 3 . 43The following result yields bounds that grow slower with k and in addition provide extra smallness for the offdiagonal entries of G k . Both of these features are necessary for the proof of Lemma 4.3. Lemma 4. 4 . 
4Under the conditions of Theorem 4.1, for any fixed Lemma 4.4 is very useful in estimating the expectations involving entries of G, in combination with the following elementary result about stochastic domination. Lemma implicit constant depends only on the distance to the spectral edge κ . .= 2 − |E| .(4.24) only two entries of W are nonzero, and they are stochastically dominated by N −1/2 . Then the trivial bound max a,b |Ĝ ab | N α together with (4.13) and (4.30) show that max a =b |Ĝ ab | ≺ N −(α−1)/2 , and max a |Ĝ aa | ≺ 1. Combining with (4.31), the trivial bound max a,b +1 = O(N −(D 0 +2) ), from which(4.26)follows. Lemma 4. 8 . 8Let c 0 be defined as in(4.3). We have 4.29), Lemma 4.8, and (4.50)-(4.53) to (4.22), together with (4.23), we obtain . 5 . 5Writing ξ . .= Im G, we have EG − EG * = 2iEξ. Moreover, (3.9) and (4.12) imply that T = −z − 2EG = −2iEξ + O(N −c 0 ). Together with (4.5) and Lemma 4.5 we haveE G * n G m = n 2 N 2α−2 E G * n−1 G m−1 + O N (m+n)(α−1)−c 0 (4.55)for m, n 1. = −z − 2EG, and K (3) , L (3) are defined analogously as K and L in (4.22). Due to the absence of G * , there is no leading term in (4.57) as the last term in (4.22). One can easily apply our previous techniques and show every term in RHS of (4.57) is bounded by O(N m(α−1)−c 0 ). By taking complex conjugation in (4.56) we also have E G * n = O N n(α−1)−c 0 (4.58) for all n 2. Now (4.2) follows from (4.55), (4.56), and (4.58) combined with induction. This concludes the proof of Lemma 4.3 (i). 64)where we recall the definition (4.24) of κ. Since G = (H − z) −1 , we know that sgn(Im G) = sgn(Im z) > 0. Also, we have Imm(z) −c for some c = c(κ) > 0. This shows |EG −m(z)| c. Thus from (4.64) we have |EG − m(z)| = O(N α−1−c 0 ) , which completes the proof. 4.4. Proof of Theorem 4.1. LetỸ (b) . .= N 1−α G(E + bη) . As in the one-dimensional case, Proposition 4.2, Theorem 4.1 follows from the following lemma, which generalizes Lemma 4.3. Lemma 4. 9 . 9Let c 0 be defined as in(4.3). 
Under the assumptions of Theorem 4.1 the following holds. 1 ), . . . ,Ỹ (b p )) d −→ (Y (b 1 ), . . . , Y (b p )) . Theorem 5 . 1 . 51Theorem 2.3 holds for Wigner matrices H satisfying Definition 3.2, and the convergence also holds in the sense of moments. Let us abbreviate f η (x) . .= f x−E η , and denote [Tr f η (H)] . .= Tr f η (H) − N 2 −2 (x)f η (x) dx. Our next result is a particular case of Theorem 5.1. Proposition 5.2. Let η, E, H be as in Theorem 5.1. Then Lemma 5 . 3 . 53Under the conditions of Theorem 5.1, we have the following results. ( ii) The random variables [Tr f η (H)] and Tr f η (H) are close in the sense that [Tr f η (H)] − Tr f η (H) = E[Tr f η (H)] = O(N −rs 2 c 0 /16 ) , (5.3) with c 0 defined in (4.3). Assume Lemma 5.3 holds. Then (5.2) and Wick's theorem imply the second step we used the change of variables a . .= (x − E)/η and b . .= y/σ . (5.12) F + O(N −β/2 ) . Lemma 5 . 5 . 55Let us extend the definition of c 0 in (4.3) to a function c 0 (·) : (0, 1) → R such that c 0 (x) . . k 2 · G n + O(N n(α+γ−1)−c 0 (α+γ) ) (5.29) whereψ f, 2 2(a, b) . .= − 1 2π f (a + bN −β ) − f (a) χ (b). Then by (5.48) we know Ψ 2,1 = O(N β · N −rq 0 β · N −β−rq 0 β ) = O(N −2rq 0 β ) . (5.49) Similarly, Ψ 3,1 = O(N −rq 0 β ). . Proof of Lemma 5.3 (ii). Let f ∈ C 1,r,s (R) with r, s > 0, and without loss of generality we assume s 1. We definef as in (3.4). Let σ = N −(α+β) , where we define β = sc 0 /4 instead in (5.5). Let χ be as in Lemma 3.3 satisfying χ(y) = 1 for |y| 1, and χ(y) = 0 for |y| 2. An application of Lemma 3.3 gives E[Tr f η (H)] = N ϕ f (x + iy) (EG(x + iy) − m(x + iy)) dx dy = . . F , (5.53) x + iy) − m(x + iy) = O(N (α+β)−1−c 0 (α+β) ) (5.56) uniformly in |x| 2 − κ 2 , |y| σ, where the function c 0 (·) is defined as in (5.22). Thus |y| σ, |x| 2− κ 2F = O(N (α+β)−c 0 (α+β) · η) = O(N β−c 0 (α+β) ) = O(N −c 0 /2 ) . (5.57) Altogether we have (5.3). where V is a GOE matrix independent of H. Then H also satisfies Definition 2.1. Let G . 
.= (H − E − iη) −1 , and [G ] . .= G − m(E + iη) with E, η defined in Theorem 2.2. The resolvent identity G − G = G(H − H )G implies [G ] − [G] ≺ e −N/2 . EF N 1 −α 1 N 11if φ(i, j) γ and h ij = H(1) ij otherwise; in particular H (2) = H 0 and H (1) = H γ(N ) . Let F = F (x + iy) be a complex-valued, smooth, bounded function, with bounded derivatives. Then Tr 1H γ−1 − z − m(z) − EF N 1−α 1 N Tr 1 H γ − z − m(z) . S ∆ (ij) ,and recall from section 4.2 that the matrix ∆ (ij) satisfies ∆(ij) kl = (δ ik δ jl + δ jk δ il )(1 + δ ij ) −1 . Denote Q . .= H γ−1 −Ṽ . Then H γ−1 = Q +Ṽ , andṼ is independent of Q. Also, H γ = Q +V . Define the Green functions R . .= R − RṼ R + (RṼ ) 2 − (RṼ ) 3 R + (RṼ ) 4 R − (RṼ ) 5 S . (6.7) 5 5) n 2 in the expansion, where m 1 + n 1 , m 2 + n 2 m 5, a routine estimate shows the rest terms are bounded by O(N −2−c/2 ). Thus we have ε γ = O(N −2−c/2 ) uniformly in γ > N . Similarly, one can show ε γ = O(N −1−c/2 ) uniformly for γ N . ThusEF N 1−α 1 N Tr 1 H (2) − z − m(z) − EF N 1−α 1 N Tr 1 H (1) − z − m(z) = O(N −c/2 ) .The transition from H(1) to H is immediate, since we have = H ij ) = O(N −c/4 ) . by an approximation argument, for F in the above class, lim N →∞ EF (X N ) = EF (X) is sufficient in showing X N d → X. Thus we have finished the proof. 5.5. Remark on the general case. Let us turn to Theorem 5.1. As in the 1-dimensional case, we first show that (Z(f 1 ), . . . ,Z(f p )) as N → ∞, whereZ(f i ) . .= Tr f i ( H−E η ) for 1 i p. In order to show (5.58), it suffices to compute E[Z(f i 1 ) · · ·Z(f in )], i 1 , . . . , i n ∈ {1, 2, . . . , p}, and this follows exactly the same way as we compute E Tr f η (H) n . Theorem 5.1 then follows from the estimate EẐ(f i ) = O(N −rs 2 c 0 /16 ) for all i ∈ {1, 2, . . . , p}, which is Lemma 5.3 (ii).6. Relaxing the moment conditionIn this section we use a Green function comparison argument to pass from Theorems 4.1 and 5.1 to Theorems 2.2 and 2.3. Recall G . 
.= G(E +iη) = (H −E −iη) −1 , and [G] . .= G−m(E +iη) with E, η defined in Theorem 2.2. Similar as in Section 4, we have a particular case of Theorem 2.2. Proposition 6.1. Let η, E, H be as in Theorem 2.2. Thend −→ (Z(f 1 ), . . . , Z(f p )) (5.58) Lemma 6.3. Let H be a real symmetric Wigner matrix, whose entries have absolutely continuous law. Let c be as in Definition 2.1. Then there exists a real symmetric Wigner matrix H (1) satisfying Definition 2.1 andmax i,j P(H (1) 4 ) 4where ε = ε(c) . .= c 4(4+c) > 0. Moreover, there exists a real symmetric Wigner matrix H (2) satisfying Definition 3.2, such that for all i, j,where 1 k 4 − 2δ ij . Proof. Fix c, C > 0 such that E| √ N H ij | 4+c−2δ ijC for all i, j. By using Lemma 6.2 with m . .= 4 + c − 2δ ij , X . .= √ N H ij , λ . .= N 1/2−ε , C m . .= C, we construct, for each H ij , a random variable H (1) ij . .= N −1/2 Y such that the family {H(1) ij } i j is independent andE H (2) ij k = E H (1) ij k , (6.5) EH (1) 4+δ ij , and we also have E| ij | 4+c−2δ ij 3C . Hence we have proved the existence of H (1) . For i < j, by using the second part of Lemma 6.2 on Y = ij , a random variable H (2) ij . .= N −1/2 Z matching the first four moments of H ij } i<j is independent. Moreover, we have the bound | has uniformly bounded moments of all order. Let us denote ζ i . .= E| ij } i j is independent. This completes the proof. Now we look at Proposition 6.1. Proof of Proposition 6.1. Let H be as in Theorem 2.2. Note that it suffices to consider the case that the entries of H have absolutely continuous law. Otherwise consider the matrix√ N H (1) √ N H (1) ij , we construct, for each H (1) (1) ij , and the family {H (2) √ N H (2) ij | 6C, which ensures √ N H (2) ij √ N H (1) ii | 2 . 
Then we can construct random variables H (2) ii such that √ N H (2) ii d ∼ N (0, ζ i ) and the family {H (2) = O(N −5/2 · N 2 · N α+ε ) = O(N α+ε−1/2 ) , = O(N −5/2 · N 2 · N (α−1)/2+ε ) = O(N α/2−1+ε ) ,

Acknowledgements The authors were partially supported by the Swiss National Science Foundation grant 144662 and the SwissMAP NCCR grant.
[]
[ "The Assouad spectrum of random self-affine carpets", "The Assouad spectrum of random self-affine carpets" ]
[ "Jonathan M Fraser \nMathematical Institute\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK\n", "Sascha Troscheit [email protected] \nDepartment of Pure Mathematics\nUniversity of Waterloo\nN2L 3G1WaterlooOntCanada\n" ]
[ "Mathematical Institute\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK", "Department of Pure Mathematics\nUniversity of Waterloo\nN2L 3G1WaterlooOntCanada" ]
[]
We derive the almost sure Assouad spectrum and quasi-Assouad dimension of random self-affine Bedford-McMullen carpets. Previous work has revealed that the (related) Assouad dimension is not sufficiently sensitive to distinguish between subtle changes in the random model, since it tends to be almost surely 'as large as possible' (a deterministic quantity). This has been verified in conformal and non-conformal settings. In the conformal setting, the Assouad spectrum and quasi-Assouad dimension behave rather differently, tending to almost surely coincide with the upper box dimension.Here we investigate the non-conformal setting and find that the Assouad spectrum and quasi-Assouad dimension generally do not coincide with the box dimension or Assouad dimension. We provide examples highlighting the subtle differences between these notions. Our proofs combine deterministic covering techniques with suitably adapted Chernoff estimates and Borel-Cantelli type arguments.Mathematics Subject Classification 2010: primary: 28A80; secondary: 37C45.
10.1017/etds.2020.93
[ "https://arxiv.org/pdf/1805.04643v1.pdf" ]
119,691,305
1805.04643
fc17b03310389153063a20c654ff537b1a56c10b
The Assouad spectrum of random self-affine carpets 12 May 2018 May 15, 2018 Jonathan M Fraser Mathematical Institute University of St Andrews North Haugh KY16 9SS St Andrews, Fife UK Sascha Troscheit [email protected] Department of Pure Mathematics University of Waterloo N2L 3G1 Waterloo Ont Canada Key words and phrases: Assouad spectrum, quasi-Assouad dimension, random self-affine carpet We derive the almost sure Assouad spectrum and quasi-Assouad dimension of random self-affine Bedford-McMullen carpets. Previous work has revealed that the (related) Assouad dimension is not sufficiently sensitive to distinguish between subtle changes in the random model, since it tends to be almost surely 'as large as possible' (a deterministic quantity). This has been verified in conformal and non-conformal settings. In the conformal setting, the Assouad spectrum and quasi-Assouad dimension behave rather differently, tending to almost surely coincide with the upper box dimension. Here we investigate the non-conformal setting and find that the Assouad spectrum and quasi-Assouad dimension generally do not coincide with the box dimension or Assouad dimension. We provide examples highlighting the subtle differences between these notions. Our proofs combine deterministic covering techniques with suitably adapted Chernoff estimates and Borel-Cantelli type arguments. Mathematics Subject Classification 2010: primary: 28A80; secondary: 37C45. Assouad spectrum and quasi-Assouad dimension The Assouad dimension is an important notion of dimension designed to capture extreme local scaling properties of a given metric space. Its distance from the upper box dimension, which measures average global scaling, can be interpreted as a quantifiable measure of inhomogeneity.
Motivated by this idea, Fraser and Yu introduced the Assouad spectrum which is designed to interpolate between the upper box dimension and Assouad dimension, and thus reveal more precise geometric information about the set, see [FY1]. Here we recall the basic definitions and, for concreteness, we consider non-empty compact sets F ⊆ R d , although the general theory extends beyond this setting. For a bounded set E ⊆ R d and a scale r > 0 we let N (E, r) be the minimum number of sets of diameter r required to cover E. The Assouad dimension of F is defined by dim A F = inf { α : (∃ C) (∀ 0 < r < R < 1) (∀ x ∈ F ) N (B(x, R) ∩ F, r) ≤ C (R/r)^α }. The Assouad spectrum is the function defined by θ → dim θ A F = inf { α : (∃ C > 0) (∀ 0 < R < 1) (∀ x ∈ F ) N (B(x, R) ∩ F, R^{1/θ}) ≤ C (R/R^{1/θ})^α } where θ varies over (0, 1). The related quasi-Assouad dimension is defined by dim qA F = lim_{θ→1} inf { α : (∃ C > 0) (∀ 0 < r ≤ R^{1/θ} ≤ R < 1) (∀ x ∈ F ) N (B(x, R) ∩ F, r) ≤ C (R/r)^α } and the upper box dimension is defined by dim B F = inf { α : (∃ C) (∀ 0 < r < 1) N (F, r) ≤ C (1/r)^α }. These dimensions are all related, but their relative differences can be subtle. We summarise some important facts to close this section. For any θ ∈ (0, 1) we have dim B F ≤ dim θ A F ≤ dim qA F ≤ dim A F and any of these inequalities can be strict. Moreover, the Assouad spectrum is a continuous function of θ and also satisfies dim θ A F ≤ dim B F/(1 − θ). (1.1) We also note that for a given θ it is not necessarily true that the Assouad spectrum is given by the expression after the limit in the definition of the quasi-Assouad dimension: this notion is by definition monotonic in θ but the spectrum is not necessarily monotonic [FY1, Section 8]. (* JMF was financially supported by the Leverhulme Trust Research Fellowship RF-2016-500, the EPSRC Standard Grant EP/R015104/1, and the University of Waterloo. † ST was financially supported by NSERC Grants 2014-03154 and 2016-03719, and the University of Waterloo.)
However, it has recently been shown in [FHHTY] that dim qA F = lim θ→1 dim θ A F and, combining this with (1.1), we see that the Assouad spectrum necessarily interpolates between the upper box dimension and the quasi-Assouad dimension. For more information, including basic properties, concerning the upper box dimension, see [F]. For the Assouad dimension, see [Fr, L, R], the quasi-Assouad dimension, see [GH], and for the Assouad spectrum, see [FY1, FY2, FHHTY]. Self-affine carpets: random and deterministic In this paper we consider random self-affine carpets. More specifically, we consider random 1-variable analogues of the self-affine sets introduced by Bedford and McMullen in the 1980s. In the deterministic setting, the box dimensions were computed independently by Bedford and McMullen [B, Mc] and the Assouad dimension was computed by Mackay [M]. The Assouad spectrum was computed by Fraser and Yu [FY2], and these results also demonstrated that the quasi-Assouad and Assouad dimensions coincide by virtue of the spectrum reaching the Assouad dimension. In the random setting, the (almost sure) box dimensions were first computed by Gui and Li [GL] for fixed subdivisions and by Troscheit [T2] in the most general setting that we are aware of. The (almost sure) Assouad dimension was computed by Fraser, Miao and Troscheit [FMT]. In this article we compute the quasi-Assouad dimension and the Assouad spectrum in the random setting. Unlike in the deterministic case, we find that the quasi-Assouad dimension and Assouad dimension are usually almost surely distinct. Further, the quasi-Assouad dimension is in general also distinct from the box dimension. This is in stark contrast to the conformal setting, where it was shown that the quasi-Assouad dimension (and thus Assouad spectrum) is almost surely equal to the upper box dimension (and distinct from the Assouad dimension), see [T1]. We close this section by describing our model. Let Λ = {1, . . .
, |Λ|} be a finite index set and for each i ∈ Λ fix integers n i > m i ≥ 2 and divide the unit square [0, 1] 2 into a uniform m i × n i grid. For each i ∈ Λ let I i be a non-empty subset of the set of m i^{−1} × n i^{−1} rectangles in the grid and let N i = |I i |. Let B i be the number of distinct columns which contain rectangles from I i , and C i be the maximum number of rectangles in I i which are contained in a particular column. Note that 1 ≤ B i ≤ m i , 1 ≤ C i ≤ n i and N i ≤ B i C i . For each rectangle j ∈ I i , let S j be the unique orientation preserving affine map which maps the unit square [0, 1] 2 onto the rectangle j. Let Ω = Λ^N and for each ω = (ω 1 , ω 2 , . . . ) ∈ Ω, we are interested in the corresponding attractor F ω = ∩_{k ≥ 1} ∪_{j 1 ∈ I ω 1 , . . . , j k ∈ I ω k} S j 1 • · · · • S j k ([0, 1] 2 ). By randomly choosing ω ∈ Ω, we randomly choose an attractor F ω and we wish to make statements about the generic nature of F ω . For this, we need a measure on Ω. Let {p i } i∈Λ be a set of probability weights, that is, for each i ∈ Λ, 0 < p i < 1 and Σ i∈Λ p i = 1. We extend these basic probabilities to a Borel measure P on Ω in the natural way, which can be expressed as the infinite product measure P = ⊗_{k∈N} Σ i∈Λ p i δ i , where Ω is endowed with the product topology and δ i is a unit mass concentrated at i ∈ Λ. Note that the deterministic model can be recovered if |Λ| = 1, that is, there is only one "pattern" available, which is therefore chosen at every stage in the process. In this case, the deterministic attractor is the unique non-empty set F ⊆ [0, 1] 2 satisfying F = ∪_{j∈I 1} S j (F ). Results Our main result is an explicit formula which gives the Assouad spectrum of our random self-affine sets almost surely. For simplicity we suppress summation over i ∈ Λ to simple summation over i throughout this section. Theorem 3.1.
For P almost all ω ∈ Ω, we have dim θ A F ω = (1/(1 − θ)) [ Σ i p i log(B i C i^θ N i^{−θ}) / Σ i p i log m i + Σ i p i log(N i B i^{−1} C i^{−θ}) / Σ i p i log n i ] for 0 < θ ≤ Σ i p i log m i / Σ i p i log n i , and dim θ A F ω = Σ i p i log B i / Σ i p i log m i + Σ i p i log C i / Σ i p i log n i for Σ i p i log m i / Σ i p i log n i < θ < 1 , where F ω is the 1-variable random Bedford-McMullen carpet associated with ω ∈ Ω. As an immediate consequence of Theorem 3.1 we obtain a formula for the quasi-Assouad dimension which holds almost surely. Corollary 3.2. For P almost all ω ∈ Ω, we have dim qA F ω = Σ i p i log B i / Σ i p i log m i + Σ i p i log C i / Σ i p i log n i , where F ω is the 1-variable random Bedford-McMullen carpet associated with ω ∈ Ω. Proof. This follows immediately from Theorem 3.1 and the fact that dim θ A E → dim qA E as θ → 1 for any set E ⊆ R d , see [FHHTY, Corollary 2.2]. Note that the result in [FMT] states that for P almost all ω ∈ Ω, we have dim A F ω = max i∈Λ (log B i / log m i) + max i∈Λ (log C i / log n i). Therefore Corollary 3.2 demonstrates the striking difference between the Assouad and quasi-Assouad dimensions in the random setting. In particular the almost sure value of the Assouad dimension does not depend on the weights {p i } i∈Λ , but the almost sure value of the quasi-Assouad dimension depends heavily on the weights. The almost sure value of the Assouad dimension is also extremal in the sense that it is the maximum over all realisations ω ∈ Ω, whereas the quasi-Assouad dimension is an average. Recall that the quasi-Assouad and Assouad dimensions always coincide for deterministic self-affine carpets, see [FY2]. It is worth noting that the Assouad dimension of the random attractor is at least the maximal Assouad dimension of the deterministic attractors, whereas the quasi-Assouad dimension is bounded above by the maximal quasi-Assouad dimension of the individual attractors. That is, letting i = (i, i, i, . . . ), dim A F ω ≥ max i∈Λ dim A F i (a.s.) and dim qA F ω ≤ max i∈Λ dim qA F i (a.s.).
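The almost-sure formulas of Theorem 3.1 and Corollary 3.2 are straightforward to evaluate numerically. The following Python sketch does this for hypothetical pattern data (the tuples and weights below, and all helper names, are illustrative choices of ours, not taken from the paper); `as_spectrum` implements the two branches and `quasi_assouad` the constant value beyond the phase transition.

```python
from math import log

def weighted_logs(patterns, p):
    """Weighted averages sum_i p_i log(.) of the columns N_i, m_i, n_i, B_i, C_i."""
    return [sum(pi * log(v) for pi, v in zip(p, col)) for col in zip(*patterns)]

def as_spectrum(theta, patterns, p):
    """Almost-sure Assouad spectrum of Theorem 3.1; patterns = [(N_i, m_i, n_i, B_i, C_i)]."""
    LN, Lm, Ln, LB, LC = weighted_logs(patterns, p)
    if theta <= Lm / Ln:  # phase transition at theta* = sum p_i log m_i / sum p_i log n_i
        return ((LB + theta * (LC - LN)) / Lm + (LN - LB - theta * LC) / Ln) / (1 - theta)
    return LB / Lm + LC / Ln  # constant branch

def quasi_assouad(patterns, p):
    """Almost-sure quasi-Assouad dimension of Corollary 3.2."""
    LN, Lm, Ln, LB, LC = weighted_logs(patterns, p)
    return LB / Lm + LC / Ln

# Hypothetical data: two patterns (N_i, m_i, n_i, B_i, C_i) with weights 0.3, 0.7.
pats = [(6, 3, 5, 2, 3), (8, 4, 6, 4, 3)]
w = [0.3, 0.7]
LN, Lm, Ln, LB, LC = weighted_logs(pats, w)
theta_star = Lm / Ln
qa = quasi_assouad(pats, w)
box_dim = LB / Lm + (LN - LB) / Ln  # theta -> 0 endpoint of the spectrum
```

At θ = θ* the two branches of Theorem 3.1 agree, so the spectrum is continuous and then stays constant, equal to the quasi-Assouad dimension, for all larger θ.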
Typically these inequalities are strict and it is further possible that dim qA F ω < min i∈Λ dim qA F i almost surely, see Figure 1 and the example in Section 3.2. Finally, note that the almost sure values of the Assouad and quasi-Assouad dimensions coincide if and only if there exist α, β ∈ (0, 1] such that for all i ∈ Λ we have log B i / log m i = α and log C i / log n i = β. This follows by considering 'weighted mediants'. In particular, the two terms giving the quasi-Assouad dimension are weighted mediants of the fractions log B i / log m i and log C i / log n i respectively. It is well-known that weighted mediants are equal to the maximum if and only if all the fractions coincide. In particular, coincidence of all of the deterministic Assouad (and quasi-Assouad) dimensions is not sufficient to ensure almost sure coincidence of the Assouad and quasi-Assouad dimensions in the random case. Figure 1: The Assouad spectra of the sets in the example of Section 3.1. The deterministic spectra are shown in dashed lines and the almost sure spectrum in the random case is given by a solid line. Simple algebraic manipulation yields the following random analogue of [FY2, Corollary 3.5]. In particular, the random variable dim θ A F ω can be expressed in terms of the random variables dim B F ω and dim qA F ω . Corollary 3.3. For P almost all ω ∈ Ω, we have dim θ A F ω = min { [ dim B F ω − θ ( dim qA F ω − (dim qA F ω − dim B F ω) Σ i p i log n i / Σ i p i log m i ) ] / (1 − θ) , dim qA F ω } where F ω is the 1-variable random Bedford-McMullen carpet associated with ω ∈ Ω. Note that [FY2, Corollary 3.5] is formulated using the Assouad dimension instead of the quasi-Assouad dimension (although they are equal in the deterministic case). Our result shows that the quasi-Assouad dimension is really the 'correct' notion to use here.
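Corollary 3.3 can be sanity-checked numerically: the min-expression and the piecewise formula of Theorem 3.1 agree for every θ. The sketch below does this using the parameters of the generic example in the next subsection (N 1 = 20, m 1 = 19, n 1 = 21, B 1 = 10, C 1 = 8 and N 2 = 5, m 2 = 2, n 2 = 15, B 2 = 2, C 2 = 4, with weights 1/2); the function and variable names are ours.

```python
from math import log

pats = [(20, 19, 21, 10, 8), (5, 2, 15, 2, 4)]  # (N_i, m_i, n_i, B_i, C_i)
w = [0.5, 0.5]

# Weighted averages sum_i p_i log(.) of each parameter column.
LN, Lm, Ln, LB, LC = (sum(pi * log(v) for pi, v in zip(w, col)) for col in zip(*pats))

box_dim = LB / Lm + (LN - LB) / Ln  # almost-sure upper box dimension (theta -> 0 limit)
qa_dim = LB / Lm + LC / Ln          # Corollary 3.2

def spectrum_thm(theta):
    """Piecewise formula of Theorem 3.1."""
    if theta <= Lm / Ln:
        return ((LB + theta * (LC - LN)) / Lm + (LN - LB - theta * LC) / Ln) / (1 - theta)
    return qa_dim

def spectrum_cor(theta):
    """min-expression of Corollary 3.3, in terms of box_dim and qa_dim only."""
    slope = qa_dim - (qa_dim - box_dim) * Ln / Lm
    return min((box_dim - theta * slope) / (1 - theta), qa_dim)
```

The two functions coincide, and the phase transition sits at Lm/Ln = log 38/log 315 ≈ 0.632, matching the value quoted in the example.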
Generic Example For illustrative purposes, we exhibit a representative example and provide pictures of the random and deterministic carpets along with their spectra. Let Λ = {1, 2} and P be the 1/2-1/2 Bernoulli probability measure on Ω = Λ^N . That is, we consider two iterated function systems that we choose with equal probability. The first iterated function system consists of N 1 = 20 maps, where the unit square is divided into m 1 = 19 by n 1 = 21 rectangles. There are B 1 = 10 columns containing at least one rectangle and the maximal number of rectangles in a particular column is C 1 = 8. For the attractor of this deterministic Bedford-McMullen carpet we obtain: dim A F 1 = dim qA F 1 = log 10/ log 19 + log 8/ log 21 ≈ 1.465 and dim B F 1 = log 10/ log 19 + log 2/ log 21 ≈ 1.010, and the spectrum interpolates between these two values with a phase transition at log 19/ log 21 ≈ 0.967. The spectrum is plotted in Figure 1 and the attractor is shown in Figure 2. The second iterated function system consists of N 2 = 5 maps, where the unit square is divided into m 2 = 2 by n 2 = 15 rectangles. There are B 2 = 2 columns containing at least one rectangle and the maximal number of rectangles in a particular column is C 2 = 4. For the attractor of this deterministic Bedford-McMullen carpet we obtain: dim A F 2 = dim qA F 2 = 1 + log 4/ log 15 ≈ 1.512 and dim B F 2 = 1 + log(5/2)/ log 15 ≈ 1.338, and the spectrum interpolates between these two values with a phase transition at log 2/ log 15 ≈ 0.256. The spectrum is plotted in Figure 1 and the attractor is shown in Figure 2. Our results now give the following values for almost every ω ∈ Ω: dim A F ω = 1 + log 8/ log 21 ≈ 1.683 and dim qA F ω = log 20/ log 38 + log 32/ log 315 ≈ 1.426. We note that in this example the almost sure value of the Assouad dimension exceeds that of the individual attractors, the almost sure quasi-Assouad dimension is less than the quasi-Assouad dimensions of the individual attractors, and that the phase transition in the spectrum occurs at log 38/ log 315 ≈ 0.632. An extreme example By constructing explicit examples, we demonstrate the following interesting phenomenon, which highlights the subtle difference between the quasi-Assouad and Assouad dimensions. For all ε ∈ (0, 1), there exist two deterministic self-affine carpets E, F with dim B E = dim B F = dim qA E = dim qA F = dim A E = dim A F = 1 such that when one mixes the two constructions by randomising as above, one finds that almost surely dim qA F ω ≤ ε < 2 = dim A F ω . Let ε ∈ (0, 1), Λ = {1, 2}, m 1 = 2, n 1 = n, m 2 = m, n 2 = m + 1, where m, n are large integers which will be chosen later depending only on ε. Let I 1 consist of both rectangles from a particular row in the first grid and I 2 consist of all (m + 1) rectangles in a particular column of the second grid. The deterministic carpets associated with these systems are both unit line segments: a horizontal line in the first case and a vertical line in the second. Therefore both have all the dimensions we consider equal to 1. Let p 1 = p 2 = 1/2, although the precise choice of weights is not particularly important. It follows that for P almost all ω ∈ Ω we have dim qA F ω = [(1/2) log 2 + (1/2) log 1] / [(1/2) log 2 + (1/2) log m] + [(1/2) log 1 + (1/2) log(m + 1)] / [(1/2) log n + (1/2) log(m + 1)] = log 2/ log(2m) + log(m + 1)/ log(n(m + 1)) . Choose m sufficiently large to ensure that log 2/ log(2m) ≤ ε/2 and, now that m is fixed, choose n sufficiently large to ensure that log(m + 1)/ log(n(m + 1)) ≤ ε/2. The main result in [FMT, Theorem 3.2] gives that for any choice of m, n ≥ 2, dim A F ω = 2 almost surely, and therefore the desired result follows. Proofs Approximate squares In this section we introduce (random) approximate squares, which are a common object in the study of self-affine carpets. Fix ω = (ω 1 , ω 2 , . . .
) ∈ Ω, R ∈ (0, 1) and let k ω 1 (R) and k ω 2 (R) be the unique positive integers satisfying ∏_{l=1}^{k ω 1 (R)} m ω l^{−1} ≤ R < ∏_{l=1}^{k ω 1 (R)−1} m ω l^{−1} (4.1) and ∏_{l=1}^{k ω 2 (R)} n ω l^{−1} ≤ R < ∏_{l=1}^{k ω 2 (R)−1} n ω l^{−1} , (4.2) respectively. Also let m max = max i∈Λ m i and n max = max i∈Λ n i . A rectangle [a, b] × [c, d] ⊆ [0, 1] 2 is called an approximate R-square if it is of the form Q = S([0, 1] 2 ) ∩ (π 1 (T ([0, 1] 2 )) × [0, 1]) , where π 1 : (x, y) → x is the projection onto the first coordinate and S = S i 1 • · · · • S i k ω 2 (R) and T = S i 1 • · · · • S i k ω 1 (R) , for some common sequence i 1 , i 2 , . . . with i j ∈ I ω j for all j. Here we say Q is associated with the sequence i 1 , i 2 , . . . , noting that the entries i 1 , i 2 , . . . , i k ω 1 (R) determine Q. In particular, the base b − a = ∏_{i=1}^{k ω 1 (R)} m ω i^{−1} ∈ (m max^{−1} R, R] by (4.1) and the height d − c = ∏_{i=1}^{k ω 2 (R)} n ω i^{−1} ∈ (n max^{−1} R, R] by (4.2), and so approximate R-squares are indeed approximately squares with base and height uniformly comparable to R, and therefore to each other. Proof strategy and notation In order to simplify the exposition of our proofs, we define the following weighted geometric averages of the important parameters: N = ∏ i∈Λ N i^{p i} , B = ∏ i∈Λ B i^{p i} , C = ∏ i∈Λ C i^{p i} , m = ∏ i∈Λ m i^{p i} , n = ∏ i∈Λ n i^{p i} . Using this notation, in order to prove our result it is sufficient to prove the following two statements: (1) For all log m/ log n < θ < 1 we have that for P almost all ω ∈ Ω, dim θ A F ω ≤ log B/ log m + log C/ log n . (2) For all 0 < θ < log m/ log n we have that for P almost all ω ∈ Ω, dim θ A F ω = (1/(1 − θ)) ( log B/ log m + log(N /B)/ log n ) − (θ/(1 − θ)) ( log(N /C)/ log m + log C/ log n ) . To see why this is sufficient, first note that since the Assouad spectrum is a continuous function in θ, see [FY1, Corollary 3.5], it is determined by its values on a countable dense set and so the above statements imply the a priori stronger statements that for P almost all ω ∈ Ω, we have the given estimates for all θ.
Secondly, since the Assouad spectrum necessarily approaches the quasi-Assouad dimension as θ → 1, (1) demonstrates that the quasi-Assouad dimension is at most log B log m + log C log n . and since (2) demonstrates that the Assouad spectrum attains this value at θ = log m/ log n, it follows from [FY1,Corollary 3.6] that it is constant in the interval [log m/ log n, 1). Technically speaking [FY1,Corollary 3.6] proves that if the Assouad spectrum is equal to the Assouad dimension at some θ ′ ∈ (0, 1), then it is constant in the interval [θ ′ , 1), but the same proof allows one to replace the Assouad dimension with the quasi-Assouad dimension in this statement. Finally, note that to establish estimates for dim θ A F ω , it suffices to replace balls of radius R with approximate R-squares in the definition. That is, to estimate N Q ∩ F ω , R 1/θ where Q is associated to i 1 , i 2 , . . . with i j ∈ I ωj for all j instead of N B(x, R) ∩ F ω , R 1/θ for x ∈ F ω . This is because balls and approximate squares are comparable and one can pass covering estimates concerning one to covering estimates concerning the other up to constant factors. This duality is standard and we do not go into the details. Covering estimates Let ω ∈ Ω, θ ∈ (0, 1), R ∈ (0, 1) and Q be an approximate R-square associated with the sequence i 1 , i 2 , . . . with i j ∈ I ωj for all j. In what follows we describe sets of the form S j1 • · · · • S j l (F ω ) as level l cylinders and level (l + 1) cylinders lying inside a particular level l cylinder will be referred to as children. Moreover, iteration will refer to moving attention from a particular cylinder, or collection of cylinders, to the cylinders at the next level. We wish to estimate N Q ∩ F ω , R 1/θ and to do this we decompose Q ∩ F ω into cylinders at level k ω 2 R 1/θ and cover each cylinder independently. Therefore we first need to count how many level k ω 2 R 1/θ cylinders lie inside Q. There are two cases, which we describe separately. 
Case (i): k ω 1 (R) < k ω 2 (R 1/θ ). We start by noting that Q lies inside a (unique) level k ω 2 (R) cylinder. As we move to the next level only the children of this cylinder lying in a particular 'column' will also intersect Q. Iterating inside cylinders intersecting Q until level k ω 1 (R) yields a decomposition of Q into several cylinders arranged in a single column, each of which has base the same length as that of Q. The number of these cylinders is at most ∏_{l=k ω 2 (R)+1}^{k ω 1 (R)} C ω l , since each iteration from the (l − 1)th level to the lth multiplies the number of cylinders intersecting Q at the previous level by the number of rectangles in a particular column of the I ω l system, which is, in particular, bounded above by C ω l . The situation is simpler from this point on. We continue to iterate inside each of the level k ω 1 (R) cylinders until level k ω 2 (R 1/θ ), but this time all of the children remain inside Q at every iteration. Therefore we find precisely ∏_{l=k ω 1 (R)+1}^{k ω 2 (R 1/θ )} N ω l level k ω 2 (R 1/θ ) cylinders inside each level k ω 1 (R) cylinder. As mentioned above, we now cover each of these cylinders individually. To do this, we further iterate inside each such cylinder until level k ω 1 (R 1/θ ) and group together cylinders at this level which lie in the same column. This decomposes the level k ω 1 (R 1/θ ) cylinders into approximate R 1/θ squares, each of which can be covered by 4 balls of diameter R 1/θ . Therefore it only remains to count the number of distinct level k ω 1 (R 1/θ ) columns inside a level k ω 2 (R 1/θ ) cylinder. Iterating from the (l − 1)th level to the lth level multiplies the number of columns by B ω l and therefore the number is ∏_{l=k ω 2 (R 1/θ )+1}^{k ω 1 (R 1/θ )} B ω l . Combining the three counting arguments from above yields N (Q ∩ F ω , R 1/θ ) ≤ 4 ( ∏_{l=k ω 2 (R)+1}^{k ω 1 (R)} C ω l ) ( ∏_{l=k ω 1 (R)+1}^{k ω 2 (R 1/θ )} N ω l ) ( ∏_{l=k ω 2 (R 1/θ )+1}^{k ω 1 (R 1/θ )} B ω l )
(4.3) Moreover, this estimate is sharp in the sense that we can always find a particular approximate R-square Q such that N (Q ∩ F ω , R 1/θ ) ≥ K ( ∏_{l=k ω 2 (R)+1}^{k ω 1 (R)} C ω l ) ( ∏_{l=k ω 1 (R)+1}^{k ω 2 (R 1/θ )} N ω l ) ( ∏_{l=k ω 2 (R 1/θ )+1}^{k ω 1 (R 1/θ )} B ω l ) , (4.4) for some constant K > 0 depending on m max and n max . Such a Q is provided by any approximate R-square where T = S i 1 • · · · • S i k ω 1 (R) is chosen such that each map i j lies in a maximal column of I ω j , that is, a column consisting of C ω j rectangles. Finally, the small constant K in the lower bound appears since a single ball of diameter R 1/θ can only intersect at most a constant number of the approximate R 1/θ squares found above and therefore counting approximate R 1/θ squares is still comparable to counting optimal R 1/θ covers. Case (ii): k ω 1 (R) ≥ k ω 2 (R 1/θ ). The distinctive feature of this case is that when one iterates inside the level k ω 2 (R) cylinder containing Q, one reaches the situation where the height of the cylinders is roughly R 1/θ (level k ω 2 (R 1/θ )) before the cylinders lie completely inside Q (level k ω 1 (R)). This means that the middle term in the above product no longer appears. The rest of the argument is similar, however, and we end up with N (Q ∩ F ω , R 1/θ ) ≤ 4 ( ∏_{l=k ω 2 (R)+1}^{k ω 2 (R 1/θ )} C ω l ) ( ∏_{l=k ω 1 (R)+1}^{k ω 1 (R 1/θ )} B ω l ) . (4.5) One subtle feature of this estimate is that we appear to skip from level k ω 2 (R 1/θ ) to level k ω 1 (R). This is to avoid over-counting due to the fact that, inside a level k ω 2 (R 1/θ ) cylinder intersecting Q, only a single level k ω 1 (R) column actually lies inside Q, and can thus contribute to the covering number. This column comprises several k ω 1 (R) cylinders and, since the height of this column is comparable to R 1/θ , to cover this column efficiently one only needs to count the number of level k ω 1 (R 1/θ ) columns inside a single level k ω 1 (R) cylinder.
This gives the second multiplicative term in the estimate, which concerns the terms B ω l . Once again, this bound is sharp in the sense that there exists an approximate R-square Q such that N Q ∩ F ω , R 1/θ K    k ω 2 (R 1/θ ) l=k ω 2 (R)+1 C ω l       k ω 1 (R 1/θ ) l=k ω 1 (R)+1 B ω l    . Proof of the Main Theorem We start our proof with this lemma, which is a simple variant of a Chernoff bound for stopped sums of random variables. We write P{a b} to denote P ({ω ∈ Ω : a b}) and write E(·) for the expectation of a random variable with respect to P. Lemma 4.1. Let {X i } be a sequence of non-negative discrete i.i.d. random variables with finite expectation 0 < X = E(X) < ∞. Let k ∈ N and let k k be a random variable. Let τ > k be a stopping time with finite expectation. Then, for all ε, t > 0, P τ i=k X i (1 + ε)(τ − k + 1)X E E e t(X−(1+ε)X) τ −k (4.6) and P τ i=k X i (1 − ε)(τ − k + 1)X E E e t(X−(1+ε)X) τ −k . (4.7) Further, if τ − k l for some l ∈ N, then there exists 0 < γ < 1 not depending on τ, k, l such that P τ i=k X i (1 + ε)(τ − k + 1)X γ l (4.8) and P τ i=k X i (1 − ε)(τ − k + 1)X γ l . Proof. In what follows we write {F s } s 0 for the natural filtration of our event space. We prove (4.6) and (4.8). The remaining estimates are proved similarly and we omit the details. We rearrange the left hand side of (4.6) and multiply by t > 0 to obtain P τ i=k X i (1 + ε)(τ − k + 1)X = P τ i=k t(X i − (1 + ε)X) 0 = P exp τ i=k Y i 1 , with Y i = tX i − t(1 + ε)X. Using Markov's inequality and continuing, E exp τ i=k Y i = E E exp τ i=k Y i F τ −1 = E E (exp Y τ | F τ −1 ) E exp τ −1 i=k Y i F τ −1 = E E (exp Y 0 ) E E exp τ −1 i=k Y i F τ −2 F τ −1 = E E (exp Y 0 ) 2 E E exp τ −2 i=k Y i F τ −2 F τ −1 . . . = E E (exp Y 0 ) τ −k = E E e t(X−(1+ε)X) τ −k , as required. To prove (4.8) we consider γ t = E e t(X0−(1+ε)X) . 
Since $X$ is discrete, we can differentiate with respect to $t$ for all $t \in \mathbb{R}$, and get
\[
\frac{d}{dt}\,\mathbb{E}\big(e^{t(X_0 - (1+\varepsilon)\overline{X})}\big)\Big|_{t=0} = \mathbb{E}\Big(\frac{d}{dt}\, e^{t(X_0 - (1+\varepsilon)\overline{X})}\Big|_{t=0}\Big) = \mathbb{E}\Big(\big(X_0 - (1+\varepsilon)\overline{X}\big)\, e^{t(X_0 - (1+\varepsilon)\overline{X})}\Big|_{t=0}\Big) = \mathbb{E}\big(X_0 - (1+\varepsilon)\overline{X}\big) = -\varepsilon\overline{X} < 0.
\]
Thus, since $\gamma_0 = 1$, there exists $t > 0$ such that $0 < \gamma_t < 1$. Note that $t$ (and thus $\gamma_t$) does not depend on $\tau, \bar{k}, l$ and we can now use (4.6) together with the assumption that $\tau - \bar{k} \ge l$ to obtain (4.8), where $\gamma = \gamma_t$.

Note that from the definitions of $k_1^\omega(R)$ and $k_2^\omega(R)$ we can conclude that there exist constants $c_1, c_\theta > 1$ such that for sufficiently small $R$,
\[
k_1^\omega(R) \ge c_1 k_2^\omega(R), \qquad k_1^\omega(R^{1/\theta}) \ge c_\theta k_1^\omega(R), \qquad k_2^\omega(R^{1/\theta}) \ge c_\theta k_2^\omega(R).
\]
The relationship between $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$ is more complicated and depends heavily on $\omega$ and $R$. However, probabilistically we can say more. Let $\varepsilon > 0$ and $q \in \mathbb{N}$. Note that, taking logarithms,
\[
\mathbb{P}\Big(\prod_{i=1}^{q} n_{\omega_i}^{-1} \le (\overline{n})^{-(1+\varepsilon)q}\Big) = \mathbb{P}\Big(\sum_{i=1}^{q} \log n_{\omega_i} \ge (1+\varepsilon)q \log \overline{n}\Big)
\]
and therefore, by Lemma 4.1, there exists $0 < \gamma < 1$ such that
\[
\mathbb{P}\Big(\prod_{i=1}^{q} n_{\omega_i}^{-1} \le (\overline{n})^{-(1+\varepsilon)q}\Big) \le \gamma^{q-1}.
\]
Now, summing over $q$, we obtain
\[
\sum_{q=1}^{\infty} \mathbb{P}\Big(\prod_{i=1}^{q} n_{\omega_i}^{-1} \le (\overline{n})^{-(1+\varepsilon)q}\Big) \le \sum_{q=1}^{\infty} \gamma^{q-1} < \infty.
\]
Thus, by the Borel-Cantelli Lemma, almost surely there are at most finitely many $q$ such that these events occur. We can similarly argue for a lower bound and conclude that for almost all $\omega \in \Omega$ there exists $q_\omega$ such that
\[
(\overline{n})^{-(1+\varepsilon)q} \le \prod_{i=1}^{q} n_{\omega_i}^{-1} \le (\overline{n})^{-(1-\varepsilon)q} \tag{4.9}
\]
for all $q \ge q_\omega$. Analogously,
\[
(\overline{m})^{-(1+\varepsilon)q} \le \prod_{i=1}^{q} m_{\omega_i}^{-1} \le (\overline{m})^{-(1-\varepsilon)q} \tag{4.10}
\]
almost surely for all $q$ large enough. Without loss of generality we can assume $q_\omega$ to be identical for both products. Since $k_2^\omega(R) \ge -c \log R$ for some $c > 0$ not depending on $\omega, R$, we see that there almost surely also exists an $R_\omega$ such that (4.9) and (4.10) hold for all $q \ge k_2^\omega(R)$, where $0 < R \le R_\omega$. Given these bounds we can determine the probabilistic relationship between $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$. Let $R_\omega$ be as above.
Then by the definitions of $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$ we get, for all $R \le R_\omega$,
\[
(\overline{m})^{-(1+\varepsilon)k_1^\omega(R)} \le \prod_{i=1}^{k_1^\omega(R)} m_{\omega_i}^{-1} \le R < n_{\max}^{\theta} \Big(\prod_{i=1}^{k_2^\omega(R^{1/\theta})} n_{\omega_i}^{-1}\Big)^{\theta} \le n_{\max}^{\theta}\, (\overline{n})^{-\theta(1-\varepsilon)k_2^\omega(R^{1/\theta})}
\]
and, after rearranging,
\[
\frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})} > \theta\, \frac{1-\varepsilon}{1+\varepsilon}\, \frac{\log \overline{n}}{\log \overline{m}} - \frac{\theta \log n_{\max}}{(1+\varepsilon)\, k_2^\omega(R^{1/\theta})\, \log \overline{m}}. \tag{4.11}
\]
Similarly, by considering the complementary inequalities
\[
\Big(\prod_{i=1}^{k_2^\omega(R^{1/\theta})} n_{\omega_i}^{-1}\Big)^{\theta} \le R < m_{\max} \prod_{i=1}^{k_1^\omega(R)} m_{\omega_i}^{-1},
\]
we obtain
\[
\frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})} < \theta\, \frac{1+\varepsilon}{1-\varepsilon}\, \frac{\log \overline{n}}{\log \overline{m}} - \frac{\log m_{\max}}{(1-\varepsilon)\, k_2^\omega(R^{1/\theta})\, \log \overline{m}}. \tag{4.12}
\]
Now $\varepsilon > 0$ was arbitrary and the last term in (4.11) and (4.12) vanishes as $R_\omega$ decreases. Therefore, for all $\delta > 0$ and almost all $\omega \in \Omega$, there exists sufficiently small $R_\omega > 0$ such that
\[
(1-\delta)\, \theta\, \frac{\log \overline{n}}{\log \overline{m}} \le \frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})} \le (1+\delta)\, \theta\, \frac{\log \overline{n}}{\log \overline{m}}, \tag{4.13}
\]
for all $R < R_\omega$. Moreover, using the much simpler relationships derived above, we can assume without loss of generality that $R_\omega$ is small enough such that
\[
(1-\delta)\, \frac{\log \overline{n}}{\log \overline{m}} \le \frac{k_1^\omega(R)}{k_2^\omega(R)} \le (1+\delta)\, \frac{\log \overline{n}}{\log \overline{m}}, \tag{4.14}
\]
\[
(1-\delta)\theta \le \frac{k_1^\omega(R)}{k_1^\omega(R^{1/\theta})} \le (1+\delta)\theta \quad\text{and}\quad (1-\delta)\theta \le \frac{k_2^\omega(R)}{k_2^\omega(R^{1/\theta})} \le (1+\delta)\theta \tag{4.15}
\]
all hold simultaneously for all $R < R_\omega$.

4.4.1 The upper bound for $\theta < \log \overline{m} / \log \overline{n}$

We assume throughout that $R_\omega$ is small enough for all inequalities in the last section to hold simultaneously (almost surely). Also, let $\delta > 0$ be small enough such that the inequalities at the end of the previous section are all bounded away from 1. That is, we choose $\delta > 0$ such that $(1+\delta)\theta < 1$ and $(1-\delta) \log \overline{n} / \log \overline{m} > 1$. Especially relevant to this section, (4.13) and $\theta < \log \overline{m} / \log \overline{n}$ imply that we can choose $\delta > 0$ sufficiently small such that $k_1^\omega(R) < k_2^\omega(R^{1/\theta})$ almost surely for all $R < R_\omega$. Let $\varepsilon > 0$ and consider the geometric average given by
\[
\Big(\overline{C}^{\,k_1^\omega(R) - k_2^\omega(R)}\; \overline{N}^{\,k_2^\omega(R^{1/\theta}) - k_1^\omega(R)}\; \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_2^\omega(R^{1/\theta})}\Big)^{1+\varepsilon}. \tag{4.16}
\]
We want to determine the probability that there exists an approximate $R$-square at a given level such that we need more than the estimate in (4.16) many $R^{1/\theta}$-squares to cover it. Note that for (4.3) to be larger than (4.16), at least one of the products must exceed the corresponding power of the average. Therefore,
\[
\mathbb{P}\Big(N\big(Q \cap F_\omega, R^{1/\theta}\big) \ge 4\big(\overline{C}^{\,k_1^\omega(R) - k_2^\omega(R)}\, \overline{N}^{\,k_2^\omega(R^{1/\theta}) - k_1^\omega(R)}\, \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_2^\omega(R^{1/\theta})}\big)^{1+\varepsilon}\Big)
\]
\[
\le \mathbb{P}\Big(\prod_{l=k_2^\omega(R)+1}^{k_1^\omega(R)} C_{\omega_l} \ge \overline{C}^{\,(1+\varepsilon)(k_1^\omega(R) - k_2^\omega(R))}\Big) + \mathbb{P}\Big(\prod_{l=k_1^\omega(R)+1}^{k_2^\omega(R^{1/\theta})} N_{\omega_l} \ge \overline{N}^{\,(1+\varepsilon)(k_2^\omega(R^{1/\theta}) - k_1^\omega(R))}\Big) + \mathbb{P}\Big(\prod_{l=k_2^\omega(R^{1/\theta})+1}^{k_1^\omega(R^{1/\theta})} B_{\omega_l} \ge \overline{B}^{\,(1+\varepsilon)(k_1^\omega(R^{1/\theta}) - k_2^\omega(R^{1/\theta}))}\Big). \tag{4.17}
\]
Let us start by analysing the event involving $C_{\omega_l}$. We want to show that the product can only exceed the average behaviour at most finitely many times almost surely. That is, given $q \in \mathbb{N}$, we want to estimate
\[
\mathbb{P}\Big(\prod_{l=k_2^\omega(R)+1}^{k_1^\omega(R)} C_{\omega_l} \ge \overline{C}^{\,(1+\varepsilon)(k_1^\omega(R) - k_2^\omega(R))} \text{ for some } R \in (0, R_\omega) \text{ such that } k_2^\omega(R) = q\Big). \tag{4.18}
\]
Notice that $k_1^\omega(R)$ is a stopping time and, by (4.14) and our assumption that $R_\omega$ and $\delta$ are chosen sufficiently small, $k_1^\omega(R) \ge (1-\delta)(\log \overline{n} / \log \overline{m})\, k_2^\omega(R)$ and $c := (1-\delta) \log \overline{n} / \log \overline{m} - 1 > 0$. Using Lemma 4.1, we can bound (4.18) above by
\[
\mathbb{P}\Bigg(\bigcup_{\substack{q' :\, \exists R \in (0, R_\omega) \\ k_2^\omega(R) = q \ \text{and}\ k_1^\omega(R) = q'}} \Big\{\sum_{l=q+1}^{q'} \log C_{\omega_l} \ge (1+\varepsilon)(q' - q) \log \overline{C}\Big\}\Bigg) \le L\, \gamma^{c(q-1)},
\]
for some $0 < \gamma < 1$, where $L > 0$ is a deterministic constant corresponding to the number of possible values for $k_1^\omega(R)$, given $k_2^\omega(R)$. Since $\sum_q L\, \gamma^{c(q-1)} < \infty$, the Borel-Cantelli lemma implies that the product can exceed the average behaviour only finitely many times almost surely. The argument for $N_{\omega_l}$ and $B_{\omega_l}$ is identical due to the ratios given in (4.13), (4.14), and (4.15).
Therefore, there almost surely exists $q$ large enough (and hence $R'_\omega$ small enough) such that
\[
N\big(Q \cap F_\omega, R^{1/\theta}\big) \le 4\big(\overline{C}^{\,k_1^\omega(R) - k_2^\omega(R)}\, \overline{N}^{\,k_2^\omega(R^{1/\theta}) - k_1^\omega(R)}\, \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_2^\omega(R^{1/\theta})}\big)^{1+\varepsilon},
\]
for all $0 < R < R'_\omega$. Using (4.13), (4.14), and (4.15) again, we obtain
\[
k_1^\omega(R) - k_2^\omega(R) \le \Big((1+\delta)\frac{\log \overline{n}}{\log \overline{m}} - 1\Big)\, k_2^\omega(R),
\]
\[
k_2^\omega(R^{1/\theta}) - k_1^\omega(R) \le \Big(\frac{1+\delta}{1-\delta}\,\theta^{-1} - (1+\delta)\frac{\log \overline{n}}{\log \overline{m}}\Big)\, k_2^\omega(R)
\]
and
\[
k_1^\omega(R^{1/\theta}) - k_2^\omega(R^{1/\theta}) \le \Big(\frac{1+\delta}{1-\delta}\,\frac{\log \overline{n}}{\log \overline{m}} - (1-\delta)^{-1}\theta^{-1}\Big)\, k_2^\omega(R).
\]
Now, using $k_2^\omega(R) \le -(1+\delta)\log R / \log \overline{n}$, we rearrange:
\[
\overline{C}^{\,k_1^\omega(R) - k_2^\omega(R)} \le \overline{C}^{\,-(1+\delta)^2 \log R / \log \overline{m} - (-(1+\delta)\log R / \log \overline{n})} = R^{(1 - 1/\theta)s_c},
\]
where
\[
s_c = (1+\delta)^2\, \frac{\log \overline{C}}{\log \overline{m}}\, \frac{\theta}{1-\theta} - (1+\delta)\, \frac{\log \overline{C}}{\log \overline{n}}\, \frac{\theta}{1-\theta} \;\longrightarrow\; s_C := \frac{\theta}{1-\theta}\Big(\frac{\log \overline{C}}{\log \overline{m}} - \frac{\log \overline{C}}{\log \overline{n}}\Big),
\]
as $\delta \to 0$. We rearrange the other terms similarly to obtain
\[
N\big(Q \cap F_\omega, R^{1/\theta}\big) \le 4 R^{(1 - 1/\theta)(1+\varepsilon)(s_c + s_n + s_b)},
\]
where
\[
s_n = -(1+\delta)^2\, \frac{\log \overline{N}}{\log \overline{m}}\, \frac{\theta}{1-\theta} + \frac{(1+\delta)^2}{1-\delta}\, \frac{\log \overline{N}}{\log \overline{n}}\, \frac{1}{1-\theta} \;\longrightarrow\; s_N := \frac{1}{1-\theta}\Big(\frac{\log \overline{N}}{\log \overline{n}} - \theta\, \frac{\log \overline{N}}{\log \overline{m}}\Big),
\]
as $\delta \to 0$, and
\[
s_b = -\frac{1+\delta}{1-\delta}\, \frac{\log \overline{B}}{\log \overline{n}}\, \frac{1}{1-\theta} + \frac{(1+\delta)^2}{1-\delta}\, \frac{\log \overline{B}}{\log \overline{m}}\, \frac{1}{1-\theta} \;\longrightarrow\; s_B := \frac{1}{1-\theta}\Big(\frac{\log \overline{B}}{\log \overline{m}} - \frac{\log \overline{B}}{\log \overline{n}}\Big),
\]
as $\delta \to 0$. For arbitrary $\varepsilon' > 0$ we may assume $\delta > 0$ is small enough such that $s_c + s_n + s_b \le (1+\varepsilon')(s_C + s_N + s_B)$. Note that
\[
s := s_C + s_N + s_B = \frac{1}{1-\theta}\Big(\frac{\log \overline{B}}{\log \overline{m}} + \frac{\log(\overline{N}/\overline{B})}{\log \overline{n}}\Big) - \frac{\theta}{1-\theta}\Big(\frac{\log(\overline{N}/\overline{C})}{\log \overline{m}} + \frac{\log \overline{C}}{\log \overline{n}}\Big). \tag{4.19}
\]
We can therefore conclude that, almost surely, every approximate square of length $R < R_\omega$ can be covered by fewer than $4R^{(1-1/\theta)(1+\varepsilon)(1+\varepsilon')s}$ sets of diameter $R^{1/\theta}$. Thus the Assouad spectrum is bounded above by $(1+\varepsilon)(1+\varepsilon')s$ and, by the arbitrariness of $\varepsilon, \varepsilon'$, also by $s$.

4.4.2 The upper bound for $\theta > \log \overline{m} / \log \overline{n}$

The proof for this case follows along the same lines as $\theta < \log \overline{m} / \log \overline{n}$ and we will only sketch their differences. First note that $\theta > \log \overline{m} / \log \overline{n}$ implies the almost sure existence of a small enough $R_\omega$ such that
\[
k_1^\omega(R) \le (1-\delta)\, \theta\, \frac{\log \overline{n}}{\log \overline{m}}\, k_2^\omega(R^{1/\theta}),
\]
for all $R < R_\omega$.
Again we assume without loss of generality that $R_\omega$ is chosen such that (4.13), (4.14), and (4.15) are satisfied for a given $\delta > 0$. We also choose $\delta > 0$ small enough to ensure that $(1-\delta)\, \theta \log \overline{n} / \log \overline{m} > 1$. Let $\varepsilon > 0$ and consider the geometric average
\[
\Big(\overline{C}^{\,k_2^\omega(R^{1/\theta}) - k_2^\omega(R)}\; \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_1^\omega(R)}\Big)^{1+\varepsilon}.
\]
We compare the upper bound given in (4.5) with the average above and obtain
\[
\mathbb{P}\Big(N\big(Q \cap F_\omega, R^{1/\theta}\big) \ge 4\big(\overline{C}^{\,k_2^\omega(R^{1/\theta}) - k_2^\omega(R)}\, \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_1^\omega(R)}\big)^{1+\varepsilon}\Big)
\]
\[
\le \mathbb{P}\Big(\prod_{l=k_2^\omega(R)+1}^{k_2^\omega(R^{1/\theta})} C_{\omega_l} \ge \overline{C}^{\,(1+\varepsilon)(k_2^\omega(R^{1/\theta}) - k_2^\omega(R))}\Big) + \mathbb{P}\Big(\prod_{l=k_1^\omega(R)+1}^{k_1^\omega(R^{1/\theta})} B_{\omega_l} \ge \overline{B}^{\,(1+\varepsilon)(k_1^\omega(R^{1/\theta}) - k_1^\omega(R))}\Big).
\]
Now using the same ideas as before, noting that $k_1^\omega(\cdot)$ and $k_2^\omega(\cdot)$ are stopping times, we can conclude that for almost every $\omega \in \Omega$ there exists $R_\omega$ such that
\[
N\big(Q \cap F_\omega, R^{1/\theta}\big) \le 4\big(\overline{C}^{\,k_2^\omega(R^{1/\theta}) - k_2^\omega(R)}\, \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_1^\omega(R)}\big)^{1+\varepsilon},
\]
for all $R < R_\omega$. Using the estimates for $k_1^\omega(R^{1/\theta}) / k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta}) / k_2^\omega(R)$ in (4.15), we see that there exists $\varepsilon' > 0$ such that for sufficiently small $R$,
\[
\big(\overline{C}^{\,k_2^\omega(R^{1/\theta}) - k_2^\omega(R)}\, \overline{B}^{\,k_1^\omega(R^{1/\theta}) - k_1^\omega(R)}\big)^{1+\varepsilon} \le R^{(1 - 1/\theta)(1+\varepsilon)(1+\varepsilon')s},
\]
where
\[
s = \frac{\log \overline{B}}{\log \overline{m}} + \frac{\log \overline{C}}{\log \overline{n}}.
\]
As before, this is sufficient to prove that for $\theta > \log \overline{m} / \log \overline{n}$, there almost surely exists $R_\omega$ such that all approximate $R$-squares with $R < R_\omega$ can be covered by fewer than $4R^{(1-1/\theta)(1+\varepsilon)(1+\varepsilon')s}$ sets of diameter $R^{1/\theta}$. This proves that $\dim_A^\theta F_\omega \le (1+\varepsilon)(1+\varepsilon')s$ almost surely, and hence, by arbitrariness of $\varepsilon, \varepsilon' > 0$, that $\dim_A^\theta F_\omega \le s$ almost surely, as required.

4.4.3 The lower bound for $\theta < \log \overline{m} / \log \overline{n}$

To prove almost sure lower bounds for $\dim_A^\theta F_\omega$ we need to show that almost surely there exists a sequence $R_i \to 0$ such that for each $i$ there is an approximate $R_i$-square which requires at least a certain number of sets of diameter $R_i^{1/\theta}$ to cover it.
Let $\theta < \log \overline{m} / \log \overline{n}$ and, as before, choose $\delta > 0$ small enough such that $k_1^\omega(R) < k_2^\omega(R^{1/\theta})$ almost surely for all small enough $R$. Let $\varepsilon > 0$, and given $q \in \mathbb{N}$ and $\omega \in \Omega$, let $R_q = \prod_{l=1}^{q} n_{\omega_l}^{-1}$, noting that $k_2^\omega(R_q) = q$ and $R_q \to 0$ as $q \to \infty$. We have
\[
\mathbb{P}\Big(N\big(Q \cap F_\omega, R_q^{1/\theta}\big) \ge K\big(\overline{C}^{\,k_1^\omega(R_q) - k_2^\omega(R_q)}\, \overline{N}^{\,k_2^\omega(R_q^{1/\theta}) - k_1^\omega(R_q)}\, \overline{B}^{\,k_1^\omega(R_q^{1/\theta}) - k_2^\omega(R_q^{1/\theta})}\big)^{1-\varepsilon}\Big)
\]
\[
\ge 1 - \mathbb{P}\Big(\prod_{l=k_2^\omega(R_q)+1}^{k_1^\omega(R_q)} C_{\omega_l} \le \overline{C}^{\,(1-\varepsilon)(k_1^\omega(R_q) - k_2^\omega(R_q))} \ \text{or}\ \prod_{l=k_1^\omega(R_q)+1}^{k_2^\omega(R_q^{1/\theta})} N_{\omega_l} \le \overline{N}^{\,(1-\varepsilon)(k_2^\omega(R_q^{1/\theta}) - k_1^\omega(R_q))} \ \text{or}\ \prod_{l=k_2^\omega(R_q^{1/\theta})+1}^{k_1^\omega(R_q^{1/\theta})} B_{\omega_l} \le \overline{B}^{\,(1-\varepsilon)(k_1^\omega(R_q^{1/\theta}) - k_2^\omega(R_q^{1/\theta}))}\Big). \tag{4.20}
\]
The last term is bounded above by
\[
\mathbb{P}\Big(\prod_{l=k_2^\omega(R_q)+1}^{k_1^\omega(R_q)} C_{\omega_l} \le \overline{C}^{\,(1-\varepsilon)(k_1^\omega(R_q) - k_2^\omega(R_q))}\Big) + \mathbb{P}\Big(\prod_{l=k_1^\omega(R_q)+1}^{k_2^\omega(R_q^{1/\theta})} N_{\omega_l} \le \overline{N}^{\,(1-\varepsilon)(k_2^\omega(R_q^{1/\theta}) - k_1^\omega(R_q))}\Big) + \mathbb{P}\Big(\prod_{l=k_2^\omega(R_q^{1/\theta})+1}^{k_1^\omega(R_q^{1/\theta})} B_{\omega_l} \le \overline{B}^{\,(1-\varepsilon)(k_1^\omega(R_q^{1/\theta}) - k_2^\omega(R_q^{1/\theta}))}\Big)
\]
and, by Lemma 4.1 and the union estimate used above, each probability is bounded above by $L' \gamma^{c' q}$ for some constants $L', c' > 0$ and $\gamma \in (0, 1)$. Thus, there exists $q_0$ such that each term in the sum is bounded by $1/6$ for $q \ge q_0$ and thus the probability on the left-hand side of (4.20) is bounded below by $1/2$ for $q \ge q_0$. Denote the event on the left-hand side of (4.20) by $E_q$. Observe that the event only depends on the values of $\omega_i$ for $i$ satisfying $q = k_2^\omega(R_q) \le i \le k_1^\omega(R_q^{1/\theta})$, as the latter bound is a stopping time. By virtue of construction, there exists an integer $d \ge 1$ such that $k_1^\omega\big(R_{d^i q}^{1/\theta}\big) < d^{i+1} q$ for all $q$. Therefore the events $\{E_q, E_{dq}, E_{d^2 q}, \dots\}$ are pairwise independent. Further, by the above argument,
\[
\sum_{i=0}^{\infty} \mathbb{P}(E_{d^i q_0}) \ge \sum_{i=0}^{\infty} 1/2 = \infty
\]
and so by the Borel-Cantelli Lemmas $E_q$ happens infinitely often.
Therefore, adapting the argument involving $s_c$, $s_n$ and $s_b$ from above, we have proved that for all $\varepsilon' > 0$ there almost surely exist infinitely many $q \in \mathbb{N}$ such that there exists an approximate $R_q$-square $Q$ such that
\[
N\big(Q \cap F_\omega, R_q^{1/\theta}\big) \ge K R_q^{(1 - 1/\theta)(1 - \varepsilon')s},
\]
where $s = s_C + s_B + s_N$ is the target lower bound for the spectrum. This completes the proof.

Figure 2: The attractors $F_1$, $F_2$, and $F_\omega$ for $\omega = (2, 1, 1, 2, 1, \dots)$ as used in the example in Section 3.1.

Acknowledgements

This work was started while both authors were resident at the Institut Mittag-Leffler during the 2017 semester programme Fractal Geometry and Dynamics. They are grateful for the stimulating environment. Much of the work was subsequently carried out whilst JMF visited the University of Waterloo in March 2018. He is grateful for the financial support, hospitality, and inspiring research atmosphere.
[ "Hybrid Robot-assisted Frameworks for Endomicroscopy Scanning in Retinal Surgeries" ]
[ "Zhaoshuo Li", "Mahya Shahbazi", "Niravkumar Patel", "Eimear O'Sullivan", "Haojie Zhang", "Khushi Vyas", "Preetham Chalasani", "Anton Deguet", "Peter L. Gehlbach", "Iulian Iordachita", "Guang-Zhong Yang", "Russell H. Taylor" ]
High-resolution real-time intraocular imaging of the retina at the cellular level is very challenging due to the vulnerable and confined space within the eyeball as well as the limited availability of appropriate modalities. A probe-based confocal laser endomicroscopy (pCLE) system can be a potential imaging modality for improved diagnosis. The ability to visualize the retina at the cellular level could provide information that may predict surgical outcomes. The adoption of intraocular pCLE scanning is currently limited due to the narrow field of view and the micron-scale range of focus. In the absence of motion compensation, physiological tremors of the surgeon's hand and patient movements also contribute to the deterioration of the image quality. Therefore, an image-based hybrid control strategy is proposed to mitigate the above challenges. The proposed hybrid control strategy enables a shared control of the pCLE probe between surgeons and robots to scan the retina precisely, with the absence of hand tremors and with the advantages of an image-based auto-focus algorithm that optimizes the quality of pCLE images. The hybrid control strategy is deployed on two frameworks: cooperative and teleoperated. Better image quality, smoother motion, and reduced workload are all achieved in a statistically significant manner with the hybrid control frameworks.
10.1109/tmrb.2020.2988312
[ "https://arxiv.org/pdf/1909.06852v2.pdf" ]
202,577,265
1909.06852
029f5b0626af6f0a0f4a3f080fdc851e5cdfa3a2
Hybrid Robot-assisted Frameworks for Endomicroscopy Scanning in Retinal Surgeries*

I. INTRODUCTION

Among the various vision-threatening medical conditions, retinal detachment is one of the most common, occurring at a rate of about 1 in 10,000 per eye per year worldwide [1].
Mechanistically, retinal detachment represents a separation of the neural retina from the necessary underlying supporting tissues, such as the retinal pigment epithelium and the underlying choroidal blood vessels, which collectively provide multiple types of support required for retinal survival. If not treated promptly, retinal detachment can result in a permanent loss of vision. One possible regenerative therapeutic treatment for the injured but not dead retina is to deliver a neuroprotective agent, such as stem cells, to targeted retina cells. However, this requires finding the viable retina cells identified by using cellular-level information [2]. A recent promising technique for in-vivo characterization and real-time visualization of biological tissues at the cellular level is probe-based confocal laser endomicroscopy (pCLE). pCLE is an optical visualization technique that can facilitate cellular-level imaging of biological tissues in confined spaces. The effectiveness of pCLE has previously been demonstrated using real-time visualization of the gastrointestinal tract [3], thyroid gland [4], breast [5] and gastric [6] tissue.

† Authors contributed equally to the work.
* This work was funded in part by: NSF NRI Grants IIS-1327657, 1637789; Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship #516873; Johns Hopkins internal funds; Robotic Endobronchial Optical Tomography (REBOT) Grant EP/N019318/1; EP/P012779/1 Micro-robotics for Surgery; and NIH R01 Grant 1R01EB023943-01.
1 Authors with the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland 21218, USA. Email: {zli122}@jhu.edu.
2 Authors with the Hamlyn Centre for Robotic Surgery, Imperial College London, SW7 2AZ, London, UK.
3 Author with the Johns Hopkins Wilmer Eye Institute, Johns Hopkins Hospital, 600 N. Wolfe Street, Maryland 21287, USA.
The use of benchtop confocal microscopy systems for imaging externally mounted retinal tissue has been reported [7], [8]. However, the small footprint of a pCLE probe (typically on the order of 1 mm) makes pCLE a suitable technique for intraocular cellular-level imaging.
The physiological tremor of the human hand is on the order of magnitude of several hundreds of microns [14], [15], making it almost impossible to consistently maintain the pCLE probe in the optimal range required for high-quality image acquisition while assuring no or minimal contact. In addition to the above, patient movement, as well as the movement of the detached retina during the repair procedure, augments the complexity of manual scanning of the detached retinal tissue. These challenges highlight the need for robot-assisted manipulation of the pCLE probe for intraocular scanning of the retinal tissue. While robotic systems have been developed in the literature to mitigate the difficulties of manipulating pCLE probes, most are not yet suitable for intraocular use. In [16], a hand-held device with a voice-coil actuator and a force sensor is presented to enable consistent probe-tissue contact. This device uses the measured forces to control the probe motion relative to the tissue. In [17], a hollow tube is added surrounding the pCLE probe to provide friction-based adherence during contact with the tissue to enable a "steady motion". All the discussed techniques, however, are only compatible with contact-based pCLE probes, which, together with the large footprint of the devices, makes them inappropriate for scanning the delicate retinal tissues inside the confined space of the eyeball. Robotic integration of the pCLE probe that does not require probe-tissue contact has also been achieved in [18], by presenting a system that tracks the tip position of the pCLE with an external tracker. However, in that work, the scanning task was performed on a flat surface that presupposed the geometry and position of the tissue. Nonetheless, the curvature of the eyeball and the uneven surface of the detached retina rapidly void these assumptions, rendering this system inapplicable to retinal surgery.
Therefore, in this paper, a semi-autonomous hybrid approach is proposed for real-time pCLE scanning of the retina. Features of the proposed hybrid approach are briefly described:
• a robot-assisted control that allows the surgeon to freely maneuver the probe with micron-level precision laterally along the tissue surface, where the use of a robot eliminates the effect of hand tremor,
• an auto-focus algorithm based on the pCLE images that actively and optimally adjusts the probe-to-tissue distance for best image quality and enhanced safety.
The hybrid control strategy is deployed on two frameworks, cooperative and teleoperated, which have been shown to significantly improve the position precision in retinal surgery [19]. Both implementations use the Steady-Hand Eye Robot (SHER, developed at LCSR, Johns Hopkins University) end-effector to hold the customized non-contact fiber-bundle pCLE imaging probe. The pCLE probe is connected to a high-speed Line-Scan Confocal Laser Endomicroscopy system (developed at the Hamlyn Centre, Imperial College [20]). The teleoperated framework includes the da Vinci Research Kit (dVRK, developed at LCSR, Johns Hopkins University) to allow surgeons to remotely control the pCLE probe using the master tool manipulator (MTM) and the stereo-vision console. A surgical microscope is used to visualize the surgical scene within the eyeball. Fig. 2 shows the complete setup.

Fig. 2. The complete setup: (a) the SHER robot, the surgical microscope, the confocal laser endomicroscopy system, the pCLE probe, and an artificial eyeball phantom, (b) sketch of the setup, (c) the dVRK stereo-vision console and MTM, (d) the user's view, where the pCLE view is overlaid on the top left of the surgical microscope view looking through the eyeball opening, and the pCLE probe can be seen.
The proposed hybrid cooperative and teleoperated frameworks have been validated through a series of experiments and a set of user studies, in which they are compared to manual operation and to traditional cooperative and teleoperated control systems. We have discovered that the proposed hybrid frameworks result in statistically significant improvements in image quality, motion smoothness, and user workload.

Contribution: To the best of our knowledge, this work is the first reported attempt to perform intraocular pCLE scanning of a large area of the retina with a distal-focus probe and with robot assistance. The introduction of an auto-focus, robot-assisted, non-contact pCLE scanning system will resolve the previously discussed challenges and provide surface information of the retina with enhanced safety and efficiency. The technical contribution reported can be summarized as follows:
• A novel hybrid teleoperated control framework is proposed to enable significant motion scaling for improved precision of pCLE control, thus leading to consistently high-quality images. An improved hybrid cooperative framework extending the previous system in [21] is also presented.
• The proposed hybrid controller includes a novel auto-focus algorithm to find the optimal position for best image quality while enhancing operation safety by avoiding probe-retina contact. A prior model of the retina curvature is included to speed up the algorithm and relax the assumption of scanning a flat surface within a static eye, which limited our previous work in [21].
• Three experiment scenarios validate the challenge of manual scanning, the efficacy of the proposed auto-focus algorithm, and the enhanced smoothness of the scanning path and improved image quality using the hybrid framework over the traditional one.
• A user study involving 14 participants demonstrates the enhanced image quality and reduced user workload in a statistically significant manner using the proposed hybrid system.

The remainder of the paper is organized as follows: Section II explains the implementation of the hybrid control strategy by using the hybrid cooperative framework; Section III presents the hybrid teleoperated framework; Section IV presents the three experiment evaluations; Section V presents the user study, results and discussion; Section VI concludes the paper.

II. HYBRID COOPERATIVE FRAMEWORK

The hybrid cooperative framework setup consists of the pCLE system and the SHER. The SHER is a cooperatively controlled robot with 5 Degrees of Freedom (DOF) developed for vitreoretinal surgery. It has a positioning resolution of 1 µm and a bidirectional repeatability of 3 µm. The pCLE system used for real-time image acquisition is a high-speed line-scanning fiber bundle endomicroscope, coupled to a customized probe (FIGH-30-800G, Fujikura Ltd.) of a 30,000-core fiber bundle with a distal micro-lens (SELFOC ILW-1.30, Go!Foton Corporation). The lens-probe distance is set such that the probe has an optimal focus distance of about 700 µm and a focus range of 200 µm. The hybrid cooperative framework consists of a high-level motion controller (which implements the semi-autonomous hybrid control strategy), a mid-level optimizer, and a low-level controller. Fig. 3 illustrates the block diagram of the overall closed-loop architecture.

A. High-Level Controller

The high-level controller enables dual functionality:
• the control of the pCLE probe by surgeons with robot assistance in directions lateral to the scanning surface. Surgeons can maneuver the pCLE probe to scan regions of interest while the robot cancels out hand tremors.
• the autonomous control of the pCLE probe by the robot in the axial direction (along the normal direction of the tissue's surface) to actively control the probe-tissue distance for optimized image quality. This active image-based feedback can reduce the task complexity and the workload on surgeons while improving the image quality.
The four components of the high-level controller will be discussed separately.

1) Hybrid Control Law: The hybrid control law can be formulated as:
\[
\dot{X}_{des} = K_c\, \dot{X}_{des,c} + K_a\, \dot{X}_{des,a} \tag{1}
\]
where subscripts $des$, $c$, and $a$ denote the desired hybrid motion, the cooperative motion by the surgeon (used for lateral control), and the auto-focus motion by the robot (used for axial control), respectively; $X$ denotes the Cartesian position of the pCLE probe tip expressed in the base frame of the SHER. $K_c$ and $K_a$ denote the motion specification matrices [22] that map a given motion to the lateral and axial directions of the retina surface, respectively, and are defined as follows:
\[
K_c = \begin{bmatrix} R^T \Sigma_c R & 0_{3\times3} \\ 0_{3\times3} & 1_{3\times3} \end{bmatrix}, \qquad \Sigma_c = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{2}
\]
\[
K_a = \begin{bmatrix} R^T \Sigma_a R & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \end{bmatrix}, \qquad \Sigma_a = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3}
\]
where $R$ denotes the orientation of the tissue normal expressed in the base frame of the SHER robot.

2) Lateral Direction - Admittance Scheme: The motion of the pCLE probe along the lateral direction is set to be controlled cooperatively [23] by the SHER and the surgeon together to enable tremor-free scanning of regions of interest. Following [23], this is realized using an admittance control scheme, as follows:
\[
\dot{X}_{des,ee} = \alpha F_{ee}, \qquad \dot{X}_{des,c} = Ad_{r,ee}\, \dot{X}_{des,ee} \tag{4}
\]
where subscripts $ee$ and $r$ indicate the end-effector frame and the base frame of the SHER, respectively; $F_{ee}$ denotes the interaction forces applied by the user's hand, expressed in the end-effector frame. $F_{ee}$ is measured by a 6-DOF force/torque sensor (ATI Nano 17, ATI Industrial Automation, Apex, NC, USA) attached on the end-effector; $\alpha$ is the admittance gain; $Ad_{r,ee}$ is the adjoint transformation matrix that maps the desired motion in the end-effector frame to the base frame of the SHER, and is calculated as
\[
Ad_{r,ee} = \begin{bmatrix} R_{r,ee} & \mathrm{skew}(p_{r,ee}) R_{r,ee} \\ 0_{3\times3} & R_{r,ee} \end{bmatrix} \tag{5}
\]
where $R_{r,ee}$ and $p_{r,ee}$ are the rotation matrix and translation vector of the end-effector frame expressed in the base frame of the SHER.
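To make the roles of $K_c$ and $K_a$ concrete, the following minimal NumPy sketch assembles Eqs. (1)-(3) and combines a surgeon velocity with an auto-focus velocity. This is our own illustration, not the authors' implementation; the function names and example velocities are ours, and the 6-vectors are ordered as (linear; angular).

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix of a 3-vector, so that skew(p) @ v == np.cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def motion_spec_matrices(R):
    """Motion specification matrices K_c, K_a (Eqs. 2-3) for a tissue-frame rotation R."""
    Sc = np.diag([1.0, 1.0, 0.0])   # selects the two lateral translational directions
    Sa = np.diag([0.0, 0.0, 1.0])   # selects the axial (tissue-normal) direction
    Z = np.zeros((3, 3))
    Kc = np.block([[R.T @ Sc @ R, Z], [Z, np.eye(3)]])  # lateral + full rotation
    Ka = np.block([[R.T @ Sa @ R, Z], [Z, Z]])          # axial translation only
    return Kc, Ka

def hybrid_velocity(R, xdot_coop, xdot_auto):
    """Hybrid control law (Eq. 1): lateral motion from the surgeon, axial from auto-focus."""
    Kc, Ka = motion_spec_matrices(R)
    return Kc @ xdot_coop + Ka @ xdot_auto
```

Note that the translational blocks of $K_c$ and $K_a$ sum to the identity ($R^T(\Sigma_c + \Sigma_a)R = I$), so the two inputs partition the translational workspace cleanly with no overlap.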
Also, skew(p_r,ee) denotes the skew-symmetric matrix associated with the vector p_r,ee. The resulting Ẋ_des,c, along with the motion specification matrix K_c, specifies the lateral component of the desired motion given in Equation 1.

Fig. 3. Block diagram of the overall closed-loop architecture. Lateral Direction control is described in Section II-A2. Axial Direction control is described in Section II-A3 and Section II-A4. Mid-Level Optimizer and Low-Level Controller are described in Section II-B.

3) Axial Direction - Image Optimizer: The desired motion along the axial direction, Ẋ_des,a, is defined such that the confocal image quality is optimized through auto-focus control using a gradient-ascent search. The autonomous control is performed in a sensorless manner, meaning that no additional sensing modality is required for depth/contact measurement. As discussed previously, the confined space within the eyeball and the fragility of the retinal tissue make it very challenging, if not impossible, to add extra sensing modalities. The proposed sensorless and contactless control strategy relies on the image quality as an indirect measure of the probe-to-tissue distance, intending to maximize the quality autonomously. Using image blur metrics as a depth-sensing modality has been previously discussed in [13]. The control strategy presented therein, however, is only applicable to contact-based pCLE probes, which is not suitable for scanning the delicate retinal tissue. Besides, the control system presented in [13] is dependent on the characteristics of the tissue and requires a pre-operative calibration phase. The calibration process necessitates pressing the contact-based probe onto the tissue and collecting a series of images from the pCLE system along with the force values applied to the tissue. However, this calibration process (which requires exertion and measurement of force values) is not feasible for retinal scanning due to the tissue's fragility.
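The admittance scheme of Eqs. (4)-(5) above can be sketched as follows. This is an illustrative sketch; the function names and the twist convention (linear velocity stacked above angular velocity) are assumptions.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix of a 3-vector, so that skew(p) @ v = p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R, p):
    """Eq. (5): adjoint map taking an end-effector twist to the base frame."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

def admittance(F_ee, alpha, R, p):
    """Eq. (4): desired cooperative velocity from the measured hand wrench,
    expressed in the robot base frame."""
    xdot_ee = alpha * F_ee        # admittance: velocity proportional to force
    return adjoint(R, p) @ xdot_ee
```

With p = 0 the adjoint reduces to a block-diagonal rotation, so the commanded velocity is simply α times the rotated wrench.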
Therefore, in this paper, an image-based auto-focus methodology is proposed for optimizing image sharpness and quality during non-contact pCLE scanning, without the need for any extra sensing modality. For this purpose, the effectiveness of several blur metrics was evaluated for use in real-time control of the robot. The selected metrics are: Crété-Roffet (CR) [24], Marziliano Blurring Metric (MBM) [25], Cumulative Probability of Blur Detection (CPBD) [26], Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) [27], and Perception-based Image Quality Evaluator (PIQE) [28], which are all standard no-reference image quality assessment metrics. In addition, the image intensity was calculated, since the pCLE image appears dark when back-focused and bright when front-focused. An experiment was conducted by commanding the SHER to move from a far distance to almost touching a scanned tissue surface. The six metrics were calculated for the pCLE images during the experiment, as shown in Fig. 4. The optimal image view in this experiment was achieved around a probe-to-tissue distance of 690 µm. A consistent pattern was observed for the six metrics around the optimal view (i.e., maximized value for the MBM and CR scores, and minimized value for intensity, CPBD, BRISQUE, and PIQE). Among these six metrics, the lowest level of noise and the highest signal-to-noise ratio belong to CR, which was eventually chosen to be incorporated into our auto-focus control strategy. CR is a no-reference blur metric and can evaluate both motion and focal blurs. It has a low implementation cost and high robustness to the presence of noise in the image. All of the above attributes make CR an efficient and effective metric for real-time image-based control. Moreover, since pCLE images exhibit blur patterns that CR can capture (as shown in Fig. 1 and Fig. 5), CR is a preferred metric to evaluate the clarity, and indicate the clinical diagnostic value, of a given pCLE image.
The working principle of the CR score is based on the fact that blurring an already blurred image does not result in significant changes in the intensities of neighboring pixels, unlike blurring a sharp image. The CR score for an image I of size M × N can be obtained by first calculating the accumulated absolute backward neighborhood differences in the image along the x and y axes, denoted as d_I,x and d_I,y:

    d_I,x = Σ_{i,j=1}^{M,N} |I_{i,j} − I_{i−1,j}|
    d_I,y = Σ_{i,j=1}^{M,N} |I_{i,j} − I_{i,j−1}|    (6)

Then, the input image I is convolved with a low-pass filter to obtain a blurred image B, also of size M × N. Afterwards, the changes of the neighborhood differences due to the low-pass filtering, denoted as d_IB,x and d_IB,y, are calculated along the x and y axes with a minimum value of 0:

    d_IB,x = Σ_{i,j=1}^{M,N} max(0, |I_{i,j} − I_{i−1,j}| − |B_{i,j} − B_{i−1,j}|)
    d_IB,y = Σ_{i,j=1}^{M,N} max(0, |I_{i,j} − I_{i,j−1}| − |B_{i,j} − B_{i,j−1}|)    (7)

The accumulated changes d_IB,x and d_IB,y are then normalized using the values obtained in Equation 6, indicating the level of blur present in the input image as

    blur_x = (d_I,x − d_IB,x) / d_I,x
    blur_y = (d_I,y − d_IB,y) / d_I,y    (8)

The quality q of the image I, i.e. the amount of high-frequency content, is then calculated from the maximum blur level along the x and y axes as:

    q_I = 1 − max(blur_x, blur_y)    (9)

Fig. 5 shows an example of the CR score w.r.t. the probe-to-tissue distance acquired from the previous experiment. The CR metric has minimal variation and the lowest value when the pCLE probe is out-of-focus (illustrated in Fig. 1a); it reaches its maximum value (i.e. the image quality is maximized) when the probe-to-tissue distance is optimal and the pCLE probe is in-focus (illustrated in Fig. 1c); and the CR metric drops sharply when the probe is either back-focused (Fig. 1b) or front-focused (Fig. 1d). Accordingly, the autonomous probe-to-tissue adjustment algorithm (Algorithm 1) is designed based on a stochastic gradient-ascent approach to maximize the image quality.
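A minimal NumPy sketch of the CR computation in Eqs. (6)-(9) is given below. It is illustrative only: a simple k-tap box filter stands in for the low-pass filter, and the function name is an assumption.

```python
import numpy as np

def crete_roffet(I, k=9):
    """Sketch of the Crete-Roffet no-reference blur score, Eqs. (6)-(9):
    re-blur the image and measure how much the neighborhood differences
    shrink. Already-blurred inputs change little, so they score low."""
    I = np.asarray(I, dtype=float)
    kern = np.ones(k) / k
    # directional re-blurs: along columns (x differences) and rows (y)
    Bx = np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, I)
    By = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, I)
    # Eq. (6): accumulated absolute backward differences of the input
    d_Ix = np.abs(np.diff(I, axis=0)).sum()
    d_Iy = np.abs(np.diff(I, axis=1)).sum()
    # Eq. (7): positive part of the reduction caused by re-blurring
    d_IBx = np.maximum(0.0, np.abs(np.diff(I, axis=0))
                       - np.abs(np.diff(Bx, axis=0))).sum()
    d_IBy = np.maximum(0.0, np.abs(np.diff(I, axis=1))
                       - np.abs(np.diff(By, axis=1))).sum()
    # Eq. (8): normalized blur level along each axis
    blur_x = (d_Ix - d_IBx) / max(d_Ix, 1e-12)
    blur_y = (d_Iy - d_IBy) / max(d_Iy, 1e-12)
    # Eq. (9): quality = 1 - dominant blur
    return 1.0 - max(blur_x, blur_y)
```

A sharp image should score higher than a re-blurred copy of itself, which is the property the auto-focus loop exploits.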
An interesting observation in Fig. 5 is that the metric has an almost symmetric pattern around the optimal in-focus probe-to-tissue distance, while having an asymmetric pattern in the out-of-range region. This asymmetry is used in our proposed framework to identify probe-to-tissue distances that are too far, and to avoid the vanishing-gradient problem that may occur there. For this purpose, a pre-defined threshold T_1 is chosen, below which the pCLE probe is considered to be out-of-focus. When the probe-to-tissue distance is large and the pCLE probe is far from the retina (e.g., when the pCLE probe is first inserted into the eyeball), the CR score drops below T_1. The surgeon then obtains full control of the pCLE probe in both the lateral and axial directions, and all autonomous movement is disabled. It should be noted that this transfer of control only happens when the probe is sufficiently far from the retina and in the out-of-range region. The robot takes over the axial control as soon as the probe approaches the retina and enters the back-focus region. This ensures the safety of the patient, as there is no longer the risk of the surgeon suddenly touching or puncturing the retinal tissue. Due to the symmetry around the peak of the CR score (where the image quality is optimal and the probe is in-focus), the history of the axial movement of the probe relative to the tissue is used to determine the gradient of the CR score w.r.t. the probe-to-tissue distance, such that the direction that increases the CR score is found. If the relative motion of the probe has the same sign as the variation of the image score between the current state and the previous state, the robot is commanded by the algorithm to continue moving in the same direction, and otherwise, to reverse the direction of motion.
For example, in the back-focus region, the probe will move closer to the tissue since the gradient is positive; in the front-focus region, the probe will move further from the tissue since the gradient is negative. Therefore, the relative desired movement ∆X_des,a is calculated based on the product of a gain value g (to convert the image score to a distance value) and 1 − q_I. The term 1 − q_I is used as an adaptation factor for the step size, so that the motion of the probe is slowed down further as it gets closer to the optimal distance. The direction of the desired relative motion ∆X_des,a is determined by SIGN(∆X_probe · ∆q) · SIGN(∆X_probe), where SIGN(·) is the sign (positive or negative) of the input (·). In Algorithm 1, a threshold T_2 is chosen as a termination condition, indicating that the optimal probe-to-tissue distance and, thereby, the in-focus condition are met. When an image quality higher than T_2 is achieved, an axial motion of zero is sent to the robot. Therefore, the axial motion of the robot is zero unless the score drops below the threshold T_2. This could happen with patient motion or a sudden curvature change on the surface of the retina. In this scenario, the robot does not have a previous state of motion to rely on to identify the desired direction of motion. Therefore, it will perform an exploration step to specify the appropriate direction of motion. As a safety consideration, the exploration step is always away from the retina to ensure no contact between the pCLE probe and the retina. In this framework, a CR score of 0.47, where the image view appears in-focus visually, is chosen for T_2. A CR score of 0.10 is chosen for T_1, where the pCLE probe is out-of-range. A comparison of three images collected at CR values of 0.45, 0.47, and 0.61 is shown in Fig. 6.
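The gradient-following and exploration logic described above (with the paper's thresholds T_1 = 0.10 and T_2 = 0.47) can be sketched as one control-loop step. The gain value g, the resolution constant, and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

T1, T2 = 0.10, 0.47       # out-of-range / in-focus CR thresholds (from the text)
ROBOT_RES = 1e-6          # 1 micron positioning resolution
g = 2e-4                  # score-to-distance gain (assumed value, meters)

def axial_step(q_I, q_prev, dx_probe, dx_prev, xdot_coop_dt):
    """One gradient-ascent auto-focus iteration (in the spirit of
    Algorithm 1). Returns the desired axial displacement."""
    dq = q_I - q_prev
    if q_I < T1:
        # out of range: hand axial control back to the surgeon
        return xdot_coop_dt
    if q_I < T2:
        if abs(dx_probe) < ROBOT_RES and dq < 0:
            # no usable motion history: explore (away from the retina)
            return g * (1.0 - q_I)
        if abs(dx_probe) < ROBOT_RES and dq > 0:
            return dx_prev
        # keep direction if motion and score change agree, else reverse;
        # (1 - q_I) shrinks the step near the optimum
        return g * np.sign(dx_probe * dq) * np.sign(dx_probe) * (1.0 - q_I)
    return 0.0            # in focus: hold position
```

For instance, moving toward the tissue while the score rises yields a further step in the same direction; moving toward the tissue while the score falls yields a reversed step.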
It should be mentioned that, since the proposed approach relies only on the gradient of the CR score, it is not sensitive to the exact shape of CR w.r.t. the probe-to-tissue distance. Also, a rough estimation of the two threshold values suffices for the algorithm, and these can be specified pre-operatively. The desired axial velocity Ẋ_des,a used in Equation 1 is calculated based on the control-loop sampling time ∆t and the desired axial displacement ∆X_des,a, as Ẋ_des,a = ∆X_des,a / ∆t.

Algorithm 1: Axial Control Algorithm, Image Optimizer
Input: Image (I), Previous CR score (q_prev), Current/Previous probe position (X_probe,curr, X_probe,prev)
Output: Desired Movement (∆X_des,a)
  q_I = CR(I); ∆q = q_I − q_prev; ∆X_probe = K_a (X_probe,curr − X_probe,prev);
  if q_I < T_1 then
    return ∆X_des,a = Ẋ_des,c ∆t;
  else if T_1 < q_I < T_2 then
    if ∆X_probe < ROBOT RESOLUTION and ∆q < 0 then
      ∆X_des,a = g (1 − q_I);
    else if ∆X_probe < ROBOT RESOLUTION and ∆q > 0 then
      ∆X_des,a = ∆X_prev;
    else
      ∆X_des,a = g SIGN(∆X_probe ∆q) SIGN(∆X_probe) (1 − q_I);
    end
  else
    ∆X_des,a = 0;
  end

4) Axial Direction - Prior Model: For safety purposes, Algorithm 1 is designed such that the exploration step is always away from the retina. When scanning a curved surface, this retracted motion may move the probe into a gradient-diminishing region, as shown in Fig. 7a, before the system captures the right direction of motion again. This can elongate the process of identifying a dominant gradient for correcting the motion direction and, thereby, lead to a poor user experience. To address this, a prior model of the retina is integrated into the algorithm to provide a suitable initialization state. To obtain the retina curvature model, the perimeter of the area of interest is scanned once. During the scanning, both the CR score and the probe position are recorded. Based on the CR score threshold T_2, the image qualities at different positions are classified as either in-focus or not.
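Constructing the prior curvature model described here (classify recorded positions as in-focus via T_2, subsample them, and fit a second-order parabola) could be sketched as below; the function name and sample count parameter are assumptions.

```python
import numpy as np

def fit_prior_model(lateral, axial, q_scores, T2=0.47, n_samples=20):
    """Fit the second-order prior model M(.) of the retina curvature:
    keep only in-focus samples (CR >= T2), subsample them, and fit a
    parabola mapping lateral probe position to in-focus axial position."""
    lateral = np.asarray(lateral, dtype=float)
    axial = np.asarray(axial, dtype=float)
    mask = np.asarray(q_scores) >= T2          # in-focus classification
    l, a = lateral[mask], axial[mask]
    # spatially subsample the in-focus positions
    idx = np.linspace(0, len(l) - 1, min(n_samples, len(l))).astype(int)
    coeffs = np.polyfit(l[idx], a[idx], 2)     # second-order parabola
    return np.poly1d(coeffs)                   # M(lateral) -> axial position
```

On a synthetic parabolic "retina" the fit recovers the in-focus surface exactly, which is all the prior model needs to provide as an initialization for the gradient-ascent fine-tuning.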
By spatially sampling 20 of the in-focus positions, a second-order parabola is fitted. The resulting model is denoted as M(·), where the input (·) is the lateral probe position K_c X_probe,curr, and the output is the desired axial position of the probe, both expressed in the base frame of the SHER. Fig. 7b shows an example of the prior model. The prior model is not accurate enough at the micron scale to keep the probe focused directly, but it can augment the previously mentioned gradient-ascent approach (Algorithm 1) to fine-tune the image quality. Thus, Algorithm 2 is proposed to take both the prior model and the gradient-ascent image optimizer into account. The flag F_M indicates whether the in-focus axial position has been reached according to the prior model. When the flag F_M is set to TRUE and the image quality is still far from desirable, the gradient-ascent approach is activated to further fine-tune and improve the image quality. If the current position of the probe tip X_probe,curr is outside the registration region, based on which the model of the retina curvature has been established, the auto-focus algorithm will only use the gradient-ascent approach given in Algorithm 1. If the CR score is above the optimal image quality threshold T_2, the desired axial motion ∆X_des,a is set to zero and the probe stops moving. Otherwise, a score below T_2 indicates that the optimal image quality has not been reached and further adjustment is needed. The algorithm then checks whether the position inferred by the prior model has been achieved, i.e. whether F_M is TRUE, and whether the user has moved laterally along the tissue surface, by comparing the lateral motion K_c ∆X_probe with ROBOT RESOLUTION. If the model-inferred position has been reached while the user has not moved laterally, i.e.
if F_M is TRUE and K_c ∆X_probe < ROBOT RESOLUTION, the prior model differs from the actual retina curvature due to external factors, e.g., patient movement or registration error; in this case, Algorithm 1 is applied to fine-tune the image quality using the gradient-ascent image optimizer. In any other case, the probe moves to the axial position specified by the prior model and the flag is set accordingly, i.e. ∆X_des,a is set to the difference between the model-inferred position M(K_c X_probe,curr) and K_a X_probe,curr. If ∆X_des,a is less than ROBOT RESOLUTION, the position inferred from the prior model has been reached and the flag F_M is set to TRUE. Otherwise, if ∆X_des,a is larger than ROBOT RESOLUTION, the position inferred from the prior model has not yet been reached and F_M is set to FALSE.

Algorithm 2: Axial Control Algorithm, with Prior Model
Input: Image (I), Prior Model (M), Current/Previous probe position (X_probe,curr, X_probe,prev)
Output: Desired Movement (∆X_des,a)
  q_I = CR(I); ∆X_probe = X_probe,curr − X_probe,prev; F_M = TRUE;
  if X_probe,curr outside registration region then
    ∆X_des,a = Algorithm 1 (I);
  else
    if q_I ≥ T_2 then
      ∆X_des,a = 0;
    else
      if F_M and K_c ∆X_probe < ROBOT RESOLUTION then
        ∆X_des,a = Algorithm 1 (I);
      else
        ∆X_des,a = M(K_c X_probe,curr) − K_a X_probe,curr;
        if ∆X_des,a < ROBOT RESOLUTION then
          F_M = TRUE;
        else
          F_M = FALSE;
        end
      end
    end
  end

B. Mid-Level Optimizer and Low-Level Controller

The desired motion Ẋ_des, which is the output of the high-level controller, is then sent to the mid-level optimizer, which calculates the equivalent desired joint values q while satisfying the limits of the robot's joints. The mid-level optimizer is formulated as

    min_{q̇_des} |J q̇ − Ẋ_des|
    s.t. q̇_L ≤ q̇ ≤ q̇_U and q_L ≤ q ≤ q_U    (10)

where J is the Jacobian of the SHER, and q_L, q_U, q̇_L, and q̇_U denote the lower and upper limits of the joint values and velocities, respectively.
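The box-constrained least-squares step of Eq. (10) can be illustrated with a simple projected-gradient iteration. This is a stand-in sketch under assumed names, not the optimizer actually used on the SHER; position limits are folded into velocity bounds via the sampling time.

```python
import numpy as np

def midlevel_optimizer(J, xdot_des, q, q_lim, qdot_lim, dt, iters=500):
    """Sketch of Eq. (10): minimize |J qdot - xdot_des| subject to joint
    velocity limits and, through dt, joint position limits, using
    projected gradient descent on the convex quadratic objective."""
    # tightest velocity box that also respects the position limits
    lo = np.maximum(qdot_lim[0], (q_lim[0] - q) / dt)
    hi = np.minimum(qdot_lim[1], (q_lim[1] - q) / dt)
    qdot = np.clip(np.zeros_like(q), lo, hi)
    step = 1.0 / (np.linalg.norm(J, 2) ** 2 + 1e-12)   # 1/L step size
    for _ in range(iters):
        grad = J.T @ (J @ qdot - xdot_des)
        qdot = np.clip(qdot - step * grad, lo, hi)     # project onto the box
    return qdot
```

When the commanded Cartesian velocity is feasible, the solution matches the unconstrained least-squares answer; when it is not, the result saturates at the joint limits.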
The desired joint values are then sent to the low-level PID controller of the SHER, as a result of which the desired objectives generated by the high-level controller are satisfied.

III. HYBRID TELEOPERATED FRAMEWORK

In the previous section, the proposed hybrid strategy was explained using the hybrid cooperative framework. In this section, the hybrid teleoperated framework is discussed. In addition to filtering the tremors of the surgeon's hand, the hybrid teleoperated framework enables large scaling-down of the motions to make precise and minute manipulations inside the confined areas of the eyeball. The proposed hybrid teleoperated framework has the same patient-side platform, mid-level optimizer, and low-level controller architecture at the SHER side as the hybrid cooperative platform. However, the lateral motion of the pCLE probe is controlled remotely by the surgeon. For this purpose, the dVRK system has been used to enable remote control of the SHER. The dVRK system is an open-source telerobotic research platform developed at Johns Hopkins University. The system consists of the first-generation da Vinci surgical system with a master console, including the surgeon interface with stereo display and MTM to control the pCLE probe. The proposed hybrid teleoperated framework differs from the previous hybrid cooperative framework in three aspects: 1) a hybrid control law using the teleoperated commands (Section III-A1), 2) a teleoperated scheme to control the lateral direction (Section III-A2), and 3) a haptic controller on the MTM side (Section III-B). Fig. 8 shows the block diagram of the proposed hybrid teleoperated framework.

A.
High-Level Controller

1) Hybrid Control Law: Like the hybrid cooperative framework, the hybrid motion controller of the teleoperated platform has lateral and axial components at the SHER side, with the following formulation:

    Ẋ_des = K_t Ẋ_des,t + K_a Ẋ_des,a    (11)

where subscripts t and a refer to the teleoperated (lateral) and autonomous (axial) components of the motion, respectively; K_t is the motion specification matrix along the lateral direction, the same as in the hybrid cooperative framework, i.e., K_t = K_c. Also, Ẋ_des,a is the desired axial motion component of the SHER, which is derived using the autonomous control strategy discussed in Section II-A3 and Section II-A4 to maximize the image quality.

2) Lateral Direction - Teleoperated Scheme: The desired lateral component of the motion, Ẋ_des,t, is received from a remote master console as:

    Ẋ_des,t = [ ṗ_des,t ; Ṙ_des,t ]    (12)

such that

    ṗ_des,t = ε / ∆t,    Ṙ_des,t = θ / ∆t    (13)

where all the above variables are expressed in the base frame of the SHER; ṗ_des,t and Ṙ_des,t refer to the translational and rotational components of the desired user-commanded velocities; ε and θ denote the current translational and rotational components of the user-commanded increments, which are calculated as the tracking error of the SHER.

Fig. 8. General schematic of the proposed hybrid teleoperated control strategy. Hybrid Control Law is described in Equation 11. Lateral Direction control is described in Section III-A2. Axial Direction control is described in Section II-A3 and Section II-A4. Mid-Level Optimizer and Low-Level Controller are described in Section II-B. Haptic Controller is described in Section III-B.

The tracking error, i.e.
the difference between the commanded position (the Cartesian position of the MTM transformed into the base frame of the SHER) and the actual position of the pCLE probe, can be written as

    ε = β · R⁻¹_MTM−SHER · ∆p_MTM − ∆p_SHER
    θ = Rodriguez( R⁻¹_SHER · R⁻¹_MTM−SHER · R_MTM )    (14)

where β is the teleoperation motion scaling factor; R_MTM−SHER is the orientation mapping between the base frames of the MTM and the SHER; R_MTM and R_SHER, respectively, denote the rotation of the MTM in its own base frame and the rotation of the pCLE probe in the base frame of the SHER. The translational increments are measured relative to reference positions, e.g.

    ∆p_SHER = p_SHER − p_SHER,0    (15)

This relative position and absolute orientation control setup allows the surgeon to control the pCLE probe intuitively. The output of the high-level hybrid controller given in Equation 11 is then sent to the mid-level optimizer and low-level PID controller at the SHER side (discussed in Section II-B) to fulfill the desired motion.

B. Haptic Controller at the MTM Side

To provide the surgeon with sensory situational awareness from the SHER side, a haptic controller is designed at the MTM side. The first component of an effective haptic feedback strategy is gravity compensation, to cancel out the weight of the MTM. To enable a zero-gravity, low-friction controller, a multi-step least-squares estimation/compensation approach is used, which also addresses elastic force and friction modeling in the dynamics model. Details of the gravity compensation algorithm can be found in the open-source code. In addition, the haptic controller implemented at the MTM side includes a compliance wrench, defined in the MTM base frame, to enable haptic feedback.
The compliance wrench has two parts:
• an elastic part, which is proportional to the tracking error (Equation 14) of the SHER, and
• a damping part, which is proportional to the Cartesian linear velocities of the MTM.

The compliance wrench W can be formulated as

    W = [ F_p ; F_R ],    F_p = k_p ε + b V,    F_R = k_R θ    (16)

where k_p and k_R are the elasticity coefficients of the translation error ε and the rotation error θ, respectively; b is the damping coefficient of V, the Cartesian linear velocity of the MTM (defined in the base frame of the MTM). The compliance wrench is then converted to the corresponding joint torque τ_W using the inverse Jacobian J⁻¹ as

    τ_W = J⁻¹ W    (17)

The final desired torque is the summation of the compliance wrench torque τ_W and the gravity compensation torque τ_gc:

    τ_des = τ_W + τ_gc    (18)

The desired torque is then sent to the low-level PID controller of the MTM to achieve the desired haptic feedback.

Fig. 2 shows the experiment setup. Elbow support was provided to the user to interact with the SHER. The sampling frequency of the pCLE is set to 60 Hz. The mid-level optimizer frequencies of the SHER and the dVRK are both 200 Hz. The cooperative gain α is set to 10 µm/s per 1 N. The motion scaling factor of the teleoperated framework, β, is set to 15 µm/s per 1 mm/s. The elasticity and damping coefficients of the haptic feedback are set to 50 N/m and 5 N·s/m, respectively. The software framework is built upon the CSA library developed at Johns Hopkins University [29]. The eyeball phantom used was built in-house with a spherical shape of 30 mm outer diameter. A thin layer of tissue paper with an uneven surface is attached to the inside of the eyeball phantom, representing the detached retina. A 10 mm opening is made at the top to simulate the open-sky process. The phantom is 3D printed with ABS material. Before the experiment, the tissue was stained with 0.01% acriflavine. We chose acriflavine because of its ready availability.
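The haptic pipeline of Eqs. (16)-(18) above can be sketched as below. The rotational stiffness value k_R is an assumption (the text only reports the 50 N/m elasticity and the 5 N·s/m damping), and the torque mapping τ_W = J⁻¹W follows the text as written.

```python
import numpy as np

def compliance_torque(eps, theta, V, J, tau_gc,
                      k_p=50.0, k_R=1.0, b=5.0):
    """Eqs. (16)-(18): elastic + damping compliance wrench from the SHER
    tracking error, mapped to MTM joint torques and summed with the
    gravity-compensation torque. k_R is an assumed value."""
    F_p = k_p * eps + b * V          # translational part of Eq. (16)
    F_R = k_R * theta                # rotational part of Eq. (16)
    W = np.concatenate([F_p, F_R])   # 6-vector compliance wrench
    tau_W = np.linalg.inv(J) @ W     # Eq. (17), inverse Jacobian per the text
    return tau_W + tau_gc            # Eq. (18)
```

With zero tracking error and zero velocity the surgeon feels only the gravity-compensated (weightless) arm; a growing tracking error produces a proportional restoring torque.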
For clinical applications, fluorescein, which is bio-compatible, should be used.

IV. EXPERIMENT RESULTS

To validate the effectiveness of the proposed frameworks, three experiments were conducted, as discussed below. In this set of experiments, a user highly familiar with the SHER and the dVRK was chosen to eliminate any extra variability over the experiment outcome due to the insufficient skill level of the user.

A. Experiment #1

The purpose of this experiment was to demonstrate the impracticality of manual pCLE scanning. As discussed previously, manual control of a pCLE probe within the micron-order focus range is extremely challenging, if not impossible, due to several factors including hand tremor and patient motion. In this experiment, the user was instructed to try his/her best to control the probe for a clear scanning stream of images. The user, however, was unsuccessful in performing the task. Fig. 9a shows a sequence of images acquired during this task. As can be seen, the quality of the images is far from desirable. By way of comparison, a sequence of images acquired using the proposed hybrid teleoperated framework is also presented in Fig. 9b, which shows a considerable improvement of the image quality, and an example of the generated mosaicking image is shown in Fig. 10. The complete result of this experiment can be found in the video supplemental material of the paper.

Fig. 9. A sequence of images acquired from (a) manual scanning, (b) the proposed hybrid teleoperated framework, sub-sampled at 1.5 Hz. As can be seen, the manual scanning image quality is far from desirable.

B. Experiment #2

The second experiment was designed to validate the improvement due to the addition of the prior model to the image optimizer as previously proposed in [21].
For this purpose, three experiments were conducted with: 1) only the gradient-ascent image optimizer (no prior model), 2) only the prior model (no image optimizer), and 3) the combined axial controller (image optimizer and prior model) as proposed in this work. In these experiments, the task was defined as following a triangular path with a side length of approximately 3 cm using the teleoperated setup. Two metrics were used to evaluate and compare the efficacy of the three configurations:

1) Mean CR score: the average of the CR scores throughout the scanning task, whose formulation was given in Section II-A3.
2) Duration of In-focus View: the percentage of the instances throughout the scanning task during which the pCLE probe is in-focus (indicated by a CR score higher than the threshold T_2).

Fig. 11a and Fig. 11b present a comparison of the two metrics for the three experiment scenarios. The combined axial controller outperformed both the image-optimizer-only and the prior-model-only approaches. The mean CR score of the combined controller is 0.47, higher than that of the prior-model-only approach (0.43) and the image-optimizer-only approach (0.35). The in-focus duration of the combined controller was 53%, also higher than that of the prior-model-only approach (40%) and the image-optimizer-only approach (41%). A higher mean CR score and a longer in-focus duration make the combined axial controller more effective than using either of the individual components alone.

C. Experiment #3

The third experiment was designed to compare the performance with and without autonomous axial control using the cooperative and teleoperated frameworks. As in Experiment #2, the task was defined as following a triangular path with a side length of approximately 3 cm. The user was instructed to try his/her best to maintain the optimal image quality during the task.
In this set of experiments, the orientation of the pCLE probe was locked for more accurate trajectory comparison and easier maneuvering for the user. Fig. 12a and Fig. 12b show a comparison of the 3-D paths between the frameworks (the proposed hybrid cooperative vs. the traditional cooperative, and the proposed hybrid teleoperated vs. the conventional teleoperated). As shown in Fig. 12a, the proposed hybrid cooperative framework resulted in a significantly smoother path with lower elevational variability along the axial direction (Z-axis), as compared to the traditional cooperative approach. Smaller deviations along the axial direction of the probe shaft are an indication of better (less jerky) manipulation and better control of the probe around the optimal probe-to-tissue distance. Interestingly, as can be seen in Fig. 12b, the teleoperated and hybrid teleoperated frameworks resulted in visually comparable elevational motion along the axial direction; both trajectories appear smooth. To further assess the motion made during these two cases, motion smoothness (MS) [30] was calculated as a quantitative measure of smoothness. MS is calculated as the time-averaged squared jerk, where jerk is defined as the third-order derivative of position:

    MS = (1/N) Σ_{i=0}^{N} ( J²_x,i + J²_y,i + J²_z,i )    (19)

where J_u,i = ( u_{i+3/2} − 3 u_{i+1/2} + 3 u_{i−1/2} − u_{i−3/2} ) / ∆t³ is the third-order central finite difference of the discrete trajectory signal along the u-axis. A smoother trajectory results in a smaller jerk, and thus a smaller MS value. Based on the experiment, the MS scores are 6.477E−2 for the cooperative framework, 3.481E−2 for the hybrid cooperative framework, 4.201E−2 for the teleoperated framework, and 4.202E−2 for the hybrid teleoperated framework. The hybrid cooperative framework outperformed the traditional cooperative framework by 46.3%, while the two teleoperated frameworks yield almost the same motion smoothness scores.
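Eq. (19) can be reproduced directly from a recorded trajectory; a minimal NumPy sketch (function name assumed):

```python
import numpy as np

def motion_smoothness(traj, dt):
    """Eq. (19): mean squared jerk of a discrete 3-D trajectory, with
    jerk taken as the third-order central finite difference
    (u[i+3/2] - 3u[i+1/2] + 3u[i-1/2] - u[i-3/2]) / dt^3."""
    traj = np.asarray(traj, dtype=float)        # shape (N, 3)
    jerk = (traj[3:] - 3 * traj[2:-1] + 3 * traj[1:-2] - traj[:-3]) / dt**3
    return np.mean(np.sum(jerk**2, axis=1))     # sum axes, average over time
```

Constant-velocity (and constant-acceleration) trajectories have zero third difference and hence an MS of zero, matching the intuition that a smoother path yields a smaller score.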
The quantitative evaluation of MS yields the same result as the visual inspection. The total duration of the scanning task was 61.92 s for the cooperative framework, 35.95 s for the hybrid cooperative framework, 60.66 s for the teleoperated framework, and 42.43 s for the hybrid teleoperated framework.

V. USER STUDY

A. Study Protocol

To further evaluate and compare the proposed frameworks, a set of user studies was conducted. The study protocol was approved by the Homewood Institutional Review Board, Johns Hopkins University. In this study, 14 participants without clinical experience were recruited at the Laboratory for Computational Sensing and Robotics (LCSR), with all participants being right-handed. The participants were asked to perform a path-following task in four cases:
• using the traditional cooperative framework, i.e., without the auto-focus algorithm
• using the proposed hybrid shared cooperative framework, i.e., with the auto-focus algorithm
• using the traditional teleoperated framework, i.e., without the auto-focus algorithm
• using the proposed hybrid shared teleoperated framework, i.e., with the auto-focus algorithm

The task was defined as performing pCLE scanning of a triangular region with a side length of approximately 3 mm within an eyeball phantom using the four setups. The participants were instructed to perform the scanning task while trying their best to maintain the best image quality. Fig. 2d shows the view provided to the users during the experiments. Before each trial, the participants were given around 10 minutes to familiarize themselves with the system before proceeding with the main experiments. The order of the experiments was randomized to eliminate the learning-curve effect. After each trial, the participants were asked to fill out a post-study questionnaire. The questionnaire included a form of the NASA Task Load Index (NASA TLX) survey for evaluation of operator workload.
Data recording started when the participants pressed the activation pedal of the robot. This ensured consistent and trackable start timing between participants. In our previous study [21], we observed that manipulating the robot at micron-level precision within the confined space of the eye is very challenging for novices. Therefore, the orientation of the pCLE probe was locked to reduce the complexity of the scanning task.

B. Metrics Extraction and Evaluation

To evaluate task performance during the four experiments, 6 quantitative metrics were used: CR score, duration of in-focus view, and MS (discussed above), as well as task completion time, Cumulative Probability of Blur Detection (CPBD), and Marziliano Blurring Metric (MBM). A NASA TLX questionnaire including 6 qualitative metrics answered by the participants was also studied. While CR, MBM, and CPBD are all image-quality metrics, they use different approaches to assess the sharpness of an image. These metrics were chosen to validate the consistent improvement of the image quality regardless of the metric type, demonstrating the generalizability and consistency of the outcome. Fig. 13 shows the results of the NASA TLX questionnaire for the 14 users. The questionnaire includes six criteria: mental demand, physical demand, temporal demand, performance level perceived by the users themselves, level of effort, and frustration level, each with a maximum value of 7. Single-factor ANOVA analysis was used to statistically evaluate the results, and statistical significance was observed in the mental demand (p-value=3.0E−2), physical demand (p-value=2.0E−3), effort (p-value=6.3E−5), and frustration level (p-value=1.0E−2), while no statistical significance was observed in the temporal demand (p-value=0.182) and performance (p-value=0.062). A Tukey's Honest Significance test was followed for the four categories with statistical significance (w.s.s).
The mental demand has decreased from 5.43 for the cooperative framework to 2.78 for the hybrid teleoperated framework. The physical demand has decreased from 5.00 for the cooperative framework and 4.00 for the teleoperated framework, to 2.57 for the hybrid teleoperated framework. Similarly, the effort level has decreased from 5.43 for the cooperative framework and 4.57 for the teleoperated framework, to 2.78 for the hybrid teleoperated framework. A lower frustration level was observed in the hybrid teleoperated framework 2.29 compared to the cooperative framework 4.43. Out of the 14 users, 11 indicated that the hybrid teleoperated framework is the most preferred modality. The quantitative results are shown as boxplots in Fig. 14 with six metrics compared: in-focus duration, task completion time, CR, CPBD, MBM and MS. Applying the Single-factor ANOVA analysis, statistically significant differences were observed between various modes of operation for all the metrics, with p-values equal to 1.0E−8, 4.7E−5, 1.1E−9, 4.4E−7, 2.7E−7, and 9.2E−8 respectively. Post hoc analysis using the Tukey's test showed that in terms of in-focus duration, both the hybrid teleoperated (79%) and teleoperated (73%) frameworks outperformed the hybrid cooperative (56%) and cooperative (36%) frameworks w.s.s. The hybrid cooperative framework also outperformed the cooperative framework w.s.s. The mean CR scores of the hybrid teleoperated (CR=0.51), teleoperated (CR=0.50) and hybrid cooperative (CR=0.46) frameworks were better than that of the cooperative framework (CR=0.38) w.s.s. The mean MBM score of the teleoperated framework (MBM=0.2920) was better than the hybrid cooperative (MBM=0.2792) and cooperative (MBM=0.2657) frameworks w.s.s. The hybrid teleoperated (MBM=0.2886) and hybrid cooperative frameworks also outperformed the cooperative framework w.s.s. 
Task completion time, CPBD, and MS (metrics for which a lower value indicates higher performance) were also analyzed with Tukey's test, as discussed below. The task completion time for the hybrid teleoperated framework (105 s) was longer than those of the teleoperated (76 s) and cooperative (58 s) frameworks w.s.s. The hybrid cooperative framework (95 s) also took longer than the cooperative framework (58 s) w.s.s. However, it should be noted that the task completion times for the hybrid teleoperated and hybrid cooperative frameworks include the preoperative eyeball registration time as well. Excluding the pre-operative registration time, the task completion times for the hybrid teleoperated and hybrid cooperative frameworks become 51 s and 44 s, respectively. The CPBD score for the hybrid teleoperated framework (CPBD=0.6342) was lower than that of the cooperative framework (CPBD=0.7233) w.s.s. The CPBD score for the teleoperated framework (CPBD=0.6270) was lower than those of both the hybrid cooperative (CPBD=0.6773) and cooperative frameworks w.s.s. The hybrid cooperative framework also had a lower CPBD compared to the cooperative framework w.s.s. The MS scores for the hybrid teleoperated (MS=1.407E−2), teleoperated (MS=2.608E−2), and hybrid cooperative (MS=2.514E−2) frameworks were smaller than that of the cooperative framework (MS=8.944E−2) w.s.s.

In general, the hybrid cooperative framework was shown to be more advantageous than the traditional cooperative framework both qualitatively and quantitatively. The user workload decreased when using the hybrid cooperative framework, and the image quality was considerably improved. The comparison between the hybrid teleoperated framework and the traditional teleoperated framework showed that the hybrid teleoperated framework reduced the user's workload while providing an equally high-quality image.
Comparing the hybrid teleoperated framework proposed in this work with the hybrid cooperative framework previously proposed in [21], the hybrid teleoperated framework demonstrated better performance, with a higher percentage of the duration in which the pCLE probe is in focus. A reason for the improved performance can be the ability to apply a large motion scaling, which is unavailable in a cooperative setting. Overall, the hybrid teleoperated system demonstrated clear advantages: quantitatively, consistent high-quality images; qualitatively, 78.6% of the users indicating it as the most favorable framework. However, several limitations may be addressed in future studies. First, none of the users were clinicians, although they had a wide range of skills in controlling robotic frameworks. A part of our future work will focus on comparing and evaluating the four frameworks with a larger set of users, including surgeons. Secondly, in this study an artificial eyeball phantom was used, and the technical aspects of the work were demonstrated and validated. In the future, cadaveric eyes will be used along with image mosaicking and clinical studies to evaluate the systems in clinical settings. Lastly, the stain material should be switched to fluorescein for bio-compatibility.

VI. SUMMARY

A novel hybrid strategy was proposed for real-time endomicroscopy imaging for retinal surgery. The proposed strategy was deployed on two control frameworks: cooperative and teleoperated. The setup consists of the dVRK, the SHER, and a distal-focus pCLE system. The hybrid frameworks allow surgeons to scan the area of interest in the retina without worrying about the loss of image quality or hand tremor. The effectiveness of the hybrid frameworks was demonstrated via a series of experiments in comparison with the traditional teleoperated and cooperative frameworks.
A user study of 14 users showed that both hybrid frameworks lead to a statistically significant lower workload and to improved image quality, both qualitatively and quantitatively.

To our knowledge, there is only one prior work for intraocular pCLE scanning of the retina (M. Paques, personal communication, March 22, 2020), using a contact-based CellVizio ProFlex probe (Mauna Kea Technologies). The work by M. Paques et al. only took one image, without attempting to scan a large area. The image was acquired manually, without considering the safety of the retina.

Fig. 1. Sample views of a non-contact pCLE system: (a) out-of-range, (b) back-focus, (c) in-focus, and (d) front-focus views, measured at a tissue distance of (a) 2.34 mm, (b) 1.16 mm, (c) 0.69 mm, and (d) fully in contact with the tissue, respectively.

Fig. 3. General schematic of the proposed hybrid cooperative control strategy. The Hybrid Control Law is described in Equation 1.

Fig. 4. Evaluation of 6 image metrics with respect to (w.r.t) the probe-to-tissue distance (linear scale). The optimal view at a probe-to-tissue distance of 690 µm is indicated by the vertical dashed line.

Fig. 5. CR score w.r.t the probe-to-tissue distance, with the two horizontal lines indicating two thresholds T1 and T2, and the vertical line indicating the optimal probe-to-tissue distance of approximately 690 µm. The four focus regions (out-of-focus, back-focus, in-focus, and front-focus) are labeled respectively.

Fig. 6. Sample probe views of different CR scores: (a) CR=0.45 at 0.51 mm, (b) CR=0.47 at 0.62 mm, and (c) CR=0.61 at 0.69 mm.

Fig. 7. (a) Illustration of the failure case for Algorithm 1, assuming the pCLE probe is currently in-focus. As the probe moves in the direction of the arrow, the image score will decrease due to the larger probe-to-tissue distance, without probe movement in the axial direction. The safety feature implemented will move the probe away from the tissue. (b)(c)(d) Example of the fitted model, where (c) is the projected view of the XZ plane and (d) is the projected view of the YZ plane. Green surface: fitted local polynomial model of the retina. Red points: scanning path during the registration process. Blue points: sampled points where images are in-focus.

∆p_MTM refers to the translation offset between the current position of the MTM (p_MTM) and the initial position of the MTM (p_MTM,0) in the base frame of the MTM, i.e., ∆p_MTM = p_MTM − p_MTM,0. ∆p_SHER denotes the translation offset between the current position (p_SHER) and the initial position (p_SHER,0) of the pCLE probe in the base frame of the SHER.

Fig. 10. An example of generated mosaicking images.

Fig. 11. Experiment results of the image optimizer only, prior model only, and combined axial controller. Comparison of (a) CR scores, (b) duration of in-focus view.

Fig. 12. Comparison of scanning path between (a) cooperative and hybrid cooperative frameworks, (b) teleoperated and hybrid teleoperated frameworks.

Fig. 13. NASA TLX questionnaire result.
Fig. 14. Results of the user study: (a) Duration of in-focus view, (b) Task completion time, (c) CR score, (d) CPBD score, (e) MBM score, and (f) MS score.

REFERENCES

[1] D. Mitry, D. G. Charteris, B. W. Fleck, H. Campbell, and J. Singh, "The epidemiology of rhegmatogenous retinal detachment: geographical variation and clinical associations," British Journal of Ophthalmology, vol. 94, no. 6, pp. 678-684, 2010.
[2] M. T. Pardue and R. S. Allen, "Neuroprotective strategies for retinal disease," Progress in Retinal and Eye Research, vol. 65, pp. 50-76, 2018.
[3] A. Meining, M. Bajbouj, S. Delius, and C. Prinz, "Confocal laser scanning microscopy for in vivo histopathology of the gastrointestinal tract," Arab J. Gastroenterol, vol. 8, no. 1, pp. 1-4, 2007.
[4] H. Wang, S. Wang, J. Li, and S. Zuo, "Robotic scanning device for intraoperative thyroid gland endomicroscopy," Annals of Biomedical Engineering, vol. 46, no. 4, pp. 543-554, 2018.
[5] S. Zuo, M. Hughes, C. Seneci, T. P. Chang, and G.-Z. Yang, "Toward intraoperative breast endomicroscopy with a novel surface-scanning device," IEEE Transactions on Biomedical Engineering, vol. 62, no. 12, pp. 2941-2952, 2015.
[6] Z. Ping, H. Wang, X. Chen, S. Wang, and S. Zuo, "Modular robotic scanning device for real-time gastric endomicroscopy," Annals of Biomedical Engineering, vol. 47, no. 2, pp. 563-575, 2019.
[7] P. E. Z. Tan et al., "Quantitative confocal imaging of the retinal microvasculature in the human retina," Investigative Ophthalmology & Visual Science, vol. 53, no. 9, pp. 5728-5736, 2012.
[8] D. Ramos et al., "The use of confocal laser microscopy to analyze mouse retinal blood vessels," Confocal Laser Microscopy-Principles and Applications in Medicine, Biology, and the Food Sciences, 2013.
[9] P. K. Gupta, P. S. Jensen, and E. de Juan, "Surgical forces and tactile perception during retinal microsurgery," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 1999, pp. 1218-1225.
[10] N. Pritchard, K. Edwards, and N. Efron, "Non-contact laser-scanning confocal microscopy of the human cornea in vivo," Contact Lens and Anterior Eye, vol. 37, no. 1, pp. 44-48, 2014.
[11] M. Pascolini et al., "Non-contact, laser-based confocal microscope for corneal imaging," Investigative Ophthalmology & Visual Science, vol. 60, no. 11, pp. 014-014, 2019.
[12] P. Giataganas, M. Hughes, and G.-Z. Yang, "Force adaptive robotically assisted endomicroscopy for intraoperative tumour identification," International Journal of Computer Assisted Radiology and Surgery, vol. 10, no. 6, pp. 825-832, 2015.
[13] R. J. Varghese, P. Berthet-Rayne, P. Giataganas, V. Vitiello, and G.-Z. Yang, "A framework for sensorless and autonomous probe-tissue contact management in robotic endomicroscopic scanning," in Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017, pp. 1738-1745.
[14] S. Singhy and C. Riviere, "Physiological tremor amplitude during retinal microsurgery," in Proceedings of the IEEE 28th Annual Northeast Bioengineering Conference (IEEE Cat. No. 02CH37342). IEEE, 2002, pp. 171-172.
[15] T. Zhang, L. Gong, S. Wang, and S. Zuo, "Hand-held instrument with integrated parallel mechanism for active tremor compensation during microsurgery," Annals of Biomedical Engineering, vol. 48, no. 1, pp. 413-425, 2020.
[16] W. T. Latt, R. C. Newton, M. Visentini-Scarzanella, C. J. Payne, D. P. Noonan, J. Shang, and G.-Z. Yang, "A hand-held instrument to maintain steady tissue contact during probe-based confocal laser endomicroscopy," IEEE Transactions on Biomedical Engineering, vol. 58, no. 9, pp. 2694-2703, 2011.
[17] M. S. Erden, B. Rosa, N. Boularot, B. Gayet, G. Morel, and J. Szewczyk, "Conic-spiraleur: A miniature distal scanner for confocal microlaparoscope," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 6, pp. 1786-1798, 2013.
[18] L. Zhang, M. Ye, P. Giataganas, M. Hughes, and G.-Z. Yang, "Autonomous scanning for endomicroscopic mosaicing and 3d fusion," in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 3587-3593.
[19] A. Gijbels, E. B. Vander Poorten, B. Gorissen, A. Devreker, P. Stalmans, and D. Reynaerts, "Experimental validation of a robotic comanipulation and telemanipulation system for retinal surgery," in 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics. IEEE, 2014, pp. 144-150.
[20] M. Hughes and G.-Z. Yang, "Line-scanning fiber bundle endomicroscopy with a virtual detector slit," Biomedical Optics Express, vol. 7, no. 6, pp. 2257-2268, 2016.
[21] Z. Li et al., "A novel semi-autonomous control framework for retina confocal endomicroscopy scanning," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 7083-7090.
[22] O. Khatib, "A unified approach for motion and force control of robot manipulators: The operational space formulation," IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43-53, 1987.
[23] R. Taylor, P. Jensen, L. Whitcomb, A. Barnes, R. Kumar, D. Stoianovici, P. Gupta, Z. Wang, E. Dejuan, and L. Kavoussi, "A steady-hand robotic system for microsurgical augmentation," The International Journal of Robotics Research, vol. 18, no. 12, pp. 1201-1210, 1999.
[24] F. Crete, T. Dolmiere, P. Ladret, and M. Nicolas, "The blur effect: perception and estimation with a new no-reference perceptual blur metric," in Human Vision and Electronic Imaging XII, vol. 6492. International Society for Optics and Photonics, 2007, p. 64920I.
[25] P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, "A no-reference perceptual blur metric," in Proceedings. International Conference on Image Processing, vol. 3. IEEE, 2002, pp. III-III.
[26] N. D. Narvekar and L. J. Karam, "A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection," in 2009 International Workshop on Quality of Multimedia Experience. IEEE, 2009, pp. 87-91.
[27] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
[28] N. Venkatanath, D. Praneeth, M. C. Bh, S. S. Channappayya, and S. S. Medasani, "Blind image quality evaluation using perception based features," in 2015 Twenty First National Conference on Communications (NCC). IEEE, 2015, pp. 1-6.
[29] P. Chalasani, A. Deguet, P. Kazanzides, and R. H. Taylor, "A computational framework for complementary situational awareness (CSA) in surgical assistant robots," in Proceedings - 2nd IEEE International Conference on Robotic Computing, IRC 2018, pp. 9-16, 2018.
[30] M. Shahbazi, S. F. Atashzar, C. Ward, H. A. Talebi, and R. V. Patel, "Multimodal sensorimotor integration for expert-in-the-loop telerobotic surgical training," IEEE Trans. on Robotics, no. 99, pp. 1-16, 2018.
title: Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing
authors: Ying Cui, Student Member, IEEE ([email protected]); Vincent K. N. Lau, Senior Member, IEEE
authoraffiliation: Department of ECE, The Hong Kong University of Science and Technology
abstract: In this paper, we study the convergence behavior of distributed iterative algorithms with quantized message passing. We first introduce general iterative function evaluation algorithms for solving fixed point problems distributively. We then analyze the convergence of the distributed algorithms, e.g. Jacobi scheme and Gauss-Seidel scheme, under the quantized message passing. Based on the closed-form convergence performance derived, we propose two quantizer designs, namely the time invariant convergence-optimal quantizer (TICOQ) and the time varying convergence-optimal quantizer (TVCOQ), to minimize the effect of the quantization error on the convergence. We also study the tradeoff between the convergence error and message passing overhead for both TICOQ and TVCOQ. As an example, we apply the TICOQ and TVCOQ designs to the iterative waterfilling algorithm of MIMO interference game.
doi: 10.1109/tsp.2010.2055861
pdfurls: https://arxiv.org/pdf/1006.3919v1.pdf
corpusid: 2214412
arxivid: 1006.3919
pdfsha: bcb3fee6c15f955f5aca1d3da67760732922eac6
Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing

Ying Cui, Student Member, IEEE, and Vincent K. N. Lau, Senior Member, IEEE
Department of ECE, The Hong Kong University of Science and Technology
20 Jun 2010

I. INTRODUCTION

Distributed algorithm design and analysis is a very important topic with important applications in many areas such as deterministic network utility maximization (NUM) for wireless networks and non-cooperative games. For example, in [1], [2], the authors derived various distributed algorithms for a generic deterministic NUM problem using decomposition techniques, which can be classified into primal decomposition and dual decomposition methods.
In [3], the authors investigated a distributed power control algorithm for an interference channel using a non-cooperative game and derived an iterative water-filling algorithm to approach the Nash equilibrium (NE). The interference game problem was extended to an iterative water-filling algorithm for a wideband interference game with time/frequency offset in [4] and to an iterative precoder optimization algorithm for a MIMO interference game in [5], [6]. The authors established a unified convergence proof of the iterative water-filling algorithms for the SISO frequency-selective interference game and the MIMO interference game using a contraction mapping approach. Using this framework, the iterative best response update (such as the iterative power water-filling as well as the iterative precoder design) can be regarded as an iterative function evaluation w.r.t. a certain contraction mapping, and the convergence property can be easily established using fixed point theory [7], [8]. In all these examples, the iterative function evaluation algorithms involved explicit message passing between nodes in the wireless networks during the iteration process. Furthermore, these existing results have assumed perfect message passing during the iterations.

In practice, explicit message passing during the iterations in the distributed algorithms requires explicit signaling in wireless networks. As such, the message passing cannot be perfect, and in many cases the messages to pass have to be quantized. As a result, it is very important and interesting to study the impact of quantized message passing on the convergence of the distributed algorithms. Existing studies on distributed algorithms under quantized message passing can be classified into two categories, namely the distributed quantized average consensus algorithms [9]-[14] and the distributed quantized incremental subgradient algorithms [15]-[18].
For the distributed quantized average consensus algorithms, existing works considered the algorithm convergence performance under quantized message passing for a uniform quantizer [9], [10], [12]-[14] and a logarithmic quantizer [11] with fixed quantization rate. In [12], [14], the authors also considered quantization interval optimization (for average consensus algorithms) based on the uniform fixed-rate quantization structure. Similarly, for the second category of quantized incremental subgradient algorithms, the authors in [15]-[18] considered the convergence performance of fixed-rate uniform quantization.

In this paper, we are interested in the convergence behavior of distributed iterative algorithms for solving general fixed point problems under quantized message passing. The above works on quantized message passing cannot be applied to our case for the following reasons. First of all, the algorithm dynamics of the existing works (linear dynamics for average consensus algorithms and step-size based algorithms for incremental subgradient algorithms) are very different from the contraction-based iterative algorithms we are interested in (for solving fixed point problems). Secondly, the above works have imposed simplifying constraints of uniform and fixed-rate quantizer design, and it is not known whether a more general quantizer design or an adaptive quantization rate could further improve the convergence performance of the iterative algorithms. There are a few technical challenges regarding the study of convergence behavior in distributed contraction-based iterative function evaluations.

• Convergence Analysis and Performance Tradeoff under Quantized Message Passing: In the literature, convergence of distributed iterative function evaluation algorithms under quantized message passing has not been considered. The general model under quantized message passing and how the quantization error affects the convergence are not fully studied.
Furthermore, it will also be interesting to study the tradeoff between the convergence error and the message passing overhead.

• Quantizer Design based on the Convergence Performance: Given the convergence analysis results, how to optimize the quantizer to minimize the effect of the quantization error on the convergence is a difficult problem. In general, quantizers are designed w.r.t. a certain distortion measure such as the mean square error [19], [20]. However, it is not clear which distortion measure we should use to design the quantizer in order to optimize the convergence performance of the iterative algorithms we considered. Furthermore, the convergence performance highly depends on the quantizer structure as well as the quantization rate, and hence a low-complexity solution to the nonlinear integer quantizer optimization problem is of great importance.

In this paper, we shall attempt to shed some light on these questions. We shall first introduce a general iterative function evaluation algorithm with distributed message passing for solving fixed point problems. We shall then analyze the convergence of the distributed algorithms, e.g. Jacobi scheme and Gauss-Seidel scheme, under quantized message passing. Based on the analysis, we shall propose two rate-adaptive quantizer designs, namely the time invariant convergence-optimal quantizer (TICOQ) and the time varying convergence-optimal quantizer (TVCOQ), to minimize the effect of the quantization error on the convergence. We shall also develop efficient algorithms to solve the nonlinear integer programming problem associated with the quantizer optimization problem. As an illustrative example, we shall apply the TICOQ and TVCOQ designs to the iterative waterfilling algorithm of the MIMO interference game [5], [6]. We first list the important notations in this paper in Table I.
TABLE I: IMPORTANT NOTATIONS

n : dimension of the vector of state variables
m (1 ≤ m ≤ n) : element index of the vector
K : number of nodes/blocks
k (1 ≤ k ≤ K) : node index/block index
T : total number of iterations
t (1 ≤ t ≤ T) : iteration index
Q_k : component quantizer of node k (general)
Q = (Q_1, ..., Q_K) : system quantizer (general)
superscript s : scalar quantizer (SQ)
superscript v : vector quantizer (VQ)
Q^s_k = (Q^s_m)_{m ∈ M_k} : component quantizer of node k (SQ)
Q^s = (Q^s_1, ..., Q^s_n) : system quantizer (SQ)
I^s = (I^s_1, ..., I^s_n) : quantization index vector (SQ)
L^s = (L^s_1, ..., L^s_n) : quantization rate vector (SQ)
Q^v_k : component quantizer of node k (VQ)
Q^v = (Q^v_1, ..., Q^v_K) : system quantizer (VQ)
I^v = (I^v_1, ..., I^v_K) : quantization index vector (VQ)
L^v = (L^v_1, ..., L^v_K) : quantization rate vector (VQ)
R_+ : the set of nonnegative real numbers
Z_+ : the set of nonnegative integers

II. ITERATIVE FUNCTION EVALUATIONS

In this section, we shall introduce the basic iterative function evaluation algorithm to solve fixed point problems as well as its parallel and distributed implementations. We shall then review the convergence property under perfect message passing in the iteration process. We shall also illustrate the application of the framework using the MIMO interference game in [5], [6] as an example.

A. A General Framework of Iterative Function Evaluation Algorithms

In algorithm designs of wireless systems, many iterative algorithms can be described by the following dynamic update equation [7]:

x(t + 1) = T(x(t))    (1)

where x(t) ∈ R^n is the vector of state variables of the system at (discrete) time t and T is a mapping from a subset X ⊆ R^n into itself. Such an iterative algorithm with dynamics described by (1) is called an iterative function evaluation algorithm, which is widely used to solve fixed point problems [7], [8]. Specifically, any vector x* ∈ X satisfying T(x*) = x* is called a fixed point of T.
If the sequence {x(t)} converges to some x* ∈ X and T is continuous at x*, then x* is a fixed point of T [7]. Therefore, the iteration in (1) can be viewed as an algorithm for finding such a fixed point of T. We shall first review a few properties related to the convergence of (1). A contraction mapping is defined as follows:

Definition 1 (Contraction Mapping): Let T : X → X be a mapping from a subset X ⊆ R^n into itself satisfying the property

‖T(x) − T(y)‖ ≤ α ‖x − y‖  (∀x, y ∈ X)

where ‖·‖ is some norm and α ∈ [0, 1) is a constant scalar. Then the mapping T is called a contraction mapping and the scalar α is called the modulus of T.

Remark 1 (Comparison with Step-size Based Incremental Subgradient Algorithms): The incremental subgradient algorithms [21] can be described as x(t + 1) = x(t) − ε_t g(x(t)), where {ε_t} is the step-size sequence and g(x(t)) is a subgradient of the objective function at x(t) in a minimization problem. Such step-size based update algorithms and their associated convergence dynamics are quite different from the iterative function evaluation algorithm we consider in (1).

If T is a contraction mapping, then the iterative update in (1) is called a contracting iteration. The convergence of (1) is summarized as follows (the proof can be found in [7]):

Theorem 1 (Convergence of Contracting Iterations): Suppose that T : X → X is a contraction mapping with modulus α ∈ [0, 1) and that X ⊆ R^n is closed. We have: (1) (Existence and Uniqueness of Fixed Points) The mapping T has a unique fixed point x* ∈ X. (2) (Geometric Convergence) For any initial vector x(0) ∈ X, the sequence {x(t)} generated by (1) converges to x* geometrically. In particular, ‖x(t) − x*‖ ≤ α^t ‖x(0) − x*‖ ∀t ≥ 0.

In the above discussion, ‖·‖ can be any well-defined norm. There are many useful norms in the literature.
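The geometric bound of Theorem 1(2) can be checked numerically; the scalar contraction and constants below are illustrative, not from the paper.

```python
# Numerical check of Theorem 1(2) for the scalar contraction
# T(x) = 0.5*x + 1, with modulus alpha = 0.5 and fixed point x* = 2
# (illustrative constants for this sketch).
alpha, x_star = 0.5, 2.0
T = lambda x: alpha * x + 1.0

x0 = 10.0
x = x0
for t in range(1, 21):
    x = T(x)
    # Theorem 1(2): |x(t) - x*| <= alpha^t * |x(0) - x*|
    assert abs(x - x_star) <= alpha ** t * abs(x0 - x_star) + 1e-12
```

For an affine map the bound holds with equality, which makes the geometric decay rate α directly visible in the iterates.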
However, the commonly used norms can be classified into two groups, namely the weighted maximum norm and the L_p norm (1 ≤ p < ∞). They are elaborated below:

• Weighted maximum norm:

‖x‖^a_∞ = max_m |x_m| / a_m  (a_m > 0)    (2)

Note that for a_m = 1 ∀m, this reduces to the maximum norm, which can also be obtained from the L_p norm by taking the limit p → ∞.

• L_p norm (1 ≤ p < ∞):

‖x‖_p = (Σ_{m=1}^n |x_m|^p)^{1/p}    (3)

Note that for p = 1 we get the taxicab norm and for p = 2 we get the Euclidean norm.

B. Parallel and Distributed Implementation of Contracting Iterations

In practice, large scale computation often involves a number of processors or communication nodes jointly executing a computational task. As a result, parallel and distributed implementation is of prime importance. Information acquisition and control reside in geographically distributed nodes, for which distributed computation is preferable. In this part, we shall discuss the efficient parallel and distributed computation of the contracting iteration in (1). To perform efficient parallel and distributed implementations with K processors, the set X is partitioned into a Cartesian product of lower dimensional sets, based on computational complexity considerations or the local information extraction and control requirements. Mathematically, it can be expressed as X = ∏_{k=1}^K X_k, where X_k ⊆ R^{n_k} and Σ_{k=1}^K n_k = n. Let n_0 = 0 and M_k = {m ∈ N : Σ_{l=1}^k n_{l−1} + 1 ≤ m ≤ Σ_{l=1}^k n_l} be the index set of the k-th component set X_k (1 ≤ k ≤ K), where N is the set of integers. Thus, X_k = ∏_{m∈M_k} X_m, where X_m ⊆ R^1. Any vector x ∈ X is decomposed as x = (x_1, ..., x_K) with the k-th block component x_k = (x_m)_{m∈M_k} ∈ X_k = ∏_{m∈M_k} X_m, and the mapping T : X → X is decomposed as T(x) = (T_1(x), ..., T_K(x)) with the k-th block component T_k : X → X_k.
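The two norm families in (2) and (3) above translate directly into code; a minimal sketch with illustrative vectors:

```python
import numpy as np

def weighted_max_norm(x, a):
    """Weighted maximum norm (2): max_m |x_m| / a_m, with a_m > 0."""
    return np.max(np.abs(x) / a)

def lp_norm(x, p):
    """L_p norm (3); p = 1 gives the taxicab norm, p = 2 the Euclidean norm."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0])
assert weighted_max_norm(x, np.ones(2)) == 4.0  # a_m = 1: plain maximum norm
assert lp_norm(x, 1) == 7.0                     # taxicab norm
assert lp_norm(x, 2) == 5.0                     # Euclidean norm
```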
When the set X is a Cartesian product of lower dimensional sets X_k, block-parallelization with K processors can be implemented by assigning each processor to update a different block component. The most common updating strategies for x_1, ..., x_K based on the block mapping T are:

• Jacobi Scheme: All block components x_1, ..., x_K are updated simultaneously, i.e.

x_k(t + 1) = T_k(x(t)), 1 ≤ k ≤ K    (4)

• Gauss-Seidel Scheme: All block components x_1, ..., x_K are updated sequentially, one after the other, i.e.

x_k(t + 1) = S_k(x(t)), 1 ≤ k ≤ K    (5)

where S_k : X → X_k, given by

S_k(x) = T_k(x) for k = 1, and S_k(x) = T_k(S_1(x), ..., S_{k−1}(x), x_k, ..., x_K) for 2 ≤ k ≤ K,    (6)

is the k-th block component of the Gauss-Seidel mapping S : X → X, i.e. S(x) = (S_1(x), ..., S_K(x)). Both the Jacobi scheme and the Gauss-Seidel scheme belong to synchronous update schemes 1. Specifically, the Jacobi scheme assumes the network is synchronized, while the Gauss-Seidel scheme assumes the network provides a (Hamiltonian) cyclic route [7]. The general weighted block-maximum norm on R^n, which is usually associated with the block partition of the vector x, is defined as [7]:

‖x‖^w_block = max_k ‖x_k‖_k / w_k    (7)

where w = (w_1, ..., w_K) > 0 is the weight vector and ‖·‖_k is the norm for the k-th block component 2 x_k, which can be any given norm on R^{n_k}, such as the weighted maximum norm and the L_p norm (1 ≤ p < ∞) defined in (2) and (3). The mapping T : X → X is called a block-contraction with modulus α ∈ [0, 1) if it is a contraction under the above induced weighted block-maximum norm ‖·‖^w_block with modulus α. The convergence of the Jacobi scheme and the Gauss-Seidel scheme based on the block-contraction is summarized in the following theorem [7]:

Theorem 2: (Convergence of Jacobi Scheme and Gauss-Seidel Scheme) If T : X → X is a block-contraction, then the Gauss-Seidel mapping S is also a block-contraction with the same modulus as T.
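The two update schemes (4) and (5) can be sketched directly; the block-contraction T(x) = Ax + b with two one-dimensional blocks below is an illustrative example invented for this sketch, not a system from the paper.

```python
import numpy as np

def jacobi_step(T_blocks, x, blocks):
    """(4): all block components are updated simultaneously from x(t)."""
    x_new = x.copy()
    for k, idx in enumerate(blocks):
        x_new[idx] = T_blocks[k](x)
    return x_new

def gauss_seidel_step(T_blocks, x, blocks):
    """(5)-(6): blocks are updated one after the other, each seeing the
    already-updated blocks 1..k-1 (the Gauss-Seidel mapping S)."""
    x = x.copy()
    for k, idx in enumerate(blocks):
        x[idx] = T_blocks[k](x)
    return x

# Illustrative block-contraction T(x) = A x + b (hypothetical values).
A = np.array([[0.2, 0.1],
              [0.1, 0.3]])
b = np.array([1.0, 1.0])
blocks = [np.array([0]), np.array([1])]
T_blocks = [lambda x, i=idx: A[i] @ x + b[i] for idx in blocks]

xj = np.zeros(2)
xg = np.zeros(2)
for _ in range(100):
    xj = jacobi_step(T_blocks, xj, blocks)
    xg = gauss_seidel_step(T_blocks, xg, blocks)

# Both schemes converge to the same unique fixed point x* = (I - A)^-1 b.
x_star = np.linalg.solve(np.eye(2) - A, b)
assert np.allclose(xj, x_star) and np.allclose(xg, x_star)
```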
Furthermore, if X is closed, then the sequence {x(t)} generated by both the Jacobi scheme in (4) and the Gauss-Seidel scheme in (5) based on the mapping T converges to the unique fixed point x* ∈ X of T geometrically.

C. Application Example - MIMO Interference Game

The contracting iteration in (1) is very useful for solving fixed point problems. Fixed point problems are closely related to distributed resource optimization problems in wireless systems [3], [5], [6], [22]. For example, finding the Nash Equilibrium (NE) of a game is a fixed point problem. In this subsection, we shall illustrate the application of contracting iterations using the MIMO interference game [5], [6] as an example. Consider a system with K noncooperative transmitter-receiver pairs communicating simultaneously over a MIMO channel with N transmit antennas and N receive antennas [5], [6]. The received signal of the k-th receiver is given by:

y_k = H_kk s_k + Σ_{j≠k} H_jk s_j + n_k    (8)

where s_k ∈ C^N and y_k ∈ C^N are the vector transmitted by the k-th transmitter and the vector received by the k-th receiver respectively, H_kk ∈ C^{N×N} is the direct-channel of the k-th link, H_jk ∈ C^{N×N} is the cross-channel from the j-th transmitter to the k-th receiver, and n_k ∈ C^N is a zero-mean circularly symmetric complex Gaussian noise vector with covariance matrix R_{n_k}. For each transmitter k, the total average transmit power is given by

E[‖s_k‖_2^2] = Tr(P_k) ≤ P̄_k    (9)

where Tr(·) denotes the trace operator, P_k ≜ E[s_k s_k^H] is the covariance matrix of the transmitted vector s_k and P̄_k is the maximum average transmitted power. The maximum throughput of link k for a given set of users' covariance matrices P_1, ..., P_K is given by

r_k(P_k, P_−k) = log det(I + H_kk^H R_−k^{−1}(P_−k) H_kk P_k)    (10)

where R_−k(P_−k) ≜ R_{n_k} + Σ_{j≠k} H_jk P_j H_jk^H is the covariance matrix of the noise plus the MUI observed by user k, and P_−k ≜ (P_j)_{j≠k} is the set of covariance matrices of all the other users except user k.
In the MIMO interference game [5], [6], each player k competes against the others by choosing his transmit covariance matrix P_k (i.e., his strategy) to maximize his own maximum throughput r_k(P_k, P_−k) in (10), subject to the transmit power constraint in (9). The mathematical structure of the game is as follows:

(G) max_{P_k} r_k(P_k, P_−k) ∀k,  s.t. P_k ∈ 𝒫_k    (11)

where 𝒫_k ≜ {P_k ∈ C^{N×N} : P_k ⪰ 0, Tr(P_k) = P̄_k} is the admissible strategy set of user k, and P_k ⪰ 0 denotes that P_k is a positive semidefinite matrix. Given k and P_−k ∈ 𝒫_−k, the solution to the noncooperative game (11) is the well-known waterfilling solution P*_k = WF_k(P_−k), where the waterfilling operator WF_k(P_−k) can be equivalently written as [5]

WF_k(P_−k) = [−(H_kk^H R_−k^{−1}(P_−k) H_kk)^{−1}]_{𝒫_k}    (12)

where [X_0]_𝒳 ≜ arg min_{Z∈𝒳} ‖Z − X_0‖_F denotes the matrix projection of X_0 w.r.t. the Frobenius norm 3 ‖·‖_F onto the set 𝒳. The NE of the MIMO Gaussian interference game is the fixed point solution of the waterfilling mapping WF : 𝒫 → 𝒫, i.e. P*_k = WF_k(P*_−k) ∀k, where 𝒫 ≜ 𝒫_1 × ... × 𝒫_K and WF = (WF_1, ..., WF_K). In [5], it is shown that under some mild conditions, the mapping WF is a block-contraction 4 w.r.t. ‖·‖^w_{F,block}. Therefore, the NE can be achieved by the following contracting iteration

P(t + 1) = WF(P(t))    (13)

Footnote 3: If we arrange the MN elements of an M × N matrix X as an MN-dimensional vector x, then the Frobenius norm of the matrix X is equivalent to the L_2 norm of the vector x.

Footnote 4: After rearranging the elements of the N × N covariance matrix P_k as an N^2-dimensional vector, the block-contraction WF w.r.t. ‖·‖^w_{F,block} is equivalent to a block-contraction w.r.t. ‖·‖^w_block defined in (7) with each ‖·‖_k being the L_2 norm.

where P = (P_1, ..., P_K). It can be easily seen that the waterfilling algorithm for the MIMO interference game in (13) is a special case of the contracting iterations in (1).
In our general model, x in (1) corresponds to P in (13); the block-contraction mapping T in (1) corresponds to WF in (13); the k-th block component x_k corresponds to the covariance matrix P_k; and the k-th block component mapping T_k corresponds to WF_k. For the parallel and distributed implementation, we can partition the variable space 𝒫 = ∏_k 𝒫_k, where each variable space 𝒫_k corresponds to the covariance matrix of the k-th link. In each iteration, the receiver of each link k locally measures the PSD of the interference received from the transmitters of the other links, i.e. Σ_{j≠k} H_jk P_j(t) H_jk^H, computes the covariance matrix of the k-th link and transmits the computational result to the associated transmitter. There are two distributed iterative waterfilling algorithms (IWFA) based on this waterfilling block-contraction, namely the simultaneous IWFA and the sequential IWFA, which are described as follows:

• Simultaneous IWFA: An example of the Jacobi scheme, given by

P_k(t + 1) = WF_k(P_−k(t)), 1 ≤ k ≤ K

• Sequential IWFA: An example of the Gauss-Seidel scheme, given by

P_k(t + 1) = WF_k(P_−k(t)) if (t + 1) mod K = k, and P_k(t + 1) = P_k(t) otherwise

III. CONTRACTING ITERATIONS UNDER QUANTIZED MESSAGE PASSING

In this section, we shall study the impact of quantized message passing on the contracting iterations. We shall first introduce a general quantized message passing model, followed by some general results regarding the convergence behavior under quantized message passing. We assume there are K processing nodes geographically distributed in the wireless system. Fig. 1 illustrates an example of the K-pair MIMO interference game with quantized message passing. The system quantizer can be characterized by the tuple Q = (Q_1, ..., Q_K), where Q_k is the component quantizer (which can be a scalar or vector quantizer) for the k-th node. Q_k can be further denoted by the tuple Q_k = (E_k, D_k).
A. General Model of Quantized Message Passing

E_k : X_k → I_k is an encoder and D_k : I_k → X̂_k is a decoder. I_k = {1, ..., 2^{L_k}} and L_k are the index set and the quantization rate of the component quantizer Q_k. X̂_k is the reproduction codebook, which is the set of all possible quantized outputs of Q_k [19]. The quantization rule is completely specified by Q_k : X_k → X̂_k. Specifically, the quantized value is given by x̂_k = Q_k(x_k) = D_k(E_k(x_k)). Each node k updates the k-th block component x_k of the n-dimensional vector x, i.e. computes x_k(t + 1) = T_k(x(t)). The encoder E_k of Q_k accepts the input T_k(x(t)) and produces a quantization index I_k(t) = E_k(T_k(x(t))). Each node k broadcasts the quantization index I_k(t). In other words, the message passing involves only the quantization indices I(t) = (I_1(t), ..., I_K(t)) instead of the actual controls T(x(t)) = (T_1(x(t)), ..., T_K(x(t))). Upon receiving the quantization index I_k(t), the decoder D_k of Q_k produces a quantized value x_k(t + 1) = D_k(I_k(t)) = T_k(x(t)) + e_k(t). Therefore, the contracting iteration update dynamics of (1) with quantized message passing can be modified as:

x(t + 1) = T(x(t)) + e(t)    (14)

where e(t) ∈ R^n is the quantization error vector at time t. The quantizer design fundamentally affects the convergence property of the iterative update algorithm via the quantization error random process e(t). Generally, the update of each block component is based on the latest overall vector, because T_k : X → X_k. Thus, the decoders of the system quantizer D = (D_1, ..., D_K) are needed at each node. On the other hand, the k-th node only requires the encoder E_k of the corresponding quantizer component Q_k. Consider the application example in Section II-C under quantized message passing. The system quantizer Q = (Q_1, ..., Q_K) can be applied in the MIMO interference game with K noncooperative transmitter-receiver pairs as illustrated in Fig. 1.
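The encoder/decoder pair and the perturbed dynamics (14) can be sketched with a scalar uniform midpoint quantizer; the quantizer, the contraction T and all constants below are illustrative choices for this sketch, not the paper's design.

```python
import numpy as np

def encode(x, lo, hi, L):
    """Encoder E: map x in [lo, hi] to one of 2^L quantization indices."""
    idx = np.floor((x - lo) / (hi - lo) * 2 ** L)
    return int(np.clip(idx, 0, 2 ** L - 1))

def decode(idx, lo, hi, L):
    """Decoder D: reproduce the cell midpoint, so |e| <= (hi-lo)/2^(L+1)."""
    step = (hi - lo) / 2 ** L
    return lo + (idx + 0.5) * step

# Quantized message passing (14): x(t+1) = D(E(T(x(t)))) = T(x(t)) + e(t).
alpha, L, lo, hi = 0.5, 6, 0.0, 4.0   # illustrative constants
T = lambda x: alpha * x + 1.0          # contraction with fixed point x* = 2
x = 0.0
for _ in range(50):
    x = decode(encode(T(x), lo, hi, L), lo, hi, L)

# The iterate settles near, but in general not exactly at, x* = 2: the
# residual offset is bounded by e_bar/(1 - alpha), e_bar = (hi-lo)/2^(L+1).
e_bar = (hi - lo) / 2 ** (L + 1)
assert abs(x - 2.0) <= e_bar / (1 - alpha) + 1e-12
```

Raising L shrinks e_bar and therefore the residual offset, which is exactly the convergence/overhead tradeoff studied in this paper.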
Specifically, for the k-th link, the encoder E_k is placed at the receiver and the decoder D_k is placed at the transmitter. The MIMO interference game under quantized message passing is illustrated in the following example:

Example 1 (MIMO Interference under Quantized Message Passing): In the t-th iteration, the receiver of the k-th link locally measures the PSD of the interference received from the transmitters of the other links, i.e. Σ_{j≠k} H_jk P_j(t) H_jk^H, and computes WF_k(P_−k(t)). The encoder E_k of Q_k at the k-th receiver encodes WF_k(P_−k(t)) and passes the quantization index I_k(t) = E_k(WF_k(P_−k(t))) to the k-th transmitter. The decoder D_k of Q_k at the k-th transmitter produces a quantized value P_k(t + 1) = D_k(I_k(t)) = WF_k(P_−k(t)) + e_k(t). The contracting iterative update dynamics of (13) for the MIMO interference game under quantized message passing is given by:

P(t + 1) = WF(P(t)) + e(t)    (15)

B. Convergence Property under Quantized Message Passing

Under quantized message passing, the convergence of the contracting iterations is summarized in the following lemma:

Lemma 1: (Convergence of Contracting Iterations under Quantized Message Passing) Suppose that T : X → X is a contraction mapping with modulus α ∈ [0, 1) and fixed point x* ∈ X, and that X ⊆ R^n is closed. For any initial vector x(0) ∈ X, the sequence {x(t)} generated by (14) satisfies:

(a) ‖x(t) − x*‖ ≤ α^t ‖x(0) − x*‖ + E(t) ∀t ≥ 1, where E(t) = α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖e(l)‖ is the accumulated error up to time t induced by the quantized message passing.

(b) For each t, if there exists a vector ẽ_t ∈ R^n such that ‖e(t)‖ ≤ ‖ẽ_t‖, then E(t) ≤ Ẽ(t), where Ẽ(t) ≜ α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖ẽ_l‖.

(c) If ‖ẽ_1‖ = ... = ‖ẽ_t‖ ≜ ‖ē‖, then E(t) ≤ Ē(t), where Ē(t) ≜ (1−α^t)/(1−α) ‖ē‖, with limiting error bound Ē(∞) ≜ lim_{t→∞} Ē(t) = ‖ē‖/(1−α). Furthermore, define the stationary set as S ≜ {Q(x) : ‖x − x*‖ ≤ Ē(∞)}.
The sufficient condition for convergence is x = Q(T(x)) ∀x ∈ S, and the necessary condition for convergence is ∃x ∈ S such that x = Q(T(x)).

Proof: Please refer to Appendix A for the proof.

Note that, in the above lemma, the norm ‖·‖ can be any general norm. In the following, we shall focus on characterizing the convergence behavior of the distributed Jacobi and Gauss-Seidel schemes under quantized message passing with the underlying contraction mapping T defined w.r.t. the weighted block-maximum norm ‖·‖^w_block [5]-[7]. Under quantized message passing, the algorithm dynamics of the two commonly used parallel and distributed schemes can be described as follows:

• Jacobi Scheme under Quantized Message Passing:

x_k(t + 1) = T_k(x(t)) + e_k(t), 1 ≤ k ≤ K    (16)

• Gauss-Seidel Scheme under Quantized Message Passing:

x_k(t + 1) = Ŝ_k(x(t)) + e_k(t), 1 ≤ k ≤ K    (17)

where

Ŝ_k(x) = T_k(x) for k = 1, and Ŝ_k(x) = T_k(Ŝ_1(x) + e_1, ..., Ŝ_{k−1}(x) + e_{k−1}, x_k, ..., x_K) for 2 ≤ k ≤ K    (18)

Applying the results of Lemma 1, the convergence property of the distributed Jacobi and Gauss-Seidel schemes in (16) and (17) can be summarized in the following lemma.

Lemma 2: (Convergence of Jacobi Scheme and Gauss-Seidel Scheme under Quantized Message Passing) Suppose that T : X → X is a block-contraction mapping w.r.t. the weighted block-maximum norm ‖·‖^w_block with modulus α ∈ [0, 1) and fixed point x* ∈ X, and that X ⊆ R^n is closed. For every initial vector x(0) ∈ X, the sequence {x(t)} generated by both the Jacobi scheme and the Gauss-Seidel scheme under quantized message passing in (16) and (17) satisfies 5

(a) ‖x(t) − x*‖^w_block ≤ α^t ‖x(0) − x*‖^w_block + E^w_block(t) ∀t ≥ 1, where E^w_block(t) = α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖e(l)‖^w_block for the Jacobi scheme and E^w_block(t) = (1−α^K)/(1−α) · α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖e(l)‖^w_block for the Gauss-Seidel scheme.

(b) If the condition in (b) of Lemma 1 holds w.r.t.
‖·‖^w_block, then E^w_block(t) ≤ Ẽ^w_block(t), where Ẽ^w_block(t) ≜ α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖ẽ_l‖^w_block for the Jacobi scheme and Ẽ^w_block(t) ≜ (1−α^K)/(1−α) · α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖ẽ_l‖^w_block for the Gauss-Seidel scheme.

(c) If the condition in (c) of Lemma 1 holds w.r.t. ‖·‖^w_block, then E^w_block(t) ≤ Ē^w_block(t), where Ē^w_block(t) ≜ (1−α^t)/(1−α) ‖ē‖^w_block for the Jacobi scheme with Ē^w_block(∞) = 1/(1−α) ‖ē‖^w_block, and Ē^w_block(t) ≜ (1−α^K)/(1−α) · (1−α^t)/(1−α) ‖ē‖^w_block for the Gauss-Seidel 6 scheme with Ē^w_block(∞) = (1−α^K)/(1−α)^2 ‖ē‖^w_block. Furthermore, define the stationary set as S^w_block ≜ {Q(x) : ‖x − x*‖^w_block ≤ Ē^w_block(∞)}. The sufficient condition and necessary condition are the same as those in Lemma 1.

Proof: Please refer to Appendix B for the proof.

Footnote 5: Our analysis can be extended to the totally asynchronous scheme, in which the results of Lemma 2 become: (a) E^w_block(t) = 1/(1−α) · α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖e(l)‖^w_block. (b) Ẽ^w_block(t) ≜ 1/(1−α) · α^{t−1} Σ_{l=0}^{t−1} α^{−l} ‖ẽ_l‖^w_block. (c) If ‖ē‖^w_block < (1−α) ‖x(0) − x*‖^w_block, we have Ē^w_block(t) = (1−α^t)/(1−α)^2 ‖ē‖^w_block and Ē^w_block(∞) = 1/(1−α)^2 ‖ē‖^w_block. By the Asynchronous Convergence Theorem (Proposition 2.1 in Chapter 6 of [7]), we can prove (c) (similar to the proof of Theorem 12 in [6]). The proof is omitted here due to the page limit. Since the error bound result of the totally asynchronous scheme is similar to those of the Jacobi and Gauss-Seidel schemes, our quantizer designs can also be applied to the asynchronous case.

Footnote 6: Compared with the Jacobi scheme, the Gauss-Seidel scheme and the totally asynchronous scheme have extra error factors (1−α^K)/(1−α) and 1/(1−α), respectively, and 1/(1−α) > (1−α^K)/(1−α) > 1.

Remark 2: As a result of Lemma 1 and Lemma 2, quantized message passing affects the convergence property of the contracting iterative algorithm in a fundamental way. From Lemma 2, the Jacobi and Gauss-Seidel distributed iterative algorithms may not be able to converge precisely to the fixed point under quantized message passing due to the term E^w_block(t).
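The relative sizes of the limiting error bounds in Lemma 2(c) and footnotes 5-6 can be made concrete with a small arithmetic sketch; the constants alpha, K, e_bar are illustrative.

```python
# Limiting error bounds of Lemma 2(c) (and footnote 5 for the totally
# asynchronous scheme) for illustrative constants alpha, K, e_bar.
alpha, K, e_bar = 0.5, 4, 0.1

E_jacobi = e_bar / (1 - alpha)                       # 1/(1-alpha) * e_bar
E_gs = (1 - alpha ** K) / (1 - alpha) ** 2 * e_bar   # Gauss-Seidel
E_async = e_bar / (1 - alpha) ** 2                   # totally asynchronous

# Footnote 6: 1/(1-alpha) > (1-alpha^K)/(1-alpha) > 1, hence this ordering.
assert E_jacobi < E_gs < E_async
```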
IV. TIME INVARIANT CONVERGENCE-OPTIMAL QUANTIZER DESIGN

In this section, we shall define a Time Invariant Quantizer (TIQ) and then formulate the Time Invariant Convergence-Optimal Quantizer (TICOQ) design problem. We shall consider the TICOQ design for the scalar quantizer (SQ) and the vector quantizer (VQ) cases separately. Specifically, the component quantizer Q_k of the k-th node can be a group of scalar quantizers Q^s_k = (Q^s_m)_{m∈M_k} or a vector quantizer Q^v_k. In the SQ case, each element T_m(·) (m ∈ M_k) of the vector T_k(·) is quantized by a coordinate scalar quantizer Q^s_m separately. In the VQ case, however, the input to the vector quantizer Q^v_k is the whole vector T_k(·).

Definition 2 (Time Invariant Quantizer (TIQ)): A Time Invariant Quantizer (TIQ) is a quantizer Q = (E, D) such that E and D are time invariant mappings.

The system scalar TIQ can be denoted as Q^s = (Q^s_1, ..., Q^s_m, ..., Q^s_n). Let L^s = (L^s_1, ..., L^s_n) be the quantization rate vector for the system scalar TIQ Q^s, where L^s_m ∈ Z_+ is the quantization rate (number of bits) of the coordinate scalar quantizer Q^s_m (1 ≤ m ≤ n). The sum quantization rate of the system scalar TIQ Q^s is given by Σ_{m=1}^n L^s_m. Similarly, the system vector TIQ can be denoted as Q^v = (Q^v_1, ..., Q^v_k, ..., Q^v_K). Let L^v = (L^v_1, ..., L^v_K) be the quantization rate vector for the system vector TIQ Q^v, where L^v_k ∈ Z_+ is the quantization rate (number of bits) of the component vector quantizer Q^v_k (1 ≤ k ≤ K). The sum quantization rate of the system vector TIQ Q^v is given by Σ_{k=1}^K L^v_k. Using Lemma 2 (c), the limiting error bound of the algorithm trajectory is given by Ē^w_block(∞) = 1/(1−α) ‖ē‖^w_block (Jacobi scheme) or Ē^w_block(∞) = (1−α^K)/(1−α)^2 ‖ē‖^w_block (Gauss-Seidel scheme).
Therefore, the TICOQ design, which minimizes Ē^w_block(∞) under the sum quantization rate constraint, is equivalent to the following:

Problem 1 (TICOQ Design Problem):

min_{Q^s or Q^v} ‖ē‖^w_block    (19)

s.t. Σ_{m=1}^n L^s_m = L, L^s_m ∈ Z_+ (1 ≤ m ≤ n)  SQ    (20)

or Σ_{k=1}^K L^v_k = L, L^v_k ∈ Z_+ (1 ≤ k ≤ K)  VQ    (21)

where ‖ē‖^w_block = max_{x∈X} ‖x − Q^s(x)‖^w_block (SQ case) or ‖ē‖^w_block = max_{x∈X} ‖x − Q^v(x)‖^w_block (VQ case).

Remark 3 (Interpretation of Problem 1): Note that the optimization variable in Problem 1 is the system TIQ Q^s or Q^v. The objective function ‖ē‖^w_block = max_{x∈X} ‖x − Q^s(x)‖^w_block or ‖ē‖^w_block = max_{x∈X} ‖x − Q^v(x)‖^w_block obviously depends on the choice of the system TIQ Q^s or Q^v. Furthermore, the constraint (20) or (21) is the constraint on the quantization rates L^s = (L^s_1, ..., L^s_n) or L^v = (L^v_1, ..., L^v_K), which is also an effective constraint on the optimization domain of Q^s or Q^v, respectively. This is because L^s_m or L^v_k is a parameter (corresponding to the cardinality of the index set, i.e. |I^s_m| = 2^{L^s_m} or |I^v_k| = 2^{L^v_k}) of the encoder and decoder of Q^s_m or Q^v_k.

The Lagrangian function of Problem 1 is given by L^s(Q^s, µ^s) = ‖ē‖^w_block + µ^s (Σ_{m=1}^n L^s_m − L) (SQ case) or L^v(Q^v, µ^v) = ‖ē‖^w_block + µ^v (Σ_{k=1}^K L^v_k − L) (VQ case), where µ^s or µ^v is the Lagrange multiplier (LM) corresponding to the constraint (20) or (21). Hence, optimization Problem 1 can also be interpreted as optimizing the tradeoff between the convergence performance ‖ē‖^w_block and the communication overhead L. Note that the objective function in (19) actually corresponds to a worst case error in the algorithm trajectory. In other words, the TICOQ design tries to find the optimal TIQ which minimizes the worst case error. In fact, the algorithm trajectory x(t) is a random process induced by the uncertainty in the initial point x(0). In general, we do not have knowledge of the distribution of x(t) due to the uncertainty in x(0).
Hence, the solution to Problem 1 (optimizing the worst case error) offers some robustness w.r.t. the choice of x(0). In the following, we shall discuss the scalar and vector TICOQ designs based on Problem 1 separately.

A. Time Invariant Convergence-Optimal Scalar Quantizer

We first have a lemma on the structure of the optimizing quantizer in the scalar TICOQ design in Problem 1.

Lemma 3 (Structure of the Scalar TICOQ): If each component norm ‖·‖_k on R^{n_k} (1 ≤ k ≤ K) of the weighted block-maximum norm ‖·‖^w_block defined by (7) is a monotone (or absolute) norm 7, then the optimal coordinate scalar quantizer Q^s*_m (1 ≤ m ≤ n) w.r.t. the worst case error ‖ē‖^w_block = max_{x∈X} ‖x − Q^s(x)‖^w_block is a uniform quantizer.

Footnote 7: A vector norm is monotone if and only if it is absolute [23].

Proof: Please refer to Appendix C for the proof.

While the optimization variable in Problem 1 (SQ case) is Q^s = (Q^s_1, ..., Q^s_n), using Lemma 3 we can restrict the optimization domain of each coordinate scalar quantizer Q^s_m (1 ≤ m ≤ n) to uniform quantizers without loss of optimality. Thus, the worst case error of the m-th coordinate is given by |ē_m| ≜ max_{x_m∈X_m} |x_m − Q^s_m(x_m)| = |X_m| / (2 · 2^{L^s_m}) (1 ≤ m ≤ n), where |X_m| is the length of the interval X_m (x_m ∈ X_m), and the remaining optimization variable is reduced from Q^s = (Q^s_1, ..., Q^s_n) to L^s = (L^s_1, ..., L^s_n). The scalar TICOQ design in Problem 1 w.r.t. L^s = (L^s_1, ..., L^s_n) is a Nonlinear Integer Programming (NLIP) problem, which is in general difficult to solve. Verifying the optimality of a solution requires enumerating all the feasible solutions in most cases. In the following, we shall derive the optimal solution to the scalar TICOQ design in Problem 1 w.r.t. the weighted block-maximum norm ‖·‖^w_block defined by (7), in which each component norm ‖·‖_k is the weighted maximum norm and the L_p norm, treated separately.
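The per-coordinate worst case error |X_m| / (2 · 2^{L^s_m}) of the uniform midpoint quantizer from Lemma 3 can be verified numerically; the interval and rate below are illustrative.

```python
import numpy as np

# Worst case error of a uniform midpoint quantizer on an interval X_m:
# max_x |x - Q(x)| = |X_m| / (2 * 2^L), matching the expression derived
# from Lemma 3. The interval [lo, hi] and rate L are illustrative.
lo, hi, L = -1.0, 3.0, 4
step = (hi - lo) / 2 ** L

def Q(x):
    cell = np.clip(np.floor((x - lo) / step), 0, 2 ** L - 1)
    return lo + (cell + 0.5) * step

xs = np.linspace(lo, hi, 100001)
worst = np.max(np.abs(xs - Q(xs)))
assert abs(worst - (hi - lo) / (2 * 2 ** L)) < 1e-6
```

The maximum error is attained at the cell boundaries, half a quantization step away from the reproduced midpoints.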
Theorem 3 (Solution for Weighted Maximum Norm): Given a weighted block-maximum norm ‖·‖^w_block defined by (7) (parameterized by w = (w_1, ..., w_K)) with ‖·‖_k being the weighted maximum norm ‖·‖^{a_k}_∞ defined by (2) (parameterized by a_k = (a_m)_{m∈M_k}), let L̃^s*_m = (log_2(C_m/τ))^+, where C_m ≜ |X_m| / (2 a_m Σ_{k=1}^K w_k I[m∈M_k]), I[·] is the indicator function and τ > 0 is a constant related to the LM of the constraint (20), chosen to satisfy Σ_{m=1}^n (log_2(C_m/τ))^+ = L. The optimal integer solution of Problem 1 for the SQ case is given by 8:

L^s*_[m] = ⌈L̃^s*_[m]⌉ if m ≤ Σ_{m=1}^n (L̃^s*_m − ⌊L̃^s*_m⌋), and L^s*_[m] = ⌊L̃^s*_[m]⌋ otherwise.    (22)

The optimal value of Problem 1 under continuous relaxation is τ.

Footnote 8: We arrange the real sequence z_1, ..., z_n in decreasing order and denote it as z_[1] ≥ ... ≥ z_[m] ≥ ... ≥ z_[n], where z_[m] represents the m-th largest term of {z_m}.

Proof: Please refer to Appendix D for the proof.

Theorem 4 (Solution for L_p Norm): Given a weighted block-maximum norm ‖·‖^w_block defined by (7) (parameterized by w = (w_1, ..., w_K)) with ‖·‖_k being the L_p norm ‖·‖_p defined by (3) (parameterized by p), the optimal solution of Problem 1 for the SQ case with continuous relaxation (L^s_m ∈ R_+) is L̃^s*_m = (1/p) log_2((C_m / Σ_{k=1}^K τ_k I[m∈M_k]) ∨ 1), where C_m ≜ |X_m|^p / (2^p Σ_{k=1}^K w_k^p I[m∈M_k]), and {τ_1, ..., τ_K} and τ are constants related to the LM of the constraint (20), chosen to satisfy the constraint Σ_{k=1}^K Σ_{m∈M_k} (1/p) log_2((C_m/τ_k) ∨ 1) = L and the complementary slackness conditions (1/τ_k) Σ_{m∈M_k} (C_m ∧ τ_k) − τ = 0 (∀k ∈ {1, ..., K}) 9. The optimal value of Problem 1 with continuous relaxation is τ^{1/p}.

Proof: Please refer to Appendix D for the proof.

Remark 5 (Determination of {τ_1, ..., τ_K} and τ): Solving for {τ_1, ..., τ_K} and τ involves solving a system of K + 1 equations with K + 1 unknowns.
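The single-τ allocation of Theorem 3 together with the rounding rule (22) can be computed numerically. The sketch below uses bisection to find τ (an implementation choice of this sketch; the paper only characterizes τ implicitly), and the constants C_m are illustrative.

```python
import numpy as np

def allocate_bits(C, L_total):
    """Bit allocation in the spirit of Theorem 3: continuous solution
    L_m = (log2(C_m / tau))^+ with tau chosen so that sum_m L_m = L_total,
    then the rounding rule (22): round up the largest fractional parts."""
    f = lambda tau: np.sum(np.maximum(np.log2(C / tau), 0.0)) - L_total
    lo, hi = 1e-12, C.max()
    for _ in range(200):              # bisection on tau (f is decreasing)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    L_cont = np.maximum(np.log2(C / hi), 0.0)
    # Rounding rule (22): ceil the n_up largest allocations, floor the rest.
    n_up = int(round(np.sum(L_cont - np.floor(L_cont))))
    order = np.argsort(-L_cont)
    L_int = np.floor(L_cont).astype(int)
    L_int[order[:n_up]] += 1
    return L_int

C = np.array([8.0, 4.0, 2.0, 1.0])    # illustrative constants C_m
L_bits = allocate_bits(C, 6)
assert L_bits.sum() == 6 and np.all(L_bits >= 0)
```

Coordinates with larger C_m (wider intervals or larger weights) receive more bits, in a water-filling-like fashion; coordinates with C_m ≤ τ receive none.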
We have 2^K − 1 valid cases for the above system of equations, according to whether τ_k > max_{m∈M_k} C_m or τ_k ≤ max_{m∈M_k} C_m (∀k). Firstly, if τ_k > max_{m∈M_k} C_m (∀k), then L̃^s_m = 0 (∀m), which is not a valid case. Therefore, without loss of generality, assume τ_k ≤ max_{m∈M_k} C_m ∀k ∈ {1, ..., N} and τ_k > max_{m∈M_k} C_m ∀k ∈ {N + 1, ..., K}. The system of K + 1 equations and unknowns then reduces to the N + 1 equations Σ_{k=1}^N Σ_{m∈M_k} (1/p) log_2((C_m/τ_k) ∨ 1) = L and Σ_{m∈M_k} (C_m ∧ τ_k) = τ (∀k ∈ {1, ..., N}), in the N + 1 unknowns 10 {τ_1, ..., τ_N} and τ.

B. Time Invariant Convergence-Optimal Vector Quantizer

We first have a lemma on the structure of the optimizing quantizer in the vector TICOQ design in Problem 1.

Lemma 4 (Structure of the Vector TICOQ): If each component norm ‖·‖_k (1 ≤ k ≤ K) on R^{n_k} of the weighted block-maximum norm ‖·‖^w_block defined by (7) is a monotone (or absolute) norm, then each vector TICOQ Q^v*_k is an n_k-dimensional lattice quantizer, the structure of which is uniquely determined by the norm ‖·‖_k on R^{n_k}. In particular, if ‖·‖_k (1 ≤ k ≤ K) is the L_2 norm, each vector TICOQ Q^v*_k is the thinnest lattice for the covering problem in Euclidean space; if ‖·‖_k (1 ≤ k ≤ K) is a weighted maximum norm, each vector TICOQ Q^v*_k reduces to n_k coordinate scalar TICOQs Q^s*_m (m ∈ M_k) with scalar quantization 11 of each coordinate x_m (m ∈ M_k) of x_k.

Proof: Please refer to Appendix E for the proof.

A lattice is a regular arrangement of points in n-dimensional space that includes the origin. "Regular" means that each point "sees" the same geometrical environment as any other point [20].

Footnote 9: x ∨ a ≜ max{x, a} and x ∧ a ≜ min{x, a}.

Footnote 10: ... Σ_{m∈M_2} (1/p) log_2((C_m/τ_2) ∨ 1) = L, Σ_{m∈M_2} (C_m ∧ τ_2) = τ, and 2 unknowns: τ_2 and τ.

Footnote 11: In other words, the vector TICOQ design reduces to the scalar TICOQ design discussed in Theorem 3.

The lattice covering
problem asks for the most economical way to arrange the lattice points so that the n-dimensional space can be covered by overlapping spheres whose centers are the lattice points, i.e. it tries to find the thinnest (i.e. minimum density 12) lattice covering [24]. The thinnest lattice coverings known in all dimensions n (n ≤ 23) are the dual lattices A*_n (for 1 ≤ n ≤ 5, A*_n is known to be optimal) [24]. The worst case error of the dual lattice A*_{n_k} for the k-th node when ‖·‖_k is the L_p norm (p > 2) is less than that measured in the L_2 norm (Appendix E). Therefore, if ‖·‖_k is the L_p norm (p ≥ 2), we can solve the TICOQ design (VQ case) in Problem 1 using the dual lattice structure A*_{n_k}. Thus, the worst case error is given by ‖ē_k‖_k = (∏_{m∈M_k} |X_m|)^{1/n_k} (1/(n_k+1))^{1/n_k} √(n_k(n_k+2)/(12(n_k+1))) · 2^{−L^v_k/n_k} (Appendix E), where |X_m| is the length of the interval X_m (x_m ∈ X_m) (1 ≤ m ≤ n), and the remaining optimization variable is reduced from Q^v = (Q^v_1, ..., Q^v_K) to L^v = (L^v_1, ..., L^v_K). Similarly, the vector TICOQ design in Problem 1 w.r.t. L^v = (L^v_1, ..., L^v_K) is also a Nonlinear Integer Programming (NLIP) problem, which is in general difficult to solve. In the following, we shall derive the optimal solution to the vector TICOQ design in Problem 1 based on the dual lattice A*_{n_k}.

Theorem 5 (Solution for Dual Lattice Quantizer): Given a weighted block-maximum norm ‖·‖^w_block defined by (7) (parameterized by w = (w_1, ..., w_K)) and the dual lattice {A*_{n_k} : 1 ≤ k ≤ K} quantizer, let L̃^v*_k = n_k (log_2(D_k/τ))^+, where D_k ≜ (1/w_k) (∏_{m∈M_k} |X_m|)^{1/n_k} (1/(n_k+1))^{1/n_k} √(n_k(n_k+2)/(12(n_k+1))), and τ > 0 is a constant related to the LM of the constraint (21), chosen to satisfy Σ_{k=1}^K n_k (log_2(D_k/τ))^+ = L. The optimal integer solution of Problem 1 for the VQ case w.r.t.
dual lattice {A*_{n_k} : 1 ≤ k ≤ K} quantizer when n_1 = ... = n_K is given by:

L^v*_[k] = ⌈L̃^v*_[k]⌉ if k ≤ Σ_{k=1}^K (L̃^v*_k − ⌊L̃^v*_k⌋), and L^v*_[k] = ⌊L̃^v*_[k]⌋ otherwise.    (23)

The optimal value of Problem 1 under continuous relaxation is τ.

Proof: Please refer to Appendix E for the proof.

C. Tradeoff between Convergence Error and Message Passing Overhead

In this subsection, we shall quantify the tradeoff between the convergence error of the algorithm trajectory and the message passing overhead using the TICOQ. Specifically, the steady-state convergence error in the algorithm trajectory is related to Ē^w_block(∞) and the message passing overhead is related to the sum quantization rate (number of bits) L of the system quantizer. The following lemma summarizes the tradeoff result.

Footnote 12: The density of a covering is defined as the number of spheres that contain a point of the space [24].

Lemma 5 (Performance Tradeoff of the Scalar and Vector TICOQ): For L ≥ L′ (L ∈ Z_+), where

L′ = Σ_{m=1}^n log_2 C_m − n log_2 (min_m C_m) for the weighted maximum norm; L′ = Σ_{m=1}^n log_2 C̃_m − n log_2 (min_m C̃_m) for the L_p norm; L′ = Σ_{k=1}^K log_2 D_k − K log_2 (min_k D_k) for the dual lattice,    (24)

and C̃_m = (Σ_{k=1}^K n_k 1[m ∈ M_k]) C_m, the limiting error bound of the scalar and vector TICOQ considered in this section is given by Ē^w_block(∞) = (1/(1−α)) O(2^{−L/n}).

Proof: Please refer to Appendix F for the proof.

Remark 6: As L → ∞, Ē^w_block(∞) → 0, which recovers the conventional convergence results with perfect message passing. On the other hand, Ē^w_block(∞) > 0 for finite L, and hence we cannot guarantee convergence of the contracting iterations with TICOQ to the fixed point x* of the contraction mapping T. Nevertheless, the convergence error decreases exponentially w.r.t. the message passing overhead L.

V. TIME VARYING CONVERGENCE-OPTIMAL QUANTIZER DESIGN

Similar to Section IV, we shall define a Time Varying Quantizer (TVQ) and then formulate the Time Varying Convergence-Optimal Quantizer (TVCOQ) design problem.
We shall consider the TVCOQ design for both the SQ and VQ cases separately. Definition 3 (Time Varying Quantizer (TVQ)): A Time Varying Quantizer (TVQ) is a quantizer Q(t) = E(t), D(t) such that E(t) and D(t) changes with time. In other words, the quantizer Q(t) = E(t), D(t) at the t-th iteration is function of time t. The system scalar TVQ at the t-th iteration can be denoted as Q s (t) = Q s 1 (t), · · · , Q s m (t), · · · , Q s n (t) with quantization rate vector (at the t-th iteration) L s (t) = L s 1 (t), · · · , L s n (t) . Similarly, the system vector TVQ at the t-th iteration can be denoted as Q v (t) = Q v 1 (t), · · · , Q v k (t), · · · , Q v K (t) with quan- tization rate vector (at the t-th iteration) L v (t) = L v 1 (t), · · · , L v K (t) . Using the result (c) of Lemma 2, the TVCOQ design, which minimizeẼ w block (T ) under total quantization rate constraint over a horizon ofT iterations, is equivalent to the following: Problem 2 (TVCOQ Design Problem): min {Q s (t):0≤t≤T } or{Q v (t):0≤t≤T } αT −1T −1 t=0 α −t ẽ t w block (25) s.t.T −1 t=0 n m=1 L s m (t) =T L, L s m (t) ∈ Z + (∀m, t)SQ (26) orT −1 t=0 K k=1 L v k (t) =T L, L v k (t) ∈ Z + (∀k, t)VQ (27) By introducing additional auxiliary variables {L(t) : 0 ≤ t ≤T − 1}, where L(t) can be interpreted as the per-stage sum quantization rate, and applying primal decomposition techniques, we can decompose Problem 2 into subproblems (per-stage TICOQ design Q s (t) or Q v (t) (0 ≤ t ≤T − 1)), which are given by Problem 3: (TVCOQ Subproblems: Per-stage TICOQ Design Problem) min Q s (t)orQ v (t) ẽ t L(t) w block (28) s.t. n m=1 L s m (t) = L(t), L s m (t) ∈ Z + (∀m)SQ (29) or K k=1 L v k (t) = L(t), L v k (t) ∈ Z + (∀k)VQ(αT −1T −1 t=0 α −t ẽ * t L(t) w block (31) s.t.T −1 t=0 L(t) =T L, L(t) ∈ Z + (0 ≤ t ≤T − 1) (32) where ẽ * t L(t) w block is the optimal value of the t-th subproblem in Problem 3 for given L(t) (0 ≤ t ≤T −1). 
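Using the closed-form continuous solution derived later in Appendix G (the per-stage rate grows linearly in t and the continuous rates sum exactly to T̄L), the master problem's allocation together with a round-up-the-largest-fractions rule (analogous to the rounding in (33)) can be sketched as follows. Variable names are ours and the sketch is for illustration only.

```python
import math

def tvcoq_stage_rates(n, L, alpha, T):
    """Per-stage sum-rate allocation for the TVCOQ master problem.
    Continuous solution (Appendix G): L(t) = n*((T-1)/2 - t)*log2(alpha) + L,
    which sums exactly to T*L. Integer solution: floor everything, then
    ceil at the stages with the largest fractional parts. Illustrative."""
    cont = [n * ((T - 1) / 2 - t) * math.log2(alpha) + L for t in range(T)]
    k = round(sum(c - math.floor(c) for c in cont))  # how many to round up
    by_frac = sorted(range(T), key=lambda t: cont[t] - math.floor(cont[t]),
                     reverse=True)
    rates = [math.floor(c) for c in cont]
    for t in by_frac[:k]:
        rates[t] += 1
    return rates  # later stages get more bits since alpha < 1
```

For example, n = 2, L = 8, alpha = 0.5 and T̄ = 4 gives rates [5, 7, 9, 11]: later iterations are quantized more finely because their errors are discounted less in the weighted sum, while the total budget T̄L = 32 bits is preserved.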
Given the per-stage sum quantization rate L(t), each Per-stage TICOQ Design Problem in Problem 3 is the same as the TICOQ design problem in Section IV, and hence, the approaches in Section IV can be applied to solve Problem 3 for given L(t) (0 ≤ t ≤T − 1), including both the SQ and VQ cases. In the following, we shall mainly discuss the Sum Quantization Rate {L(t) : 0 ≤ t ≤T − 1} Allocation in Problem 4 and analyze the tradeoff between convergence error and message passing overhead for the TVCOQ design. A. Time Varying Convergence-Optimal Scalar and Vector Quantizers Similar to the TICOQ in Problem 1, each TVCOQ subproblem in Problem 3 is a Nonlinear Integer Programming (NLIP) problem. A brute-force solution to the TVCOQ master problem in Problem 4 requires exhaustive search, which is not acceptable. Therefore, we first apply continuous relaxation to the subproblems in Problem 3 to obtain the closed-form expression ẽ * t w block of the t-th subproblem. Based on the closed-form ẽ * t w block , the master problem in Problem 4 becomes tractable, and its solution is summarized in the following theorem. Theorem 6: (Solution to TVCOQ Master Problem for SQ and VQ) For any givenT , assume L ≥ L ′ − nT −1 2 log 2 α (L ∈ Z + ). LetL * (t) = n log 2 ( α −t ln 2 nµ ), where µ > 0 is the LM of the constraint (32) chosen to satisfy the constraint T −1 t=0 n log 2 ( α −t ln 2 nµ ) =T L. The optimal integer solution to the TVCOQ Master Problem in Problem 4 is given by L * ([t]) = ⌈L * ([t])⌉, if t ≤ T −1 t=0 (L * (t) − ⌊L * (t)⌋) ⌊L * ([t])⌋, otherwise(33) Proof: Please refer to Appendix G for the proof. Given the per-stage sum quantization rate {L * (t) : 0 ≤ t ≤T − 1} allocation obtained by Theorem 6, the TVCOQ Subproblems in Problem 3 (similar to the TICOQ design problem in Section IV) can be easily solved by Theorems 3, 4 and 5. B.
Tradeoff between Convergence Error and Message Passing Overhead In this subsection, we shall quantify the tradeoff between the error of the algorithm trajectory and the message passing overhead using TVCOQ. The following lemma summarizes the tradeoff results. Lemma 6: (Performance Tradeoff of the Scalar and Vector TVCOQ) For any givenT , the convergence error bound (at theT -th iteration) of the scalar and vector TVCOQ is given byẼ w block (T ) =T α T −1 2 O(2 − L n ) for L ≥ L ′ − nT −1 2 log 2 α (L ∈ Z + ), where L ′ is given by (24). Proof: Please refer to Appendix G for the proof. Remark 7: As L → ∞,Ẽ w block (T ) → 0, which reduces to the conventional convergence results with perfect message passing. On the other hand, for any fixed L, asT → ∞, we haveẼ w block (T ) → 0. Hence, using TVCOQ, one could achieve asymptotically zero convergence error even with finite L. VI. SIMULATION RESULTS AND DISCUSSIONS In this section, we shall evaluate the performance of the proposed time invariant convergence-optimal quantizer (TICOQ) and time varying convergence-optimal quantizer (TVCOQ) for the contracting iterations by simulations. We consider distributed precoding updates in a MIMO interference game with K transmitter and receiver pairs, where the transmit covariance matrix P k of the k-th transmitter is iteratively updated (and quantized) according to (15). In the simulation, we consider K = 2, 4, 8 noncooperative transmitter-receiver pairs with N = 2, 4 transmit/receive antennas. The distance from the k-th transmitter to the j-th receiver is denoted as d kj . The bandwidth is 10 MHz. The path loss exponent is γ = 3.5. Each element of the small-scale fading channel matrix is CN (0, 1) distributed. We compare the performance with two reference baselines. Baseline 1 (BL1) refers to the case with perfect message passing [5]. Baseline 2 (BL2) refers to the case with a uniform scalar quantizer. A. Performance of the TICOQ Fig. 2 (a) and Fig.
2 (b) illustrate the sum throughput and convergence error (w.r.t. the weighted block-maximum norm · w block ) versus the per-stage sum quantization rate of the 2-pair MIMO interference game under a fixed number of iterations. The sum throughput of all the schemes increases as L increases due to the decreasing convergence error. It can be observed that the proposed scalar and vector TICOQ have significant performance gains in sum throughput and convergence error compared with the commonly used uniform scalar quantizer. In addition, the vector TICOQ outperforms the scalar TICOQ in convergence performance at the cost of higher encoding and decoding complexity. Fig. 3 and Fig. 4 show the sum throughput and convergence error versus the instantaneous iteration time index of the MIMO interference game under a fixed per-stage sum quantization rate and total number of iterations with different K and N . It can be seen that in all cases, the proposed scalar and vector TICOQ have significant performance gains in sum throughput and convergence error compared with the commonly used uniform scalar quantizer. B. Performance of the TVCOQ From Fig. 2 (a) and Fig. 2 (b), we can observe that the proposed TVCOQ has significant performance gains in sum throughput and convergence error compared with the commonly used uniform scalar quantizer and the TICOQ. For example, the sum throughput of the TVCOQ is very close to that with perfect message passing. Fig. 3 and 4 illustrate the transient performance of the TVCOQ versus iteration index t. We observe that the performance of the TVCOQ improves as t increases. This is because the TVCOQ optimizes the quantization rate allocation over both the node domain and the time domain (over a horizon ofT iterations). As the message passing overhead L increases, the convergence error of all the proposed quantization schemes approaches 0 with order O(2 − L n ) under fixedT , which verifies the results in Lemmas 5 and 6.
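The O(2^(−L/n)) error floor can also be reproduced with a toy one-dimensional contraction: iterate x ← Q(αx + c) with an L-bit midpoint uniform quantizer and measure the steady-state distance to the unquantized fixed point x* = c/(1−α). This is a self-contained sketch, not the paper's MIMO simulation setup.

```python
import math

def simulate(alpha=0.5, c=1.0, L=8, iters=200):
    """Toy 1-D contraction x <- alpha*x + c whose update is quantized by an
    L-bit midpoint uniform quantizer on [0, 4]; returns the steady-state
    distance to the unquantized fixed point x* = c/(1-alpha).
    Self-contained illustration only."""
    step = 4.0 / 2 ** L
    def q(x):  # midpoint uniform quantizer on [0, 4]
        return (math.floor(x / step) + 0.5) * step
    x = 0.0
    for _ in range(iters):
        x = q(alpha * x + c)
    return abs(x - c / (1.0 - alpha))

# The steady-state error floor shrinks roughly by half per extra bit,
# matching the O(2^{-L/n}) scaling (here n = 1).
```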
Similarly, the sum throughput of all the schemes increases as L increases due to the decreasing convergence error, as shown in Fig. 2 (a). C. Tradeoff between Convergence Error and Message Passing Overhead Fig. 5 (b) shows the tradeoff between convergence error and message passing overhead (in terms ofT at fixed L). As the total number of iterations increases, the convergence error of the TVCOQ decreases, while the convergence errors of the TICOQ and the uniform quantizer fail to decrease. This is because the TICOQ has a steady-state convergence error floor for any finite L (i.e.Ē w block (∞) = 1 1−α O(2 − L n ) > 0, shown in Lemma 5), while the convergence error of the TVCOQ goes to 0 asT goes to infinity (i.e.Ẽ w block (T ) =T α T −1 2 O(2 − L n ) → 0 asT → ∞, shown in Lemma 6). Similarly, as the total number of iterations increases, the sum throughput performance of the TVCOQ improves, but this is not the case for the TICOQ and Baseline 2, as shown in Fig. 5 (a). VII. SUMMARY In this paper, we study the convergence behavior of general iterative function evaluation algorithms with quantized message passing. We first obtain closed-form expressions of the convergence performance under quantized message passing among distributed nodes. To minimize the effect of the quantization error on the convergence, we propose the time invariant convergence-optimal quantizer (TICOQ) and the time varying convergence-optimal quantizer (TVCOQ). We found that the convergence error scales with the number of bits for quantized message passing in the order of 1 1−α O(2 − L n ) andT α T −1 2 O(2 − L n ) for the TICOQ and TVCOQ, respectively. Finally, we illustrate, using the MIMO interference game as an example, that the proposed designs achieve significant gains in convergence performance. APPENDIX APPENDIX A: PROOF OF LEMMA 1 First, we prove conclusion (a).
By the update equation in (14), the triangle inequality of norm and the property of the contraction mapping, we have x(t) − x * = T x(t − 1) + e(t − 1) − x * ≤ T x(t − 1) − x * + e(t − 1) ≤ α x(t − 1) − x * + e(t − 1) = α T x(t − 2) + e(t − 2) − x * + e(t − 1) ≤ · · · ≤ α t x(0) − x * + E(t), where E(t) t l=1 α l−1 e(t − l) (1) = l−1 l ′ =0 α t−l ′ −1 e(l ′ ) (2) = α t−1 t−1 l=0 α −l e(l) ,(1) is obtained by denoting l ′ = t − l, and (2) is obtained by denoting l = l ′ . (b) is trivial. Finally, we prove conclusion (c). Since e(t) ≤ ē ∀t, we have E(t) = t l=1 α l−1 e(t − l) ≤Ē(t) t l=1 α l−1 ē = 1−α t 1−α ē andĒ(∞) lim t→∞Ē (t) = ē 1−α . Given the limiting error boundĒ(∞), we know that ∃T s.t. ∀t > T, x(t) ∈ S. If x = Q T(x) ∀x ∈ S, then x(t) → x(∞) ∈ S. Thus, we obtain the sufficient condition for convergence. On the other hand, if x = Q T(x) ∀x ∈ S, x(t) will not converge, but jumps among (at least two) points in S. Thus, we obtain the necessary condition for convergence. APPENDIX B: PROOF OF LEMMA 2 Jacobi scheme in (16) shares the similar form as (14). Therefore, the proof of Jacobi scheme is the same as that in Appendix A, except based on weighted blockmaximum norm. Next, we prove the convergence for the Gauss-Seidel scheme under quantized message passing. Letx k = x for k = 1 andx k = Ŝ 1 (x) + e 1 , · · · ,Ŝ k−1 (x) + e k−1 , x k , · · · , x K for 2 ≤ k ≤ K. 
By the definition of weighted block-maximum norm and the property of block-contraction T, we have Ŝ k (x) − x * k k w k = T k (x k ) − T k (x * ) k w k ≤ T(x k ) − T(x * ) w block ≤ α x k − x * w block        ≤ α x − x * w block , k = 1 = α max max j<k Ŝ j (x)+ej −x * j j wj , max j≥k xj−x * j j wj , 2 ≤ k ≤ K(34) When k = 2, by (34), we have Ŝ 2 (x) − x * 2 2 w 2 ≤α max Ŝ 1 (x) + e 1 − x * 1 1 w 1 , max j≥2 x j − x * j j w j ≤α max Ŝ 1 (x) − x * 1 1 w 1 + e 1 1 w 1 , max j≥2 x j − x * j j w j ≤α x − x * w block + α e w block by iteration ⇒ Ŝ k (x) − x * k k w k ≤ α x − x * w block + k−1 l=1 α l e w block , ∀k ⇒ Ŝ (x) − x * w block ≤ α x − x * w block + α(1 − α K−1 ) 1 − α e w block ⇒ x(t) − x * w block = Ŝ x(t − 1) + e(t − 1) − x * w block ≤α t x(0) − x * w block + E w block (t) where E w block (t) 1−α K 1−α α t−1 t−1 l=0 α −l e(l) w block . Since we have shown x(t) − x * w block ≤ α t x(0) − x * w block + E w block (t) , which is the same as the conclusion in Lemma 1 (a) except for the different norm · w block and the extra scalar 1−α K 1−α > 1 (indicating the additional error due to the incremental nature of the Gauss-Seidel update) in E w block (t), we can follow the similar steps in Appendix A to obtain the conclusion for Gauss-Seidel scheme. APPENDIX C: PROOF OF LEMMA 3 First, we shall show that · w block is monotone (absolute). Denote |x| (|x 1 |, · · · , |x n |). We say that |x| ≤ |y| if |x m | ≤ |y m | ∀m. Due to the monotonicity of · k (∀k), we have |x| ≤ |y| ⇔ |x k | ≤ |y k | (∀k) ⇒ x k k ≤ y k k (∀k) ⇒ max k x k k w k ≤ max k y k k w k ⇔ x w block ≤ y w block . Thus, · w block is monotone (absolute). Next, we shall show that each coordinate scalar TICOQ is uniform quantizer. Given any L s s.t. (20) is satisfied, by the monotonicity of · w block , we can easily prove min Q s (L s ) ē w block ⇔ min Q s m (L s m )ēm (∀m). 
In other words, given any L s m , the coordinate scalar TICOQ Q s * m (L s m ) should minimize the worst-case error |ē m | for the m-th coordinate of the input vector. Since the uniform quantizer minimizes the worst-case error regardless of the shape of the input pdf [20], each coordinate scalar TICOQ Q s * m is a uniform quantizer. APPENDIX D: PROOF OF THEOREM 3 AND THEOREM 4 When · k is weighted maximum norm, the objective function becomes ē w block = max k ē k k w k = max k maxm∈M k |ēm| am w k = max m |ēm| am( K k=1 w k I[m∈M k ]) = max m C m 2 −L s m . Therefore, we have min Q s ē w block = min L s max m C m 2 −L s m . By continuous relaxation and equivalent transformation of minimax problems [25], the minimax problem in Problem 1 is equivalent to the following problem (under continuous relaxation), which is in epigraph form with optimization variables {L s m }, τ : (P s ) : min {L s m },τ τ s.t. C m 2 −L s m ≤ τ (1 ≤ m ≤ n) (35) n m=1L s m = L,L s m ≥ 0(1 ≤ m ≤ n)(36) (P s ) is a convex optimization problem. It can be easily shown that the Slater's condition holds. Therefore, we shall get the optimal solution to the relaxed problem through KKT conditions. The Lagrangian of (P s ) is given by L s (L s , τ, λ s , ν s , µ s ) = τ + Cm τ ) + = L. Furthermore, substituting the relaxed solutionL s * m into the transformed problem (P s ), the optimal value of (P s ) is given by τ and this is also the optimal value ē * w block of the original optimization Problem 1 (under continuous relaxation) due to the equivalence of the epigraph transformation. Next, we are trying to prove that the rounding strategy in (22) in Theorem 3 is the optimal integer solution of Problem 1. Suppose we round L s * m to ⌊L s * m ⌋ and let δ s m =L s * m − ⌊L s * m ⌋. Denote b m = C m 2 −⌊L s * m ⌋ = C m 2 −(L s * m −δ s m ) , i.e. b m = C m if C m ≤ τ and b m = 2 δ s m τ otherwise. The value of the objective function in Problem 1 is max m b m . 
ReducingL s * m by N + δ s m (N ≥ 1, N ∈ Z + ) ∀m for integer solution will lead to the value of objective function greater than max m b m . On the other hand, since the optimal value of the original integer programming problem is greater than the optimal value under continuous relaxation, increasingL s * m by N − δ s m (N > 1, N ∈ Z + ) ∀m will not help further reducing the value of the objective function. Therefore, the optimal integer solution {L s * m } satisfies δ s m − 1 ≤L s * m − L s * m ≤ δ s m and the rounding strategy in (22) is the optimal integer solution of Problem 1. When · k is L p norm, the objective function becomes ē w block = max k ē k k w k = max k ( m∈M k |ēm| p ) 1 p w k = max k ( m∈M k |ēm| p K k=1 w p k I[m∈M k ] ) 1 p = max k ( m∈M k C m 2 −pL s m ) 1 p . There- fore, we have min Q s ē w block = min L s max k ( m∈M k C m 2 −pL s m ) 1 p . Using similar continuous relaxation and equivalent transformation of minimax problems [25], the minimax problem in Problem 1 is equivalent (under continuous relaxation) to the following problem, which is in epigraph form with optimization variables {L s m }, τ : (Q s ) : min L s m ,τ τ s.t. m∈M k C m 2 −pL s m ≤ τ (1 ≤ k ≤ K) (37) constraint in(36) (Q s ) is a convex optimization problem. Using similar argument as in the weighted maximum norm case, the Lagrangian of (Q s ) is given by L s (L s , τ, λ s , ν s , µ s ) = τ + K k=1 λ s k ( m∈M k C m 2 −pL s m −τ )− n m=1 ν s mL s m + µ s ( n m=1L s m − L), where λ s , ν s , µ s are the LMs. Using standard KKT conditions, the optimal solution of (Q s ) isL s * m = 1 p log 2 p ln(2)( K k=1 λ s k 1[m∈M k ])Cm µ s ∨ 1 and m∈M k C m 2 −pL s m = m∈M k µ s p ln(2)λ s k ∧ C m = τ if λ s k > 0, x ∨ a max{x,m∈M k (C m ∧ τ k ) − τ = 0 (∀k). 
Finally, substitut-ingL s * m into the transformed problem (Q s ), the optimal value of (Q s ) is given by τ 1 p and this is also the optimal value ē * w block of the original optimization Problem 1 (under continuous relaxation) due to the equivalence of the epigraph transformation. APPENDIX E: PROOF OF LEMMA 4 AND 5 Since x k k ≤ y k k (∀k) ⇒ x w block ≤ y w block , given any L v s.t. (21) is satisfied, we have min Q v (L v ) ē w block ⇔ min Q v k (L v k ) ē k k (∀k). The type of each component vector TICOQ Q v * k is uniquely determined by the norm · k on R n k . Furthermore, it is easy to prove that Q v k should be a lattice quantizer to minimize the worst-case error ē k k . We shall discuss the two cases for L 2 norm and weighted maximum norm separately. • · k (1 ≤ k ≤ K) is L 2 norm: The covering problem asks for the thinnest covering of R n k dimensional space with overlapping spheres, i.e. minimizes covering radius (circumradius of the Voronoi cell) ρ k = ē k k = m∈M k |ē m | 2 1 2 [24]. Therefore, each component vector TICOQ Q v * k minimizing the worst-case error is the thinnest lattice for the covering problem. • · k (1 ≤ k ≤ K) is weighted maximum norm: Given any L v k , we have min Q v k (L v k ) ē k a k ∞ = min Q v k (L v k ) max m∈M k |ēm| am . It can be easily shown that each face of the Voronoi cell for the weighted maximum norm is (n k −1)dimensional hyperplane parallel to a coordinate axis in the n k dimensional space. Therefore, it is equivalent to the scalar quantization of each coordinate x m of the input block component x k with different scalar quantizers, i.e. Q v * k = (Q s * m ) m∈M k . Next, we shall show the optimal solution for the dual lattice quantizer. For A * n k , the covering radius is R n k = n k (n k +2) 12(n k +1) , the volume of the fundamental region is 1 n k +1 . The volume of bounded region X k is V B k = m∈M k |X m |. 
Therefore, the worst case error of the vector TICOQ with quantization rate L v k is given by ē k k = V B k 2 L v k 1 n k +1 1 n k R n k = m∈M k |Xm| 1 n k +1 1 n k n k (n k +2) 12(n k +1) 2 − L v k n k 13 . The covering radius measured by L p norm (p > 2) can be proved to be less than R n k = n k (n k +2) 12(n k +1) , which is the covering radius measured by L 2 norm. Thus, the worst case error is also less than ē k k given above. Therefore, in general, we can apply A * n quantizer for VQ case when · k is L p norm (p ≥ 2) and consider TICOQ design for VQ cased based on A * n quantizer. Problem 1 for A * n quantizer is equivalent to min L v max k D k 2 − L v k n k s.t. (21). Similar to Appendix D, the optimal solution under continuous relaxation is L v * k = n k (log 2 D k τ ) + , where τ is a constant related to the LM and is chosen to satisfy n m=1 n k (log 2 D k τ ) + = L. For n 1 = · · · = n K , we can use similar argument as in Appendix D to show that the rounding method in (24) is optimal integer solution to Problem 1 (VQ case). APPENDIX F: PROOF OF LEMMA 5 We try to find L ′ in the following three cases s.t. when L ≥ L ′ , we have C m ≥ τ (∀m), C m ≥ K k=1 τ k I[m ∈ M k ] (∀m) and D k ≥ τ (∀k), separately (to obtain the closed-form optimal value ē * w block ). Specifically, we have • SQ ( · k is weighted maximum norm) in Theorem 3 (same as VQ ( · k is weighted maximum norm)): C m ≥ τ (∀m) ⇔L s m = (log 2 First, we shall find the requirement for L s.t. each subproblem (under continuous relaxation) has closedform ẽ * t L(t) w block (to obtain the closed-form objective function of the TVCOQ master problem). By Appendix F, to obtain closed-form ẽ * t L(t) w block , we require L(t) ≥ L ′ (0 ≤ t ≤T − 1). Under this assumption, we have ẽ * t L(t) w block = η2 − L(t) n , where . By continuous relaxation and standard convex optimization techniques (similar to Appendix D), we have the optimal solution (under continuous relaxation)L * (t) = n log 2 ( α −t ln 2 nµ ) + . 
Since L(t) ≥ L ′ (∀t),L * (t) = n log 2 ( α −t ln 2 nµ ). log 2 (ln 2) − t log 2 (α) − log 2 (nµ) = nt log 2 (ln 2) − n (T −1)T 2 log 2 (α) − nt log 2 (nµ) =T L ⇒L(t) = nT −1 2 log 2 (α) − nl log 2 (α) + L. Since L(t) increases with t, to satisfy L(t) ≥ L ′ (∀t), we require L(0) = nT −1 2 log 2 (α) + L ≥ L ′ ⇒ L ≥ L ′ − nT −1 2 log 2 α. Therefore, when L ≥ L ′ −nT −1 2 log 2 α, we haveL * (t) = n log 2 ( α −t ln 2 nµ ) and α −t ẽ * t (L * (t)) w block = η · α −t nµ α −t ln 2 = η · nµ ln 2 . Similar to Appendix D, the rounding policy in (33) can be shown to be optimal. Next, we shall analyze the tradeoff between convergence error and message passing overhead for L ≥ L ′ − nT −1 2 log 2 α (L ∈ Z + ). Since it has been shown thatL(t) = nT −1 2 log 2 (α) − nl log 2 (α) + L, we have ηαT −1 T −1 t=0 α −t 2 − L(t) n = ηαT −1 T −1 t=0 α −t α −T −1 2 · α t · 2 − L n =T αT −1 2 O(2 − L n ) ⇒Ẽ w block (T ) = T αT −1 2 O(2 − L n ). Fig. 1 . 1Illustration of K-pair MIMO interference game. m = L, L s m ∈ Z + (1 ≤ m ≤ n) SQ(20) or K k=1 v k . The LM µ s or µ v can be regarded as the per-iteration cost sensitivity. Remark 4 (Robust Consideration in Problem 1): The optimization objective ē w block in 10 For example, consider K = 2, n = 4, M 1 = {1, 2} and M 2 = {3, 4} . We have 2 2 − 1 = 3 valid cases. Case 1 ( τ 1 ≤ max m∈M 1 Cm and τ 2 ≤ max m∈M 2 Cm ) = L, m∈M k (Cm ∧ τ k ) = τ (∀k ∈ {1, 2}), and 3 unknowns: {τ 1 , τ 2 },τ . Case 2 ( τ 1 ≤ max m∈M 1 Cm and τ 2 > max m∈M 2 Cm ) L, m∈M 1 (Cm ∧ τ 1 ) = τ , and 2 unknowns: τ 1 , τ . Case 3 ( τ 1 > max m∈M 1 Cm and τ 2 ≤ max m∈M 2 Cm ) We have 2 equations: 30) and the master problem (per-stage sum quantization rate {L(t) : 0 ≤ t ≤T − 1} allocation among stages), which is given by Problem 4: (TVCOQ Master Problem: Sum Quantization Rate Allocation Problem) min {L(t):0≤t≤T −1} Fig. 2 . 
Sum throughput/convergence error versus per-stage sum quantization rate L (bits) of the 2-pair MIMO interference game with 2 transmit and receive antennas, d 11 = d 22 = 100 m, d 12 = 200 m, d 21 = 500 m, path loss exponent γ = 3.5, and transmit power of P 1 = P 2 = 10 dBm. The total number of iterations isT = 4 and the per-stage quantization rate per antenna is L K×N 2 = L 8 . In (b), the "*", "o", etc. represent the simulation results of the proposed TICOQ and TVCOQ, while the dashed line represents the analytical expression O(2 − L n ) (TICOQ) andT α T −1 2 O(2 − L n ) at fixedT = 4 (TVCOQ). Fig. 3. Sum throughput/convergence error versus instantaneous iteration index t of the 4-pair MIMO interference game with 4 transmit and receive antennas, d ii = 100 m, d ij = 200 m (i < j), d ij = 500 m (i > j), path loss exponent γ = 3.5, and transmit power of P 1 = P 2 = 5 dBm. The per-stage sum quantization rate is L = 64 bits (i.e. the per-stage quantization rate per antenna is L K×N 2 = L 64 = 1 bit), and the total number of iterations isT = 7. Fig. 4. Sum throughput/convergence error versus instantaneous iteration index t of the 8-pair MIMO interference game with 2 transmit and receive antennas, d ii = 100 m, d ij = 200 m (i < j), d ij = 500 m (i > j), path loss exponent γ = 3.5, and transmit power of P 1 = P 2 = 5 dBm. The per-stage sum quantization rate is L = 32 bits (i.e. the per-stage quantization rate per antenna is L K×N 2 = L 32 = 1 bit), and the total number of iterations isT = 6. Fig. 2 (b) illustrates the tradeoff between convergence error and message passing overhead (in terms of L at a fixed total number of iterationsT ).
the per-stage quantization rate per antenna is L K×N 2 = L 8 = 1 bit). In (b), the "*", "o", etc represent the simulation results of the proposed TICOQ and TVCOQ, while the dashed line represents the analytical expression O(2 − L n ) (TICOQ) andT αT −1 2 O(2 − L n ), withT ≥ 1 (TVCOQ) at fixed L = 8 bits. L), where λ s , ν s , µ s are the Lagrangian multipliers (LM).L s , τ , λ s , ν s , µ s are optimal iff they satisfy the following KKT conditions: (a) primal constraints: (36),(35); (b) dual constraints: λ s 0, ν s 0; (c) complementary slackness: λ s m (C m 2 m = log 2 ( ln(2)λ s m C m µ s ), τ = µ s ln(2)λ s m , µ s > 0 if ln(2)λ s m C m µ s ≤ 1 : λ s m = 0, ν s m = µ s ,L s * m = 0, µ s > 0 where LMs {λ s m }, µ s are chosen m∈M k C m 2 −pL s m = n k τ k = 2have C m ≥ τ (∀m) ⇔ min m C 2 C m − n log 2 (min m C m ) L ′ . • SQ ( · k is L p norm) in Theorem 4: C m ≥ K k=1 τ k I[m ∈ M k ] (∀m) 2 C m − n log 2 min mCm . • VQ (A * n quantizer) in Theorem 5: D k ≥ τ (∀k) k log 2 D k − L n = O(2 − L n ). Similarly, we have L ′ = K k=1 log 2 D k − K log 2 (min k D k ).APPENDIX G: PROOF OF THEOREM 6 AND LEMMA 6 k log 2 D k , dual lattice (VQ) TABLE I LIST IOF IMPORTANT NOTATIONS. a}, x ∧ a min{x, a}, where the LMs {λ s k }, µ s are chosen to satisfy Cm K k=1 τ k I[m∈M k ] ∨ 1), where {τ k } and τ are constants related to the LMs {λ s k }, µ s of the constraints (36),(37) in problem (Q s ). They are chosen to satisfyn m=1 1 p log 2 p ln(2)( K k=1 λ s k 1[m∈M k ])Cm µ s ∨ 1 = L and K k=1 λ s k = 1. Let τ k = µ s p ln(2)λ s k , then we havē L s * m = 1 p log 2 ( K k=1 m∈M k 1 p log 2 ( Cm τ k ∨ 1) = L and 1 τ k Due to page limit, we shall illustrate the design for synchronous updates in(4)and(5). However, the scheme can be extended to deal with totally asynchronous updates easily[7], which will be further illustrated later in footnote 5. Since in general, the norm of each block component may not be the same, subscript k is used in · k . 
Note that the boundary effect is ignored here. The performance loss is negligible when L is large, which is easily satisfied in most of the cases we are interested in.

REFERENCES
[1] D. P. Palomar and M. Chiang, "A tutorial on decomposition methods for network utility maximization," IEEE J. Select. Areas Commun., vol. 24, no. 8, pp. 1439-1451, Aug. 2006.
[2] D. P. Palomar and M. Chiang, "Alternative decompositions and distributed algorithms for network utility maximization," in IEEE Global Telecommunications Conference (GLOBECOM), vol. 5, St. Louis, Missouri, Nov. 2005, pp. 2563-2568.
[3] W. Yu, G. Ginis, and J. M. Cioffi, "Distributed multiuser power control for digital subscriber lines," IEEE J. Select. Areas Commun., vol. 20, no. 5, June 2002.
[4] G. Scutari, D. P. Palomar, and S. Barbarossa, "Distributed totally asynchronous iterative waterfilling for wideband interference channel with time/frequency offset," in ICASSP, 2007, pp. 4177-4184.
[5] G. Scutari, D. P. Palomar, and S. Barbarossa, "Competitive design of multiuser MIMO systems based on game theory: A unified view," IEEE J. Select. Areas Commun., vol. 26, no. 7, Sept. 2008.
[6] G. Scutari, D. P. Palomar, and S. Barbarossa, "The MIMO iterative waterfilling," IEEE Trans. Signal Processing, vol. 57, no. 5, May 2009.
[7] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 1st ed. Prentice-Hall, 1989.
[8] R. P. Agarwal, M. Meehan, and D. O'Regan, Fixed Point Theory and Application. Cambridge University Press, 2001.
[9] A. Kashyap, T. Basar, and R. Srikant, "Quantized consensus," in IEEE Int. Symp. Inform. Theory (ISIT), Seattle, July 2006.
[10] T. Aysal, M. Coates, and M. G. Rabbat, "Distributed average consensus using probabilistic quantization," in Proc. IEEE Statistical Signal Processing Workshop, Madison, Aug. 2007.
[11] R. Carli, F. Fagnani, A. Speranzon, and S. Zampieri, "Communication constraints in the average consensus problem," Automatica, vol. 44, no. 3, pp. 671-684, 2008.
[12] S. Kar and J. M. F. Moura, "Distributed consensus algorithms in sensor networks: Quantized data and random link failures," IEEE Trans. Signal Processing, vol. 58, no. 3, pp. 1383-1400, Mar. 2010.
[13] A. Nedic, A. Olshevsky, A. Ozdaglar, and J. Tsitsiklis, "On distributed averaging algorithms with quantization effects," IEEE Trans. Automat. Contr., vol. 54, no. 11, pp. 2506-2517, 2009.
[14] M. Yildiz and A. Scaglione, "Differential nested lattice encoding for consensus problems," in Proc. of IEEE Information Processing in Sensor Networks (IPSN), Apr. 2007.
[15] M. G. Rabbat and R. D. Nowak, "Quantized incremental algorithms for distributed optimization," IEEE J. Select. Areas Commun., vol. 23, no. 4, pp. 798-808, Apr. 2005.
[16] A. Nedic and A. Ozdaglar, "On the rate of convergence of distributed subgradient methods for multi-agent optimization," in Proc. of the 47th CDC Conference, 2007.
[17] A. Nedic, A. Olshevsky, A. Ozdaglar, and J. Tsitsiklis, "Distributed subgradient methods and quantization effects," in Proc. of the 47th CDC Conference, 2008, pp. 4177-4184.
[18] A. Nedic and D. P. Bertsekas, "The effect of deterministic noise in subgradient methods," in Mathematical Programming, 2009.
[19] R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Trans. Inform. Theory, vol. 44, no. 6, Oct. 1998.
[20] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, 1st ed. Springer, 1992.
A Geary, D P Bertsekas, Proceedings of the 38th IEEE Conference on Decision and Control. the 38th IEEE Conference on Decision and Control1A. Geary and D. P. Bertsekas, "Incremental subgradient methods for nondifferentiable optimization," in Proceedings of the 38th IEEE Conference on Decision and Control, vol. 1, 1999, pp. 907- 912. Wireless resource allocation: Auctions, games and optimization. J Huang, Northwestern UniversityPh.D. dissertationJ. Huang, "Wireless resource allocation: Auctions, games and optimization," Ph.D. dissertation, Northwestern University, 2005. R A Horn, C R Johnson, Matrix Analysis. Cambrige University PressR. A. Horn and C. R. Johnson, Matrix Analysis. Cambrige University Press, 1986. J H Conway, N J A Sloane, Sphere Packings, Lattices and Groups. Springer3rd ed.J. H. Conway and N. J.A. Sloane, Sphere Packings, Lattices and Groups, 3rd ed. Springer, 1999. S Boyd, L Vandenberghe, Convex Optimization. Cambrige UK. Cambrige Univ. PressS. Boyd and L. Vandenberghe, Convex Optimization. Cambrige UK: Cambrige Univ. Press. Her current research interests include cooperative and cognitive communications, delaysensitive cross-layer scheduling as well as stochastic approximation and Markov Decision Process. Ying Cui received B.Eng degree (first class honor) in Electronic and Information Engineering, Xian Jiaotong UniversityShe is currently a Ph.D candidate in the Department of ECE, the Hong Kong University of Science and Technology (HKUST)Ying Cui received B.Eng degree (first class honor) in Electronic and Information Engi- neering, Xian Jiaotong University, China in 2007. She is currently a Ph.D candidate in the Department of ECE, the Hong Kong Uni- versity of Science and Technology (HKUST). Her current research interests include coop- erative and cognitive communications, delay- sensitive cross-layer scheduling as well as stochastic approximation and Markov Deci- sion Process.
[]
[ "Spin dynamics of the block orbital-selective Mott phase", "Spin dynamics of the block orbital-selective Mott phase" ]
[ "J Herbrych " ]
[]
[]
In Supplementary Fig. 1 we present the parameter dependence of our dynamical DMRG calculations for a fixed frequency ω = 0.03 [eV] (namely, "inside" the acoustic mode) and L = 16 sites (48 orbitals). In panel (a) we present the broadening η dependence of our calculations [Eq. (4) of the main text]. It is clear from the figure that all features are properly resolved for the considered η/δω = 2. In Supplementary Fig. 1(b) we present the number of states kept M dependence of our findings. We conclude that at a fixed η and L, the results do not change appreciably for M ≳ 800. [Supplementary Figure 1. Parameter dependence of dynamical-DMRG simulations. (a) Broadening η and (b) number of states kept M dependence corresponding to ω = 0.03 [eV] and L = 16 sites. In all simulations of the main text we use η/δω = 2 and M = 800.] In Supplementary Fig. 2(a-d) we present the finite-size analysis at several momentum cuts q through the dynamical SSF. At large q/π ≥ 3/4, the results do not depend on the system size L, because for these momenta only the optical mode is present in the spectrum. Since the excitations within this mode are local, the system size (and also the dimensionality of the lattice) should not play a crucial role. On the other hand, at q ≤ π/2 the results depend more on the system size, with maximal variation at q/π = 1/2. However, this dependence does not change the main findings of our work and merely reflects the quasi-long-range nature of the block ordering [1]. This can be understood simply from the L-scaling of the static S(q = π/2) shown in the inset of Supplementary Fig. 2(e). For completeness, in Supplementary Fig. 2(e) we show the L dependence of the full momentum-resolved static SSF. Let us finally comment on the accuracy of our results for the multi-orbital ladder geometry. Different from the chain setup, where the three orbitals were treated as a single site with a local Hilbert space of 64 states, the ladder results were obtained using a 12 × 2 × 2 (rungs × legs × orbitals) lattice with a local Hilbert space of 4 states. Although such a setup has smaller memory requirements, the entanglement area law [5] heavily influences the accuracy of our results. The latter is a consequence of the large number of long-range connections (up to 7 nearest neighbours). In Supplementary Fig. 3, we present the system size L and number of states kept M scaling of the results presented in Fig. 7 of the main text. In panel (a) we present the finite-size analysis of the static SSF in the bonding sector, q_y/π = 0, for M = 1000 states kept. The system-size analysis of the ladder results is consistent with the findings for chains, namely the acoustic mode has a size dependence, while the optical mode does not. In summary, while we are confident that our results for ladders capture the essence of the problem, including the existence of acoustic and optical bands and quite different weights in different portions of the Brillouin zone, only further (very demanding) work can achieve the same accuracy as shown here for chains.
10.1038/s41467-018-06181-6
null
51,790,255
1804.01959
46d850daf030828b685e93738b82333338b802c9
Spin dynamics of the block orbital-selective Mott phase
J. Herbrych

SUPPLEMENTARY INFORMATION

Supplementary Note 1. Numerical details

In Supplementary Fig. 1 we present the parameter dependence of our dynamical DMRG calculations for a fixed frequency ω = 0.03 [eV] (namely, "inside" the acoustic mode) and L = 16 sites (48 orbitals). In panel (a) we present the broadening η dependence of our calculations [Eq. (4) of the main text]. It is clear from the figure that all features are properly resolved for the considered η/δω = 2. In Supplementary Fig. 1(b) we present the number of states kept M dependence of our findings. We conclude that at a fixed η and L, the results do not change appreciably for M ≳ 800.

In Supplementary Fig. 2(a-d) we present the finite-size analysis at several momentum cuts q through the dynamical SSF. At large q/π ≥ 3/4, the results do not depend on the system size L, because for these momenta only the optical mode is present in the spectrum. Since the excitations within this mode are local, the system size (and also the dimensionality of the lattice) should not play a crucial role. On the other hand, at q ≤ π/2 the results depend more on the system size, with maximal variation at q/π = 1/2. However, this dependence does not change the main findings of our work and merely reflects the quasi-long-range nature of the block ordering [1]. This can be understood simply from the L-scaling of the static S(q = π/2) shown in the inset of Supplementary Fig. 2(e). For completeness, in Supplementary Fig. 2(e) we show the L dependence of the full momentum-resolved static SSF.

Let us finally comment on the accuracy of our results for the multi-orbital ladder geometry. Different from the chain setup, where the three orbitals were treated as a single site with a local Hilbert space of 64 states, the ladder results were obtained using a 12 × 2 × 2 (rungs × legs × orbitals) lattice with a local Hilbert space of 4 states. Although such a setup has smaller memory requirements, the entanglement area law [5] heavily influences the accuracy of our results. The latter is a consequence of the large number of long-range connections (up to 7 nearest neighbours). In Supplementary Fig. 3, we present the system size L and number of states kept M scaling of the results presented in Fig. 7 of the main text. In panel (a) we present the finite-size analysis of the static SSF in the bonding sector, q_y/π = 0, for M = 1000 states kept. The system-size analysis of the ladder results is consistent with the findings for chains, namely the acoustic mode has a size dependence, while the optical mode does not.
In summary, while we are confident that our results for ladders capture the essence of the problem, including the existence of acoustic and optical bands and quite different weights in different portions of the Brillouin zone, only further (very demanding) work can achieve the same accuracy as shown here for chains.

In Supplementary Fig. 4 we present the evolution of the local magnetic moment ⟨S²⟩ within the block orbital-selective Mott phase. This local moment can be obtained from the sum rules of the spin-spin correlation functions, i.e.,

S(q) = (1/π) ∫ dω S(q, ω),    ⟨S²⟩ = (1/L) ∫ dq S(q).    (1)

Note that the above equations allow one to relate the total spectral weight of INS data to the value of the local spin via ⟨S²⟩ = S(S + 1). The results presented in Supplementary Fig. 4 are obtained from the integration of the static structure factor S(q). As clearly visible, the magnetic moments start to develop already in the paramagnetic (metallic) phase [1] and are stabilized at the maximal value of ⟨S²⟩ (S = 1 for n = 4/3) in the middle of the block-OSMP.

Although BaFe2Se3 is a quasi-1D compound, the finite-frequency (ω-dependent) properties should be dominated by the 1D nature of the ladder lattice (while, e.g., d.c. transport is more subtle). It is therefore appropriate to directly compare our dynamical SSF to experimental findings. Since the latter is obtained using a powder sample, our results presented in Fig. 2 of the main text have to be averaged over all spherical angles [2]. Furthermore, to qualitatively compare to the inelastic neutron scattering (INS) data we must incorporate in the analysis the momentum-dependent magnetic form factors F(Q) of the spin carriers, namely the Fe2+ ions. Here we assume a gyromagnetic ratio g = 2 (spin-only scattering). The functional form of the former can be taken from crystallography tables [3]. In Supplementary Fig. 5(a) we present the powder average of our spectra.
Several interesting general features can be inferred: (i) using realistic values [4] for the Fe-Fe distance, such as 2.7 Å, we remarkably obtain a nearly perfect agreement for the position of the acoustic mode. The leading INS signal is centered at Q ≃ 0.7 (1/Å), followed by peaks at 1.8 (1/Å) to 2.5 (1/Å) with smaller intensity [indicated by vertical red arrows in Supplementary Fig. 5(b)]. (ii) The neutron spectrum gives three flat (momentum-independent) bands of spin excitations: two of them are centered approximately at ω ∼ 0.1 eV (ω1 = 0.0889 eV and ω2 = 0.1082 eV, depicted as horizontal red arrows in Supplementary Fig. 5(b)), while the third one is positioned at ω3 = 0.198 eV. Our 1D results yield only one optical mode, centered at ω ≃ 0.105 eV, in accord with the most pronounced peak within the INS spectrum. This qualitative agreement indicates that our model is able to capture the nontrivial nature of the frustrated magnetism of BaFe2Se3, and that the studied parameter range of our Hamiltonian is valid for the whole 123 family.

Supplementary Figure 1. Parameter dependence of dynamical-DMRG simulations. (a) Broadening η and (b) number of states kept M dependence corresponding to ω = 0.03 [eV] and L = 16 sites. In all simulations of the main text we use η/δω = 2 and M = 800.

Supplementary Figure 2. Finite-size analysis. (a-d) Size L dependence of the frequency-resolved dynamical SSF for q/π = 1/4, 1/2, 3/4, 1, as calculated with η/δω = 2 and M = 800. (e) L-dependence of the static SSF. Open points represent the results obtained as the expectation value of the GS, while solid points are obtained from the integral over the frequency (see main text for details). The inset illustrates the quasi-long-range nature of the block π/2 ordering, with a signal intensity growing with L.

Supplementary Figure 3. Ladder geometry analysis. (a) Finite size L and (b) number of states kept M scaling of the static SSF in the bonding sector, q_y/π = 0.

Supplementary Note 2. Magnetic moment evolution.

Supplementary Figure 4. Magnetic moment. Evolution of the local magnetic moment ⟨S²⟩ within the block-OSMP. The solid line (lower x-axis) represents results for a fixed value of the interaction U/W = 0.8 and various values of JH/U. The dashed line (upper x-axis) represents results at fixed JH/U = 1/4 and for various values of U/W. The results were obtained using the DMRG method with parameters L = 16 (48 orbitals) and M = 800.

Supplementary Note 3. Comparison of DMRG results with powder experiment.

Supplementary Figure 5. Powder spectrum. (a) Spherical average of the dynamical SSF. The black solid line represents the magnetic form factor F(Q)² of the Fe2+ ions [3]. (b) Spherical average of the dynamical SSF convoluted with the form factor F(Q), relevant for a direct comparison with the BaFe2Se3 INS results [4]. Red arrows indicate the positions of maximum intensities in the INS spectrum. See text for details.

[1] J. Rincón, A. Moreo, G. Alvarez, and E. Dagotto, Exotic Magnetic Order in the Orbital-Selective Mott Regime of Multiorbital Systems, Phys. Rev. Lett. 112, 106405 (2014).
[2] K. Tomiyasu, M. Fujita, A. I. Kolesnikov, R. I. Bewley, M. J. Bull, and S. M. Bennington, Conversion method of powder inelastic scattering data for one-dimensional systems, Appl. Phys. Lett. 94, 092502 (2009).
[3] P. J. Brown, Magnetic Form Factors, Chapter 4.4.5, International Tables for Crystallography, Vol. C (A. J. C. Wilson, ed.), pp. 391-399.
[4] M. Mourigal, S. Wu, M. B. Stone, J. R. Neilson, J. M. Caron, T. M. McQueen, and C. L. Broholm, Block Magnetic Excitations in the Orbitally Selective Mott Insulator BaFe2Se3, Phys. Rev. Lett. 115, 047401 (2015).
[5] Strongly Correlated Systems - Numerical Methods, edited by A. Avella and F. Mancini (Springer Series in Solid-State Sciences 176, Berlin, 2013).
[]
[ "Graph Matching with Partially-Correct Seeds", "Graph Matching with Partially-Correct Seeds" ]
[ "Liren Yu ", "Jiaming Xu ", "Xiaojun Lin " ]
[]
[]
The graph matching problem aims to find the latent vertex correspondence between two edge-correlated graphs and has many practical applications. In this work, we study a version of the seeded graph matching problem, which assumes that a set of seeds, i.e., pre-mapped vertex-pairs, is given in advance. Most previous work on seeded graph matching requires all seeds to be correct. In contrast, we study the setting where the seeds are partially correct. Specifically, consider two correlated graphs whose edges are sampled independently with probability s from a parent Erdős-Rényi graph G(n, p). Furthermore, a mapping between the vertices of the two graphs is provided as seeds, of which an unknown β fraction is correct. This problem was first studied in [LS18], where an algorithm is proposed and shown to perfectly recover the correct vertex mapping with high probability if β ≥ max{(8/3)p, 16 log n/(nps²)}. We improve their condition to β ≥ max{30 log n/(n(1−p)²s²), 45 log n/(np(1−p)²s²)}, which is more relaxed when the parent Erdős-Rényi graph is dense, i.e., when p = Ω(√(log n/(ns²))). However, when p = O(√(log n/(ns²))), our improved condition still requires that β increase inversely proportionally to np. In order to improve the matching performance for sparse graphs, we propose a new algorithm that uses "witnesses" in the 2-hop neighborhood, instead of only the 1-hop neighborhood as in [LS18]. We show that when np² ≤ 1/(135 log n), our new algorithm can achieve perfect recovery with high probability if β ≥ max{√(900np³(1−s) log n/s), 600 log n/(ns⁴), 1200 log n/(n²p²s⁴)} and nps² ≥ 128 log n. Numerical experiments on both synthetic and real graphs corroborate our theoretical findings and show that our 2-hop algorithm significantly outperforms the 1-hop algorithm when the graphs are relatively sparse.
null
[ "https://arxiv.org/pdf/2004.03816v1.pdf" ]
215,416,228
2004.03816
66e1733d3275b287c8e3f43ca6b3e352258edca6
Graph Matching with Partially-Correct Seeds

8 Apr 2020

Liren Yu, Jiaming Xu, Xiaojun Lin

The graph matching problem aims to find the latent vertex correspondence between two edge-correlated graphs and has many practical applications. In this work, we study a version of the seeded graph matching problem, which assumes that a set of seeds, i.e., pre-mapped vertex-pairs, is given in advance. Most previous work on seeded graph matching requires all seeds to be correct. In contrast, we study the setting where the seeds are partially correct. Specifically, consider two correlated graphs whose edges are sampled independently with probability s from a parent Erdős-Rényi graph G(n, p). Furthermore, a mapping between the vertices of the two graphs is provided as seeds, of which an unknown β fraction is correct. This problem was first studied in [LS18], where an algorithm is proposed and shown to perfectly recover the correct vertex mapping with high probability if β ≥ max{(8/3)p, 16 log n/(nps²)}. We improve their condition to β ≥ max{30 log n/(n(1−p)²s²), 45 log n/(np(1−p)²s²)}, which is more relaxed when the parent Erdős-Rényi graph is dense, i.e., when p = Ω(√(log n/(ns²))). However, when p = O(√(log n/(ns²))), our improved condition still requires that β increase inversely proportionally to np. In order to improve the matching performance for sparse graphs, we propose a new algorithm that uses "witnesses" in the 2-hop neighborhood, instead of only the 1-hop neighborhood as in [LS18]. We show that when np² ≤ 1/(135 log n), our new algorithm can achieve perfect recovery with high probability if β ≥ max{√(900np³(1−s) log n/s), 600 log n/(ns⁴), 1200 log n/(n²p²s⁴)} and nps² ≥ 128 log n. Numerical experiments on both synthetic and real graphs corroborate our theoretical findings and show that our 2-hop algorithm significantly outperforms the 1-hop algorithm when the graphs are relatively sparse.

* L. Yu and X.
Lin are with

Introduction

Graph matching aims to find a bijective mapping between the vertex sets of two edge-correlated graphs so that their edge sets are maximally aligned. Graph matching is motivated by many practical applications, such as social network de-anonymization [NS09], computational biology [SXB08, KHGM16], computer vision [CFSV04, SS05], and natural language processing [HNM05]. For instance, from one anonymized version of the follow-relationships graph on the Twitter microblogging service, researchers were able to re-identify the users by matching the anonymized graph to a correlated cross-domain auxiliary graph, i.e., the contact-relationships graph on the Flickr photo-sharing service, where user identities are known [NS09].

Existing algorithms to find the correct matching can be divided into two categories, seedless and seeded matching algorithms. Seedless matching algorithms do not rely on any additional side information, but instead only use the topological information to match the two graphs. The quadratic assignment problem (QAP) [PRW94, BCPP98] is perhaps the most natural idea to be applied here. Let A and B denote the adjacency matrices of two correlated graphs with n vertices, respectively. The QAP can be formulated as the following optimization problem:

max_{Π∈S_n} ⟨A, ΠBΠ^T⟩,    (1)

where S_n is the set of permutation matrices in R^{n×n}, and ⟨·, ·⟩ denotes the matrix inner product. However, QAP is NP-hard to solve or to approximate within an approximation ratio that even grows with n [MMS10]. To reduce the computational complexity, one may rewrite the objective function (1) as min_{Π∈S_n} ||AΠ − ΠB||_F² (where ||·||_F is the Frobenius matrix norm), which is a quadratic function of Π, and relax the set S_n to its convex hull (the set of doubly stochastic matrices), arriving at a quadratic programming (QP) problem.
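For intuition, the QAP objective (1) can be maximized by brute force over all n! permutations; this is feasible only for toy sizes, which is exactly why the relaxations above are needed. A minimal sketch (an illustration, not part of the paper's method):

```python
import itertools

import numpy as np

def qap_brute_force(A, B):
    """Maximize <A, P B P^T> over all n x n permutation matrices P
    (objective (1)) by exhaustive enumeration of the n! permutations."""
    n = A.shape[0]
    best_val, best_perm = -np.inf, None
    for perm in itertools.permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]   # permutation matrix for `perm`
        val = np.sum(A * (P @ B @ P.T))        # matrix inner product <A, P B P^T>
        if val > best_val:
            best_val, best_perm = val, perm
    return best_val, best_perm

# Example: B is a relabeled copy of a 3-vertex path, so the optimum
# realigns all edges and the objective equals <A, A> = 4.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
Q = np.eye(3, dtype=int)[[2, 0, 1]]            # a known relabeling
B = Q @ A @ Q.T
val, perm = qap_brute_force(A, B)              # val == 4
```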
The experimental results in [ABK15, VCL+15, LFF+16, DML17] demonstrate the effectiveness of the QP relaxation, but its theoretical understanding remains elusive. Other seedless matching algorithms have been proposed based on degree information [DCKG18], spectral methods [Ume88, FQM+19, FMWX19a, FMWX19b], or random walks [GMS05]. However, to the best of our knowledge, these algorithms either only succeed when the fraction of edges that differ between the two graphs is low, i.e., on the order of O(1/log² n) [DMWX18], or require quasi-polynomial runtime (n^{O(log n)}) [BCL+18]. The only exception is the neighborhood tree matching algorithm recently proposed in [GM20], which can output a partially-correct matching in polynomial time when the two graphs are sparse and differ by a constant fraction of edges.

The other category of matching algorithms is seeded matching algorithms [PG11, YG13, KL14, LFP13, FAP18, SGE17, MX19]. These algorithms require "seeds", which are a set of pre-mapped vertex-pairs. Let G1 and G2 denote the two correlated graphs, respectively. For each pair of vertices (u, v), where u is in G1 and v is in G2, a seed (w, w′) becomes a witness for (u, v) if w is a neighbor of u in G1 and w′ is a neighbor of v in G2. The basic idea of seeded matching algorithms is that a candidate pair of vertices should have more witnesses if they are a true pair than if they are a fake pair. Assuming that the seeds are correct, seeded matching algorithms can find the correct matching for the remaining vertices more efficiently than seedless matching algorithms. In social network de-anonymization, such initially matched seeds are often available, thanks to users who have explicitly linked their accounts across different social networks. For other applications, the seeds can be matched by prior knowledge or manual labeling. However, most existing seeded matching algorithms require all seeds to be correct, which may be difficult to guarantee in practice.
The authors of [LS18] extend the idea of seeded matching algorithms to allow incorrect seeds. For example, the seeds may be provided by seedless matching algorithms, which will produce some incorrect seeds. The seeded matching algorithm in [LS18] first counts the number of witnesses for each candidate pair of vertices, and then uses Greedy Maximum Weight Matching (GMWM) to find a vertex correspondence between the two graphs such that the total number of witnesses is large. When the two graphs are correlated Erdős-Rényi graphs, whose edges are independently sub-sampled with probability s from a parent Erdős-Rényi graph G(n, p), and a β fraction of the seeds are correct, [LS18] shows the proposed algorithm can correctly match all vertices with high probability if

β ≥ max{ (8/3)p, 16 log n / (nps²) }.    (2)

However, the performance guarantee in (2) is somewhat conservative for the following two reasons. First, as p increases, i.e., the parent graph gets denser, the number of witnesses increases. Thus, we expect that the graphs should be easier to match, and hence it seems unnecessary to require β ≥ (8/3)p in (2). This observation thus raises the first question: Can we get a sufficient condition on β tighter than condition (2) for the success of the seeded matching algorithm of [LS18] on dense graphs?

Second, condition (2) suggests that β must increase inversely proportionally to np. Thus, if the graph is sparse, i.e., p = Θ(log n/(ns²)), β would be required to be constant, which might be hard to obtain in practice. Indeed, our numerical experiments show that the algorithm in [LS18] fails when β = o(log n/(nps²)). This is because their algorithm only uses witnesses that are in the 1-hop neighborhood. When p is low, i.e., the parent graph is sparse, there are only a small number of 1-hop neighbors for each vertex, which cannot provide sufficiently many 1-hop witnesses to distinguish the true pairs from the fake pairs.
This observation raises the second open question: Can we develop an efficient seeded matching algorithm that further relaxes the requirement on β in relatively sparse graphs?

In this paper, we provide new results that answer both questions affirmatively. For the first question, we provide a new analysis showing that the algorithm in [LS18] can correctly match all vertices with high probability if

β ≥ max{ 30 log n / (n(1 − p)²s²), 45 log n / (np(1 − p)²s²) }.    (3)

Note that the requirement for β to grow with p is removed, and the new condition (3) is more relaxed than condition (2) for dense graphs when p = Ω(√(log n/(ns²))). The key observation is that the analysis of [LS18] only considers the correct seeds and ignores the incorrect seeds when counting the number of witnesses for the true pairs. The reason that they ignore the incorrect seeds is that the events that the incorrect seeds become witnesses for a true pair depend on each other. Our analysis takes into account the incorrect seeds for the true pairs and carefully deals with the dependency using concentration inequalities for dependent random variables [Jan04].

For the second question, we propose a new algorithm that can exactly match all vertices with high probability if

β ≥ max{ √(900np³(1 − s) log n / s), 600 log n / (ns⁴), 1200 log n / (n²p²s⁴) }    (4)

and nps² ≥ 128 log n. Note that our condition (4) only requires β to increase inversely to n²p² when p = O((log n/(n³s⁴))^{1/4}), and is more relaxed than condition (3) for sparse graphs when p = O((log n/(n³s³(1 − s)))^{1/5}). A key idea of our algorithm is to match vertices by comparing the number of witnesses in the 2-hop neighborhood. In sparse graphs, most vertices have more 2-hop neighbors than 1-hop neighbors. Thus, compared to the algorithm in [LS18] that uses only 1-hop witnesses, our algorithm can leverage more witnesses to distinguish the true pairs from the fake pairs.
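As a quick numeric aid (not from the paper), the three thresholds on β can be evaluated side by side. The first term of condition (4) is read here as the square root of 900np³(1−s)log n/s; a value above 1 simply means the corresponding condition cannot be met for those parameters:

```python
import numpy as np

def beta_thresholds(n, p, s):
    """Numeric values of the seed-quality thresholds in conditions (2)-(4)."""
    ln = np.log(n)
    c2 = max((8 / 3) * p, 16 * ln / (n * p * s**2))            # condition (2)
    c3 = max(30 * ln / (n * (1 - p)**2 * s**2),
             45 * ln / (n * p * (1 - p)**2 * s**2))            # condition (3)
    c4 = max(np.sqrt(900 * n * p**3 * (1 - s) * ln / s),
             600 * ln / (n * s**4),
             1200 * ln / (n**2 * p**2 * s**4))                 # condition (4)
    return c2, c3, c4

c2, c3, c4 = beta_thresholds(n=10**6, p=10**-3, s=0.9)
```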
The idea of using multi-hop neighborhoods to match vertices is used and analyzed previously when all seeds are correct [MX19]. In comparison, our analysis with incorrect seeds is significantly more challenging, as we need to take care of the dependency on the size of the 1-hop neighborhood and the dependency between incorrect seeds. Finally, using numerical experiments on both synthetic random graphs and real graphs, we show that our algorithm achieves better performance than the algorithm in [LS18] on sparse graphs, which agrees with our theoretical analysis. The rest of the paper is organized as follows: In Section 2, we formally introduce the correlated Erdős-Rényi model and the problem statement. Section 3 describes the matching algorithms, presents our theoretical guarantees, and highlights the analysis challenges. Section 4 contains empirical evaluations of the various algorithms on both synthetic and real graphs. Section 5 concludes the paper with remarks on future directions. Additional numerical experiments and full proof are deferred to appendices. Model In this section, we formally introduce the model and the graph matching problem with partiallycorrect seeds. We let G(V, E) denote the parent graph with n vertices, where V is the set of vertices and E is the set of edges. The parent graph G(V, E) is generated from the Erdős-Rényi model G(n, p), i.e., we start with an empty graph on n vertices and connect any pair of two vertices independently with probability p. Then, we obtain a sub-sampled graph G 1 (V, E 1 ) by sampling each edge of G into E 1 independently with probability s. Repeat the same sub-sampling process independently and relabel the vertices according to an unknown permutation π : V → V to construct another sub-sampled graph G 2 (π(V ), E 2 ). Throughout the paper, we denote a vertex-pair by (u, π(v)), where u ∈ V and π(v) ∈ π(V ). 
For each vertex-pair (u, π(v)): if u = v, then (u, π(v)) is a true pair; if u ≠ v, then (u, π(v)) is a fake pair. The goal of graph matching is to recover this unknown permutation π based on G1 and G2.

Prior literature proposes various algorithms to recover π based on G1 and G2. The output of these graph matching algorithms can be interpreted as a set of partially correct seeds. Taking these partially correct seeds as input, we wish to efficiently correct all of the errors. However, it is difficult to perfectly model the correlation between the output of these algorithms and the graphs G1, G2. One way of getting around this issue is to treat these partially correct seeds as adversarially chosen and to design a matching algorithm that with high probability corrects all errors for all possible initial error patterns. However, the existing theoretical guarantees in this adversarial setting are rather pessimistic, requiring that the fraction of incorrectly matched seeds be o(1) (cf. [BCL+18, Lemma 3.21] and [DMWX18, Lemma 5]).

In this paper, we adopt a mathematically more tractable model introduced by [LS18], where the partially correct seeds are assumed to be generated independently from the graphs G1, G2. More specifically, we use π̃ : V → V to denote the initial (seed) mapping and generate π̃ in the following way. For β ∈ [0, 1), we assume that π̃ is uniformly randomly chosen from all the permutations σ : V → V such that σ(u) = π(u) for exactly βn vertices. The benefit of this model is that π̃ is independent of the graph G and of the sampling processes that generate G1 and G2, and it is convenient for us to obtain theoretical results. Then, for each seed (u, π̃(u)), let π(v) be the underlying vertex matched to u, i.e., π̃(u) = π(v). If u = v, then (u, π̃(u)) is a correct seed; if u ≠ v, then (u, π̃(u)) is an incorrect seed. Thus, only a β fraction of the seeds are correct. Given G1, G2 and π̃, our goal is to find a mapping π̂ : V → V such that lim_{n→∞} P{π̂ = π} = 1.
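The sampling model above can be sketched in a few lines. For simplicity this sketch (an illustration, not the paper's code) takes the latent permutation π to be the identity, and produces the incorrect seeds by a cyclic shift of a random subset rather than by a uniformly random derangement:

```python
import numpy as np

def correlated_er_pair(n, p, s, beta, rng):
    """Sample G1, G2 by keeping each edge of a parent G(n, p) independently
    with probability s (twice), and build a seed map that agrees with the
    identity permutation on exactly floor(beta * n) vertices."""
    upper = np.triu(rng.random((n, n)) < p, 1)
    parent = upper | upper.T                        # symmetric parent graph
    keep1 = np.triu(rng.random((n, n)) < s, 1)
    keep2 = np.triu(rng.random((n, n)) < s, 1)
    A = parent & (keep1 | keep1.T)                  # adjacency matrix of G1
    B = parent & (keep2 | keep2.T)                  # adjacency matrix of G2
    seeds = np.arange(n)                            # start from the identity
    wrong = rng.choice(n, size=n - int(beta * n), replace=False)
    if len(wrong) > 1:
        seeds[wrong] = np.roll(seeds[wrong], 1)     # cyclic shift: no fixed points
    return A.astype(int), B.astype(int), seeds

rng = np.random.default_rng(0)
A, B, seeds = correlated_er_pair(n=200, p=0.1, s=0.9, beta=0.5, rng=rng)
# exactly 100 of the 200 seeds are correct here
```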
Main Results

Algorithm Description

In this section, we present the general class of algorithms, shown in Algorithm 1, that we use to recover π. As in [LS18], our algorithm uses the notion of "witnesses". However, unlike [LS18], our algorithm leverages witnesses that are j hops away. Given any graph G and two vertices u, v in G, we denote the length of the shortest path from u to v in G by d G (u, v). Then, for each vertex-pair (u, π(v)), the seed (w, π(w)) is a j-hop witness for (u, π(v)) if d G1 (u, w) = j and d G2 (π(v), π(w)) = j. The total number of witnesses of every vertex-pair can be computed efficiently as follows (see Algorithm 1). We define the j-hop adjacency matrix A_j ∈ {0, 1}^(n×n) of G1, which indicates whether a pair of vertices are j-hop neighbors in G1, i.e., A_j(u, v) = 1 if d G1 (u, v) = j, and 0 otherwise. Similarly, let B_j ∈ {0, 1}^(n×n) denote the j-hop adjacency matrix of G2. We equivalently express the seed mapping π by a permutation matrix Π ∈ {0, 1}^(n×n), where Π(u, v) = 1 if π(u) = v, and Π(u, v) = 0 otherwise. We can then count the number of j-hop witnesses for all vertex-pairs by computing W_j = A_j Π B_j, where the (u, v)-th entry of W_j equals the number of j-hop witnesses for the vertex-pair (u, π(v)). This step has computational complexity O(n^2.373) [Gal14]. As mentioned above, a true pair tends to have more witnesses than a fake pair, and thus we want to find the vertex correspondence between the two graphs that maximizes the total number of witnesses. In other words, consider the weighted bipartite graph G_m whose vertex set is V ∪ π(V), with an edge connecting every possible vertex-pair and the weight of an edge defined as w(u, π(v)) = W_j(u, π(v)); we want to find the maximum-weight matching in G_m. To reduce computational complexity, we approximate the maximum-weight matching by Greedy Maximum Weight Matching (GMWM), which has computational complexity O(n² log n).
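The witness-counting and greedy-matching steps just outlined can be sketched as follows; the BFS-based construction of the j-hop adjacency matrices and the function names are ours, and this sketch favors clarity over the fast matrix multiplication cited above.

```python
import numpy as np
from collections import deque

def jhop_adjacency(edges, n, j):
    """A_j[u, v] = 1 iff the shortest-path distance between u and v in
    the graph given by `edges` is exactly j (BFS from every vertex)."""
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    A = np.zeros((n, n), dtype=int)
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in nbrs[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v, d in dist.items():
            if d == j:
                A[src, v] = 1
    return A

def match_by_jhop_witnesses(e1, e2, seed_map, n, j):
    """Count j-hop witnesses via W_j = A_j @ Pi @ B_j, then greedily
    match the heaviest remaining vertex-pair (GMWM)."""
    Pi = np.zeros((n, n), dtype=int)
    for u, img in enumerate(seed_map):
        Pi[u, img] = 1
    W = jhop_adjacency(e1, n, j) @ Pi @ jhop_adjacency(e2, n, j)
    out, used_u, used_v = [None] * n, set(), set()
    for idx in np.argsort(-W, axis=None, kind='stable'):
        u, v = divmod(int(idx), n)
        if u not in used_u and v not in used_v:
            out[u] = v
            used_u.add(u)
            used_v.add(v)
    return out
```

The greedy pass over the sorted weights implements GMWM: each accepted pair removes its row and column from further consideration, exactly as described in the text.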
GMWM first chooses the vertex-pair with the largest weight among all candidate vertex-pairs in G_m, removes all edges adjacent to the chosen vertex-pair, then chooses the vertex-pair with the largest weight among the remaining candidate vertex-pairs, and so on. The total computational complexity of Algorithm 1 for any constant j is O(n^2.373).

Algorithm 1 Graph Matching based on Counting j-hop Witnesses.
1: Input: G1, G2, π, j
2: Generate the j-hop adjacency matrices A_j and B_j from G1 and G2
3: Generate Π from π
4: W_j = A_j Π B_j
5: π̂ = GMWM(W_j)
6: Return π̂

Results

In this section, we present the performance guarantees for the 1-hop (i.e., j = 1) and 2-hop (i.e., j = 2) algorithms. In [LS18], the authors show that if condition (2) holds, their 1-hop algorithm exactly recovers π with high probability. However, the analysis in [LS18] only accounts for the contribution of correct seeds when counting the 1-hop witnesses for true pairs. In this paper, we obtain a more relaxed bound on β by also considering the contribution of incorrect seeds.

Theorem 1. If condition (3) holds and n is sufficiently large, then Algorithm 1 with j = 1 outputs a permutation π̂ such that P { π̂ = π} ≥ 1 − n⁻¹.

Condition (3) is more relaxed than condition (2) for dense graphs with p = Ω(√(log n/(ns²))). However, when p = O(√(log n/(ns²))), condition (3) still requires β to grow inversely proportionally to np. As discussed above, this is because when the graph is sparse, there are not enough 1-hop witnesses among the true pairs. To reach improved performance for sparse graphs, we need to leverage witnesses in a larger neighborhood. Algorithm 1 with j = 2 utilizes the 2-hop witnesses. We show that it recovers π with high probability under a more relaxed condition for sparse graphs.

Theorem 2. Suppose that np² ≤ 1/135.
If condition (4) holds and n is sufficiently large, then Algorithm 1 with j = 2 outputs a permutation π̂ such that P { π̂ = π} ≥ 1 − n⁻¹.

Note that, for small p, condition (4) only needs β to grow inversely proportionally to n²p². To compare the conditions on β given in Theorem 1, Theorem 2 and [LS18], we plot them as functions of p when s is a fixed constant and p = o(1). The blue dashed curve depicts condition (2) in [LS18]. Its two segments correspond to: when p = O(√(log n/n)), log n/(np) dominates the right-hand side of condition (2); when p = Ω(√(log n/n)), p dominates the right-hand side of condition (2). The red curve depicts condition (3) in Theorem 1. Its two segments correspond to: when p = O(√(log n/n)), log n/(np) dominates the right-hand side of condition (3); when p = Ω(√(log n/n)), √(log n/n) dominates the right-hand side of condition (3). Clearly, while the two conditions are comparable for small p, condition (3) is more relaxed than condition (2) for dense graphs with p = Ω(√(log n/n)). The black curve depicts condition (4) in Theorem 2, which has three segments:
• When p = O((log n/n³)^(1/4)), log n/(n²p²) dominates the right-hand side of condition (4).
• When p = Ω((log n/n³)^(1/4)) and p = O(n^(−2/3)), √(log n/n) dominates the right-hand side of condition (4).
• When p = Ω(n^(−2/3)) and p = O(n^(−1/2)), there are two cases. The first case is when s is a fixed constant smaller than 1, in which case √(np³ log n) dominates the right-hand side of condition (4). The other case is when s = 1, in which case √(np³(1−s) log n/s) = 0 and √(log n/(ns⁴)) dominates the right-hand side of condition (4). This bifurcation arises because if s < 1, the two vertices of a true pair have different 2-hop neighborhoods, which makes it harder to distinguish the true pairs based on the number of 2-hop witnesses when p = Ω(n^(−2/3)).
Clearly, condition (4) is more relaxed than conditions (2) and (3) for sparse graphs, provided that p = O((log n/n³)^(1/5)) when s is a constant smaller than 1, and that p = O(n^(−1/2)) when s = 1.

Intuition and Analysis Challenges

In this section, we explain the intuition behind, and the analysis challenges of, Theorem 1 and Theorem 2. To understand the intuition behind Theorem 1 and why it improves on [LS18], recall that the 1-hop algorithm succeeds (in recovering π) if every true pair has more 1-hop witnesses than every fake pair. A correct seed is a 1-hop witness for a true pair with probability ps² and for a fake pair with probability p²s². In contrast, an incorrect seed is a 1-hop witness with probability p²s² for both true pairs and fake pairs. Since nβ seeds are correct, it follows that

W1(u, π(v)) ·∼ Binom(nβ, ps²) + Binom(n(1−β), p²s²) if u = v, and W1(u, π(v)) ·∼ Binom(n, p²s²) if u ≠ v,   (5)

where ·∼ denotes "approximately distributed". In (5), the second Binomial distribution in the u = v case is not exact because the events that individual incorrect seeds become witnesses for a true pair are dependent on each other. We address this issue using a concentration inequality for dependent random variables [Jan04]; please refer to Appendix C for details. Therefore, for the 1-hop algorithm to work, we need the difference between the expected values of the two cases in (5) to exceed both standard deviations, i.e.,

nβps² + n(1−β)p²s² − np²s² ≥ √(nβps²) + √(np²s²)
⇔ nβp(1−p)s² ≥ √(nβps²) + √(np²s²)
⇐ nβp(1−p)s² ≥ 2√(nβps²) and nβp(1−p)s² ≥ 2√(np²s²).   (6)

This immediately leads to β ≥ max{4/(np(1−p)²s²), 2√(1/(n(1−p)²s²))}, which differs from condition (3) in Theorem 1 only by log n factors. Adding the log n factors ensures that the 1-hop algorithm succeeds with high probability.
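The separation between the two cases of (5) can be checked by direct simulation; the parameter values below are arbitrary choices satisfying (6), not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s, beta, reps = 5000, 0.01, 0.8, 0.2, 10000

# Approximate law of W1 for a true pair vs. a fake pair, per (5)
true_w1 = (rng.binomial(int(n * beta), p * s**2, reps)
           + rng.binomial(int(n * (1 - beta)), p**2 * s**2, reps))
fake_w1 = rng.binomial(n, p**2 * s**2, reps)

gap = true_w1.mean() - fake_w1.mean()
# Condition (6) asks the mean gap to exceed both standard deviations
print(gap, true_w1.std(), fake_w1.std())
```

With these parameters the mean gap (about nβps² ≈ 6.4) clearly exceeds both standard deviations, illustrating why the greedy matching separates true from fake pairs.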
This argument suggests that condition (3) is also close to necessary for the 1-hop algorithm to succeed, which is confirmed by our simulation results in Appendix A. We next explain condition (4) in Theorem 2. First, we need that the average degree of every vertex should be high enough so that there is no isolated vertex in G 1 or G 2 . For the correlated Erdős-Rényi model, nps 2 − log n → +∞ ensures the intersection graph G 1 ∧ G 2 to be connected [ER59]. Then, analogous to the 1-hop algorithm, we derive the condition on β by comparing the expected values and standard deviations of the number of 2-hop witnesses for true pairs and for fake pairs. However, the dependency issue is more severe here when we bound the number of 2-hop witnesses. Specifically, in the analysis of (5) (see Appendix C), the event that a seed becomes a 1-hop witness for a true pair is at most dependent on that of two other incorrect seeds. For here, any two seeds could be dependent through the 1-hop neighborhood of the candidate vertex-pair (see Appendix D). Thus, directly using the concentration inequality in [Jan04] will lead to a poor bound. To address this new difficulty, we will condition on the 1-hop neighborhood first. After this conditioning, the remaining dependency becomes more manageable, which is handled by either ignoring a small fraction of seeds or by applying the concentration inequality in [Jan04] again. Please refer to Appendix D for details. Towards this end, given any graph G, for any vertex u in graph G, we use N G j (u) to denote the set of j-hop neighbors of u in G, i.e., N G j (u) = v ∈ G : d G (u, v) = j . For any two vertices u, v in graph G 1 , let C 1 (u, π(v)) denote the set of 1-hop "common" neighbors of u and π(v) across G 1 and G 2 , i.e., C 1 (u, π(v)) = (w, π(w)) : w ∈ N G 1 1 (u), π(w) ∈ N G 2 1 (π(v)) . 
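The sets N_1 and C_1 defined above can be computed directly from the edge lists; the helper names are ours.

```python
def one_hop(edges, u):
    """N_1(u): the 1-hop neighbors of u in the graph given by `edges`."""
    return {w for (a, b) in edges for w in (a, b) if u in (a, b) and w != u}

def common_1hop(e1, e2, u, v, pi):
    """C_1(u, pi(v)): the seeds (w, pi(w)) with w adjacent to u in G1
    and pi(w) adjacent to pi(v) in G2, as in the definition above."""
    n1 = one_hop(e1, u)
    n2 = one_hop(e2, pi[v])
    return {(w, pi[w]) for w in n1 if pi[w] in n2}
```

Each element of C_1(u, π(v)) is precisely a 1-hop witness for the vertex-pair (u, π(v)) contributed by a correct seed, which is how this set enters the 2-hop analysis.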
Conditioning on the 1-hop neighborhoods, we compute the probability that a seed becomes a 2-hop witness via the probability that the seed connects to the 1-hop neighbors. A correct seed is a 2-hop witness for a true pair (u, π(u)) with probability about |C1(u, π(u))| ps², and for a fake pair (u, π(v)) with probability about |N1^G1(u)| |N1^G2(π(v))| p²s². In contrast, an incorrect seed is a 2-hop witness for a true pair (u, π(u)) with probability about |N1^G1(u)| |N1^G2(π(u))| p²s², and for a fake pair (u, π(v)) with probability about |N1^G1(u)| |N1^G2(π(v))| p²s². Then, we have

W2(u, π(v)) ·∼ Binom(nβ, |C1(u, π(u))| ps²) + Binom(n(1−β), |N1^G1(u)| |N1^G2(π(u))| p²s²) if u = v, and W2(u, π(v)) ·∼ Binom(n, |N1^G1(u)| |N1^G2(π(v))| p²s²) if u ≠ v.   (7)

The number of 1-hop neighbors of a vertex in G1 or G2 is approximately nps ± O(√(nps)), and the number of common 1-hop neighbors of a true pair is approximately nps² ± O(√(nps²)). For the 2-hop algorithm to work, we need the difference between the conditional expected values of the two cases in (7) to exceed both conditional standard deviations. We first ignore the fluctuation of the 1-hop neighborhood sizes and use only the expected values of |C1(u, π(u))|, |N1^G1(u)|, |N1^G2(π(u))| and |N1^G2(π(v))|. Then, we need

nβ(nps²)ps² + n(1−β)(nps)(nps)p²s² − n(nps)(nps)p²s² ≥ √(nβ(nps²)ps²) + √(n(nps)(nps)p²s²).   (8)

Analogous to (6), (8) leads to the sufficient condition β ≥ max{16/(n²p²s⁴), 4√(1/(ns⁴))}.   (9)

Next, we consider the fluctuation of the 1-hop neighborhood sizes.
To guarantee the difference between the conditional expected values of the two cases in (7) to be greater than zero, it suffices to ensure that nβ nps 2 − nps 2 ps 2 + n(1 − β)(nps − √ nps) 2 p 2 s 2 − n (nps + √ nps) 2 p 2 s 2 =nβ nps 2 − nps 2 ps 2 − nβ(nps − √ nps) 2 p 2 s 2 − 4n 2 p 3 s 3 √ nps (a) ≈ n 2 βp 2 s 4 − n 3 βp 4 s 4 − 4n 2 p 3 s 3 √ nps (b) ≈n 2 βp 2 s 4 − 4n 2 p 3 s 3 √ nps ≥ 0,(10) where the approximation in the step (a) is based on nps 2 ≫ 1, and the step (b) is based on np 2 ≪ 1. The fluctuation of the number of 1-hop neighbors has a negligible impact on the standard deviation of the number of 2-hop witnesses. Thus, combining (8) and (10) yields an approximately sufficient condition for 2-hop algorithm to succeed, β ≥ max 4 np 3 s , 4 1 ns 4 , 16 n 2 p 2 s 4 .(11) However, condition (11) does not give us the desirable result in condition (4) because we have considered a very strict criteria for GMWM to succeed, which requires the numbers of 2-hop witnesses of any true pair to be greater than the numbers of 2-hop witnesses of any fake pair. Indeed, the GMWM algorithm may succeed even when the above criteria does not hold. For example, consider the case when u and π(u) both have few 1-hop neighbors, while π(v) has many 1-hop neighbors (see Fig. 2). Then, the fake pair (u, π(v)) may have more 2-hop witnesses than the true pair (u, π(u)). Thus, we can not guarantee the 2-hop algorithm to succeed (and inequality (10) will not hold) if we use the above criteria. However, note that the 1-hop neighbors of v and π(v) should overlap significantly, and thus in this case v is also likely to have many 1-hop neighbors. As a result, the true pair (v, π(v)) is likely to have more 2-hop witnesses than the fake pair (u, π(v)). Then, GMWM will still select the true pair (v, π(v)) and eliminate the fake pair (u, π(v)). 
From the above example, we can see that, in order to make the 2-hop algorithm work, it suffices to consider the new criteria that each fake pair, (u, π(v)), has fewer 2-hop witnesses than either the true pair (u, π(u)) or the true pair (v, π(v)). u π(u) v π(v) G 1 G 2 Figure 2: The fake pair (u, π(v)) has more 2-hop witnesses than the true pair (u, π(u)), but it has fewer 2-hop witnesses than the true pair (v, π(v)). The 2-hop algorithm still works in this case. Next, we use the above new criteria to analyze when the 2-hop algorithm succeeds. Since N G 1 1 (u) and N G 2 1 (π(u)) are both generated by sampling with probability s the 1-hop neighbors of u in the parent graph G, the difference between N G 1 1 (u) and N G 2 1 (π(u)) is bounded by roughly 2 N G 1 (u) s(1 − s) ≤ 2 nps(1 − s) with high probability. Building upon this observation, for any two vertices u = v in graph G 1 , we can derive that N G 1 1 (u) − N G 1 1 (v) + N G 2 1 (π(v)) − N G 2 1 (π(u)) = N G 1 1 (u) − N G 2 1 (π(u)) + N G 2 1 (π(v)) − N G 1 1 (v) ≤ 4 nps(1 − s). This inequality implies that either N G 1 1 (u) − N G 1 1 (v) or N G 2 1 (π(v)) − N G 2 1 (π(u) ) is no larger than 2 nps(1 − s). Without loss of generality, we assume below that N G 2 1 (π(v)) − N G 2 1 (π(u)) ≤ 2 nps(1 − s). Then, by comparing the conditional expected value of W 2 (u, π(u)) and that of W 2 (u, π(v)), we can relax the condition (10). Specifically, it suffices to ensure that nβ nps 2 − nps 2 ps 2 + n(1 − β) N G 1 1 (u) N G 2 1 (π(u)) p 2 s 2 − n N G 1 1 (u) N G 2 1 (π(v)) p 2 s 2 (a) ≈ n 2 βp 2 s 4 + n N G 1 1 (u) N G 2 1 (π(u)) − N G 2 1 (π(v)) p 2 s 2 ≥n 2 βp 2 s 4 − 4n 2 p 3 s 4 nps(1 − s) ≥ 0, where the approximation in step (a) is based on nβ N G 1 1 (u) N G 2 1 (π(u)) p 2 s 2 ≈ n 3 βp 4 s 4 ≪ n 2 βp 2 s 4 . This immediately leads to the condition β ≥ 4 np 3 (1−s) s . Combining with (9) leads to condition (4) in Theorem 2 except for the log n factor. 
Adding the log n factor ensures that the 2-hop algorithm succeeds with high probability. Note that, when 1 − s = o(1), condition (4), obtained from the new criterion, is more relaxed than (11), obtained from the old one.

Numerical Experiments

In this section, we conduct numerical studies to compare the performance of the 1-hop and 2-hop algorithms. The experimental results show that the 2-hop algorithm outperforms the 1-hop algorithm on both synthetic and real graphs when the graphs are sparse. Moreover, we verify that our theoretical results, Theorem 1 and Theorem 2, are not only sufficient but also close to necessary for the 1-hop and 2-hop algorithms, respectively, to succeed. These experiments are deferred to Appendix A due to space constraints.

Performance Comparison with Synthetic Data

For our experiments on synthetic data, we generate G1, G2 and π according to the correlated Erdős-Rényi model. We calculate the accuracy rate as the median, over 10 independent simulations, of the proportion of vertices that are correctly matched. In Fig. 3, we plot the accuracy rates of the 1-hop and 2-hop algorithms for p = n^(−3/4) and s = 0.8, varying the number of vertices from 2000 to 8000. From the results, we observe that the 2-hop algorithm significantly outperforms the 1-hop algorithm.

Performance Comparison with Real Data

We use the Autonomous Systems dataset from [LKF05] as real-data graphs. The dataset consists of 9 graphs of Autonomous Systems (AS) peering information inferred from Oregon route-views between March 31, 2001 and May 26, 2001. Since some vertices and edges change over time, these nine graphs can be viewed as correlated versions of each other. The number of vertices of the 9 graphs ranges from 10,670 to 11,174 and the number of edges from 22,002 to 23,409. Thus, the average degree is about 3, and the graphs are sparse.
To test the graph matching methods, we consider 10,000 vertices of the network that are present in all nine graphs. We apply the 1-hop and 2-hop algorithms to match each graph to the graph from March 31, with vertices randomly permuted. The performance of the two algorithms is compared in Fig. 4 for β = 0.3, 0.6, 0.9. We observe that our proposed 2-hop algorithm significantly outperforms the 1-hop algorithm. Note that the accuracy rate decays over time, because the graphs become progressively less correlated with the initial graph from March 31.

Conclusion

In this work, we tackle the graph matching problem with partially correct seeds. Under the correlated Erdős-Rényi model, we first present a sharper characterization of the condition under which the 1-hop algorithm perfectly recovers the vertex matching, which is more relaxed than the prior art for dense graphs. Then, by exploiting 2-hop neighborhoods, we propose a 2-hop algorithm that exactly recovers the true vertex correspondence more effectively for sparse graphs. Experimental results validate our theoretical analysis. Possible directions for future work include finding more efficient algorithms to recover the true vertex correspondence in dense graphs, analyzing the performance of algorithms that use neighborhoods with a larger number of hops, and studying graph matching under random graph models beyond Erdős-Rényi.

Appendix A Numerical Experiments to Verify Our Theoretical Results

In this section, we conduct numerical studies to verify that our theoretical results, Theorem 1 and Theorem 2, are not only sufficient but also close to necessary for the 1-hop and 2-hop algorithms, respectively, to succeed. For our experiments on synthetic data, we generate G1, G2 and π according to the correlated Erdős-Rényi model. We vary the number of vertices from 2000 to 8000, and fix s = 0.8.
We calculate the accuracy rate as the median, over 10 independent simulations, of the proportion of vertices that are correctly matched. We first verify that condition (3) in Theorem 1 is both sufficient and close to necessary for the 1-hop algorithm to exactly recover π. We simulate the performance of the 1-hop algorithm for p = n^(−1/3) and p = n^(−2/3); the results are presented in Fig. 5(a) and Fig. 6(a) as functions of β. Since Theorem 1 predicts that the 1-hop algorithm succeeds in exact recovery with high probability at β ≍ √(log n/n) when p = Ω(√(log n/n)) and at β ≍ log n/(np) when p = O(√(log n/n)), we rescale the x-axis as β/√(log n/n) for p = n^(−1/3) in Fig. 5(b), and as β/(log n/(np)) for p = n^(−2/3) in Fig. 6(b). As we can see in Fig. 5(b) and Fig. 6(b), the curves for different n align well with each other, which suggests that condition (3) is both sufficient and close to necessary for the 1-hop algorithm to succeed.

Next, we verify that condition (4) in Theorem 2 is both sufficient and close to necessary for the 2-hop algorithm to exactly recover π. We simulate the performance of the 2-hop algorithm for p = n^(−3/5), p = n^(−17/24), and p = n^(−4/5); the results are presented in Fig. 7(a), Fig. 8(a) and Fig. 9(a). Since Theorem 2 predicts that the 2-hop algorithm succeeds in exact recovery with high probability when β ≳ max{√(np³ log n), √(log n/n), log n/(n²p²)}, we rescale the x-axis as β/√(np³ log n) for p = n^(−3/5) in Fig. 7(b), as β/√(log n/n) for p = n^(−17/24) in Fig. 8(b), and as β/(log n/(n²p²)) for p = n^(−4/5) in Fig. 9(b). As we can see in Fig. 7(b), Fig. 8(b) and Fig. 9(b), the curves for different n align well with each other, which suggests that condition (4) is both sufficient and close to necessary for the 2-hop algorithm to succeed.

In addition, for s = 1 and p = n^(−1/2), we show in Fig. 10 that the curves for different n align well when the x-axis is rescaled as β/√(log n/n), but do not align when the x-axis is rescaled as β/√(np³ log n).
This result agrees with Theorem 2 and demonstrates that condition (11), derived from the old criterion, is not tight.

Appendix B Preliminary Results for Proofs

In this section, we present some preliminary results that are useful for the proofs of Theorem 1 and Theorem 2.

Notation. For any positive integer n, let [n] = {1, 2, ..., n}.

Theorem 3 (Chernoff bound [DP09]). Let X = Σ_{i∈[n]} X_i, where the X_i, i ∈ [n], are independent {0, 1}-valued random variables. Then, for δ ∈ (0, 1),
P {X ≤ (1 − δ)E[X]} ≤ exp(−(δ²/2) E[X]),  P {X ≥ (1 + δ)E[X]} ≤ exp(−(δ²/3) E[X]).

Theorem 4 (Bernstein's inequality [DP09]). Let X = Σ_{i∈[n]} X_i, where the X_i, i ∈ [n], are independent random variables such that |X_i| ≤ K almost surely. Then, for t > 0,
P {X ≥ E[X] + t} ≤ exp(−t² / (2(σ² + Kt/3))),
where σ² = Σ_{i∈[n]} var(X_i) is the variance of X. It follows that, for γ > 0,
P {X ≥ E[X] + √(2σ²γ) + 2Kγ/3} ≤ exp(−γ).
The same estimate holds for the lower tail (by considering −X), i.e.,
P {X ≤ E[X] − √(2σ²γ) − 2Kγ/3} ≤ exp(−γ).

Corollary 1. Let X denote a random variable such that X ∼ Binom(n_x, α). If n_x ∈ [n_min, n_max], then for γ > 0,
P {X ≤ n_min α − √(2 n_max α γ) − 2γ/3} ≤ exp(−γ), and
P {X ≥ n_max α + √(2 n_max α γ) + 2γ/3} ≤ exp(−γ).

Proof. Since X ∼ Binom(n_x, α), applying Bernstein's inequality (Theorem 4 with K = 1) yields
P {X ≤ n_x α − √(2 n_x α(1 − α) γ) − 2γ/3} ≤ exp(−γ).   (12)
Since n_x ∈ [n_min, n_max], we have
n_x α − √(2 n_x α(1 − α) γ) − 2γ/3 ≥ n_min α − √(2 n_max α γ) − 2γ/3,   (13)
and hence
P {X ≤ n_min α − √(2 n_max α γ) − 2γ/3} ≤ exp(−γ).   (14)
Similarly,
P {X ≥ n_x α + √(2 n_x α(1 − α) γ) + 2γ/3} ≤ exp(−γ).   (15)
Since n_x ∈ [n_min, n_max], we have
n_x α + √(2 n_x α(1 − α) γ) + 2γ/3 ≤ n_max α + √(2 n_max α γ) + 2γ/3,   (16)
and hence
P {X ≥ n_max α + √(2 n_max α γ) + 2γ/3} ≤ exp(−γ).   (17)

Definition 1. Given random variables {X_i}, i ∈ [n], a dependency graph for {X_i} is a graph Γ with vertex set [n] such that if i ∈ [n] is not connected by an edge to any vertex in J, for J ⊂ [n], then X_i is independent of {X_j}_{j∈J}.

Theorem 5.
([Jan04]) Let X = i∈[n] X i , where X i , i ∈ [n] are random variables such that X i − E [X i ] ≤ K, i ∈ [n] for some K > 0. We let Γ denote a dependency graph for {X i } and ∆(Γ) denote the maximum degree of Γ. Let σ 2 = i∈[n] var(X i ). Then, for t ≥ 0, we have P {X ≥ E [X] + t} ≤ exp − 8t 2 25∆ 1 (Γ)(σ 2 + Kt/3) , where ∆ 1 (Γ) = ∆(Γ) + 1. It follows then for γ > 0, we have P X ≥ E [X] + 25∆ 1 (Γ) 8 σ 2 γ + 25∆ 1 (Γ)Kγ 24 ≤ exp(−γ). If the assumption X i − E [X i ] ≤ K is reversed to X i − E [X i ] ≥ −K, the obtained estimate holds for P X ≤ E [X] − 25∆ 1 (Γ) 8 σ 2 γ − 25∆ 1 (Γ)Kγ 24 too (by considering −X), i.e., P X ≤ E [X] − 25∆ 1 (Γ) 8 σ 2 γ − 25∆ 1 (Γ)Kγ 24 ≤ exp(−γ). Theorem 6. For r ≥ 0, every real number x ∈ (0, 1) and rx ≤ 1, it holds that r log (1 − x) ≤ log (1 − rx 2 ). Proof. We set f (x) = r log (1 − x) − log (1 − rx 2 ), and have f (0) = r log 1 − log 1 = 0. Then, we can get the derivative of f (x), f ′ (x) = −r 1 − x + r 2 − rx = r(rx − x − 1) (2 − rx)(1 − x) ≤0. It shows that f (x) decreases and f (0) = 0. Thus, we conclude that the statement is true for r > 0, x ∈ (0, 1) and rx ≤ 1. Appendix C Proof of Theorem 1 We prove Theorem 1 by proving that, with high probability, W 1 (u, π(u)) > W 1 (v, π(w)) for all vertices u, v and w in graph G 1 . This follows from the following two lemmas. The proofs of the lemmas are deferred to Appendix C.1 and C.2, respectively. Lemma 1. For any vertex u in graph G 1 and sufficiently large n, the following holds P {W 1 (u, π(u)) > x min + y min } ≥ 1 − n − 7 3 . where x min = (nβ −1)ps 2 − 5nβps 2 log n− 5 3 log n and y min = (n(1−β)−2)p 2 s 2 −5 np 2 s 2 log n− 25 3 log n. Lemma 2. For any two vertices v = w in graph G 1 and sufficiently large n, the following holds P {W 1 (v, π(w)) < z max } ≥ 1 − n − 7 2 . where z max = np 2 s 2 + 7np 2 s 2 log n + 7 3 log n + 2. Proof of Theorem 1. Based on Lemma 1 and the union bound, we have P u∈V {W 1 (u, π(u)) > x min + y min } ≥ 1 − n · n − 7 3 = 1 − n − 4 3 . 
Based on Lemma 2 and the union bound, we have P    v,w∈V {W 1 (v, π(w)) < z max }    ≥ 1 − n 2 · n − 7 2 = 1 − n − 3 2 . It remains to verify x min + y min − z max ≥ 0 under the condition of Theorem 1. Note that x min + y min − z max =nβp(1 − p)s 2 − 5nβps 2 log n − (5 + √ 7) np 2 s 2 log n − 2p(1 + p)s 2 − 37 3 log n − 2. First, by assumption that β ≥ 45 log n np(1−p) 2 s 2 , we have 1 3 nβp(1 − p)s 2 ≥ 45 log n np(1 − p) 2 s 2 · 1 3 n βp(1 − p)s 2 = 5nβps 2 log n. Second, by assumption that β ≥ 30 log n n(1−p) 2 s 2 , we have 1 3 nβp(1 − p)s 2 ≥ 30 log n n(1 − p) 2 s 2 · 1 3 np(1 − p)s 2 ≥ (5 + √ 7) np 2 s 2 log n.(19) Third, by assumption that β ≥ 45 log n np(1−p) 2 s 2 and n is sufficiently large, we have 1 3 nβp(1 − p)s 2 ≥ 45 log n np(1 − p) 2 s 2 · 1 3 np(1 − p)s 2 ≥ 37 3 log n + 2 + 2p(1 + p)s 2 .(20) Combing (18)-(20), we have x min + y min − z max ≥ 0. Taking a union bound, we have P min u∈V W 1 (u, π(u)) > max v,w∈V W 1 (v, π(w)) ≥1 − P u∈V {W 1 (u, π(u)) ≤ x min + y min } − P    v,w∈V {W 1 (v, π(w)) ≥ z max }    ≥1 − n − 4 3 − n − 3 2 ≥ 1 − n −1 . Thus, GMWM outputs π with P { π = π} ≥ 1 − n −1 if we use the 1-hop algorithm to match graphs. C.1 Proof of Lemma 1 Fix a true pair (u, π(u)), we next bound the number of 1-hop witnesses for (u, π(u)). For each seed (u i , π(u i )), let π(v i ) be the underlying vertex matched to u i , i.e., π(u i ) = π(v i ). Then, (u i , π(v i )) is a correct seed if u i = v i and is an incorrect seed if u i = v i . Among all seeds, some of them may be of the form that u i = u or v i = u. Then, they can not become 1-hop witnesses for u and π(u). The number of such seeds is at most 2. In the following, we exclude such seeds. We first count the contribution to W 1 (u, π(u)) by correct seeds. For any correct seed (u i , π(u i )), let X i be a binary random variable such that X i = 1 if (u i , π(u i )) is a 1-hop witness for u and π(u), and X i = 0 otherwise. 
Then, we have P {X i = 1} = ps 2 because X i = 1 if and only if the edge (u, u i ) is in G, and is sampled into both G 1 and G 2 . Note that the edges (u, u i )'s are different for different u i in the parent graph G, and the sampling process is independent on each edge. Thus, X i 's are mutually independent across u i . Let X denote the number of 1-hop witnesses contributed by the correct seeds. Since there are at least nβ − 1 correct seeds that could be 1hop witnesses for u and π(u), it follows that X ≥ nβ−1 i=1 X i ∼ Binom(nβ − 1, ps 2 ). Recall that x min = (nβ − 1)ps 2 − 5nβps 2 log n − 5 3 log n. It follows that P {X ≤ x min } ≤P nβ−1 i=1 X i ≤ x min ≤P nβ−1 i=1 X i ≤ (nβ − 1)ps 2 − 5(nβ − 1)ps 2 (1 − ps 2 ) log n − 5 3 log n ≤ exp − 5 2 log n = n − 5 2 ,(21) where that last inequality follows from Bernsteins inequality given in Theorem 4 with γ = 5 2 log n and K = 1. We then count the contribution to W 1 (u, π(u)) by incorrect seeds. For any incorrect seed (u i , π(v i )), let Y i be a binary random variable such that Y i = 1 if (u i , π(v i )) is a 1-hop witness for u and π(u), and Y i = 0 otherwise. Then, we have P {Y i = 1} = p 2 s 2 because Y i = 1 if and only if the two edges (u, u i ) and (u, v i ) are both in G, and are sampled in G 1 and G 2 , respectively. Let Y denote the number of 1-hop witnesses contributed by the incorrect seeds. Since there are n(1 − β) incorrect seeds, the number of incorrect seeds that could be 1-hop witness for (u, π(u)) is no less than n(1 − β) − 2. Thus, Y ≥ n(1−β)−2 i=1 Y i . Note that Y i are dependent. Specifically, the event that (u i , π(v i )) becomes a 1-hop witness for (u, π(u)) is dependent on (u j , π(u i )) and (v i , π(v j )) (See Fig. 11 for an example). Then, we cannot apply Bernsteins Inequality in Theorem 4. Fortunately, the event (u j , π(v j )) becomes a 1-hop witnesses for (u, π(u)) is dependent on (u i , π(v i )) only if u j = v i or v j = u i . 
Thus, the event that (u i , π(v i )) becomes a 1-hop witness for (u, π(u)) depends on at most two other seeds. Thus, we apply the concentration inequality for the sum of dependent random variables given in Theorem 5. Specifically, we construct a dependency graph Γ for {Y i }. The maximum degree of Γ, ∆(Γ), equals to two. Thus, we apply Theorem 5 with ∆ 1 (Γ) = ∆(Γ) + 1 = 3, K = 1, σ 2 = (n(1 − β) − 2)p 2 s 2 (1 − p 2 s 2 ) and γ = 8 3 log n. Recall that y min = (n(1 − β) − 2)p 2 s 2 − 5 np 2 s 2 log n − 25 3 log n. We then get Figure 11: The event that (u i , π(v i )) becomes a 1-hop witness for (u, π(u)) is dependent on (u j , π(u i )) and (v i , π(v j )). P {Y ≤ y min } ≤P    n(1−β)−2 i=1 Y i ≤ y min    ≤P    n(1−β)−2 i=1 Y i ≤ (n(1 − β) − 2)p 2 s 2 − 5 (n(1 − β) − 2)p 2 s 2 (1 − p 2 s 2 ) log n − 25 3 log n    ≤ exp − 8 3 log n = n − 8 3 .(22)u u i π(v i ) π(u) v i π(v j ) u j π(u i ) G 1 G 2 Finally, since W 1 (u, π(u)) = X + Y and n is sufficiently large, (21) and (22) yield that P {W 1 (u, π(u)) ≤ x min + y min } ≤ P {X ≤ x min } + P {Y ≤ y min } ≤ n − 5 2 + n − 8 3 < n − 7 3 . C.2 Proof of Lemma 2 Fix a fake pair (v, π(w)), we next bound the number of 1-hop witnesses for (v, π(w)). For each seed (u i , π(u i )), let π(v i ) be the underlying vertex matched to u i , i.e., π(u i ) = π(v i ). Then, (u i , π(v i )) is a correct seed if u i = v i and is an incorrect seed if u i = v i . The seed (u i , π(v i )) could be a 1-hop witness for (v, π(w)) only if u i = v and π(v i ) = π(w). Besides, if u i = w or π(v i ) = π(v), then the event u i ∈ N G 1 1 (v) is dependent on the event π(v i ) ∈ N G 2 1 (π(w)) (see Fig. 12 for details). Fortunately, there is at most two such seed. Thus, we exclude this seed and consider the remaining seeds. v π(w) w π(v) G 1 G 2 Figure 12: If u i = w and π(v i ) = π(v), then the event u i ∈ N G 1 1 (v) would be dependent on the event π(v i ) ∈ N G 2 1 (π(w)). 
Let Z i be a binary random variable such that Z i = 1 if (u i , π(v i )) is a 1-hop witness for v and π(w), and X i = 0 otherwise. Then, we have P {Z i = 1} = p 2 s 2 because Z i = 1 if and only if the two edges (v, u i ) and (w, v i ) are both in G, and are sampled in G 1 and G 2 , respectively. Note that the edges (v, u i ) and (w, v i ) are different in the parent graph G for different seeds (u i , π(v i )); Otherwise u i = w or v i = v, but we have excluded such seeds. Thus, Z i 's are mutually independent because the sampling process is independent on each edge. Since there are at most n seeds which could be 1-hop witnesses for (v, π(w)), we have W 1 (v, π(w)) ≤ 2 + n i=1 Z i and n i=1 Z i ∼ Binom(n, p 2 s 2 ). Recall z max = np 2 s 2 + 7np 2 s 2 log n + 7 3 log n + 2. We then get P {W 1 (v, π(w)) ≥ z max } ≤P n i=1 Z i ≥ z max − 2 ≤P n i=1 Z i ≥ np 2 s 2 + 7np 2 s 2 (1 − p 2 s 2 ) log n + 7 3 log n ≤ exp − 7 2 log n = n − 7 2 . where the last inequality follows from Bernsteins inequality given in Theorem 4 with γ = 7 2 log n and K = 1. Appendix D Proof of Theorem 2 In this section, we prove Theorem 2 for the correlated Erdős-Rényi model G(n, p; s) when np 2 ≤ 1 135 log n and nps 2 ≥ 128 log n. We set ǫ = 12 log n (n − 1)ps 2 ≤ 1 3 . Before proving Theorem 2, we need some lemmas to bound the number of 2-hop witnesses. In order to count the number of 2-hop witnesses, we first bound the number of 1-hop neighbors of any vertex in G 1 and G 2 . Lemma 3. Given any two vertices u = v in graph G 1 , let R uv denote the event such that the followings hold simultaneously: (1 − ǫ)(n − 1)ps < N G 1 1 (u) , N G 2 1 (π(u)) , N G 1 1 (v) , N G 2 1 (π(v)) < (1 + ǫ)(n − 1)ps, (1 − ǫ)(n − 1)ps 2 < |C 1 (u, π(u))| , |C 1 (v, π(v))| < (1 + ǫ)(n − 1)ps 2 , |C 1 (u, π(v))| , W 1 (v, π(u)) < 3 log n, where ǫ is given in (23), i.e., ǫ = 12 log n (n−1)ps 2 ≤ 1 3 . Then P {R uv } ≥ 1 − n − 7 2 . Remark 1. 
The number of 1-hop neighbors in G 1 or G 2 is approximately nps ± O √ nps , and the number of common 1-hop neighbors is approximately between nps 2 ± O nps 2 . Since |C 1 (u, π(v))| , W 1 (v, π(u)) · ∼ Binom(n, p 2 s 2 ), they can be bounded by sub-exponential tail bounds. See Appendix D.2 for the proof. We next bound the number of 2-hop witnesses for the true pair (u, π(u)), and the fake pair (u, π(v)), by conditioning on their 1-hop neighbors. We set δ 1 = 6ps β and δ 2 = 9ps 1−β . The proofs of Lemma 4 and Lemma 5 are deferred to Appendix D.3 and D.4, respectively. Lemma 4. Given any two vertices u = v in graph G 1 , we use Q uv to collect all information of 1-hop neighborhood of u and v, i.e., Q uv = N G 1 1 (u), N G 2 1 (π(u)), N G 1 1 (v), N G 2 1 (π(v)) . For sufficiently large n, P W 2 (u, π(u)) ≤ l min + m min Q uv · 1(R uv ) ≤ n − 7 2 , where l min = 7 24 (1 − δ 1 )n 2 βp 2 s 4 − 35 16 n 2 βp 2 s 4 log n − 5 2 log n, and m min =(1 − δ 2 )n(1 − β) 1 − (1 − ps) N G 1 1 (u)\{u} 1 − (1 − ps) N G 2 1 (π(u))\{π(v)} − 15 2 3 2 n 3 p 4 s 4 log n − 25 2 log n.(25) Remark 2. Lemma 4 provides a lower bound on the number of 2-hop witnesses for the true pair, u and π(u), conditioned on Q uv . Note that l min is contributed by the correct seeds in π and m min is contributed by the incorrect seeds in π. Conditional on the 1-hop neighbors, a correct seed becomes a 2-hop witness for the true pair (u, π(u)) with probability about |C 1 (u, π(u))| ps 2 ≈ np 2 s 4 . An incorrect seed becomes a 2-hop witness for the true pair (u, π(u)) with probability about (1 − (1 − ps) |N G 1 1 (u)| )(1 − (1 − ps) |N G 2 1 (π(u))| ) . Both the expressions of l min and m min consist of three parts. Specifically, the first term of (24) and (25) is a lower bound of the expectation, the second term is due to the sub-Gaussian tail bound, and the third term is due to the sub-exponential tail bound. 
We introduce δ_1 and δ_2 to exclude seeds that are 1-hop neighbors of u or π(u), which simplifies the expressions.

Lemma 5. For any two vertices u ≠ v in graph G_1 and sufficiently large n,

P { W_2(u, π(v)) ≥ x_max + y_max + 2z_max + 109 log n | Q_uv } · 1(R_uv) ≤ n^{−7/2}, (26)

where

x_max = 2nβ ( 3ps^2 log n + (9/4) n^2 p^4 s^4 ),
y_max = n(1 − β) ( 1 − (1 − ps)^{|N^{G_1}_1(u)\{u}|} ) ( 1 − (1 − ps)^{|N^{G_2}_1(π(v))\{π(v)}|} ).

Remark 3. Lemma 5 provides an upper bound on the number of 2-hop witnesses for the fake pair, u and π(v), conditioned on Q_uv. Let (u_i, π(v_i)) denote any seed. Note that if u and v are connected in G_1, conditioning on Q_uv changes the probability that those seeds with u_i ∈ N^{G_1}_1(v) become 2-hop witnesses for u and π(v). Thus, we have to divide the seeds into several types based on whether u_i ∈ N^{G_1}_1(v) or π(v_i) ∈ N^{G_2}_1(π(u)), and consider their contributions to the number of 2-hop witnesses separately.

1) x_max + y_max is contributed by the seeds such that u_i ∉ N^{G_1}_1(v) ∪ {v} and π(v_i) ∉ N^{G_2}_1(π(u)) ∪ {π(u)} (see Fig. 13(a) for an example). In addition, we divide the seeds of this first type into two cases: x_max is contributed by the correct seeds in π, and y_max is contributed by the incorrect seeds in π.

2) One multiple of z_max in (26) is contributed by the seeds such that u_i ∈ N^{G_1}_1(v) and π(v_i) ∉ N^{G_2}_1(π(u)) ∪ {π(u)} (see Fig. 13(b) for an example). There are roughly nps seeds (u_i, π(v_i)) such that u_i is a 1-hop neighbor of v in graph G_1. If u and v are connected, these u_i are already 2-hop neighbors of u. The probability that π(v_i) becomes a 2-hop neighbor of π(v) is approximately np^2 s^2. Thus, the expected number of 2-hop witnesses contributed by this type of seeds (u_i, π(v_i)) is approximately n^2 p^3 s^3. The other multiple of z_max in (26) is for the opposite case: it is contributed by the seeds such that u_i ∉ N^{G_1}_1(v) ∪ {v} and π(v_i) ∈ N^{G_2}_1(π(u)).
3) The term 109 log n in (26) is contributed by the seeds such that u i ∈ N G 1 1 (v) and π(v i ) ∈ N G 2 1 (π(u)) (see Fig. 13(c) for example) and the sub-exponential tail bounds. This type of seeds, (u i , π(v i )), are 1-hop witnesses for v and π(u). Since there are no more than 3 log n 1-hop witnesses for v and π(u) according to Lemma 3, we obtain the upper bound 109 log n. u π(u) v π(v) u i π(v i ) G 1 G 2 (a) ui / ∈ N G 1 1 (v) ∪ {v} and π(vi) / ∈ N G 2 1 (π(u)) ∪ {π(u)}. u π(u) v π(v) u i π(v i ) G 1 G 2 (b) ui ∈ N G 1 1 (v) and π(vi) / ∈ N G 2 1 (π(u)) ∪ {π(u)}. u π(u) v π(v) u i π(v i ) G 1 G 2 (c) ui ∈ N G 1 1 (v) and π(vi) ∈ N G 2 1 (π(u)). Figure 13: We divide the seeds into several types based on whether u i ∈ N G 1 1 (v) and π(v i ) ∈ N G 2 1 (π(u)). We now present the following lemma, which shows that the lower bound l min + m min is no smaller than the upper bound x max + y max + 2z max + 109 log n if N G 2 1 (π(v)) − N G 2 1 (π(u)) ≤ 2 5nps(1 − s) log n + 10 3 log n. See Appendix Section D.5 for the proof. Lemma 6. Given any two vertices u = v in graph G 1 , if R uv occurs, N G 2 1 (π(v)) − N G 2 1 (π(u)) ≤ τ with τ ∆ = 2 5nps(1 − s) log n + 10 3 log n, and condition (4) holds, then for sufficiently large n, l min + m min ≥ x max + y max + 2z max + 109 log n. where the definitions of l min , m min , x max , y max and z max are given in Lemma 4 and Lemma 5. Lemma 6 gives the condition when the number of 2-hop witnesses for u and π(u) is larger than that for u and π(v). However, N G 2 1 (π(v)) − N G 2 1 (π(u)) ≤ τ is not always satisfied. We present Lemma 7 to show that either N G 1 1 (u) − N G 1 1 (v) or N G 2 1 (π(v)) − N G 2 1 (π(u)) is no larger than τ with high probability. See Appendix D.6 for the proof. Lemma 7. Given any u, v ∈ G 1 , let T uv denote the event: T uv = N G 1 1 (u) − N G 1 1 (v) ≤ τ ∪ N G 2 1 (π(v)) − N G 2 1 (π(u)) ≤ τ .(27) Then, P(T uv ) ≥ 1 − n − 7 2 . 
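A quick numerical sanity check of the threshold τ in Lemma 7 can be run by simulation. This is a simplified sketch, not the lemma's proof: the four degrees are drawn as independent Binomial(n − 1, ps) variables, ignoring the correlation between G_1 and G_2, and the parameters n, p, s below are illustrative, not from the paper:

```python
import math
import random

# Illustrative parameters for a quick check of tau in Lemma 7.
random.seed(0)
n, p, s = 2000, 0.05, 0.8
tau = 2 * math.sqrt(5 * n * p * s * (1 - s) * math.log(n)) + (10 / 3) * math.log(n)

def binom(trials, q):
    """Draw one Binomial(trials, q) sample by direct simulation."""
    return sum(random.random() < q for _ in range(trials))

failures = 0
for _ in range(200):
    d_u = binom(n - 1, p * s)    # degree of u in G1
    d_v = binom(n - 1, p * s)    # degree of v in G1
    d_pu = binom(n - 1, p * s)   # degree of pi(u) in G2
    d_pv = binom(n - 1, p * s)   # degree of pi(v) in G2
    # Lemma 7's event T_uv: at least one of the two differences is <= tau.
    if not (d_u - d_v <= tau or d_pv - d_pu <= tau):
        failures += 1
print(failures)  # each trial fails with probability ~1e-18 here, so 0 is expected
```

With these parameters the degree fluctuation (standard deviation about √(nps) ≈ 9) is far below τ ≈ 75, so the event in Lemma 7 essentially always holds in the simulation.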
Clearly, if the sub-event |N^{G_2}_1(π(v))| − |N^{G_2}_1(π(u))| ≤ τ in (27) occurs, we can apply Lemma 6 directly. If the other sub-event |N^{G_1}_1(u)| − |N^{G_1}_1(v)| ≤ τ occurs, we can use an analogous result of Lemma 6 that compares the number of 2-hop witnesses for u and π(v) with that for v and π(v), instead of comparing to the number of 2-hop witnesses for u and π(u). This leads to the proof of Theorem 2, which shows that, with high probability, either W_2(u, π(u)) > W_2(u, π(v)) or W_2(v, π(v)) > W_2(u, π(v)) for any u ≠ v, and as a result, GMWM must succeed.

Proof of Theorem 2. Given any vertices u, v in graph G_1 with u ≠ v, we let W_uv denote

W_uv = {W_2(u, π(u)) > W_2(u, π(v))} ∪ {W_2(v, π(v)) > W_2(u, π(v))}.

We will prove W_uv happens with high probability. We condition on Q_uv = ( N^{G_1}_1(u), N^{G_2}_1(π(u)), N^{G_1}_1(v), N^{G_2}_1(π(v)) ) such that the event R_uv in Lemma 3 is true. Then, we consider two cases: |N^{G_2}_1(π(v))| − |N^{G_2}_1(π(u))| ≤ τ and |N^{G_1}_1(u)| − |N^{G_1}_1(v)| ≤ τ.

Case 1: |N^{G_2}_1(π(v))| − |N^{G_2}_1(π(u))| ≤ τ. Let w_min denote l_min + m_min and w_max denote x_max + y_max + 2z_max + 109 log n. According to Lemma 4 and Lemma 5, W_2(u, π(u)) > w_min with high probability, and W_2(u, π(v)) < w_max with high probability. Since w_min ≥ w_max according to Lemma 6, we get that W_2(u, π(u)) > W_2(u, π(v)) with high probability. More precisely, if R_uv occurs,

P { W_2(u, π(u)) ≤ W_2(u, π(v)) | Q_uv }
(a) ≤ P { {W_2(u, π(u)) ≤ w_min} ∪ {W_2(u, π(v)) ≥ w_max} | Q_uv }
(b) ≤ P { W_2(u, π(u)) ≤ w_min | Q_uv } + P { W_2(u, π(v)) ≥ w_max | Q_uv }
(c) ≤ 2 · n^{−7/2},

where the inequality (a) is based on w_min ≥ w_max and De Morgan's laws, which state that (R ∪ S)^c = R^c ∩ S^c. The inequality (b) is based on the union bound. The inequality (c) is based on Lemma 4 and Lemma 5. Since {W_2(u, π(u)) > W_2(u, π(v))} ⊂ W_uv, it follows that

P { W_uv^c | Q_uv } ≤ P { W_2(u, π(u)) ≤ W_2(u, π(v)) | Q_uv } ≤ 2 · n^{−7/2}.

Case 2: |N^{G_1}_1(u)| − |N^{G_1}_1(v)| ≤ τ.
We can lower bound the number of 2-hop witnesses for v and π(v) analogously to Lemma 4, and prove that this lower bound is no smaller than the upper bound on the number of 2-hop witnesses for u and π(v) in this case. Then,

P { W_uv^c | Q_uv } ≤ P { W_2(v, π(v)) ≤ W_2(u, π(v)) | Q_uv } ≤ 2 · n^{−7/2}.

Since T_uv = { |N^{G_1}_1(u)| − |N^{G_1}_1(v)| ≤ τ } ∪ { |N^{G_2}_1(π(v))| − |N^{G_2}_1(π(u))| ≤ τ }, applying the union bound yields that

P { W_uv^c | Q_uv } · 1(R_uv ∩ T_uv) = P { W_uv^c | Q_uv } · 1(R_uv) · 1(T_uv)
≤ P { W_uv^c | Q_uv } · 1(R_uv) · 1( |N^{G_2}_1(π(v))| − |N^{G_2}_1(π(u))| ≤ τ ) + P { W_uv^c | Q_uv } · 1(R_uv) · 1( |N^{G_1}_1(u)| − |N^{G_1}_1(v)| ≤ τ )
≤ 4 · n^{−7/2}.

Then, applying Lemma 3 and Lemma 7 yields that

P { W_uv^c } = E_{Q_uv} [ P { W_uv^c | Q_uv } · 1(R_uv ∩ T_uv) + P { W_uv^c | Q_uv } · 1((R_uv ∩ T_uv)^c) ]
≤ E_{Q_uv} [ P { W_uv^c | Q_uv } · 1(R_uv ∩ T_uv) ] + E_{Q_uv} [ 1((R_uv ∩ T_uv)^c) ]
≤ 6 · n^{−7/2}.

Finally, applying the union bound over all pairs (u, v) with u ≠ v, we get that

P { ∩_{u,v∈V, u≠v} W_uv } = 1 − P { ∪_{u,v∈V, u≠v} W_uv^c } ≥ 1 − n^2 P { W_uv^c } ≥ 1 − 6 · n^{−3/2} ≥ 1 − n^{−1}.

Assuming ∩_{u,v∈V, u≠v} W_uv is true, we next show that the output of GMWM, π̂, must equal π. We prove this by contradiction. Suppose on the contrary that π̂ ≠ π. Assume the first fake pair is chosen by GMWM in the k-th iteration, which implies that GMWM selects true pairs in the first k − 1 iterations. We let (u_k, π(v_k)) denote the fake pair chosen at the k-th iteration. Because ∩_{u,v∈V, u≠v} W_uv is true, we have W_2(u_k, π(u_k)) > W_2(u_k, π(v_k)) or W_2(v_k, π(v_k)) > W_2(u_k, π(v_k)). We consider two cases. The first case is that (u_k, π(u_k)) or (v_k, π(v_k)) has been selected in the first k − 1 iterations, in which case the fake pair (u_k, π(v_k)) would have been eliminated before the k-th iteration. The second case is that (u_k, π(u_k)) and (v_k, π(v_k)) have not been selected in the first k − 1 iterations. Then, GMWM would select one of them instead of (u_k, π(v_k)) in the k-th iteration. Thus, both cases contradict the assumption that GMWM picks a fake pair in the k-th iteration.
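The select-and-eliminate behavior of GMWM that this contradiction argument relies on can be sketched as a greedy loop over candidate pairs ordered by witness count. The function name and the toy witness-count table are illustrative, not the paper's implementation:

```python
# Greedy step: repeatedly pick the candidate pair with the largest witness
# count, then eliminate all pairs sharing a vertex with it.
# W is a dict {(u, w): witness_count}.
def gmwm(W):
    matched, used_left, used_right = {}, set(), set()
    for (u, w), _ in sorted(W.items(), key=lambda kv: -kv[1]):
        if u not in used_left and w not in used_right:
            matched[u] = w
            used_left.add(u)
            used_right.add(w)
    return matched

# True pairs carry strictly larger counts than any fake pair sharing a vertex,
# mirroring the event W_uv in the proof, so the greedy loop recovers them.
W = {("u1", "a"): 9, ("u1", "b"): 3, ("u2", "a"): 2, ("u2", "b"): 7}
print(gmwm(W))  # {'u1': 'a', 'u2': 'b'}
```

Once ("u1", "a") is selected, the fake pairs ("u1", "b") and ("u2", "a") are eliminated, which is exactly the mechanism invoked in both cases of the contradiction.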
Hence, GMWM outputs n true pairs. Then, we have P { π = π} ≥ P u,v∈V,u =v W uv ≥ 1 − n −1 . D.1 A Supporting Lemma In this section, we present a supporting lemma that is useful for the proofs of Lemma 3 and Lemma 7. Lemma 8. Let X denote a random variable such that X ∼ Binom(n − 1, α). If α ∈ ps 2 , 1 , then P {X ≤ (1 − ǫ)(n − 1)α} ≤ n −6 , P {X ≥ (1 + ǫ)(n − 1)α} ≤ n −4 , where ǫ is given in (23), i.e., ǫ = 12 log n (n−1)ps 2 ≤ 1 3 . Proof. Since X ∼ Binom (n − 1, α) and ǫ = 12 log n (n−1)ps 2 < 1 3 , applying Chernoff bound in Theorem 3 yields P {X ≤ (1 − ǫ)(n − 1)α} ≤ exp − ǫ 2 (n − 1)α 2 = exp − 6α log n ps 2 ≤ n −6 , and P {X ≤ (1 + ǫ)(n − 1)α} ≤ exp − ǫ 2 (n − 1)α 3 = exp − 4α log n ps 2 ≤ n −4 . D.2 Proof of Lemma 3 In the sub-sampled graph G 1 , for any vertex u i ∈ V \{u}, u i and u are connected with probability ps. Then, we have N G 1 1 (u) ∼ Binom(n − 1, ps). Since ps ≥ ps 2 , we apply Lemma 8 in Appendix D.1 and get P N G 1 1 (u) ≤ (1 − ǫ)(n − 1)ps ≤ n −6 , P N G 1 1 (u) ≥ (1 + ǫ)(n − 1)ps ≤ n −4 . The same lower and upper bound hold for N G 1 1 (v) , N G 2 1 (π(u)) , N G 2 1 (π(v)) by similar proof. In the sub-sampled graph G 1 , for any vertex u i ∈ V \{u}, we have P {(u i , π(u i )) ∈ C 1 (u, π(u))} = ps 2 because (u i , π(u i )) ∈ C 1 (u, π(u)) if and only if the edge (u, u i ) is in G, and is sampled into both G 1 and G 2 . Note that the edges (u, u i )'s are different for different u i in the parent graph G, and the sampling process is independent on each edge. Thus, (u i , π(u i )) ∈ C 1 (u, π(u)) are mutually independent across u i . Then, we have |C 1 (u, π(u))| ∼ Binom(n − 1, ps 2 ). Applying Lemma 8 in Appendix D.1 implies that P |C 1 (u, π(u))| ≤ (1 − ǫ)(n − 1)ps 2 ≤ n −6 , P |C 1 (u, π(u))| ≥ (1 + ǫ)(n − 1)ps 2 ≤ n −4 . The same lower and upper bound hold for |C 1 (v, π(v))| by similar proof. According to Lemma 2, we have P W 1 (v, π(u)) ≤ np 2 s 2 + 7np 2 s 2 log n + 7 3 log n + 2 ≥ 1 − n − 7 2 . 
Since np 2 ≤ 1 135 log n and n is sufficiently large, we have np 2 s 2 + 7np 2 s 2 log n + 7 3 log n + 2 ≤ 1 135 log n + 2 + 1 19 + 7 3 log n ≤3 log n. Hence, P {W 1 (v, π(u)) ≥ 3 log n} ≤ n − 7 2 . In the sub-sampled graph G 1 , for any vertex u i ∈ V \{u, v}, we have P {(u i , π(u i )) ∈ C 1 (u, π(v))} = p 2 s 2 because u i ∈ N G 1 1 (u) and π(u i ) ∈ N G 2 1 (π(v)) if and only if the two edges (u, u i ) and (v, u i ) are both in G, and are sampled into G 1 and G 2 , respectively. Note that the two edges (u, u i ) and (v, u i ) are different for different u i in the parent graph G, and the sampling process is independent on each edge. We then get (u i , π(u i )) ∈ C 1 (u, π(v)) are mutually independent across u i . Then, we have |C 1 (u, π(v))| ∼ Binom(n − 2, p 2 s 2 ). Applying Bernstein's inequality in Theorem 4 with γ = 7 2 log n and K = 1 yields P |C 1 (u, π(v))| ≥ (n − 2)p 2 s 2 + 7(n − 2)p 2 s 2 (1 − p 2 s 2 ) log n + 7 3 log n ≤ exp − 7 2 log n = n − 7 2 . Since np 2 ≤ 1 135 log n and n is sufficiently large, we have (n − 2)p 2 s 2 + 7(n − 2)p 2 s 2 (1 − p 2 s 2 ) log n + 7 3 log n ≤ 1 135 log n + 1 19 + 7 3 log n ≤3 log n. Hence, P {|C 1 (u, π(v))| ≥ 3 log n} ≤ n − 7 2 . Taking the union bound over all these inequalities yields that P {R uv } ≥ 1 − 4(n −6 + n −4 ) − 2(n −6 + n −4 ) − 2n − 7 2 ≥ 1 − n − 10 3 . D.3 Proof of Lemma 4 Fixing any two vertices u = v in G 1 , we condition on Q uv and assume R uv is true. For each seed (u i , π(u i )), let π(v i ) be the underlying vertex matched to u i , i.e., π(u i ) = π(v i ). Then, (u i , π(v i )) is a correct seed if u i = v i and is an incorrect seed if u i = v i . Among all seeds, some of them may be of the form that u i ∈ N G 1 1 (u) ∪ {u} or π(v i ) ∈ N G 2 1 (π(u)) ∪ {π(u)}. They could not become 2-hop witnesses for (u, π(u)). 
The number of such seeds is bounded by N G 1 1 (u) + N G 2 1 (π(u)) +2 ≤ 2(1+ ǫ)(n−1)ps+2 ≤ 3nps, where the first inequality holds because R uv is true, and the second inequality holds because ǫ ≤ 1 3 and nps ≥ 6. Then, for each remaining seed (u i , π(v i )), we will estimate the probability that it becomes a 2-hop witness for u and π(u) by calculating the probability that u i connects to the 1-hop neighbours of u and π(v i ) connects to the 1-hop neighbours of π(u). However, if u and v are connected in G 1 , then any 1-hop neighbour of v would become the 2-hop neighbour of u. Hence, conditioning on Q uv changes the probability that those seeds, (u i , π(v i )), become 2-hop witnesses for u and π(u) if u i is a 1-hop neighbor of v. Similarly if π(u) and π(v) are connected in G 2 and π(v i ) is a 1-hop neighbor of π(v). Therefore, we will also exclude all those seeds such that u i ∈ N G 1 1 (v) ∪ {v} or π(v i ) ∈ N G 2 1 (π(v)) ∪ {π(v)} so that we can avoid this difficulty. The number of such seeds is bounded by N G 1 1 (v) + N G 2 1 (π(v)) + 2 ≤ 2(1 + ǫ)(n − 1)ps + 2 ≤ 3nps, where the first inequality holds because R uv is true, and the second inequality holds because ǫ ≤ 1 3 and nps ≥ 6. Fortunately, since the total number of seeds we exclude is bounded by 6nps, which is far less than n, we can still get a pretty tight lower bound. We first count the contribution to W 2 (u, π(u)) by the correct seeds. We use n R to denote the number of correct seeds remained after we exclude the above kind of seeds. Since there are a constant fraction β of correct seeds in π, n R is no less than nβ − 6nps. Let n + R denote max {⌈n(β − 6ps)⌉, 0}. Then, there are at least n + R correct seeds which could be a 2-hop witness for (u, π(u)). Next, we lower bound the number of 2-hop witnesses contributed by the correct seeds. Let L i be a binary random variable such that L i = 1 if (u i , π(u i )) is a 2-hop witness for (u, π(u)) and L i = 0 otherwise. 
Let C i be a binary random variable such that C i = 1 if u i and π(u i ) connect to the "common" 1-hop neighbours of u and π(u), respectively, and C i = 0 otherwise. Since we have excluded (u i , π(u i )) such that u i ∈ N G 1 1 (v) or π(u i ) ∈ N G 2 1 (π(v)), the remaining u i and π(u i ) can not connect to v and π(v), respectively. Thus, C i = 1 can be expressed as {C i = 1} = (w,π(w))∈C 1 (u,π(u))\ {(v,π(v))} u i ∈ N G 1 1 (w) ∩ π(u i ) ∈ N G 2 1 (π(w)) = (w,π(w))∈C 1 (u,π(u))\ {(v,π(v))} {(u i , π(u i )) ∈ C 1 (w, π(w))} . Then, we can bound P C i = 1 Q uv as follows: P C i = 1 Q uv (a) = 1 − P        (w,π(w))∈C 1 (u,π(u))\ {(v,π(v))} {(u i , π(u i )) / ∈ C 1 (w, π(w))} Q uv        (b) =1 − (w,π(w))∈C 1 (u,π(u))\ {(v,π(v))} P (u i , π(u i )) / ∈ C 1 (w, π(w)) Q uv =1 − (w,π(w))∈C 1 (u,π(u))\ {(v,π(v))} 1 − P (u i , π(u i )) ∈ C 1 (w, π(w)) Q uv (c) =1 − 1 − ps 2 |C 1 (u,π(u))\{(v,π(v))}| (d) ≥ 1 − 1 − 1 2 (|C 1 (u, π(u))| − 1) ps 2 = 1 2 (|C 1 (u, π(u))| − 1) ps 2 (e) ≥ 7 24 np 2 s 4 . We use the De Morgan's laws, which states R ∪ S = R ∩ S, in the equality (a). Because the edges (u i , w)'s are different in the parent graph G across different w, and the sampling process is independent on each edge, which implies that {(u i , π(u i )) / ∈ C 1 (w, π(w))} are independent across different w conditional on Q uv . Hence, the equality (b) holds. Because {(u i , π(u i )) ∈ C 1 (w, π(w))} if and only if the edge (u i , w) is in G, and is sampled into both G 1 and G 2 , the equality (c) holds. Because |C 1 (u, π(u))| ≥ 0, ps 2 ∈ (0, 1) and |C 1 (u, π(u))| ps 2 ≤ (1 + ǫ)(n − 1)p 2 s 4 ≤ 4 3 np 2 s 4 < 1, the inequality (d) holds based on Theorem 6. The inequality (e) holds because |C 1 (u, π(u))| − 1 ≥ (1 − ǫ)(n − 1)ps 2 − 1 ≥ 2 3 (n − 1)ps 2 − 1 ≥ 7 12 nps 2 . If C i = 1, then (u i , π(v i )) is a 2-hop witness for (u, π(u)). 
Thus, {C i = 1} ⊂ {L i = 1}, i.e., P L i = 1 Q uv ≥ P C i = 1 Q uv ≥ 7 24 np 2 s 4 .(28) Note that we have known the common neighbors (w, π(w)) ∈ C 1 (u, π(u)) conditional on Q uv , and the edges (u i , w)'s are different for different u i in the parent graph G, then C i 's are mutual independent of each other conditional on Q uv . Then, let L denote the number of 2-hop witnesses contributed by correct seeds. Since n + R ∈ [(1 − δ 1 )nβ, nβ], we have L = n R i=1 L i ≥ n + R i=1 C i s.t. ≥ C ′ , where C ′ is a random variable such that C ′ ∼ Binom(n + R , 7 24 np 2 s 4 ). Recall that l min = 7 24 (1 − δ 1 )n 2 βp 2 s 4 − 35 16 n 2 βp 2 s 4 log n − 5 2 log n. We then get P L ≤ l min Q uv ≤ P C ′ ≤ l min Q uv ≤ exp − 15 4 log n = n − 15 4 ,(29) where the last inequality follows from Corollary 1 with γ = 15 4 log n. We then count the contribution to W 2 (u, π(u)) by the incorrect seeds. For any incorrect seed (u i , π(v i )), if π(u i ) ∈ N G 2 1 (π(u)) and v i ∈ N G 1 1 (u), then the event that u i connects to v i would be dependent on the event that π(v i ) connects to π(u i ) (See Fig. 14 for example). Thus, we also exclude such seeds to avoid the difficulty of calculating probability. We use n W to denote the number of incorrect seeds remained after we exclude the seeds shown above. Then, n W is no less than n(1 − β) − 6nps − N G 1 1 (u) − N G 2 1 (π(u)) ≥ n(1 − β) − 9nps, where the inequality holds because R uv is true. Let n + W denote max {⌈n(1 − β − 9ps)⌉, 0}, then there are at least n + W incorrect seeds (u i , π(v i )) which could be a 2-hop witness for (u, π(u)). Figure 14: If π(u i ) ∈ N G 2 1 (π(u)) and v i ∈ N G 1 1 (u), then the event that u i connects to v i would be dependent on the event that π(v i ) connects to π(u i ). u π(u) v i π(u i ) u i π(v i ) G 1 G 2 Let M i be a binary random variable such that M i = 1 if (u i , π(v i )) is a 2-hop witness for (u, π(u)) and M i = 0 otherwise. 
Then, we let λ = P M i = 1 Q uv and have λ (a) = P u i ∈ N G 1 2 (u) Q uv · P π(v i ) ∈ N G 2 2 (π(u)) Q uv (b) = 1 − P u i ≁ N G 1 1 (u) Q uv 1 − P π(v i ) ≁ N G 2 1 (π(u)) Q uv (c) = 1 − (1 − ps) N G 1 1 (u)\{v} 1 − (1 − ps) N G 2 1 (π(u))\{π(v)} , where for any graph G(V, E), A ⊂ V and u ∈ V \A, let u ≁ A denote the event that u does not connect to any node in A, i.e., u ≁ A = v∈A u / ∈ N G 1 (v) . The equality (a) holds because u i ∈ N G 1 2 (u) is independent of π(v i ) ∈ N G 2 2 (π(u)); Otherwise, π(u i ) ∈ N G 2 1 (π(u)) and v i ∈ N G 1 1 (u) since u i = v i , but we have neglected such seeds. The equality (b) shows the probability that u i and v i connect to at least one 1-hop neighbour of u and π(u), respectively. Because we have excluded (u i , π(v i )) such that Figure 15: The event that (u i , π(v i )) becomes a 2-hop witness for (u, π(u)) conditional on Q uv is dependent on (u j , π(u i )) and (v i , π(v j )). u i ∈ N G 1 1 (v) ∪ {v}, it is impossible that u i connects v. Thus, when we calculate P u i ≁ N G 1 1 (u) Q uv , we have to exclude v from N G 1 1 (u). And P π(v i ) ≁ N G 2 1 (π(u)) Q uv is similar. Then, equality (c) holds. u π(u) u i π(v i ) v i π(v j ) u j π(u i ) G 1 G 2 Note that M i are dependent. Specifically, the event that (u i , π(v i )) becomes a 2-hop witness for (u, π(u)) conditional on Q uv is dependent on (u j , π(u i )) and (v i , π(v j )) (See Fig. 15 for an example). Then, we cannot apply Bernsteins Inequality in Theorem 4. Fortunately, the event that (u j , π(v j )) becomes a 2-hop witnesses for (u, π(u)) conditional on Q uv is dependent on (u i , π(v i )) only if u j = v i or v j = u i . Thus, the event that (u i , π(v i )) becomes a 2-hop witness for (u, π(u)) conditional on Q uv depends on at most two other seeds. Thus, we apply the concentration inequality for the sum of dependent random variables given in Theorem 5. Specifically, we construct a dependency graph Γ for {M i }. The maximum degree of Γ, ∆(Γ), equals to two. 
Thus, we apply Theorem 5 with ∆ 1 (Γ) = ∆(Γ) + 1 = 3, K = 1, σ 2 = n + W λ(1 − λ) and γ = 4 log n, P    n + W 1 M i ≤ n + W λ − 5 3 2 n + W λ(1 − λ) log n − 25 2 log n Q uv    ≤ exp (−4 log n) = n −4 . Since n + W ∈ [n(1 − β − 9ps), n], we have n + W λ(1 − λ) ≤n 1 − (1 − ps) N G 1 1 (u) 1 − (1 − ps) N G 2 1 (π(u)) (a) ≤ n N G 1 1 (u) ps N G 2 1 (π(u)) ps (b) ≤ 9 4 n 3 p 4 s 4 . The inequality (a) is based on Bernoulli's Inequality which states that (1+x) r ≥ 1+rx for every integer r ≥ 0 and every real number x ≥ −2. The equality (b) holds because N G 1 1 (u) , N G 2 1 (π(u)) ≤ (1 + ǫ)(n − 1)ps ≤ 3 2 nps. Let M denote the number of 2-hop witnesses contributed by the incorrect seeds. We have M ≥ n + W 1 M i . Recall that m min = (1 − δ 2 )n(1 − β)λ − 15 2 3 2 n 3 p 4 s 4 log n − 25 2 log n. We then get P M ≤ m min Q uv ≤P    n + W 1 M i ≤ n + W λ − 5 3 2 n + W λ(1 − λ) log n − 25 2 log n Q uv    ≤ n −4 .(30) Due to W 2 (u, π(u)) = L + M and sufficiently large n, taking an union bound over (29) and (30) yields that P W 2 (u, π(u)) ≤ l min + m min Q uv · ½ (R uv ) ≤ n − 15 4 + n −4 < n − 7 2 . Remark 4. In (28), even though we neglect the case that u i and v i connect to different 1-hop neighbours of u and π(u), the lower bound is still tight. This is because P {L i = 1} \ {C i = 1} Q uv ≈P u i ∈ N G 1 2 (u) Q uv P π(v i ) ∈ N G 2 2 (π(u)) Q uv ≈ N G 1 1 (u) N G 2 1 (π(u)) p 2 s 2 ≤ 9 4 n 2 p 4 s 4 . We can observe P C i = 1 Q uv ≫ P {L i = 1} \ {C i = 1} Q uv because np 2 ≤ 1 135 log n , and n is sufficiently large. Thus, we can say 7 24 np 2 s 4 is a tight lower bound of P L i = 1 Q uv . D.4 Proof of Lemma 5 Fixing any two vertices u = v in G 1 , we condition on Q uv and assume R uv is true. For each seed (u i , π(u i )), let π(v i ) be the underlying vertex matched to u i , i.e., π(u i ) = π(v i ). Then, (u i , π(v i )) is a correct seed if u i = v i and is an incorrect seed if u i = v i . 
For any seed (u i , π(v i )), it could be a 2-hop witness for (u, π(v)) if u i / ∈ N G 1 1 (u) ∪ {u} and π(v i ) / ∈ N G 2 1 (π(v)) ∪ {π(v)}. Then, there are five types of the remaining seeds we need consider (see Fig. 13 for example): Type 1: u i / ∈ N G 1 1 (v) ∪ {v} and π(v i ) / ∈ N G 2 1 (π(u)) ∪ {π(u)}. We first consider the correct seeds of this type. For any correct seed (u i , π(u i )) in Type 1 which could be a 2-hop witness for u and π(v), let X i be a binary random variable such that X i = 1 if (u i , π(u i )) is a 2-hop witness for (u, π(v)) and X i = 0 otherwise. Let D i be a binary random variable such that D i = 1 if (u i , π(u i )) ∈ C 1 (u, π(v)). Thus, D i = 1 can be expressed as {D i = 1} = (w,π(w))∈C 1 (u,π(v)) u i ∈ N G 1 1 (w) ∩ π(u i ) ∈ N G 2 1 (π(w)) = (w,π(w))∈C 1 (u,π(v)) {(u i , π(u i )) ∈ C 1 (w, π(w))} . Then, we have P D i = 1 Q uv (a) = 1 − P    (w,π(w))∈C 1 (u,π(v)) {(u i , π(u i )) / ∈ C 1 (w, π(w))} Q uv    (b) =1 − (w,π(w))∈C 1 (u,π(v)) P (u i , π(u i )) / ∈ C 1 (w, π(w)) Q uv (c) =1 − (1 − ps 2 ) |C 1 (u,π(v))| (d) ≤ 1 − (1 − |C 1 (u, π(v))| ps 2 ) = |C 1 (u, π(v))| ps 2 (e) ≤3ps 2 log n. The equality (a) is based on the De Morgan's laws, which states R ∪ S = R ∩ S. Because {(u i , π(u i )) ∈ C 1 (w, π(w))} only depends on the edge (u i , w), and the edges (u i , w) are different in the parent graph G across different w, {(u i , π(u i )) ∈ C 1 (w, π(w))} are independent across different w conditional on Q uv . Hence, the equality (b) holds. Because (u i , π(u i )) ∈ C 1 (w, π(w)) if and only if the edge (u i , w) is in G, and is sampled into both G 1 and G 2 , the equality (c) holds. The inequality (d) is based on Bernoulli's Inequality which states that (1 + x) r ≥ 1 + rx for every integer r ≥ 0 and every real number x ≥ −2. The inequality (e) holds based on |C 1 (u, π(v))| < 3 log n. Note that if D i = 1, then (u i , π(u i )) is a 2-hop witnesses for (u, π(v)). 
Thus, {D i = 1} ⊂ {X i = 1}, and {X i = 1} \ {D i = 1} can be can be expressed as {X i = 1} \ {D i = 1} = A⊂N G 1 1 (u),A =∅    B⊂N G 2 1 (π(v))\π(A),B =∅ u i N G 1 1 (u) ∼ A, π(u i ) N G 2 1 (π(v)) ∼ B    (a) = A⊂N G 1 1 (u),A =∅,    u i N G 1 1 (u) ∼ A ∩    B⊂N G 2 1 (π(v))\π(A),B =∅ π(u i ) N G 2 1 (π(v)) ∼ B       where in any graph G(V, E), if U ⊂ V, A ⊂ V, u ∈ V \ A, let u U ∼ A denote the event that u connects to all vertices in A but does not connect any vertex in U \ A, i.e., u U ∼ A = v∈A u ∈ N G 1 (v) ∩   w∈U \A u / ∈ N G 1 (w)   . The equality (a) is based on the distribution law of set, which states that (R ∩ S) ∪ (R ∩ T ) = R ∩ (S ∪ T ). Since i N G 1 1 (u) ∼ A is independent of π(u i ) N G 2 1 (π(v)) ∼ B because B does not have the same vertex with π(A). Thus, P {X i = 1} \ {D i = 1} Q uv could be bounded by P {X i = 1} \ {D i = 1} Q uv = A⊂N G 1 1 (u),A =∅   P u i N G 1 1 (u) ∼ A Q uv · B⊂N G 2 1 (π(v))\π(A),B =∅ P π(u i ) N G 2 1 (π(v)) ∼ B Q uv    (a) ≤ A⊂N G 1 1 (u),A =∅   P u i N G 1 1 (u) ∼ A Q uv · B⊂N G 2 1 (π(v)),B =∅ P π(u i ) N G 2 1 (π(v)) ∼ B Q uv    (b) =    A⊂N G 1 1 (u),A =∅ P u i N G 1 1 (u) ∼ A Q uv       B⊂N G 2 1 (π(v)),B =∅ P π(u i ) N G 2 1 (π(v)) ∼ B Q uv    =P u i ∈ N G 1 2 (u) Q uv · P π(u i ) ∈ N G 2 2 (π(v) Q uv . where we relax the sum of probability that π(u i ) connects to some vertices belonging to N G 2 1 (π(v)) in the inequality (a) and take it out in the equality (b). Then, we have P u i ∈ N G 1 2 (u) Q uv =1 − P      w∈N G 1 1 (u) u i ∈ N G 1 1 (w) Q uv      (a) = 1 − (1 − ps) N G 1 1 (u)\{v} (b) ≤1 − 1 − N G 1 1 (u) ps = N G 1 1 (u) ps (c) ≤ 3 2 np 2 s 2 .(31) Because we have excluded (u i , π(v i )) such that u i ∈ N G 1 1 (v) ∪ {v}, it is impossible that u i connects v. Thus, when we calculate the probability that u i connects to the 1-hop neighbors of u, we have to exclude v from N G 1 1 (u). 
The inequality (b) is based on Bernoulli's Inequality which states that (1 + x) r ≥ 1 + rx for every integer r ≥ 0 and every real number x ≥ −2. The inequality (c) is based on N G 1 1 (u) , N G 2 1 (π(v)) < (1 + ǫ)(n − 1)ps < 3 2 nps. And P π(u i ) ∈ N G 2 2 (π(v) Q uv ≤ 3 2 np 2 s 2 follows from the similar proof. Then, we have P {X i = 1} \ {D i = 1} Q uv ≤ 9 4 n 2 p 4 s 4 . Thus, P X i = 1 Q uv ≤ 3ps 2 log n + 9 4 n 2 p 4 s 4 = µ 1 . Let X denote the number of 2-hop witnesses contributed by correct seeds in Type 1. Since there are a constant fraction β of correct seeds, the number of correct seeds which could be 2-hop witnesses is no larger than nβ. Thus, we have X ≤ nβ i=1 X i . For incorrect seed (u i , π(v i )) in Type 1, if π(u i ) ∈ N G 2 1 (π(v)) or v i ∈ N G 1 1 (u), then the event u i connects to v i would be dependent on the event that π(v i ) connects to π(u i ) (see Fig. 16 for example). The number of such (u i , π(v i )) is no larger than 3nps (using N G 1 1 (u) + N G 2 1 (π(v)) < 2(1 + ǫ)(n − 1)ps < 3nps). Let Ψ i be a binary random variable such that Ψ i = 1 if such (u i , π(v i )) is a 2-hop witness for (u, π(v)) and Ψ i = 0 otherwise. Figure 16: If π(u i ) ∈ N G 2 1 (π(u)) and v i ∈ N G 1 1 (v), then the event that u i connects to v i would be dependent on the event that π(v i ) connects to π(u i ). u π(v) v i π(u i ) u i π(v i ) G 1 G 2 We use µ 2 to denote P Ψ i = 1 Q uv and have µ 2 (a) = P Ψ i = 1 Q uv ∩ u i ∈ N G 1 1 (v i ) P u i ∈ N G 1 1 (v i ) Q uv + P Ψ i = 1 Q uv ∩ u i / ∈ N G 1 1 (v i ) P u i / ∈ N G 1 1 (v i ) Q uv (b) = 1 − (1 − s)(1 − ps) N G 2 1 (π(v))\{π(u)} −1 ps + 1 − (1 − ps) N G 1 1 (u)\{v} −1 1 − (1 − ps) N G 2 1 (π(v))\{π(u)} (1 − ps) ≤ps + 1 − (1 − ps) N G 1 1 (u) 1 − (1 − ps) N G 2 1 (π(v)) (c) ≤ps + 1 − 1 − N G 1 1 (u) ps 1 − 1 − N G 2 1 (π(v)) ps (d) ≤ ps + 9 4 n 2 p 4 s 4 . We condition on two cases: u i ∈ N G 1 1 (v i ) and u i / ∈ N G 1 1 (v i ) in equality (a). 
If u_i ∈ N^{G_1}_1(v_i), then u_i is a 2-hop neighbor of u, and we only need to calculate the probability that π(v_i) becomes a 2-hop neighbor of π(v). If u_i ∉ N^{G_1}_1(v_i), the event that u_i connects to the other 1-hop neighbors of u is not dependent on the event that π(v_i) connects to a 1-hop neighbor of π(v). In addition, because we have excluded (u_i, π(v_i)) such that u_i ∈ N^{G_1}_1(v) ∪ {v}, it is impossible that u_i connects to v. Thus, when we calculate the probability that u_i connects to the 1-hop neighbors of u, we have to exclude v from N^{G_1}_1(u). The probability that π(v_i) connects to the 1-hop neighbors of π(v) is handled similarly. Thus, equality (b) holds. The inequality (c) is based on Bernoulli's inequality, which states that (1 + x)^r ≥ 1 + rx for every integer r ≥ 0 and every real number x ≥ −2. The inequality (d) is based on |N^{G_1}_1(u)|, |N^{G_2}_1(π(v))| < (1 + ǫ)(n − 1)ps < (3/2)nps.

Then, we use Ψ to denote the number of 2-hop witnesses for (u, π(v)) contributed by the seeds (u_i, π(v_i)) such that π(u_i) ∈ N^{G_2}_1(π(v)) and v_i ∈ N^{G_1}_1(u). We have Ψ ≤ Σ_{i=1}^{⌊3nps⌋} Ψ_i.

For an incorrect seed (u_i, π(v_i)) such that π(u_i) ∉ N^{G_2}_1(π(v)) and v_i ∉ N^{G_1}_1(u), the event u_i ∈ N^{G_1}_2(u) is independent of the event π(v_i) ∈ N^{G_2}_2(π(v)). Let Y_i be a binary random variable such that Y_i = 1 if such a seed (u_i, π(v_i)) is a 2-hop witness for (u, π(v)), and Y_i = 0 otherwise. We use µ_3 to denote P { Y_i = 1 | Q_uv } and have

µ_3 = P { u_i ∈ N^{G_1}_2(u), π(v_i) ∈ N^{G_2}_2(π(v)) | Q_uv }
(a) = P { u_i ∈ N^{G_1}_2(u) | Q_uv } · P { π(v_i) ∈ N^{G_2}_2(π(v)) | Q_uv }
(b) = ( 1 − (1 − ps)^{|N^{G_1}_1(u)\{v}|} ) ( 1 − (1 − ps)^{|N^{G_2}_1(π(v))\{π(u)}|} )
(c) ≤ (9/4) n^2 p^4 s^4.

The equality (a) holds because the event u_i ∈ N^{G_1}_2(u) is independent of the event π(v_i) ∈ N^{G_2}_2(π(v)); otherwise, π(u_i) ∈ N^{G_2}_1(π(v)) and v_i ∈ N^{G_1}_1(u) since u_i ≠ v_i, but we have already treated such seeds above. The steps (b) and (c) follow from a proof similar to (31).
Let Y denote the number of 2-hop witnesses contributed by the incorrect seeds such that π(u_i) ∉ N^{G_2}_1(π(v)) and v_i ∉ N^{G_1}_1(u). We have Y ≤ Σ_{i=1}^{n(1−β)} Y_i.

Type 2: u_i ∈ N^{G_1}_1(v) and π(v_i) ∉ N^{G_2}_1(π(u)) ∪ {π(u)}. The number of seeds in Type 2 is no larger than (3/2)nps (using |N^{G_1}_1(v)| ≤ (1 + ǫ)nps < (3/2)nps). Let Z^v_i denote a binary random variable such that Z^v_i = 1 if the seed (u_i, π(v_i)) of this type is a 2-hop witness for (u, π(v)). If u and v are connected in G_1, then since u_i ∈ N^{G_1}_1(v), it follows that u_i ∈ N^{G_1}_2(u). Thus,

P { Z^v_i = 1 | Q_uv } = P { u_i ∈ N^{G_1}_2(u), π(v_i) ∈ N^{G_2}_2(π(v)) | Q_uv } ≤ P { π(v_i) ∈ N^{G_2}_2(π(v)) | Q_uv } ≤ (3/2) np^2 s^2.

The last inequality follows from a proof similar to (31). We use Z^v to denote the number of 2-hop witnesses contributed by this type of seeds. Then, we have Z^v ≤ Σ_{i=1}^{⌊(3/2)nps⌋} Z^v_i.

Type 3: u_i ∉ N^{G_1}_1(v) ∪ {v} and π(v_i) ∈ N^{G_2}_1(π(u)). Let Z^u_i denote a binary random variable such that Z^u_i = 1 if the seed (u_i, π(v_i)) in Type 3 is a 2-hop witness for (u, π(v)), and let Z^u denote the number of 2-hop witnesses contributed by this type of seeds. Following a proof similar to the Type 2 case, we get Z^u ≤ Σ_{i=1}^{⌊(3/2)nps⌋} Z^u_i and P { Z^u_i = 1 | Q_uv } ≤ (3/2) np^2 s^2.

Type 4: u_i ∈ N^{G_1}_1(v) ∪ {v} and π(v_i) ∈ N^{G_2}_1(π(u)) ∪ {π(u)}. There are at most 3 log n seeds in Type 4 conditional on Q_uv (using W_1(v, π(u)) ≤ 3 log n). Here, (u_i, π(v_i)) would be a 1-hop witness for (v, π(u)). If v ∈ N^{G_1}_1(u) and π(u) ∈ N^{G_2}_1(π(v)), then (u_i, π(v_i)) would be a 2-hop witness for (u, π(v)).

Type 5: u_i = v or π(v_i) = π(u). There are at most 2 such seeds.

After calculating the probability that each type of seed becomes a 2-hop witness for (u, π(v)), we upper bound the contributions to W_2(u, π(v)) by the correct and incorrect seeds, respectively. First, we consider the correct seeds. Let W_R denote the number of 2-hop witnesses contributed by the correct seeds.
We then have $W_R \le X + Z^u + Z^v + 3\log n$. Note that, for any correct seed $(u_i, \pi(u_i))$, the event that $(u_i, \pi(u_i))$ becomes a 2-hop witness for $(u, \pi(v))$ is independent of any other seed $(u_j, \pi(v_j))$; otherwise we would have $u_i \in N_1^{G_1}(u)$, $u_j \in N_1^{G_1}(u)$ or $\pi(v_j) \in N_1^{G_2}(\pi(v))$, but we have excluded such seeds. We then can get
\[
\begin{aligned}
&P\big(W_R \ge x_{\max} + 9n^2p^3s^3 + 18\log n \mid Q_{uv}\big)\\
&\le P\big(X + Z^u + Z^v \ge x_{\max} + 9n^2p^3s^3 + 15\log n \mid Q_{uv}\big)\\
&\stackrel{(a)}{\le} P\big(X \ge x_{\max} + 5\log n \mid Q_{uv}\big) + 2\,P\big(Z^u \ge \tfrac{9}{2}n^2p^3s^3 + 5\log n \mid Q_{uv}\big)\\
&\stackrel{(b)}{\le} P\Big(X \ge n\beta\mu_1 + \sqrt{\tfrac{15}{2}n\beta\mu_1(1-\mu_1)\log n} + \tfrac{5}{2}\log n \,\Big|\, Q_{uv}\Big)\\
&\quad + 2\,P\Big(Z^u \ge \big\lfloor\tfrac{3}{2}nps\big\rfloor \tfrac{3}{2}np^2s^2 + \sqrt{\tfrac{45}{4}\big\lfloor\tfrac{3}{2}nps\big\rfloor np^2s^2\big(1-\tfrac{3}{2}np^2s^2\big)\log n} + \tfrac{5}{2}\log n \,\Big|\, Q_{uv}\Big)\\
&\stackrel{(c)}{\le} 3\exp\big(-\tfrac{15}{4}\log n\big) = 3\, n^{-\frac{15}{4}}. \qquad (32)
\end{aligned}
\]
Inequality (a) follows from the union bound. Inequality (b) follows from the AM-GM inequality, which states that $2\sqrt{xy} \le x + y$ for two non-negative numbers $x$ and $y$. Inequality (c) follows from Bernstein's inequality in Theorem 4 with $\gamma = \frac{15}{4}\log n$.

Next, we upper bound the contribution to $W_2(u, \pi(v))$ by the incorrect seeds. Let $W_W$ denote the number of 2-hop witnesses contributed by the incorrect seeds. We then have $W_W \le Y + \Psi + Z^u + Z^v + 3\log n + 2 \le Y + \Psi + Z^u + Z^v + \frac{7}{2}\log n$. Note that the event that an incorrect seed $(u_i, \pi(v_i))$ becomes a 2-hop witness for $(u, \pi(v))$ can be dependent on other incorrect seeds. Specifically, $(u_i, \pi(v_i))$ is dependent on $(u_j, \pi(u_i))$ and $(v_i, \pi(v_j))$ (see Fig. 17 for an example). Then, we cannot apply Bernstein's inequality in Theorem 4. Fortunately, the event that $(u_j, \pi(v_j))$ becomes a 2-hop witness for $(u, \pi(v))$ conditional on $Q_{uv}$ is dependent on $(u_i, \pi(v_i))$ only if $u_j = v_i$ or $v_j = u_i$. Thus, the event that $(u_i, \pi(v_i))$ becomes a 2-hop witness for $(u, \pi(v))$ conditional on $Q_{uv}$ depends on at most two other seeds, and we can apply the concentration inequality for the sum of dependent random variables given in Theorem 5.
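The "at most two other seeds" claim can be checked mechanically: build the dependency graph over seed pairs, joining $i \ne j$ exactly when $u_j = v_i$ or $v_j = u_i$, and inspect the maximum degree. A minimal sketch — the helper `dependency_degree` and the sample seed list are illustrative:

```python
def dependency_degree(seeds):
    """Max degree of the dependency graph over seed pairs (u_i, v_i):
    seeds i != j are adjacent iff u_j == v_i or v_j == u_i,
    the only sources of dependence identified in the proof."""
    deg = [0] * len(seeds)
    for i, (ui, vi) in enumerate(seeds):
        for j, (uj, vj) in enumerate(seeds):
            if i != j and (uj == vi or vj == ui):
                deg[i] += 1
    return max(deg) if seeds else 0

# Each seed uses distinct first coordinates and distinct second coordinates,
# so at most one j satisfies u_j == v_i and at most one satisfies v_j == u_i.
seeds = [(1, 2), (2, 3), (3, 4), (5, 6)]
assert dependency_degree(seeds) == 2  # (2, 3) depends on (1, 2) and (3, 4)
```

Because the two conditions can each be met by at most one other seed, the maximum degree is bounded by two for any such seed set, which is what licenses the application of Theorem 5 with $\Delta(\Gamma) = 2$.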
Specifically, we construct a dependency graph $\Gamma$ for $\{Y_i, \Psi_i, Z^u_i, Z^v_i\}$. The maximum degree of $\Gamma$, $\Delta(\Gamma)$, equals two. Thus, we apply Theorem 5 with $\Delta_1(\Gamma) = 1 + 2 = 3$ and $\gamma = 4\log n$:
\[
\begin{aligned}
&P\big(W_W \ge y_{\max} + 10n^2p^3s^3 + 91\log n \mid Q_{uv}\big)\\
&\le P\big(Y + \Psi + Z^u + Z^v \ge y_{\max} + 10n^2p^3s^3 + \tfrac{175}{2}\log n \mid Q_{uv}\big)\\
&\stackrel{(a)}{\le} P\big(Y \ge y_{\max} + \tfrac{25}{2}\log n \mid Q_{uv}\big) + P\Big(\Psi \ge n^2p^3s^3\Big(\tfrac{6}{nps} + \tfrac{27}{2}np^2s^2\Big) + 25\log n \,\Big|\, Q_{uv}\Big)\\
&\quad + 2\,P\big(Z^u \ge \tfrac{9}{2}n^2p^3s^3 + 25\log n \mid Q_{uv}\big)\\
&\stackrel{(b)}{\le} P\Big(Y \ge n(1-\beta)\mu_3 + 5\sqrt{\tfrac{3}{2}n(1-\beta)\mu_3(1-\mu_3)\log n} + \tfrac{25}{2}\log n \,\Big|\, Q_{uv}\Big)\\
&\quad + P\Big(\Psi \ge \lfloor 3nps\rfloor\mu_2 + 5\sqrt{\tfrac{3}{2}\lfloor 3nps\rfloor\mu_2(1-\mu_2)\log n} + \tfrac{25}{2}\log n \,\Big|\, Q_{uv}\Big)\\
&\quad + 2\,P\Big(Z^u \ge \big\lfloor\tfrac{3}{2}nps\big\rfloor\tfrac{3}{2}np^2s^2 + \sqrt{\tfrac{15}{2}\big\lfloor\tfrac{3}{2}nps\big\rfloor np^2s^2\big(1-\tfrac{3}{2}np^2s^2\big)\log n} + \tfrac{25}{2}\log n \,\Big|\, Q_{uv}\Big)\\
&\le 4\exp(-4\log n) = 4\, n^{-4}. \qquad (33)
\end{aligned}
\]
Inequality (a) follows from the union bound and $\frac{6}{nps} + \frac{27}{2}np^2s^2 \le 1$. Inequality (b) follows from the AM-GM inequality, which states that $2\sqrt{xy} \le x + y$ for two non-negative numbers $x$ and $y$.

Since $n$ is sufficiently large and $W_2(u, \pi(v)) = W_R + W_W$, taking a union bound over (32) and (33) yields
\[
\begin{aligned}
&P\big(W_2(u, \pi(v)) \ge x_{\max} + y_{\max} + 2z_{\max} + 109\log n \mid Q_{uv}\big)\cdot \mathbb{1}(R_{uv})\\
&= P\big(W_R + W_W \ge x_{\max} + y_{\max} + 2z_{\max} + 109\log n \mid Q_{uv}\big)\cdot \mathbb{1}(R_{uv})\\
&\le 4\, n^{-4} + 3\, n^{-\frac{15}{4}} < n^{-\frac{7}{2}}. \qquad (34)
\end{aligned}
\]

Figure 17: The event that $(u_i, \pi(v_i))$ is a 2-hop witness for $(u, \pi(v))$ is dependent on $(u_j, \pi(u_i))$ and $(v_i, \pi(v_j))$. (The figure shows the vertices $u, u_i, v_i, u_j$ in $G_1$ and $\pi(v), \pi(v_i), \pi(v_j), \pi(u_i)$ in $G_2$.)

In order to bound $l_{\min} + m_{\min} - x_{\max} - y_{\max} - 2z_{\max} - 109\log n$, we first bound the last two terms in (34). Inequality (a) is based on Bernoulli's inequality, which states that $(1+x)^r \ge 1 + rx$ for every integer $r \ge 0$ and every real number $x \ge -2$. Inequality (b) is based on $|N_1^{G_1}(u)|, |N_1^{G_2}(\pi(u))| \le (1+\epsilon)(n-1)ps \le \frac{4}{3}nps$.
\[
\begin{aligned}
&\min\Big\{n(1-\beta)\Big(1-(1-ps)^{|N_1^{G_2}(\pi(u))\setminus\{\pi(v)\}|}\Big)\big(|N_1^{G_2}(\pi(v))| - |N_1^{G_2}(\pi(u))|\big)ps,\ 0\Big\}\\
&\stackrel{(b)}{\ge} -\tau\, ps\, n(1-\beta)\Big(1-(1-ps)^{|N_1^{G_1}(u)|}\Big)
\stackrel{(c)}{\ge} -\tau\, n\, |N_1^{G_1}(u)|\, p^2s^2\\
&\stackrel{(d)}{\ge} -3n^2p^3s^3\sqrt{5nps(1-s)\log n} - 5n^2p^3s^3\log n.
\end{aligned}
\]
For inequality (a): if $|N_1^{G_2}(\pi(v))| < |N_1^{G_2}(\pi(u))|$, then $(1-ps)^{|N_1^{G_2}(\pi(v))| - |N_1^{G_2}(\pi(u))|} - 1 > 0$; otherwise, we apply Bernoulli's inequality. Inequality (b) is based on $\big||N_1^{G_2}(\pi(v))| - |N_1^{G_2}(\pi(u))|\big| \le \tau$. Inequality (c) is based on $|N_1^{G_1}(u)| < (1+\epsilon)(n-1)ps < \frac{3}{2}nps$.

Then we can continue to bound $l_{\min} + m_{\min} - x_{\max} - y_{\max} - 2z_{\max} - 109\log n$:
\[
\begin{aligned}
&l_{\min} + m_{\min} - x_{\max} - y_{\max} - 2z_{\max} - 109\log n\\
&\ge \tfrac{7}{24}n^2\beta p^2s^4 - \tfrac{7}{4}n^2p^3s^5 - \tfrac{15}{8}\sqrt{n^2\beta p^2s^4\log n} - 15\sqrt{\tfrac{3}{2}n^3p^4s^4\log n} - \Big(2n\beta\sqrt{3ps^2\log n} + \tfrac{9}{4}n^2p^4s^4\Big)\\
&\quad - 19n^2p^3s^3 - 139\log n - 3n^2p^3s^3\sqrt{5nps(1-s)\log n} - 5n^2p^3s^3\log n - 16n^3p^5s^5. \qquad (35)
\end{aligned}
\]
In view of (35), we can guarantee $l_{\min} + m_{\min} - x_{\max} - y_{\max} - 2z_{\max} - 109\log n \ge 0$ if inequalities (36)–(41) hold. We next verify that (36)–(41) hold.

First, by the assumptions $\beta \ge \frac{600\log n}{ns^4}$ and $np^2 \le \frac{1}{135\log n}$, and since $n$ is sufficiently large, we have
\[
\tfrac{1}{60}n^2\beta p^2s^4 \ge \tfrac{1}{60}n^2p^2s^4\cdot\tfrac{600\log n}{ns^4} \ge 10n^2p^2s^4\cdot p\log n \ge n^2p^3s^3\big(5\log n + 19 + \tfrac{7}{4}s^2 + 16np^2s^2\big).
\]
Second, by the assumption $\beta \ge \frac{1200\log n}{n^2p^2s^4}$, we have
\[
\tfrac{1}{24}n^2\beta p^2s^4 \ge \tfrac{1}{24}n^2\beta p^2s^4\cdot\tfrac{1200\log n}{n^2p^2s^4} > \tfrac{35}{16}\sqrt{n^2\beta p^2s^4\log n}.
\]
Third, by the assumption $\beta \ge \frac{600\log n}{ns^4}$, we have
\[
\tfrac{1}{30}n^2\beta p^2s^4 \ge \tfrac{1}{30}n^2p^2s^4\cdot\tfrac{600\log n}{ns^4} > 15\sqrt{\tfrac{3}{2}n^3p^4s^4\log n}.
\]
Fourth, by the assumption $\beta \ge \frac{1200\log n}{n^2p^2s^4}$, we have
\[
\tfrac{1}{8}n^2\beta p^2s^4 \ge \tfrac{1}{8}n^2p^2s^4\cdot\tfrac{1200\log n}{n^2p^2s^4} > 139\log n.
\]
Fifth, by the assumptions $nps^2 \ge 128\log n$ and $np^2 \le \frac{1}{135\log n}$, and since $n$ is sufficiently large, we have
\[
\tfrac{1}{15}n^2\beta p^2s^4 \ge \tfrac{1}{20}n\beta ps^2\cdot 128\log n + \tfrac{1}{60}n^2\beta p^2s^4\cdot 135np^2\log n \ge 2n\beta\sqrt{3ps^2\log n} + \tfrac{9}{4}n^2p^4s^4.
\]
Sixth, by the assumption $\beta \ge \frac{900np^3(1-s)\log n}{s}$, we have
\[
\tfrac{1}{120}n^2\beta p^2s^4 \ge \tfrac{1}{120}n^2p^2s^4\cdot\tfrac{900np^3(1-s)\log n}{s} \ge 3n^2p^3s^3\sqrt{5nps(1-s)\log n}.
\]
Thus, $l_{\min} + m_{\min} \ge x_{\max} + y_{\max} + 2z_{\max} + 109\log n$.
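Bernoulli's inequality, invoked repeatedly in the steps above, can itself be sanity-checked on a grid; the ranges below are chosen arbitrarily, and a tiny float tolerance is used near the equality cases:

```python
def bernoulli_holds(r, x, tol=1e-12):
    """(1 + x)**r >= 1 + r*x for integer r >= 0 and real x >= -2.

    tol guards against 1-ulp rounding at the equality cases r in {0, 1}."""
    return (1 + x) ** r + tol >= 1 + r * x

# sweep integer exponents 0..19 and x on a grid covering [-2, 2]
xs = [-2 + 0.05 * k for k in range(81)]
assert all(bernoulli_holds(r, x) for r in range(20) for x in xs)
```

The inequality is tight at $r = 0$ and $r = 1$, and at $x = 0$ for every $r$, which is why the comparison needs a tolerance rather than strict inequality.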
D.6 Proof of Lemma 7

In the parent graph $G$, for any vertex $u_i \in V\setminus\{u\}$, $u_i$ and $u$ are connected with probability $p$. Then, $|N_1^{G}(u)| \sim \mathrm{Binom}(n-1, p)$. Since $p \ge ps^2$, applying Lemma 8 in Appendix D.1 implies
\[
P\big(|N_1^{G}(u)| \ge (1+\epsilon)(n-1)p\big) \le n^{-4}. \qquad (42)
\]
Then, we use $R^G_u$ to denote the event $\big\{|N_1^{G}(u)| < (1+\epsilon)(n-1)p\big\}$. For any two vertices $u \ne v$ in the parent graph $G$, $u$ and $\pi(u)$ are the same vertex, and $v$ and $\pi(v)$ are the same vertex. Hence, $N_1^{G}(u) = N_1^{G}(\pi(u))$ and $N_1^{G}(v) = N_1^{G}(\pi(v))$. Then, we use $E_{uv}$ to denote $E_{uv} = \big\{|N_1^{G}(u)|, |N_1^{G}(v)|\big\}$. Conditioning on $E_{uv}$ such that $R^G_u$ and $R^G_v$ are true, we have to consider two cases: $|N_1^{G}(u)| \le |N_1^{G}(v)|$ and $|N_1^{G}(u)| > |N_1^{G}(v)|$.

Case 1: $|N_1^{G}(u)| \le |N_1^{G}(v)|$. We let $H_1, H_2, \dots, H_{|N_1^{G}(u)|}$ denote the binary random variables such that $H_i = 1$ if the $i$-th 1-hop neighbor of $u$ in $G$ is still connected to $u$ in $G_1$, and $J_1, J_2, \dots, J_{|N_1^{G}(v)|}$ denote the binary random variables such that $J_i = 1$ if the $i$-th 1-hop neighbor of $v$ in $G$ is still connected to $v$ in $G_1$. For any $i \in \{1, 2, \dots, |N_1^{G}(u)|\}$, we have
\[
P(H_i - J_i = k \mid E_{uv}) = \begin{cases} s(1-s) & \text{if } k = -1,\\ s^2 + (1-s)^2 & \text{if } k = 0,\\ s(1-s) & \text{if } k = 1. \end{cases}
\]
Since the $H_i$'s and $J_i$'s are generated by independent sub-sampling processes, the $(H_i - J_i)$'s are mutually independent conditional on $E_{uv}$. Thus, we can apply Bernstein's inequality given in Theorem 4, where the first inequality is based on the fact that $R^G_u$ is true and $\epsilon \ge 0$. Then,
\[
P\Bigg(\sum_{i=1}^{|N_1^{G}(u)|}(H_i - J_i) \ge \tau \,\Bigg|\, E_{uv}\Bigg) \le n^{-\frac{5}{1+\epsilon}}.
\]
Because $|N_1^{G_1}(u)| - |N_1^{G_1}(v)| = \sum_{i=1}^{|N_1^{G}(u)|} H_i - \sum_{i=1}^{|N_1^{G}(v)|} J_i \le \sum_{i=1}^{|N_1^{G}(u)|}(H_i - J_i)$ (using $|N_1^{G}(u)| \le |N_1^{G}(v)|$), we have
\[
P\big(|N_1^{G_1}(u)| - |N_1^{G_1}(v)| \ge \tau \mid E_{uv}\big) \le P\Bigg(\sum_{i=1}^{|N_1^{G}(u)|}(H_i - J_i) \ge \tau \,\Bigg|\, E_{uv}\Bigg) \le n^{-\frac{5}{1+\epsilon}}.
\]
Since $T_{uv} \subset \big\{|N_1^{G_1}(u)| - |N_1^{G_1}(v)| \ge \tau\big\}$, then
\[
P(T_{uv} \mid E_{uv}) \le P\big(|N_1^{G_1}(u)| - |N_1^{G_1}(v)| \ge \tau \mid E_{uv}\big) \le n^{-\frac{5}{1+\epsilon}}.
\]
Case 2: $|N_1^{G}(u)| > |N_1^{G}(v)|$.
Following the similar proof, we can get
\[
P(T_{uv} \mid E_{uv}) \le P\big(|N_1^{G_1}(\pi(v))| - |N_1^{G_1}(\pi(u))| \ge \tau \mid E_{uv}\big) \le n^{-\frac{5}{1+\epsilon}}.
\]
We combine the two cases and get
\[
P(T_{uv} \mid E_{uv}) \cdot \mathbb{1}\big(R^G_u \cap R^G_v\big) \le n^{-\frac{5}{1+\epsilon}}. \qquad (43)
\]
Finally, since $n$ is sufficiently large, applying (42), (43) and the union bound yields
\[
\begin{aligned}
P(T_{uv}) &= E_{E_{uv}}\big[P(T_{uv} \mid E_{uv})\big]\\
&= E_{E_{uv}}\Big[P(T_{uv} \mid E_{uv})\cdot\mathbb{1}\big(R^G_u \cap R^G_v\big) + P(T_{uv} \mid E_{uv})\cdot\mathbb{1}\big(\overline{R^G_u \cap R^G_v}\big)\Big]\\
&\le E_{E_{uv}}\Big[P(T_{uv} \mid E_{uv})\cdot\mathbb{1}\big(R^G_u \cap R^G_v\big)\Big] + E_{E_{uv}}\Big[\mathbb{1}\big(\overline{R^G_u \cap R^G_v}\big)\Big]\\
&\le n^{-\frac{5}{1+\epsilon}} + 2\, n^{-4} \le n^{-\frac{7}{2}}.
\end{aligned}
\]

Figure 1: Comparison of the conditions on $\beta$ given in Theorem 1, Theorem 2, and [LS18], when $s$ is a fixed constant and $p = o(1)$.
Figure 3: Performance comparison of the 1-hop algorithm and the 2-hop algorithm with varying $n$ and $p = n^{-3/4}$. Fix $s = 0.8$. The $x$-axis is $\beta/\sqrt{\log n/n}$.
Figure 5: The 1-hop algorithm with varying $n$ and $p = n^{-1/3}$. Fix $s = 0.8$. The $x$-axis is $\beta/(\log n/(np))$.
Figure 6: The 1-hop algorithm with varying $n$ and $p = n^{-2/3}$. Fix $s = 0.8$. The $x$-axis is $\beta/\sqrt{np^3\log n}$.
Figure 7: The 2-hop algorithm with varying $n$ and $p = n^{-3/5}$. The $x$-axis is $\beta/\sqrt{\log n/n}$.
Figure 8: The 2-hop algorithm with varying $n$ and $p = n^{-17/24}$. Fix $s = 0.8$. The $x$-axis is $\beta/(\log n/(n^2p^2))$.
Figure 9: The 2-hop algorithm with varying $n$ and $p = n^{-4/5}$. The $x$-axis is $\beta/\sqrt{np^3\log n}$.
Figure 10: The 2-hop algorithm with varying $n$ and $p = n^{-1/2}$. Fix $s = 1$.

D.5 Proof of Lemma 6

\[
l_{\min} + m_{\min} - x_{\max} - y_{\max} - 2z_{\max} - 109\log n \ge \cdots + \tfrac{9}{4}n^2p^4s^4 - 19n^2p^3s^3 - 139\log n - 9nps\Big(1-(1-ps)^{|N_1^{G_2}(\pi(u))\setminus\{\pi(v)\}|}\Big)\cdots\Big((1-ps)^{|N_1^{G_2}(\pi(v))\setminus\{\pi(v)\}| - |N_1^{G_2}(\pi(u))\setminus\{\pi(v)\}|} - 1\Big)\cdots
\]

For $i = 1, 2, \dots, |N_1^{G}(u)|$: $E[H_i - J_i \mid E_{uv}] = 0$, $|H_i - J_i| \le 1 = K$, and
\[
\sigma^2 = \sum_{i=1}^{|N_1^{G}(u)|} \mathrm{var}\big(H_i - J_i \mid E_{uv}\big) = 2\,|N_1^{G}(u)|\,s(1-s).
\]
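The moment computations for $H_i - J_i$ follow directly from its three-point distribution; a quick numeric sketch (the helper `hj_moments` and the sample value of $s$ are illustrative):

```python
def hj_moments(s):
    """Total mass, mean, and variance of H_i - J_i, where
    P(-1) = P(1) = s(1-s) and P(0) = s^2 + (1-s)^2, as in the proof of Lemma 7."""
    probs = {-1: s * (1 - s), 0: s ** 2 + (1 - s) ** 2, 1: s * (1 - s)}
    mean = sum(k * q for k, q in probs.items())
    var = sum((k - mean) ** 2 * q for k, q in probs.items())
    return sum(probs.values()), mean, var

total, mean, var = hj_moments(0.8)
assert abs(total - 1) < 1e-12            # a valid distribution
assert abs(mean) < 1e-12                 # E[H_i - J_i] = 0
assert abs(var - 2 * 0.8 * 0.2) < 1e-12  # var = 2 s (1 - s) per summand
```

Summing the per-summand variance over the $|N_1^{G}(u)|$ independent differences gives $\sigma^2 = 2|N_1^{G}(u)|s(1-s)$, the quantity plugged into Bernstein's inequality.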
We set $\gamma = \frac{5}{1+\epsilon}\log n$ and get
\[
P\Bigg(\sum_{i=1}^{|N_1^{G}(u)|}(H_i - J_i) \ge \sqrt{\tfrac{20}{1+\epsilon}|N_1^{G}(u)|\,s(1-s)\log n} + \tfrac{10}{3(1+\epsilon)}\log n \,\Bigg|\, E_{uv}\Bigg) \le n^{-\frac{5}{1+\epsilon}}.
\]

Figure 4: Performance comparison of 1-hop algorithm and 2-hop algorithm applied to the Autonomous Systems graphs.

Since $A$ determines the 1-hop neighbors of $u$ that $u_i$ connects to, the events $\{u_i \sim N_1^{G_1}(u) \text{ according to } A\}$ are disjoint from each other across $A$ conditional on $Q_{uv}$, and the events $\{\pi(u_i) \sim N_1^{G_2}(\pi(v)) \text{ according to } B\}$ are disjoint from each other across $B$ conditional on $Q_{uv}$ for the same reason.

References

Yonathan Aflalo, Alexander Bronstein, and Ron Kimmel. On convex relaxation of graph isomorphism. Proceedings of the National Academy of Sciences, 112(10):2942–2947, 2015.

Boaz Barak, Chi-Ning Chou, Zhixian Lei, Tselil Schramm, and Yueqi Sheng. (Nearly) efficient algorithms for the graph matching problem on correlated random graphs. arXiv preprint arXiv:1805.02349, 2018.

Rainer E. Burkard, Eranda Cela, Panos M. Pardalos, and Leonidas S. Pitsoulis. The quadratic assignment problem. In Handbook of Combinatorial Optimization, pages 1713–1809. Springer, 1998.

Donatello Conte, Pasquale Foggia, Carlo Sansone, and Mario Vento. Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence, 18(03):265–298, 2004.

Osman Emre Dai, Daniel Cullina, Negar Kiyavash, and Matthias Grossglauser. On the performance of a canonical labeling for matching correlated Erdős–Rényi graphs. arXiv preprint arXiv:1804.09758, 2018.

Nadav Dym, Haggai Maron, and Yaron Lipman. DS++: a flexible, scalable and provably tight relaxation for matching problems. ACM Transactions on Graphics (TOG), 36(6):184, 2017.

Jian Ding, Zongming Ma, Yihong Wu, and Jiaming Xu. Efficient random graph matching via degree profiles. arXiv preprint arXiv:1811.07821, 2018.

Devdatt Dubhashi and Alessandro Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 1st edition, 2009.

P. Erdős and A. Rényi. On random graphs, I. Publicationes Mathematicae (Debrecen), 6:290–297, 1959.

Donniell E. Fishkind, Sancar Adali, and Carey E. Priebe. Seeded graph matching. arXiv preprint arXiv:1209.0367, 2018.

Zhou Fan, Cheng Mao, Yihong Wu, and Jiaming Xu. Spectral graph matching and regularized quadratic relaxations I: The Gaussian model, 2019.

Zhou Fan, Cheng Mao, Yihong Wu, and Jiaming Xu. Spectral graph matching and regularized quadratic relaxations II: Erdős–Rényi graphs and universality. arXiv preprint arXiv:1907.08883, 2019.

Soheil Feizi, Gerald Quon, Mariana Mendoza, Muriel Medard, Manolis Kellis, and Ali Jadbabaie. Spectral alignment of graphs. IEEE Transactions on Network Science and Engineering, 2019.

François Le Gall. Powers of tensors and fast matrix multiplication, 2014.

Luca Ganassali and Laurent Massoulié. From tree matching to sparse graph alignment, 2020.

Marco Gori, Marco Maggini, and Lorenzo Sarti. Exact and approximate graph matching using random walks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:1100–1111, 2005.

Aria D. Haghighi, Andrew Y. Ng, and Christopher D. Manning. Robust textual inference via graph matching. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 387–394. Association for Computational Linguistics, 2005.

Svante Janson. Large deviations for sums of partly dependent random variables. Random Structures & Algorithms, 24:234–248, 2004.

Ehsan Kazemi, Hamed Hassani, Matthias Grossglauser, and Hassan Pezeshgi Modarres. PROPER: global protein interaction network alignment through percolation matching. BMC Bioinformatics, 17(1):527, 2016.

Nitish Korula and Silvio Lattanzi. An efficient reconciliation algorithm for social networks. Proceedings of the VLDB Endowment, 7(5):377–388, 2014.

Vince Lyzinski, Donniell Fishkind, Marcelo Fiori, Joshua Vogelstein, Carey Priebe, and Guillermo Sapiro. Graph matching: Relax at your own risk. IEEE Transactions on Pattern Analysis & Machine Intelligence, 38(1):60–73, 2016.

Vince Lyzinski, Donniell E. Fishkind, and Carey E. Priebe. Seeded graph matching for correlated Erdős–Rényi graphs. Journal of Machine Learning Research, 15, 2013.

Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (KDD '05), pages 177–187. Association for Computing Machinery, 2005.

Joseph Lubars and R. Srikant. Correcting the output of approximate graph matching algorithms. In IEEE INFOCOM 2018 – IEEE Conference on Computer Communications, pages 1745–1753. IEEE, 2018.

Konstantin Makarychev, Rajsekar Manokaran, and Maxim Sviridenko. Maximum quadratic assignment problem: Reduction from maximum label cover and LP-based approximation algorithm. Automata, Languages and Programming, pages 594–604, 2010.

Elchanan Mossel and Jiaming Xu. Seeded graph matching via large neighborhood statistics. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1005–1014. SIAM, 2019.

Arvind Narayanan and Vitaly Shmatikov. De-anonymizing social networks. In Security and Privacy, 2009 30th IEEE Symposium on, pages 173–187. IEEE, 2009.

Pedram Pedarsani and Matthias Grossglauser. On the privacy of anonymized networks. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1235–1243. ACM, 2011.

Panos M. Pardalos, Franz Rendl, and Henry Wolkowicz. The quadratic assignment problem: A survey and recent developments. In Proceedings of the DIMACS Workshop on Quadratic Assignment Problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 1–42. American Mathematical Society, 1994.

Farhad Shirani, Siddharth Garg, and Elza Erkip. Seeded graph matching: Efficient algorithms and theoretical guarantees. In 2017 51st Asilomar Conference on Signals, Systems, and Computers, pages 253–257. IEEE, 2017.

Christian Schellewald and Christoph Schnörr. Probabilistic subgraph matching based on convex relaxation. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 171–186. Springer, 2005.

Rohit Singh, Jinbo Xu, and Bonnie Berger. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proceedings of the National Academy of Sciences, 105(35):12763–12768, 2008.

Shinji Umeyama. An eigendecomposition approach to weighted graph matching problems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(5):695–703, 1988.

Joshua T. Vogelstein, John M. Conroy, Vince Lyzinski, Louis J. Podrazik, Steven G. Kratzer, Eric T. Harley, Donniell E. Fishkind, R. Jacob Vogelstein, and Carey E. Priebe. Fast approximate quadratic programming for graph matching. PLOS ONE, 10(4):e0121002, 2015.

Lyudmila Yartseva and Matthias Grossglauser. On the performance of percolation graph matching. In Proceedings of the First ACM Conference on Online Social Networks, pages 119–130. ACM, 2013.
Melbourne\n3010ParkvilleVICAustralia", "Département de Physique Théorique and Center for Astroparticle Physics\nUniversité de Genève\n24 quai Ernest AnsermetCH-1211GenevaSwitzerland", "School of Mathematics and Physics\nUniversity of Queensland\n4072BrisbaneQLDAustralia", "Department of Physics\nThe Ohio State University\n43210ColumbusOHUSA", "Australian Astronomical Optics\nMacquarie University\nNorth Ryde2113NSWAustralia", "Lowell Observatory\n1400 Mars Hill Rd86001FlagstaffAZUSA", "George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy\nDepartment of Physics and Astronomy\nTexas A&M University\n77843College StationTXUSA", "Institució Catalana de Recerca i Estudis Avançats\nE-08010BarcelonaSpain", "Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse\n85748GarchingGermany", "Physics Department\nUniversity of Wisconsin-Madison\n1150 University Avenue Madison2320, 53706-1390Chamberlin HallWI", "Department of Astrophysical Sciences\nPrinceton University\nPeyton Hall08544PrincetonNJUSA", "NASA Ames Research Center\n94035Moffett FieldCAUSA", "Academic Mission Services Task Lead, Research Institute for Advanced Computer Science, Universities Space Research Association\nNASA\n94043Mountain ViewCAUSA", "Center for Astrophysics and Space Astronomy\nDepartment of Astrophysical and Planetary Sciences\nUniversity of Colorado Boulder\n80309COUSA", "Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Physics\nCarnegie Mellon University\n15312PittsburghPennsylvaniaUSA", "Fakultät für Physik\nUniversitäts-Sternwarte\nLudwig-Maximilians Universität München\nScheinerstr. 181679MünchenGermany" ]
[]
We perform a joint analysis of the counts of redMaPPer clusters selected from the Dark Energy Survey (DES) Year 1 data and multi-wavelength follow-up data collected within the 2500 deg 2 South Pole Telescope (SPT) Sunyaev-Zel'dovich (SZ) survey. The SPT follow-up data, calibrating the richness-mass relation of the optically selected redMaPPer catalog, enable the cosmological exploitation of the DES cluster abundance data. To explore possible systematics related to the modeling of projection effects, we consider two calibrations of the observational scatter on richness estimates: a simple Gaussian model which accounts only for the background contamination (BKG), and a model which further includes contamination and incompleteness due to projection effects (PRJ). Assuming either a ΛCDM+ mν or wCDM+ mν cosmology, and for both scatter models, we derive cosmological constraints consistent with multiple cosmological probes of the low and high redshift Universe, and in particular with the SPT cluster abundance data. This result demonstrates that the DES Y1 and SPT cluster counts provide consistent cosmological constraints, if the same mass calibration data set is adopted. It thus supports the conclusion of the DES Y1 cluster cosmology analysis which interprets the tension observed with other cosmological probes in terms of systematics affecting the stacked weak lensing analysis of optically-selected low-richness clusters. Finally, we analyse the first combined optically-SZ selected cluster catalogue obtained by including the SPT sample above the maximum redshift probed by the DES Y1 redMaPPer sample (z = 0.65). Besides providing a mild improvement of the cosmological constraints, this data combination serves as a stricter test of our scatter models: the PRJ model, providing scaling relations consistent between the two abundance and multi-wavelength follow-up data, is favored over the BKG model.
10.1103/physrevd.103.043522
[ "https://arxiv.org/pdf/2010.13800v2.pdf" ]
225,076,230
2010.13800
5994d41883d0109ef5a05fa2c9bdcfac8d0cef0b
Cosmological Constraints from DES Y1 Cluster Abundances and SPT Multi-wavelength data 15 Feb 2022 2 M Costanzi INAF-Osservatorio Astronomico di Trieste via G. B. Tiepolo 11I-34143TriesteItaly Department of Physics Astronomy Unit University of Trieste via Tiepolo 11I-34131TriesteItaly Institute for Fundamental Physics of the Universe Via Beirut 234014TriesteItaly A Saro INAF-Osservatorio Astronomico di Trieste via G. B. Tiepolo 11I-34143TriesteItaly Department of Physics Astronomy Unit University of Trieste via Tiepolo 11I-34131TriesteItaly Institute for Fundamental Physics of the Universe Via Beirut 234014TriesteItaly INFN -National Institute for Nuclear Physics Via Valerio 2I-34127TriesteItaly S Bocquet Faculty of Physics Ludwig-Maximilians-Universität Scheinerstr. 181679MunichGermany T M C Abbott Cerro Tololo Inter-American Observatory NSF's National Optical-Infrared Astronomy Research Laboratory Casilla 603, La SerenaChile M Aguena Departamento de Física Matemática Instituto de Física Universidade de São Paulo 66318, 05314-970São PauloCP, SPBrazil Laboratório Interinstitucional de e-Astronomia -LIneA Rua Gal. José Cristino 77RJ -20921-400Rio de JaneiroBrazil S Allam Fermi National Accelerator Laboratory P. O. Box 50060510BataviaILUSA A Amara Institute of Cosmology and Gravitation University of Portsmouth PO1 3FXPortsmouthUK J Annis Fermi National Accelerator Laboratory P. O. 
Box 50060510BataviaILUSA S Avila Instituto de Fisica Teorica UAM/CSIC Universidad Autonoma de Madrid 28049MadridSpain D Bacon Institute of Cosmology and Gravitation University of Portsmouth PO1 3FXPortsmouthUK B A Benson Fermi National Accelerator Laboratory 60510-0500BataviaILUSA Department of Astronomy and Astrophysics University of Chicago 5640 South Ellis Avenue60637ChicagoIL Kavli Institute for Cosmological Physics University of Chicago 5640 South Ellis Avenue60637ChicagoIL S Bhargava Department of Physics and Astronomy Pevensey Building University of Sussex BN1 9QHBrightonUK D Brooks Department of Physics & Astronomy University College London Gower StreetWC1E 6BTLondonUK E Buckley-Geer Fermi National Accelerator Laboratory P. O. Box 50060510BataviaILUSA D L Burke Kavli Institute for Particle Astrophysics & Cosmology Stanford University P. O. Box 245094305StanfordCAUSA SLAC National Accelerator Laboratory 94025Menlo ParkCAUSA A Carnero Rosell Instituto de Astrofisica de Canarias E-38205La Laguna, TenerifeSpain Dpto. Astrofisica Universidad de La Laguna E-38206La Laguna, TenerifeSpain M Carrasco Department of Astronomy University of Illinois at Urbana-Champaign 1002 W. Green Street61801UrbanaILUSA National Center for Supercomputing Applications 1205 West Clark St61801UrbanaILUSA J Carretero Institut de Física d'Altes Energies (IFAE) The Barcelona Institute of Science and Technology Campus UAB08193Bellaterra, BarcelonaSpain A Choi Center for Cosmology and Astro-Particle Physics The Ohio State University 43210ColumbusOHUSA L N Da Costa Laboratório Interinstitucional de e-Astronomia -LIneA Rua Gal. José Cristino 77RJ -20921-400Rio de JaneiroBrazil Observatório Nacional Rua Gal. 
José Cristino 77RJ -20921-400Rio de JaneiroBrazil M E S Pereira Department of Physics University of Michigan 48109Ann ArborMIUSA J De Vicente Centro de Investigaciones Energéticas Medioambientales y Tecnológicas (CIEMAT) MadridSpain S Desai Department of Physics IIT Hyderabad 502285KandiTelanganaIndia H T Diehl Fermi National Accelerator Laboratory P. O. Box 50060510BataviaILUSA J P Dietrich Faculty of Physics Ludwig-Maximilians-Universität Scheinerstr. 181679MunichGermany P Doel Department of Physics & Astronomy University College London Gower StreetWC1E 6BTLondonUK T F Eifler Department of Astronomy/Steward Observatory University of Arizona 933 North Cherry Avenue85721-0065TucsonAZUSA Jet Propulsion Laboratory California Institute of Technology 4800 Oak Grove Dr91109PasadenaCAUSA S Everett Santa Cruz Institute for Particle Physics 95064Santa CruzCAUSA I Ferrero Institute of Theoretical Astrophysics University of Oslo BlindernP.O. Box 1029NO-0315OsloNorway P Singh INAF-Osservatorio Astronomico di Trieste via G. B. 
Tiepolo 11I-34143TriesteItaly Institute for Fundamental Physics of the Universe Via Beirut 234014TriesteItaly M Smith School of Physics and Astronomy University of Southampton SO17 1BJSouthamptonUK M Soares-Santos Department of Physics University of Michigan 48109Ann ArborMIUSA A A Stark Center for Astrophysics | Harvard & Smithsonian 60 Garden Street02138CambridgeMAUSA E Suchyta Computer Science and Mathematics Division, Oak Ridge National Laboratory 37831Oak RidgeTN M E C Swanson National Center for Supercomputing Applications 1205 West Clark St61801UrbanaILUSA G Tarle Department of Physics University of Michigan 48109Ann ArborMIUSA D Thomas Institute of Cosmology and Gravitation University of Portsmouth PO1 3FXPortsmouthUK C Institut d'Estudis Espacials de Catalunya (IEEC) 08034BarcelonaSpain Institute of Space Sciences (ICE, CSIC) Campus UAB, Carrer de Can Magrans, s/n08193BarcelonaSpain Kavli Institute for Cosmological Physics University of Chicago 60637ChicagoILUSA Department of Astronomy University of Michigan 48109Ann ArborMIUSA Institute of Astronomy University of Cambridge Madingley RoadCB3 0HACambridgeUK Kavli Institute for Cosmology University of Cambridge Madingley RoadCB3 0HACambridgeUK Department of Physics Stanford University 382 Via Pueblo Mall, Stanford94305CAUSA School of Physics University of Melbourne 3010ParkvilleVICAustralia Département de Physique Théorique and Center for Astroparticle Physics Université de Genève 24 quai Ernest AnsermetCH-1211GenevaSwitzerland School of Mathematics and Physics University of Queensland 4072BrisbaneQLDAustralia Department of Physics The Ohio State University 43210ColumbusOHUSA Australian Astronomical Optics Macquarie University North Ryde2113NSWAustralia Lowell Observatory 1400 Mars Hill Rd86001FlagstaffAZUSA George P. 
and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy Department of Physics and Astronomy Texas A&M University 77843College StationTXUSA Institució Catalana de Recerca i Estudis Avançats E-08010BarcelonaSpain Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 85748GarchingGermany Physics Department University of Wisconsin-Madison 1150 University Avenue Madison2320, 53706-1390Chamberlin HallWI Department of Astrophysical Sciences Princeton University Peyton Hall08544PrincetonNJUSA NASA Ames Research Center 94035Moffett FieldCAUSA Academic Mission Services Task Lead, Research Institute for Advanced Computer Science, Universities Space Research Association NASA 94043Mountain ViewCAUSA Center for Astrophysics and Space Astronomy Department of Astrophysical and Planetary Sciences University of Colorado Boulder 80309COUSA Kavli Institute for Astrophysics and Space Research Massachusetts Institute of Technology 02139CambridgeMAUSA Department of Physics Carnegie Mellon University 15312PittsburghPennsylvaniaUSA Fakultät für Physik Universitäts-Sternwarte Ludwig-Maximilians Universität München Scheinerstr. 181679MünchenGermany DES-2020-0588 FERMILAB-PUB-20-541-V (DES & SPT Collaborations) We perform a joint analysis of the counts of redMaPPer clusters selected from the Dark Energy Survey (DES) Year 1 data and multi-wavelength follow-up data collected within the 2500 deg 2 South Pole Telescope (SPT) Sunyaev-Zel'dovich (SZ) survey. The SPT follow-up data, calibrating the richness-mass relation of the optically selected redMaPPer catalog, enable the cosmological exploitation of the DES cluster abundance data. 
To explore possible systematics related to the modeling of projection effects, we consider two calibrations of the observational scatter on richness estimates: a simple Gaussian model which accounts only for the background contamination (BKG), and a model which further includes contamination and incompleteness due to projection effects (PRJ). Assuming either a ΛCDM+Σmν or wCDM+Σmν cosmology, and for both scatter models, we derive cosmological constraints consistent with multiple cosmological probes of the low and high redshift Universe, and in particular with the SPT cluster abundance data. This result demonstrates that the DES Y1 and SPT cluster counts provide consistent cosmological constraints, if the same mass calibration data set is adopted. It thus supports the conclusion of the DES Y1 cluster cosmology analysis which interprets the tension observed with other cosmological probes in terms of systematics affecting the stacked weak lensing analysis of optically-selected low-richness clusters. Finally, we analyse the first combined optically-SZ selected cluster catalogue obtained by including the SPT sample above the maximum redshift probed by the DES Y1 redMaPPer sample (z = 0.65). Besides providing a mild improvement of the cosmological constraints, this data combination serves as a stricter test of our scatter models: the PRJ model, providing scaling relations consistent between the two abundance and multi-wavelength follow-up data, is favored over the BKG model.

I. INTRODUCTION

Tracing the highest peaks of the matter density field, galaxy clusters are a sensitive probe of the growth of structures [see e.g. 1, 2, for reviews]. In particular, the abundance of galaxy clusters as a function of mass and redshift has been used over the last two decades to place independent and competitive constraints on the density and amplitude of matter fluctuations, as well as dark energy and modified gravity models [e.g. 3-9].
Thanks to the increasing number of wide-area surveys at different wavelengths -e.g. in the optical the Sloan Digital Sky Survey and the Dark Energy Survey (DES), in the microwave Planck, the South Pole Telescope (SPT) and the Atacama Cosmology Telescope, and in the X-ray eROSITA -cluster catalogs have grown in size by an order of magnitude compared to early studies, extending to lower mass systems and/or to higher redshifts. Despite these improved statistics, the constraining power of current cluster abundance studies is limited by the uncertainty in the calibration of the relation between cluster mass and the observable used as mass proxy [see e.g. 10]. In general, the observable-mass relation (or OMR) can be calibrated either using high-quality X-ray, weak lensing and/or spectroscopic follow-up data for a representative sub-sample of clusters [e.g. 5, 7, 11], or, if wide-area imaging data are available, exploiting the noisier weak lensing signal measured for a large fraction of the detected clusters [e.g. 8, 9, 12]. Depending on the methodology adopted, the mass estimates can be affected by different sources of systematics: e.g. violation of hydrostatic or dynamical equilibrium when relying on X-ray or spectroscopic follow-up data, respectively, or shear and photometric biases in weak lensing analyses. The calibration of the scaling relation is further hampered by the cluster selection and correlations between observables, which, if not properly modeled, can lead to large biases in the inferred parameters. The recent analysis of the optical cluster catalog extracted from the DES Year 1 (Y1) data, which combines cluster abundance and stacked weak lensing data, exemplifies such limitations [9, hereafter DES20]. The DES20 analysis results in cosmological posteriors in tension with multiple cosmological probes. 
The tension is driven by low-richness systems, and has been interpreted in terms of an unmodeled systematic affecting the stacked weak lensing signal of optically selected clusters. A possible route to improve our control over systematics relies on the combination of mass proxies observed at different wavelengths, and thus not affected by the same sources of error. Even more advisable would be the combination of cluster catalogs selected at different wavelengths, which would enable the full exploitation of the cosmological content of current and future cluster surveys. The DES and SPT data provide such an opportunity thanks to the large area shared between the two footprints and the high quality of the photometric and millimeter-wave data, respectively. Moreover, the X-ray and weak lensing follow-up data collected within the SPT survey, which have already been extensively vetted [7,13], provide an alternative data set to the stacked weak lensing signal adopted in DES20 to constrain the observable-mass scaling relations. The goal of this study is twofold: i) to reanalyze the DES Y1 cluster abundance data adopting the SPT follow-up data to calibrate the observable-mass relation(s), and ii) to provide a first case study for the joint analysis of cluster catalogs selected at different wavelengths. The first goal serves as an independent test of the conclusions drawn in DES20; for the second, combining the abundance data of the two surveys, we explore the possible cosmological gain given by the joint analysis of the two catalogs and exploit the complementary mass and redshift ranges probed by the two surveys to test the internal consistency of the data sets. Concerning this last point, we consider two calibrations of the observational noise on richness estimates with the aim of assessing possible model systematics induced by an overly simplistic modeling of the relation between richness and mass. The paper is organized as follows: In section II we present the data sets employed in this work. 
Section III introduces the methodology used to analyze the data. We present our results and discuss their implications in section IV. Finally we draw our conclusions in section V.

II. DATA

In this work we combine cluster abundance data from the DES Y1 redMaPPer optical cluster catalog [DES Y1 RM; 9], with multi-wavelength data collected within the 2500 deg² SPT-SZ cluster survey [SPT-SZ; 7,14]. Exploiting the large overlap (∼1300 deg²) of the DES Y1 and SPT-SZ survey footprints, we aim to use the SPT-SZ multi-wavelength data to calibrate the observable-mass relation of redMaPPer clusters, which in turn enables the derivation of cosmological constraints from the DES Y1 abundance data. Below we present a summary of the data sets employed in this work. To build our data vectors we follow the prescriptions adopted in DES20 and [7] (hereafter B19) and refer the reader to the original works for further details.

A. DES Y1 redMaPPer Cluster Catalog

The DES Y1 redMaPPer clusters are extracted from the DES Y1 photometric galaxy catalog [15]. The latter is based on the photometric data collected by the DECam during the Year One (Y1) observational season (from August 31, 2013 to February 9, 2014) over ∼1800 deg² of the southern sky in the g, r, i, z and Y bands. Galaxy clusters are selected through the redMaPPer photometric cluster finding algorithm that identifies galaxy clusters as overdensities of red-sequence galaxies [16,17]. redMaPPer uses a matched filter approach to estimate the membership probability of each red-sequence galaxy brighter than a specified luminosity threshold, L_min(z), within an empirically calibrated cluster radius (R_λ = 1.0 h^{-1} Mpc (λ^ob/100)^{0.2}). The sum of these membership probabilities is called richness, and is denoted as λ^ob. Along with the richness, redMaPPer estimates the photometric redshift of the identified galaxy clusters. 
Typical DES Y1 cluster photometric redshift uncertainties are σ_z/(1 + z) ≈ 0.006 with negligible bias (|Δz| ≤ 0.003). The photometric redshift errors are both redshift and richness dependent. To determine candidate central galaxies, the redMaPPer algorithm iteratively self-trains a filter that relies on galaxy brightness, cluster richness, and local galaxy density. The algorithm centers the cluster on the most likely candidate central galaxy, which is not necessarily the brightest cluster galaxy. [18] studied the centering efficiency of the redMaPPer algorithm using X-ray imaging and found that the fraction of correctly centered clusters is f_cen = 0.75 ± 0.08 with no significant dependence on richness. Following DES20, we use for the cluster count analysis the DES Y1 redMaPPer volume-limited catalog with λ^ob ≥ 20, in the redshift interval z ∈ [0.2, 0.65], with a total of 6504 clusters (the redMaPPer catalog can be found at https://des.ncsa.illinois.edu/releases/y1a1/key-catalogs/key-redmapper). Galaxy clusters are included in the volume-limited catalog if the cluster redshift z ≤ z_max(n), where z_max(n) is the maximum redshift at which galaxies at the luminosity threshold L_min(z) are still detectable in the DES Y1 at 10σ. Figure 1 shows the cluster density in the two non-contiguous regions of the DES Y1 redMaPPer cluster survey considered in this work. The lower panel, dubbed the SPT region, corresponds to the ∼1300 deg² overlapping area between the SPT-SZ and DES Y1 survey footprints. In accordance with the binning scheme adopted in DES20, we split our cluster sample into four richness bins and three redshift bins as listed in Table I. Moreover, we correct the cluster count data for miscentering effects following the prescription of DES20. Briefly, cluster miscentering tends to bias low the richness estimates, and thus the abundance data, introducing covariance amongst neighboring richness bins. 
The correction and covariance matrix associated with this effect are estimated in DES20 through Monte Carlo realizations of the miscentering model of [18]. The corrections derived for each richness/redshift bin are of the order of ≈ 3% with an uncertainty of ≈ 1.0% (see Table I).

B. SPT-SZ 2500 Cluster Catalog and Follow-Up Data

Galaxy clusters are detected at millimeter wavelengths via the thermal Sunyaev-Zel'dovich signature [SZ; 19], which arises from the inverse Compton scattering of CMB photons off hot electrons in the intracluster medium (ICM). The SPT-SZ survey observed the millimeter sky in the 95, 150, and 220 GHz bands over a contiguous 2500 deg² area, reaching a fiducial depth of ≤ 18 µK-arcmin in the 150 GHz band. Galaxy clusters are extracted from the SPT-SZ maps using a multiscale matched-filter approach [20] applied to the 95 and 150 GHz band data, as described in [14,21,22]. For each cluster candidate, corresponding to a peak in the matched-filtered maps, the SZ observable ξ is defined as the maximum detection significance over twelve equally spaced filter scales ranging from 0.25 to 3 arcmin [14]. The SPT-SZ cosmological sample consists of 365 candidates. Finally, to calibrate the redMaPPer richness-mass relation, we assign richnesses to the SPT-SZ clusters by cross-matching the two catalogs. To mitigate the impact of the optical selection, we consider for the matching procedure all the clusters with λ^ob ≥ 5 in the DES Y1 redMaPPer volume-limited catalog. The match is performed following the criterion adopted in [27]; see also [28] for an analogous study. 
Specifically: i) we sort the SPT-SZ and DES Y1 RM samples in descending order according to their selection observables, ξ and λ^ob; ii) starting with the SPT-SZ cluster with the largest ξ, we match the system to the richest DES Y1 RM cluster within a projected radius of 1.5 Mpc and a redshift interval δz = 0.1; iii) we remove the matched DES Y1 RM cluster from the list of possible counterparts and move to the next SPT-SZ system in the ranked list, iterating step ii) until all the SPT-SZ clusters have been checked for a match. We match all the 129 optically confirmed SPT-SZ clusters with ξ > 5 and z > 0.25 that are in the proper redshift range and that lie in the DES Y1 footprint. The remaining 214 non-matched systems reside either in masked regions of the DES Y1 footprint or at redshifts larger than the local maximum redshift z_max(n) of the DES Y1 RM volume-limited catalog. Figure 2 shows the λ^ob distribution of the matched sample as a function of the SZ detection significance. The median of the distribution is λ^ob = 78, while 68% and 95% of the matched sample reside above richness λ^ob > 60 and λ^ob > 37, respectively. To assess the probability of false association we repeat the matching procedure with 1000 randomized DES Y1 RM catalogs and compute the fraction of times that an SPT-SZ system is associated with a random redMaPPer cluster with λ ≥ λ^ob. We find this probability to be less than 0.2% for all the SPT-SZ matched systems, and thus we neglect it for the rest of the analysis. We also explore the possible cosmological gain given by the inclusion of the number count data from the SPT-SZ catalog. When included, we only consider SPT-SZ clusters above redshift 0.65 -the redshift cut adopted for the DES Y1 redMaPPer catalog -corresponding to 40% of the whole SPT-SZ sample. This redshift cut ensures the independence of the DES Y1 RM and SPT-SZ abundance data, which allows a straightforward combination of the two data sets. 
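The three-step matching criterion described above lends itself to a short sketch. The following is a minimal illustration, not the collaboration's pipeline code: cluster records are plain dicts with hypothetical keys ('id', 'xi' or 'lam', 'z', plus precomputed transverse comoving coordinates 'x', 'y' in Mpc standing in for a proper on-sky separation).

```python
import numpy as np

def projected_sep(c1, c2):
    """Projected separation in Mpc; 'x' and 'y' are hypothetical
    precomputed transverse comoving coordinates of each cluster."""
    return float(np.hypot(c1['x'] - c2['x'], c1['y'] - c2['y']))

def match_catalogs(spt, rm, d_max=1.5, dz_max=0.1):
    """Rank-ordered one-to-one matching, sketching steps i)-iii):
    sort both samples by their selection observable, then match each
    SPT cluster (in descending xi) to the richest still-unmatched
    redMaPPer cluster within d_max [Mpc] and |dz| < dz_max."""
    spt_sorted = sorted(spt, key=lambda c: c['xi'], reverse=True)   # step i)
    rm_sorted = sorted(rm, key=lambda c: c['lam'], reverse=True)    # step i)
    available = list(range(len(rm_sorted)))
    matches = {}
    for s in spt_sorted:                                            # step ii)
        for j in available:  # richness-ordered, so first hit is the richest
            r = rm_sorted[j]
            if projected_sep(s, r) < d_max and abs(s['z'] - r['z']) < dz_max:
                matches[s['id']] = r['id']
                available.remove(j)                                 # step iii)
                break
    return matches
```

Removing each matched counterpart from the candidate list (step iii) is what makes the association one-to-one and order-dependent, which is why both samples are ranked by their selection observables first.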
A summary of the SPT-SZ data employed in this analysis can be found in Table II.

III. ANALYSIS METHOD

Operatively, we can split our data set into three subsamples and corresponding likelihoods: i) the DES Y1 RM abundance data (DES-NC), ii) the SPT-SZ multi-wavelength data (SPT-OMR) and iii) the SPT-SZ abundance data at z > 0.65 (SPT-NC). Our theoretical model for the DES Y1 RM number counts is the same as that described in detail in [8] and DES20, while for the analysis of the SPT-SZ abundance and multi-wavelength data we rely on the model presented in B19. Here we only provide a brief summary of these methods and refer the reader to the original works for further details. Throughout the paper, all quantities labeled with "ob" denote quantities inferred from observation, while P(Y|X) denotes the conditional probability of Y given X. All masses are given in units of M_⊙/h, where h = H_0/100 km s^{-1} Mpc^{-1}, and refer to an overdensity of 500 with respect to the critical density. We use "log" and "ln" to refer to the logarithm with base 10 and e, respectively.

A. Observable-Mass Relations Likelihood

The SPT-SZ multi-wavelength data comprise four mass proxies: the SZ detection significance ξ, the richness λ^ob, the X-ray radial profile Y_X^ob, and the reduced tangential shear profile g_t(θ). The corresponding mean observable-mass relations for the intrinsic quantities -ζ, λ, Y_X, M_WL -are parameterized as follows:

ln ζ = ln(γ_f A_SZ) + B_SZ ln[M / (3 × 10^14 M_⊙ h^{-1})] + C_SZ ln[E(z)/E(0.6)],   (1)

ln λ = ln(A_λ) + B_λ ln[M / (3 × 10^14 M_⊙ h^{-1})] + C_λ ln[(1 + z)/(1 + 0.45)],   (2)

ln[M / (5.86 × 10^13 M_⊙ h^{-1})] = ln(A_{Y_X}) + B_{Y_X} ln[Y_X (h/0.7)^{5/2} / (3 × 10^14 M_⊙ keV)] + C_{Y_X} ln E(z),   (3)

ln M_WL = ln b_WL + ln M,   (4)

where γ_f in equation 1 depends on the position of the SPT-SZ cluster and accounts for the variation of survey depth over the SPT footprint [13], while E(z) = H(z)/H_0. 
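For concreteness, the mean relations of equations (1) and (2) can be written down directly. The sketch below assumes a toy flat ΛCDM E(z); all amplitude, slope and evolution parameters are free fit parameters passed in by the caller, and any numerical values used with these functions are illustrative, not the paper's posteriors.

```python
import numpy as np

def E_flat_lcdm(z, Om=0.3):
    """E(z) = H(z)/H0 for a flat LCDM toy cosmology (illustrative Om)."""
    return np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def mean_ln_zeta(M, z, A_SZ, B_SZ, C_SZ, gamma_f=1.0, E=E_flat_lcdm):
    """Mean SZ scaling relation of Eq. (1); M in M_sun/h,
    pivot mass 3e14 M_sun/h and pivot redshift z = 0.6."""
    return (np.log(gamma_f * A_SZ)
            + B_SZ * np.log(M / 3e14)
            + C_SZ * np.log(E(z) / E(0.6)))

def mean_ln_lambda(M, z, A_lam, B_lam, C_lam):
    """Mean richness scaling relation of Eq. (2); M in M_sun/h,
    pivot mass 3e14 M_sun/h and pivot redshift z = 0.45."""
    return (np.log(A_lam)
            + B_lam * np.log(M / 3e14)
            + C_lam * np.log((1.0 + z) / 1.45))
```

At the pivot mass and pivot redshift both relations reduce to the logarithm of their amplitude, which is a convenient sanity check on any implementation.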
For each scaling relation we fit for the amplitude, slope, and redshift evolution (see Table III), except for the weak lensing mass, M_WL, which we assume to be simply proportional to the true halo mass, according to the simulation-based calibration of B19. We assume the logarithm of our four intrinsic observables, ln O, to follow a multivariate Gaussian distribution with intrinsic scatter parameters D_O and correlation coefficients ρ(O_i; O_j):

P(\ln O \mid M, z) = \mathcal{N}(\langle \ln O \rangle, C), \qquad (5)

where the covariance matrix elements read C_{ij} = ρ(O_i; O_j) D_{O_i} D_{O_j} and ρ(O_i; O_i) = 1. All the intrinsic scatters are described by a single parameter D_O independent of mass and redshift, except for the scatter on ln λ, which includes a Poisson-like term, σ²_{ln λ} = D²_λ + (⟨λ(M)⟩ − 1)/⟨λ(M)⟩², which does not correlate with the other scatter parameters. Finally, we set to zero the correlation coefficients between D_{Y_X} and the other scatter parameters. This approximation is justified by the fact that while the richness, SZ and weak lensing signals are sensitive to the projected density field along the line of sight of the system, the X-ray emission is mainly contributed by the inner region of the cluster. This approximation is also supported by the analysis of B19, which obtained unconstrained posteriors peaked around zero for the X-ray correlation coefficients. We explicitly verified that this approximation does not affect our results, while noticeably reducing the computational cost of the analysis. To account for the observational uncertainties and/or biases, we consider the following conditional probabilities between the intrinsic cluster proxies and the actual observed quantities. For ξ, Y_X and g_t(θ) we follow the prescriptions outlined in B19, namely:

P(\xi \mid \zeta) = \mathcal{N}\!\left(\sqrt{\zeta^2 + 3},\, 1\right) \qquad (6)

P(Y_X^{\rm ob} \mid Y_X) = \mathcal{N}\!\left(Y_X,\, \sigma^{\rm ob}_{Y_X}\right), \qquad (7)

where σ^ob_{Y_X} is the uncertainty associated with the X-ray measurements [see section 3.2.2 in 7, for further details].
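The covariance of equation (5), C_ij = ρ(O_i; O_j) D_i D_j with unit-diagonal ρ, can be assembled as in this sketch (the function name and the example numbers are illustrative):

```python
import numpy as np

def build_covariance(D, rho):
    """Covariance of the intrinsic log-observables (equation 5):
    C_ij = rho_ij * D_i * D_j, where rho is symmetric with unit diagonal
    and D holds the intrinsic scatter parameters D_O."""
    D = np.asarray(D, dtype=float)
    rho = np.asarray(rho, dtype=float)
    # sanity checks encoding rho(O_i; O_i) = 1 and symmetry
    assert np.allclose(np.diag(rho), 1.0)
    assert np.allclose(rho, rho.T)
    return rho * np.outer(D, D)
```

The diagonal elements then reduce to D_i², and setting a row/column of ρ to zero off-diagonal reproduces the paper's choice of decoupling the X-ray scatter from the others.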
The reduced tangential shear g_t(θ) is analytically related to the underlying halo mass M_WL assuming a Navarro-Frenk-White (NFW) halo profile [29], a concentration-mass relation, and using the observed redshift distribution of source galaxies. Deviations from the NFW profile, large-scale structure along the line of sight, miscentering, and uncertainties in the concentration-mass relation introduce bias and/or scatter on the estimated weak lensing mass, M_WL. As introduced in equation 4, we assume M_WL to be proportional to the true halo mass, and use the simulation-based calibration of b_WL from B19 to account for such effects (see their Section 3.1.2 and Table 1 for further details). In total the weak lensing (WL) modeling introduces six free parameters which account for the uncertainties in the determination of the systematics associated with the mean bias (δ_WL,bias, δ_HST/MegaCam,bias) and scatter (δ_WL,scatter, δ_HST/MegaCam,scatter) of the WL-mass scaling relation. Of these, two parameters are shared among the entire WL sample (δ_WL,bias, δ_WL,scatter), while the other two pairs are associated with the sub-samples observed with HST (δ_HST,bias, δ_HST,scatter) or MegaCam (δ_MegaCam,bias, δ_MegaCam,scatter). As for the uncertainty on the richness, many studies have already highlighted the importance of projection effects on richness estimates [e.g. 30-35]. In this context, projection effects denote the contamination from correlated and uncorrelated structures along the line of sight due to the limited resolution that a photometric cluster finding algorithm can achieve along the radial direction. In this study we consider two prescriptions based on the model presented in [35]: 1. P_bkg(λ^ob|λ) = N(λ, σ^bkg_λ), which accounts only for the "background subtraction" scatter, σ^bkg_λ, due to the misclassification of background galaxies as member galaxies and vice versa, labelled BKG throughout the paper. 2.
P_prj(λ^ob|λ), defined in equation 15 of [35], which includes, besides the "background subtraction" noise, the scatter due to projection and masking effects (PRJ hereafter). The approximated BKG model is derived from P_prj(λ^ob|λ) by setting to zero the fraction of clusters affected by projection and masking effects, and corresponds to the model often adopted in the literature [e.g. 27, 28, 33]. PRJ is the model adopted in DES20, and it has been calibrated by combining real data and simulated catalog analyses. While it is a more complete model which includes known systematic effects, its calibration, in part based on simulated catalogs, might be subject to biases. Comparisons of the results obtained with these two models are used to assess the capability of our simplest model (BKG) to absorb the impact of projection effects and, in turn, possible biases due to their incorrect calibration. Putting all the above pieces together, the "observable-mass relation" likelihood for the SPT-SZ multi-wavelength data is given by:

\ln \mathcal{L}_{\rm OMR}(O^{\rm ob}|\theta) = \sum_i \ln P(\lambda^{\rm ob}_i, Y^{\rm ob}_{X,i}, g_{t,i} \mid \xi_i, z_i, \theta), \qquad (8)

where θ denotes the model parameters and the sum runs over all the SPT-SZ clusters with at least one follow-up measurement (besides ξ). Each term of the summation is computed as:

P(\lambda^{\rm ob}, Y^{\rm ob}_X, g_t \mid \xi, z, \theta) \propto \int dM\, d\zeta\, d\lambda\, dY_X\, dM_{\rm WL}\; P(\xi|\zeta)\, P(\lambda^{\rm ob}|\lambda)\, P(Y^{\rm ob}_X|Y_X)\, P(g_t|M_{\rm WL})\; P(\zeta, \lambda, Y_X, M_{\rm WL} \mid M, z)\; n(M, z). \qquad (9)

In the above expression n(M, z) represents the halo mass function, for which we adopt the [36] fitting formula. Following the original analyses of DES20 and B19 we neglect the uncertainty on the halo mass function due to baryonic feedback effects, as the latter is subdominant to the uncertainty on the cluster counts due to the mass calibration.
The proportionality constant is set by the normalization condition:

\int_5^\infty d\lambda^{\rm ob} \int d\xi\, dg_t\, dY^{\rm ob}_X\; P(\lambda^{\rm ob}, Y^{\rm ob}_X, g_t \mid \xi, z) = 1,

where the lower limit is set by the λ^ob ≥ 5 cut applied to the DES Y1 RM sample to match the catalogs. Finally, note that in the above expression only the integrals over the mass proxies for which we have a measurement need to be computed in practice. If no follow-up measurements are available for an SPT system, the conditional probability reduces to one and thus can be omitted from the sum in equation 8.

B. Cluster Abundance Likelihoods

The expected number of clusters observed with O^ob at redshift z, over a survey area Ω(z), is given by:

N(O^{\rm ob}, z) = \int dM\, n(M, z)\, \Omega(z)\, \frac{dV}{dz\, d\Omega}\bigg|_z \int dO\, P(O^{\rm ob}|O)\, P(O|M, z), \qquad (10)

where dV/(dz dΩ) is the comoving volume element per unit redshift and solid angle, whereas the conditional probabilities for the observed and intrinsic mass proxies are those described in the previous section. The DES Y1 RM cluster abundance data are analyzed following the methodology adopted in DES20, where the number counts likelihood takes the form:

\mathcal{L}^{\rm NC}_{\rm DES}(N^\Delta|\theta) = \frac{\exp\!\left[-\frac{1}{2}(N^\Delta - \langle N^\Delta \rangle)^T C^{-1} (N^\Delta - \langle N^\Delta \rangle)\right]}{\sqrt{(2\pi)^{12}\,\det(C)}}, \qquad (11)

where N^Δ and ⟨N^Δ⟩ are, respectively, the abundance data (see Table I) and the expected number counts in bins of richness and redshift obtained by integrating equation 10 over the relevant λ^ob and z intervals. The covariance matrix C is modeled as the sum of three distinct contributions: i) the Poisson noise, ii) a sample variance term due to density fluctuations within the survey area, and iii) a miscentering component (see section II A). The Poisson and sample variance contributions are computed analytically at each step of the chain following the prescription outlined in Appendix A of [8]. At high richness the Poisson term dominates the uncertainty, with sample variance becoming increasingly important at low richness [37].
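A generic version of the Gaussian number-counts log-likelihood of equation (11), written for an arbitrary number of bins k rather than the 12 richness-redshift bins used here, can be sketched as:

```python
import numpy as np

def gaussian_nc_loglike(N_obs, N_model, C):
    """ln of equation (11): multivariate Gaussian comparison of observed
    and predicted counts, with covariance C collecting the Poisson,
    sample variance and miscentering contributions."""
    r = np.asarray(N_obs, float) - np.asarray(N_model, float)
    C = np.asarray(C, float)
    # log-determinant via slogdet avoids overflow for large covariances
    sign, logdet = np.linalg.slogdet(C)
    chi2 = r @ np.linalg.solve(C, r)
    return -0.5 * (chi2 + logdet + len(r) * np.log(2.0 * np.pi))
```

Using `solve` instead of an explicit matrix inverse keeps the quadratic form numerically stable when C is close to singular.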
Note that the large occupancy of all our bins (our least populated bin contains 91 galaxy clusters) justifies the Gaussian approximation adopted for the Poisson component. Following B19, we assume a purely Poisson likelihood for the SPT-SZ abundance data [38]:

\ln \mathcal{L}^{\rm NC}_{\rm SPT}(N|\theta) = \sum_i \ln N(\xi_i, z_i) - \int_{0.65}^{\infty} dz \int_{5}^{\infty} d\xi\; N(\xi, z), \qquad (12)

where the sum runs over all the SPT-SZ clusters above the redshift and SZ significance cuts (z_cut = 0.65, ξ_cut = 5). Note that here we can safely neglect the sample variance contribution given the large cluster masses (M ≳ 3 × 10^14 M_⊙ h^{-1}) probed by the SPT-SZ survey (see [37, 39]).

C. Parameters Priors and Likelihood Sampling

The cosmological and model parameters considered in this analysis are listed in Table III along with their priors. An excerpt of Table III (parameter, description, prior):

- ρ(SZ, WL): correlation coefficient SZ-WL, [−1, 1]
- ρ(SZ, λ): correlation coefficient SZ-λ, [−1, 1]
- ρ(WL, λ): correlation coefficient WL-λ, [−1, 1]
- A_{Y_X}: amplitude of the Y_X scaling relation, [1, 10]
- B_{Y_X}: power-law index of the mass dependence, [1, 2.5]
- C_{Y_X}: power-law index of the redshift evolution, [−1, 2]
- D_{Y_X}: intrinsic scatter, [0.01, 0.5]
- d ln Y_X/d ln r: radial slope of the Y_X profile, N(1.12, 0.23)
- det|C| > 0: determinant condition on the OMR matrix (eq. 5)

Our reference cosmological model is a flat ΛCDM model with three degenerate species of massive neutrinos (ΛCDM + Σm_ν), for a total of six cosmological parameters: Ω_m, A_s, h, Ω_b h², Ω_ν h², n_s. Since our data set is insensitive to the optical depth to reionization, we fix τ = 0.078. We also consider a wCDM + Σm_ν model where the dark energy equation of state parameter w is left free to vary in the range [−2.5, −0.33]. The four observable-mass scaling relations considered in this work comprise 19 model parameters.
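The unbinned Poisson log-likelihood of equation (12) reduces to a sum of log-rates at the detected clusters minus the expected total above the selection cuts. A minimal sketch, where the double integral of N(ξ, z) is assumed to be precomputed upstream:

```python
import math

def poisson_unbinned_loglike(rate_at_clusters, expected_total):
    """Equation (12): sum over detected clusters of ln N(xi_i, z_i),
    minus the expected total number of clusters above the cuts
    (the integral of N(xi, z) over z > 0.65 and xi > 5)."""
    return sum(math.log(r) for r in rate_at_clusters) - expected_total
```

The normalization term does not depend on the individual detections, so it only needs to be recomputed when the model parameters change.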
Besides those already introduced in section III A, the Y_X scaling relation has the additional parameter d ln Y_X/d ln r, the measured radial slope of the Y_X profile, which allows one to re-scale and compare the measured and predicted Y_X profiles at a fixed fiducial radius [see section 3.2.2 of 7, for additional details]. The parameter ranges and priors match those used in B19, apart from the richness-mass scaling relation parameters, which were not included in the B19 analysis and for which we adopt flat uninformative priors. The parameter ranges for Ω_b h² and n_s are chosen to roughly match the 5σ credibility interval of the Planck constraints [40], while the lower limit adopted for Ω_ν h² corresponds to the minimal total neutrino mass allowed by oscillation experiments, 0.056 eV [41]. We consider two different data combinations in this work. Our baseline data set is given by the combination of the DES Y1 RM counts data and the SPT-SZ multi-wavelength data (DES-NC+SPT-OMR). Moreover, we explore the cosmological gain given by the further inclusion of the SPT-SZ abundance data (DES-NC+SPT-[OMR,NC]). The total log-likelihood is thus given by the sum of the log-likelihoods corresponding to the data considered in each analysis. We stress that the independence of the two abundance likelihoods is guaranteed by the redshift cut z > 0.65 adopted for the SPT-SZ number count data, which ensures the absence of overlap between the volumes probed by the two abundance data sets. The parameter posteriors are estimated within the cosmoSIS package [42] using the importance nested sampling algorithm MultiNest [43], with a target error on the evidence equal to 0.1 as the convergence criterion. The matter power spectrum is computed at each step of the chain using the Boltzmann solver CAMB [44].
To keep the universality of the Tinker fitting formula in cosmologies with massive neutrinos we adopt the prescription of [45], neglecting the neutrino density component in the relation between scale and mass, i.e. M ∝ (ρ_cdm + ρ_b) R³, and using only the cold dark matter and baryon power spectrum components to compute the variance of the density field at a given scale, σ²(R).

IV. RESULTS

Table IV summarizes the results obtained for the different models and data combinations considered in this work. Along with the varied parameters we also report posteriors for two derived parameters: the amplitude of the matter power spectrum on an 8 h^{-1} Mpc scale, σ_8, and the cluster normalization condition, S_8 = σ_8 (Ω_m/0.3)^{0.5}.

A. ΛCDM + Σm_ν cosmology

Figure 3 shows the parameter posteriors obtained from the four analyses carried out for the ΛCDM + Σm_ν model (only parameters that are not prior dominated are shown in the plot; see Table IV). We do not report posteriors for those parameters not constrained by our data or dominated by their priors. Also, to avoid overcrowding we omit from this figure the Y_X scaling relation parameters, which can be found in appendix A along with the correlation matrix for a sub-set of parameters. The only two cosmological parameters constrained by our data are Ω_m and σ_8. For all the other cosmological parameters (Ω_b h², Ω_ν h² and n_s) we obtain almost flat posteriors, except for the Hubble parameter, which is loosely constrained by the abundance data thanks to the mild sensitivity of the slope of the halo mass function and of the comoving volume element to variations of h.

Models and data combinations comparison

The left panels of Figure 4 compare the abundances of the DES Y1 RM clusters (boxes) with the corresponding mean model predictions (markers). The right panels show the residuals between the data and the model expectations for the two scatter models and data combinations considered.

Table IV. Cosmological and model parameter constraints obtained for the different models and data combinations considered in this work. For all the parameters we report the mean of the 1-d marginalized posterior along with the 1σ errors. We omit from this table parameters whose posteriors are equal to or strongly dominated by their priors. DES-NC, SPT-OMR and SPT-NC stand for the different data sets considered in the analyses, respectively: cluster counts from DES Y1 RM, multi-wavelength data from SPT-SZ, and abundance from the SPT-SZ cluster catalog above z > 0.65. BKG and PRJ refer to the model adopted to describe the observational noise on the richness estimate (see section III A).

Starting with our baseline data set DES-NC+SPT-OMR, the SPT multi-wavelength data carry the information to constrain the observable-mass relation parameters, while the DES Y1 RM abundance data, thanks to the SPT-OMR calibrated richness-mass relation, constrain the cosmological parameters. Specifically, the richness-mass relation parameters are constrained through the calibration of the ξ-mass scaling relation, which in turn is primarily informed by the weak lensing data. The X-ray data mainly affect the constraints on the intrinsic scatter parameters [see also 7]. We explicitly verified that when dropping the X-ray data we obtain perfectly consistent results for all parameters except for the scatters D_SZ and D_λ, whose mean values increase and decrease by ∼0.1 (∼1σ), respectively. The further inclusion of the SPT-NC data brings additional cosmological information which slightly improves the σ_8 and Ω_m constraints (by 30% and 20%, respectively), while shifting their confidence contours along the S_8 degeneracy direction (black dashed and green contours in figure 3). The shift of the σ_8 posterior can be understood by looking at figure 5, which compares the SPT-SZ number count data with predictions from the DES-NC+SPT-OMR and DES-NC+SPT-[OMR,NC] analyses.
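The derived cluster normalization parameter quoted in the results, S_8 = σ_8 (Ω_m/0.3)^0.5, is a one-line transformation of the sampled parameters:

```python
def S8(sigma8, Omega_m):
    """Cluster normalization condition S_8 = sigma_8 * (Omega_m / 0.3)**0.5,
    computed per posterior sample so that its distribution inherits the
    sigma_8-Omega_m degeneracy."""
    return sigma8 * (Omega_m / 0.3) ** 0.5
```

Because the abundance data constrain roughly this combination, S_8 posteriors are typically much tighter than those of σ_8 or Ω_m individually.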
The larger σ_8 value preferred by the DES-NC+SPT-OMR data tends to over-predict the number of SPT-SZ clusters above z > 0.65. Consequently, when included, the SPT number count data shift σ_8 towards lower values to recover the correct number of SPT-SZ clusters (see also the orange contours in figure 7). Concurrently, to counterbalance the lower σ_8 mean value and thus keep the predictions for the DES Y1 RM cluster counts roughly unvaried, Ω_m, B_λ and C_λ move toward larger values following the corresponding degeneracy directions with σ_8. We will further comment on the origin of this shift in section IV A 4. Finally, the SPT abundance data improve the constraints on B_SZ and C_SZ thanks to the sensitivity of the SPT-NC likelihood to the SZ-mass scaling relation. Moving to the modeling of P(λ^ob|λ), we find consistent results between the two models adopted for the observational noise on λ (BKG with orange and black contours and PRJ with blue and green contours; see section III A), although the PRJ model prefers a slightly lower Ω_m value, driven by a shallower (B_λ = 0.86 ± 0.04) and redshift independent (C_λ = −0.02 ± 0.34) richness-mass scaling relation compared to the BKG results. This result can be understood as follows: the PRJ model, which accounts also for projection and masking effects, tends to bias the richness estimates high and introduces a larger scatter between λ^ob and λ compared to the BKG model. As a consequence, for a given set of cosmological and scaling relation parameters, the slope of the λ^ob-mass relation increases, as well as the predicted cluster counts for DES Y1 RM. Given the strong degeneracy between A_λ and A_SZ and between D_λ and D_SZ, and the tight constraints on the SZ parameters provided by the SPT-OMR data, B_λ is the only parameter which can compensate for such effects by moving its posterior to lower values.
Similarly, the preference for a non-evolving λ-mass scaling relation is explained by the redshift dependent bias and scatter intrinsic to the PRJ model, which is a consequence of the worsening of the photo-z accuracy with increasing redshift. These findings are consistent with those obtained in DES20, where the robustness of the cosmological posteriors to different model assumptions for P(λ^ob|λ) is demonstrated. As for the correlation coefficients between scatters, in all four cases analyzed the posteriors are prior dominated. We note, however, that while the posteriors of the correlation coefficients between SZ and WL and between WL and λ peak around zero, the ρ(SZ, λ) posterior always has its maximum at ∼ −0.2, suggesting an anti-correlation between the two observables (see figure 12 in appendix A).

Goodness of fit

The four analyses perform similarly well in fitting the DES Y1 abundance data. The model predictions are all consistent within 2σ with the data except for the highest richness/redshift bin, where all the models over-predict the number counts by ∼35% (see the right panels of figure 4). Notably, while the SPT-OMR data are only available for clusters above λ^ob ≳ 40, the scaling relation extrapolated to low richness provides a good fit to the DES Y1 abundance data. Our composite likelihood model and parameter degeneracies do not allow us to apply a χ² statistic to assess the goodness of the fit. The same tension between predictions and DES Y1 RM abundance data was observed in DES20, where the authors verified that dropping the highest-λ/z bin from the data does not affect their results, but improves the goodness of the fit. Here we use the posterior predictive distribution to assess the likelihood of observing the highest-λ/z data point given our models [see e.g. 46, section 6.3]. The method consists of drawing simulated values from the posterior predictive distribution of replicated data and comparing these mock samples to the observed data.
The posterior predictive distribution is defined as:

P(y^{\rm rep}|y) = \int d\theta\; P(y^{\rm rep}|\theta)\, P(\theta|y), \qquad (13)

where y is the observed data vector, y^rep the replicated one, and θ the model parameters. In practice, we generate our replicated data for the highest-λ/z bin by sampling the posterior distribution, P(θ|y), and drawing for each sampled θ a value from the multivariate normal distribution defined by equation 10 and the covariance matrix C. We draw 500 samples for each of the four analyses, and fit the distributions with a Gaussian to easily quantify the likelihood of the observed data point. As can be seen in figure 6, for the two models and data combinations considered here the observed data lie within the 3σ region (dashed and dotted vertical lines); thus we conclude that the highest-λ/z data point is not a strong outlier of the predicted distribution and our model suffices to describe it. Similarly for the SPT-SZ abundance data, the models retrieved from the posteriors of the DES-NC+SPT-[OMR,NC] and DES-NC+SPT-[OMR,NC]+PRJ analyses provide a good fit to the SPT number counts except for the highest ξ bin, where the model predictions lie at the edge of the ∼2σ region (see the lower panel of figure 5). As for the SPT-OMR data, we inspect the goodness of fit of the derived P(λ^ob|ξ) distributions against the cross-matched sample. Specifically, we verified that all the data points lie within the 3σ region of the posterior predictive distributions independently of the data combination and model assumed for the observational scatter on λ^ob (see figure 2). To determine whether our data sets prefer one of the two models adopted for P(λ^ob|λ), BKG or PRJ, we rely on the Deviance Information Criterion [hereafter DIC; 47].
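A minimal sketch of the posterior-predictive check described above: replicated data are drawn from a distribution centered on model means sampled from the posterior, and the observed point is then located within the replicated distribution. The 1-D setup, the Gaussian replication noise, and the sample sizes are simplifying assumptions relative to the multivariate procedure in the text.

```python
import random
import statistics

def posterior_predictive_zscore(posterior_means, sigma_rep, y_obs,
                                n_draws=500, seed=0):
    """Draw y_rep ~ N(mean(theta), sigma_rep) with theta sampled from the
    posterior (here represented by a list of model means), then return
    |y_obs - mu_rep| / sd_rep from a Gaussian fit to the draws."""
    rng = random.Random(seed)
    draws = [rng.gauss(rng.choice(posterior_means), sigma_rep)
             for _ in range(n_draws)]
    mu = statistics.fmean(draws)
    sd = statistics.stdev(draws)
    return abs(y_obs - mu) / sd
```

An observed point with a z-score below 3 would, as in the text, not be flagged as a strong outlier of the predicted distribution.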
Specifically, for a given model M the DIC is computed from the mean χ² over the posterior volume, ⟨χ²_M⟩, and the maximum posterior χ², χ²_MaxP(M), as:

\mathrm{DIC}(M) = 2\,\langle \chi^2_M \rangle - \chi^2_{\rm MaxP}(M). \qquad (14)

The model with the lower DIC value either fits the data better (lower ⟨χ²⟩) or has a lower level of complexity (lower ⟨χ²⟩ − χ²_MaxP). For the data combination DES-NC+SPT-OMR we obtain ΔDIC = DIC(PRJ) − DIC(BKG) = 3.5, while for the full data set ΔDIC = −3.8. Adopting the Jeffreys scale to interpret the ΔDIC values, the DES-NC+SPT-OMR data combination has a "positive" (|ΔDIC| ∈ [2, 5]), even though not "strong" (|ΔDIC| ∈ [5, 10]) or "definitive" (|ΔDIC| > 10), preference for the BKG model, while the full data combination has a "positive" preference for the PRJ model. Additional follow-up data extending to lower richness, such as those soon available from the combination of DES Y3 and Y6 data with the full SPT surveys or eROSITA, will help to identify the model which better describes the data. …ing calibration (see also section IV A 4). The consistency of our posteriors with the DES Y1 combined analysis of galaxy clustering and weak lensing [DES 3x2pt 48], Planck CMB data [40], and other cluster abundance studies seems to confirm the conclusions of DES20: the tension between DES-[NC,M_WL] and other probes is most likely due to a flawed interpretation of the stacked weak lensing signal of redMaPPer clusters in terms of mean cluster mass.

Comparison with other cosmological probes

The similar constraining power provided by our data set and SPT-SZ 2500, which combines SPT-OMR data and SPT-SZ cluster counts above z > 0.25, indicates that the two analyses are limited by the uncertainty in the mass calibration, i.e. the data set they have in common.
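Equation (14) and the ΔDIC comparison used above can be written compactly as follows; the χ² samples in the test are placeholders, not values from the actual chains.

```python
def dic(chi2_samples, chi2_max_posterior):
    """Deviance Information Criterion of equation (14):
    DIC(M) = 2 * <chi2_M> - chi2_MaxP(M)."""
    mean_chi2 = sum(chi2_samples) / len(chi2_samples)
    return 2.0 * mean_chi2 - chi2_max_posterior

def delta_dic(dic_a, dic_b):
    """DIC difference between two models; negative values favor model A,
    since the lower DIC indicates a better fit or lower complexity."""
    return dic_a - dic_b
```

The complexity penalty is implicit: 2⟨χ²⟩ − χ²_MaxP = χ²_MaxP + 2(⟨χ²⟩ − χ²_MaxP), so a posterior whose mean χ² sits far above its best value is penalized.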
The lower σ_8 value preferred by the SPT-SZ 2500 analysis [7]⁹ can again be understood by looking at figure 5: the cosmology preferred by the DES-NC+SPT-OMR data combination over-predicts the SPT-NC by a factor of ∼2, and the same trend holds for the SPT abundance data below z = 0.65 (not shown in the figure). As a consequence, when substituting the DES-NC data with the SPT-SZ cluster number counts, the σ_8 posterior shifts toward lower values to accommodate the model predictions to the new abundance data.

Footnote 9: Note that, at odds with the B19 analysis, here we show results for the SPT-SZ 2500 analysis obtained assuming 3 degenerate massive neutrino species and adopting the massive neutrino prescription for the halo mass function presented in [45], consistently with our analysis. The different massive neutrino scheme and the inclusion of this prescription lower the σ_8 posterior by 0.024 (corresponding to ∼0.5σ) compared to the original results of B19.

The inclusion of the SPT-NC data (DES-NC+SPT-[OMR,NC]) worsens the consistency with the other low-redshift probes considered here by shifting the Ω_m/σ_8 posteriors towards higher/lower values. In particular, the agreement is degraded with the DES 3x2pt and WtG results, with which the tension in the σ_8-Ω_m plane rises to 1.8σ and 1.9σ, respectively. Notably, the full data combination is also in 1.3σ tension with the results from SPT-SZ 2500, with which it shares part of the abundance data (SPT-SZ counts above z = 0.65) and the follow-up data. The fact that the DES-NC+SPT-[OMR,NC] posteriors do not lie in the intersection of the DES-NC+SPT-OMR and SPT-SZ 2500 contours suggests the presence of some, yet not statistically significant, tension between the DES-NC, SPT-OMR and SPT-NC data, possibly driven by an imperfect modeling of the scaling relations¹⁰.
On the other hand, by turning the σ_8-Ω_m degeneracy direction, the inclusion of the PRJ model (lower panel) improves the agreement of the DES-NC+SPT-OMR posteriors with the SPT-SZ 2500 results (from 1σ to 0.5σ tension), at the expense of a larger, yet not significant (1.3σ), tension with CMB data (red contours). Also the tension with the DES20 results decreases (0.7σ), as a consequence of the improved consistency between the richness-mass scaling relations (see section IV A 4). Similarly, when considering the full data combination, the PRJ model shifts the cosmological posteriors into the intersection of the DES-NC+SPT-OMR and SPT-SZ 2500 contours, solving the above mentioned tension between the three data sets. We will come back to this point in the next section.

The mass-richness relation

Being constrained by the SPT multi-wavelength data, both the SZ and Y_X scaling relations derived from the DES-NC+SPT-OMR analysis are perfectly consistent with those obtained in B19. The inclusion of the SPT-NC data in our analysis shifts the slope of the SZ relation, B_SZ, by 1.5σ towards steeper values to compensate for the larger Ω_m value preferred by the full data combination. As mentioned before, the shift of the cosmological posteriors along the S_8 direction suggests the presence of some inconsistencies between the scaling relations preferred by the different data sets: DES-NC, SPT-OMR and SPT-NC. To pinpoint the source of the tension we re-analyze the abundance and multi-wavelength data independently, using as cosmological priors the product of the posterior distributions obtained from the DES-NC+SPT-OMR and SPT-[OMR,NC] analyses (roughly the intersection between the black and pink contours in the upper right panel of figure 7).

Footnote 10: To exclude the possibility that the tension is driven by SPT-SZ abundance data at low redshift we re-analyze the SPT-SZ 2500 catalog excluding the cluster counts data below z = 0.65, i.e. analysing the data combination SPT-…
This test will allow us to understand why that region of the σ_8-Ω_m plane is disfavored by the full data combination. As can be seen in figure 8, the tension between DES-NC+SPT-OMR, SPT-NC and DES-NC+SPT-[OMR,NC] arises from the different amplitudes of the richness and SZ scaling relations preferred by the abundance (blue contours) and SPT-OMR data (orange contours). The PRJ model, lowering the A_λ value preferred by the abundance data (black dot-dashed contours) but leaving the SPT-OMR posteriors almost unaffected (green dot-dashed contours), largely alleviates the tension between the data sets. Once we let the cosmological parameters free to vary, the tight correlation between the SZ and richness scaling relation parameters introduced by the SPT-OMR data, along with the different posteriors for the amplitudes preferred by the latter, moves the Ω_m posterior of the full data combination towards larger values. The larger shift with respect to the DES-NC+SPT-OMR data combination observed for the BKG analysis can be understood in terms of the larger tension between multi-wavelength and abundance data displayed in figure 8. Despite the better agreement of the A_λ-A_SZ posteriors derived assuming the PRJ calibration, the DIC suggests a mild preference for this model only for the full data combination (see section IV A 1). Moving to the mass-richness relation, figure 9 compares the scaling relations derived in this work (hatched bands) with other results from the literature. The scaling relation from DES20, originally derived for M_200,m, has been converted to ⟨M_500,c|λ^ob, z⟩ by imposing the condition n(M_500,c) dM_500,c = n(M_200,m) dM_200,m on the Tinker halo mass function. The mean mass-richness relation and its uncertainty are computed from the λ-mass parameter posteriors through Bayes' theorem as follows:

\langle M \mid \lambda^{\rm ob}, z \rangle \propto \int dM\, d\lambda\; M\, n(M, z)\, P(\lambda^{\rm ob}|\lambda, z)\, P(\lambda|M, z).
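The Bayes-theorem integral just above can be evaluated on grids. This toy version uses uniform grids and plain sums (the grid spacings cancel in the ratio), and all the distributions are user-supplied arrays rather than the paper's actual halo mass function and richness models.

```python
import numpy as np

def mean_mass_given_lambda_ob(M_grid, n_M, P_lam_given_M, P_lamob_given_lam):
    """<M|lam_ob> = int dM dlam M n(M) P(lam_ob|lam) P(lam|M),
    normalized by the same integral without the factor M.

    M_grid           : (nM,) uniform mass grid
    n_M              : (nM,) halo mass function values on M_grid
    P_lam_given_M    : (nM, nlam) intrinsic richness distribution
    P_lamob_given_lam: (nlam,) observational kernel at the chosen lam_ob
    """
    # integrate out the intrinsic richness for each mass
    kern = (P_lam_given_M * P_lamob_given_lam[None, :]).sum(axis=1)
    w = kern * n_M  # unnormalized P(M | lam_ob)
    return float((M_grid * w).sum() / w.sum())
```

Because the halo mass function enters as a weight, a steeply falling n(M) pulls the mean mass at fixed observed richness below the naive inversion of the λ-M relation (the Eddington-bias direction).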
The DES-NC+SPT-OMR and DES-NC+SPT-[OMR,NC] analyses provide mass-richness relations consistent with each other within one standard deviation (gray and hatched orange bands). These results are also consistent with a similar analysis performed by [27], who calibrate the λ-mass relation combining cluster counts from both SPT-SZ and the SPTpol Extended Cluster Survey, richnesses obtained by matching the SZ sample with the redMaPPer DES Year 3 catalog, and assuming the fiducial cosmology σ_8 = 0.8 and Ω_m = 0.3 (magenta band). Also here, the slightly steeper M-λ^ob relation preferred by our data is due to the different cosmologies preferred by the DES and SPT abundance data. Indeed, when we include the SPT-NC data in our analysis, the ⟨M|λ^ob, z⟩ relation fully overlaps with the results from [27] (hatched orange and magenta bands). Similarly, [51] derived a richness-mass relation consistent with ours (A_λ = 83.3 ± 11.2 and B_λ = 1.03 ± 0.10) analysing the same redMaPPer-SPT matched sample and adopting as priors the results of B19. A consistent slope of the mass-richness relation is also found in the work of [11, B_λ = 0.99^{+0.06}_{−0.07} ± 0.04], who calibrate the richness-mass relation of an X-ray selected, optically confirmed cluster sample through galaxy dynamics. However, a direct interpretation of their results in the context of this analysis is not possible due to the different assumptions on the X-ray scaling relation and on the scatter of the richness-mass relation made in that work. A larger than 1σ tension below λ^ob ≲ 60 is found with the DES20 results, which base their mass calibration on the stacked weak lensing analysis of [50] (cyan band in figure 9). As noted in DES20, the weak lensing mass estimates for λ < 30 are responsible for the low values derived for the slope and amplitude of the richness-mass relation compared to the ones preferred by the SPT multi-wavelength data.
We stress again here that the SPT-OMR data can actually constrain the richness-mass relation only at λ^ob ≳ 50, and the constraints at low richness follow from the power-law model assumed for the λ|M relation. The inclusion of the PRJ calibration, increasing the fraction of low mass clusters boosted to large richnesses, lowers the mean cluster masses compared to the BKG model by up to ∼25% at λ^ob ≲ 60 (compare the green and yellow bands with the gray and orange bands in figure 9, respectively). Specifically, from the DES-NC+SPT-OMR+PRJ analysis we obtain: … (17) The improved consistency between the scaling relations derived from the analyses adopting the PRJ calibration and DES20 reflects the improved agreement between the corresponding cosmological posteriors due to the lower Ω_m value preferred by the former (see figure 7). The fact that the mass-richness relations derived from the two P(λ^ob|λ_true) models display a larger than 1σ tension below λ^ob ≲ 50, but perform equally well in fitting the data (see section IV A 1), is due to the lack of multi-wavelength data at low richness. Additional follow-up data at λ^ob ≲ 40 will be fundamental to clearly reject one of the models and thus enable the full exploitation of the cosmological information carried by photometrically selected cluster catalogs. It is worth noting that, at odds with other studies which rely on stacked weak lensing measurements to calibrate the mean scaling relation [e.g. 8, 9], the SPT-OMR data, allowing a cluster-by-cluster analysis (see equation 8), make it possible to constrain also the scatter of the richness-mass relation. This is particularly relevant for the analysis of optically-selected cluster samples for which reliable simulation-based priors on the scatter are not available: if constrained only by the abundance data, the scatter parameter becomes degenerate with Ω_m and σ_8, degrading the constraining power of the sample [e.g. see discussion in 8].
To better investigate the implications of the derived scaling relation for low richness objects, we compare in figure 10 our predictions for the mean cluster masses in different richness/redshift bins to the mean weak lensing mass estimates from [50] (filled boxes). We also include the weak lensing mass estimates employed in the DES Y1 cluster analysis (hatched boxes), which adopt an updated calibration of the selection bias based on the simulation analysis of [52] [see also appendix D of 9]. Both the weak lensing mass estimates and the mean mass predictions have been derived assuming Ω_m = 0.3, h = 0.7 and σ_8 = 0.8¹². The mean mass predictions for the DES-NC+SPT-OMR analysis are in tension with both weak lensing mass estimates. In particular, in the lowest richness bins, λ^ob ∈ [20, 30], the mean mass predictions are 25% to 40% higher than the weak lensing mass estimates, while they are consistent within 1σ with the lensing masses at λ^ob > 30. The inclusion of the PRJ model, lowering the mean mass predictions, largely reduces the tension at low richness with both weak lensing mass estimates, while at λ^ob > 30 the model predictions are consistent within 1σ with the weak lensing masses derived adopting the selection effect bias calibration of [52]. These results are consistent with those of DES20: for the DES Y1 cluster cosmology analysis to be consistent with other probes, the weak lensing mass estimates of λ^ob < 30 systems need to be boosted. Or, conversely, the weak lensing mass estimates of λ^ob < 30 systems are biased low compared to the mean masses predicted by the DES Y1 abundance data alone assuming a cosmology consistent with other probes. As discussed in DES20, this tension might be due to an overestimate of the selection effect correction at low richness, or to another systematic not captured by the current synthetic cluster catalogs.
The good agreement of the PRJ mass predictions with the weak lensing masses adopted in DES20 reflects the consistency of our cosmological posteriors with those derived in the DES Y1 cluster analysis (see the lower left panel of figure 7). The same conclusions hold also for the full data combination analyses (not shown in figure 10), which provide model predictions fully consistent with those obtained from the combination of DES abundances and SPT multi-wavelength data.

Footnote 12: The larger tension seen in figure 9 between the scaling relations derived in this work and [50] is due to the different cosmology preferred by the two analyses.

These constraints prefer w < −1, even though they are consistent within 2σ with a cosmological constant. Although the inclusion of the SPT-NC data increases the redshift range probed by the abundance data up to z ∼ 1.75, the constraints on w improve only by 15%. This again is due to the fact that the analysis is limited by the uncertainty in the calibration of the scaling relations. Interestingly, in this case the inclusion of the SPT-NC data does not cause the large σ8-Ωm shift observed in the ΛCDM scenario, and the resulting mass-richness relations are consistent within one sigma with the corresponding results of the ΛCDM analysis.

FIG. 10. Mean weak lensing mass estimates from [50], including (hatched boxes) or not (filled boxes) the selection-effect bias correction as derived in [52]. Overplotted are the mean cluster masses predicted by the scaling relations derived in this work (circles and triangles). The y extent of the boxes corresponds to the uncertainties associated with the mass estimates. The error bars correspond to the 1σ uncertainty of the models as derived from the corresponding posterior distributions. The model predictions for the analyses including the SPT-NC data are fully consistent with those obtained from the analyses combining DES-NC and SPT-OMR data, and thus are not included in the plot to improve readability.
Adopting the DIC to assess which cosmological model performs better, we find a "strong" preference for the wCDM over the ΛCDM model: DIC ΛCDM − DIC wCDM = −5.3 for DES-NC+SPT-OMR and DIC ΛCDM − DIC wCDM = −11.3 for the full data combination. This preference is mainly driven by the improved fit to the DES-NC data compared to the ΛCDM case in all the redshift/richness bins, though a larger than 2σ tension persists with the highest richness/redshift data point. Nevertheless, with the current level of knowledge of the scaling relations and their evolution, it is not clear whether the preference for wCDM is driven by a flawed modeling of the scaling relation absorbed by w, or by an actual preference for an evolving dark energy cosmology. Not surprisingly, given the broad posteriors derived for w, our results for the dark energy equation of state parameter are consistent with those obtained from Planck CMB data (w = −1.41 ± 0.27; green contours) and the DES Y1 galaxy clustering and shear analysis (w = −0.88^{+0.26}_{−0.15}; pink contours), as well as with those derived in the SPT-SZ [w = −1.55 ± 0.41; 7] and WtG [w = −0.98 ± 0.15, assuming mν = 0 and including gas mass fraction data and a ±5 per cent uniform prior on the redshift evolution of the M gas -M relation; 53] cluster abundance studies. As mentioned above, an improved calibration of the scaling relations and their evolution will be paramount for future cluster surveys aiming to disentangle a cosmological constant from a wCDM model [e.g. 54].

V. SUMMARY AND CONCLUSION

In this study, we derive cosmological and scaling relation constraints from the combination of DES Y1 cluster abundance data (DES-NC) and SPT follow-up data (SPT-OMR).
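For reference, the DIC used in the model comparison above is computed from posterior samples as DIC = ⟨D⟩ + p_D, with deviance D(θ) = −2 ln L(θ) and effective number of parameters p_D = ⟨D⟩ − D(⟨θ⟩). The minimal sketch below uses a stand-in one-parameter Gaussian likelihood, not the paper's likelihood; for it, p_D should come out close to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta, datum=1.0):
    # stand-in unit-variance Gaussian likelihood for a single datum
    return -0.5 * (datum - theta) ** 2

chain = rng.normal(1.0, 1.0, size=40_000)   # toy posterior samples for theta
D = -2.0 * loglike(chain)                   # deviance evaluated along the chain

p_D = D.mean() - (-2.0 * loglike(chain.mean()))  # effective number of parameters
DIC = D.mean() + p_D

print(f"p_D = {p_D:.2f}")
print(f"DIC = {DIC:.2f}")
```

A lower DIC indicates the preferred model, with differences of a few or more conventionally read as "positive" to "strong" preference.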
The former contains ∼6500 clusters above richness 20 in the redshift range 0.2 < z < 0.65; the latter consists of high-quality X-ray data from Chandra and imaging data from HST and Megacam for 121 clusters collected within the SPT-SZ 2500 deg² survey, along with richness estimates for 129 systems cross-matched with the DES Y1 redMaPPer catalog. The SPT multi-wavelength data allow us to constrain the richness-mass scaling relation, enabling the cosmological exploitation of the DES cluster counts data. Mass proxies based on photometric data are prone to contamination from structures along the line of sight, i.e. projection effects, which hampers the calibration of the scaling relations. To explore possible model systematics related to the latter, we consider two calibrations of the observational scatter on richness estimates: i) a simple Gaussian model which accounts only for the noise due to misclassification of background and member galaxies, and ii) the model developed in [35], which also includes the scatter on λ ob introduced by projection effects (labelled BKG and PRJ, respectively, throughout the paper). Independently of the model adopted for the scatter on the observed richness, we derive cosmological constraints for a ΛCDM model consistent with CMB data and low-redshift probes, including other cluster abundance studies. Our results are in contrast with the findings of DES20, which obtained cosmological constraints in tension with multiple cosmological probes analysing the same DES abundance data but calibrating the λ−M relation with mass estimates derived from stacked weak lensing data. Our results thus support the conclusion of DES20, which suggests that the tension is due to the presence of systematics in the modeling of the stacked weak lensing signal of low-richness clusters (λ ob ≲ 30).
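The two scatter calibrations enter the analysis through P(λ ob|M, z) = ∫ dλ P(λ ob|λ, z) P(λ|M, z). The sketch below convolves a log-normal intrinsic richness-mass relation with a Gaussian observational kernel, mimicking a BKG-type model; the skewed projection component of the PRJ model of [35] is not reproduced here, and all parameter values are illustrative, not the paper's calibration.

```python
import numpy as np

lam = np.linspace(1.0, 300.0, 3000)      # true-richness grid
dlam = lam[1] - lam[0]

def p_lam_given_M(M, A=60.0, B=1.0, D=0.25, Mpiv=3e14):
    """Log-normal intrinsic P(lambda | M) around a power-law mean relation."""
    mu = np.log(A) + B * np.log(M / Mpiv)
    return np.exp(-0.5 * ((np.log(lam) - mu) / D) ** 2) / (lam * D * np.sqrt(2.0 * np.pi))

def p_obs_given_M(lam_ob, M, sigma_obs=5.0):
    """Convolve the intrinsic relation with a Gaussian observational kernel."""
    kernel = np.exp(-0.5 * ((lam_ob - lam) / sigma_obs) ** 2) / (sigma_obs * np.sqrt(2.0 * np.pi))
    return np.sum(kernel * p_lam_given_M(M)) * dlam

# sanity check: P(lambda_ob | M) should integrate to ~1 over lambda_ob
lam_ob = np.linspace(1.0, 300.0, 600)
pdf = np.array([p_obs_given_M(l, 3e14) for l in lam_ob])
norm = np.sum(pdf) * (lam_ob[1] - lam_ob[0])
print("normalisation:", round(norm, 3))
```

Swapping the Gaussian kernel for a model with a heavy positive tail is what distinguishes a projection-effect calibration from the background-noise-only one.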
Indeed, the mass-richness relations derived in this work adopting the BKG and PRJ models are in tension with that derived in DES20 below λ ob ∼ 60 and λ ob ∼ 40, respectively. We stress, however, that the SPT-OMR data are mainly available for λ ob ≳ 40 systems, and thus we need to extrapolate the λ ob−M relation when fitting the DES abundance data at lower richness. Nevertheless, both scatter models perform well in fitting the DES cluster abundance at all richnesses, supporting the goodness of the relation extrapolated to low richness. We further consider the combination of the DES-NC and SPT-OMR data with the SPT number counts data above redshift z = 0.65 (SPT-NC), to assess possible cosmological gains given by the analysis of the joint abundance catalog. This also serves as a test of the consistency of the three combined data sets. When included in the analysis, the SPT-NC data reduce the σ 8 and Ω m uncertainties by 30% and 20%, respectively, while shifting their posteriors along the S 8 degeneracy direction, increasing the tension with other cosmological probes, especially with the SPT-SZ 2500 results, with which it shares the SPT abundance at z > 0.65 and follow-up data. The shift is due to the tension between the scaling relation parameters preferred by the DES and SPT abundance data and the SPT follow-up data at the "fiducial" cosmology σ 8 ∼ 0.75, Ω m ∼ 0.3. This tension is largely solved once we consider the PRJ model. Compared to the BKG results, it provides cosmological posteriors for the full data combination in better agreement with all the other probes considered here. Adopting the DIC for model selection, we find a "positive" preference for the BKG model for the DES-NC+SPT-OMR data combination, and a "positive" preference for the PRJ model for the full data combination. Additional follow-up data, especially at low richness, will be necessary to clearly identify which scatter model for λ ob is best suited to describe the data.
In this respect, the upcoming SZ and X-ray surveys SPT-3G and eROSITA are expected to provide valuable follow-up data by lowering the limiting mass of the detected clusters to ∼10^14 M⊙ [see e.g. 55]. Finally, we consider a wCDM model and derive cosmological constraints for the DES-NC+SPT-OMR and DES-NC+SPT-[OMR,NC] data combinations assuming the BKG model. We find in both cases a preference at more than 1σ for w values lower than −1, but consistent with a cosmological constant. The inclusion of the SPT-NC data does not substantially improve the w constraints despite the larger redshift leverage provided by the SPT abundance data, indicating that also in this case we are limited by the uncertainty in the calibration of the scaling relations and their evolution. According to the DIC, the wCDM model is "strongly" preferred over the ΛCDM one, thanks to the improved fit to the DES-NC data provided by the extended model. However, given the strong degeneracy between w and the scaling relation parameters, we cannot exclude that this preference is due to a flawed modeling of the scaling relations which is absorbed by w. Again, an improved calibration of the scaling relations and their evolution will be necessary for future cluster surveys aiming to constrain the dark energy equation of state parameter. Future optical surveys such as Euclid and LSST, in combination with data from the forthcoming eROSITA and SPT-3G surveys, will provide the necessary high-redshift multi-wavelength data to break such degeneracies and thus constrain parameters affecting the growth rate of cosmic structures [see e.g. 54]. The results of this work highlight the capability of multi-wavelength cluster data to improve our understanding of the systematics affecting the observable-mass scaling relations, and the potential power that a joint analysis of cluster catalogs detected at different wavelengths will have in future cosmological studies with galaxy clusters.
MC and AS are supported by the ERC-StG 'ClustersXCosmo' grant agreement 716762. AS is supported by the FARE-MIUR grant 'ClustersXEuclid'. AAS acknowledges support by U.S. NSF grant AST-1814719. This analysis has been carried out using resources of the computing center of INAF-Osservatorio Astronomico di Trieste, under the coordination of the CHIPP project [56, 57], CINECA grants INA20 C6B51 and INA17 C5B32, and of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.

FIG. 1. The DES Y1 redMaPPer cluster density (λ > 20) over the two non-contiguous regions of the Y1 footprint: the Stripe 82 region (116 deg²; upper panel) and the SPT region (1321 deg²; lower panel). In the lower panel, we also show the locations of the SPT-SZ 2500 deg² clusters (ξ > 5) in blue circles with sizes proportional to the detection significance.

FIG. 3. Marginalized posterior distributions of the fitted parameters. The 2D contours correspond to the 68% and 95% confidence levels of the marginalized posterior distribution. The description of the model parameters along with their posteriors are listed in Table III.

FIG. 4. Observed (shaded areas) and mean model predictions (markers) for the DES Y1 RM cluster number counts as a function of richness for each of our three redshift bins. The y extent of the data boxes is given by the square root of the diagonal terms of the covariance matrix. The right panels show the residual between the data and the mean model predictions. The error bars on the predicted number counts represent one and two standard deviations of the distribution derived sampling the corresponding chain. All points have been slightly displaced along the richness axis to avoid overcrowding.

FIG. 5.
Observed SPT-SZ cluster number counts (shaded areas) and mean model predictions from the DES-NC+SPT-OMR (triangles) and DES-NC+SPT-[OMR,NC] (circles) analyses, as a function of ξ. The points have been slightly displaced along the ξ axis to avoid overcrowding. The y extent of the data boxes corresponds to one and two standard deviations of the associated Poisson distribution. The bottom panel shows the residual between the data and the mean model predictions derived from DES-NC+SPT-[OMR,NC]. The error bars on the predicted number counts represent one and two standard deviations of the distribution derived sampling the relevant parameters of the corresponding chain. The SPT-NC model predictions for the two analyses including the PRJ model are fully consistent with those obtained from the baseline model, and thus are not included in the plot to avoid overcrowding.

Figure 7 compares the σ8-Ωm posteriors derived in this work for a ΛCDM + mν cosmology, including (lower panels) or excluding (upper panels) the PRJ calibration, to other results from the literature. To assess the consistency of two data sets A and B in the σ8-Ωm plane, we test the hypothesis p A − p B = 0 [see method '3' in 49], where p A and p B are the σ8-Ωm posterior distributions as constrained by data sets A and B, respectively. Starting with the simpler scatter model (BKG, upper panels), our baseline data combination (DES-NC+SPT-OMR) is consistent within 2σ with all the probes considered here. The largest tension (1.7σ) is found with the results from DES20 (DES-[NC,M WL] in figure 7), which combine DES Y1 RM abundances and mass estimates from the stacked weak lensing signal around DES Y1 RM clusters [50]. The tension with the DES-[NC,M WL] results is not surprising and reflects the different richness-mass scaling relation preferred by the DES Y1 weak lensing data.

FIG. 6. (Legend: DES-NC+SPT-OMR, DES-NC+SPT-OMR+PRJ, DES-NC+SPT-[OMR,NC], DES-NC+SPT-[OMR,NC]+PRJ.)
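The p A − p B consistency test described above can be sketched with toy chains: draw (σ8, Ωm) samples from the two posteriors, form the difference distribution, and quote the distance of zero from it. The Gaussian mock chains and their means and covariances below are invented for illustration, and the Mahalanobis distance is only a Gaussian approximation to the full test of [49].

```python
import numpy as np

rng = np.random.default_rng(1)

# toy (sigma8, Omega_m) chains for two data sets; values are illustrative only
cov = [[0.02 ** 2, 0.0], [0.0, 0.02 ** 2]]
A = rng.multivariate_normal([0.75, 0.30], cov, 50_000)
B = rng.multivariate_normal([0.78, 0.28], cov, 50_000)

diff = A - rng.permutation(B)            # samples of the difference p_A - p_B
mean = diff.mean(axis=0)
C = np.cov(diff.T)

# Mahalanobis distance of zero from the difference distribution
d2 = mean @ np.linalg.solve(C, mean)
print(f"distance of zero from p_A - p_B: {np.sqrt(d2):.2f} sigma")
```

Permuting one chain before subtracting removes any accidental ordering correlation between the two sets of samples.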
Posterior predictive distributions for the highest-λ/z data point derived from the four analyses considered in section IV A. The solid black line corresponds to the observed cluster abundance in that bin, while the four dashed and dot-dashed lines mark the 3σ limit of the corresponding posterior predictive distribution. Although residing in the tail of the distributions, in none of the four analyses does the observed data point lie outside the 3σ region.

[OMR,NC], finding posteriors fully consistent with the SPT-SZ 2500 results [see also figure 16 in 7].

FIG. 7. Upper panels: Comparison of the 68% and 95% confidence contours in the σ8-Ωm plane derived in this work adopting the BKG scatter model (black and orange contours) with other constraints from the literature: DES Y1 cluster counts and weak lensing mass calibration [DES20, dot-dashed magenta contours]; DES-Y1 3x2 from [48, dark violet contours]; Planck CMB from [40, brown contours]; cluster number counts and follow-up data from the SPT-SZ 2500 survey [B19, dot-dashed pink contours]; cluster abundance analysis of Weighing the Giants [5, WtG, dashed dark blue contours]. Lower panels: Same as left panel but considering the projection effect model (PRJ) for the scatter between true and observed richness (see section III A).

(15)

Fitting the ⟨M|λ ob, z⟩ relation derived from DES-NC+SPT-OMR to a power law model similar to the one assumed in [50] we get (footnote 11): ⟨M 500,c |λ ob, z⟩ = 10^(14.29±0.03) (λ ob /60), ⟨M 500,c |λ ob, z⟩ = 10^(14.22±0.03) (λ ob /60).

FIG. 8. 68% and 95% confidence contours for the amplitude parameters A λ -A SZ from the combination of DES Y1 and SPT cluster counts data (blue and black) or the SPT multi-wavelength data (orange and green), including (dot-dashed contours) or not (filled contours) the projection effect model (PRJ). All the contours are derived imposing the cosmological priors resulting from the combination of the posteriors obtained from the DES-NC+SPT-OMR and SPT-[OMR,NC] analyses.
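Power-law fits like the one quoted above reduce to linear least squares in log space once the relation is pivoted at λ ob = 60. In the sketch below the (λ ob, mass) points are synthetic and noise-free; the amplitude echoes the quoted normalization while the slope is an arbitrary illustrative value, not the paper's fit.

```python
import numpy as np

lam_ob = np.array([25.0, 35.0, 50.0, 70.0, 100.0])   # synthetic richnesses
mass = 10 ** 14.29 * (lam_ob / 60.0) ** 1.05         # synthetic, noise-free masses

x = np.log10(lam_ob / 60.0)     # pivot the relation at lambda_ob = 60
y = np.log10(mass)
slope, log_amp = np.polyfit(x, y, 1)                 # linear fit in log space

print(f"amplitude: 10^{log_amp:.2f} Msun at lambda_ob = 60, slope: {slope:.2f}")
```

With noise-free input the fit recovers the input parameters exactly; with real binned mass estimates one would weight the fit by the per-bin uncertainties.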
By shifting the abundance posteriors towards lower A λ values (black versus blue contours), the PRJ model relieves the tension between the scaling relation parameters preferred by abundance and multi-wavelength data.

B. wCDM + mν cosmology

We consider an extension to the vanilla ΛCDM model by allowing the dark energy equation of state parameter w to vary in the range [−2.5, −0.33]. Here we are interested in the capability of the DES-NC+SPT-OMR data to constrain the equation of state parameter w, and in the possible cosmological gain given by the inclusion of the high-redshift SPT abundance data. For this reason we report here only results for the BKG scatter model. Nevertheless, we explicitly verified that the PRJ model provides, for both data combinations, posteriors on w fully consistent with those obtained assuming the BKG model. In figure 11 and table IV we show constraints for the DES-NC+SPT-OMR and DES-NC+SPT-[OMR,NC] data sets. Both data sets prefer a w value smaller than −1 at more than one σ (w = −1.76^{+0.46}_{−0.33} and w = −1.95^{+0.48}_{−0.19}, respectively).

FIG. 9. Comparison of mass-richness relations at the mean DES Y1 RM redshift z = 0.45. The gray, green, orange and yellow bands show the M-λ ob relations derived in this work for different models and data combinations. Shown in magenta is the M|λ ob relation derived by [27] using SPT SZ cluster counts and follow-up data, assuming a Planck cosmology. The relation derived in DES20 combining DES Y1 number counts and weak lensing mass estimates is shown with the cyan band. The y extent of the bands corresponds to the 1σ uncertainty of the mean relation. The lower panels show the ratio of the different mass-richness relations to the one derived from the DES-NC+SPT-OMR analysis.
The dashed (λ ob = 37) and solid (λ ob = 60) vertical lines correspond to the richnesses above which 95% and 68% of the DES Y1 RM-SPT-SZ matched sample is contained.

The analysis is limited by the uncertainty in the calibration of the scaling relations, with which the w parameter is degenerate. For the DES-NC+SPT-OMR analysis the model extension minimally affects the cosmological posteriors on σ8 and Ω m compared to the ΛCDM model, despite the mild anti-/correlation of the two parameters with w (ρ ∼ ±0.25). The inclusion of the SPT-NC data does not cause the large σ8-Ωm shift observed in the ΛCDM scenario, and the DES-NC+SPT-[OMR,NC] posteriors almost completely overlap with those derived in the DES-NC+SPT-OMR analysis. This difference with the ΛCDM results is explained by the degeneracy of the equation of state parameter w with the SZ and λ-M scaling relation parameters. In particular, for the DES-NC+SPT-OMR analysis, the preference for w < −1 and the anti-/correlation of w with the slope and amplitude parameters of the richness-mass relation shift the corresponding posteriors into the same region of the parameter space preferred by the full data combination (see figure 13 in appendix C). Despite the modest (∼0.5-1.0σ) shift of the λ-M posteriors observed for the wCDM model, the resulting mass-richness relations are consistent within one sigma with the corresponding results of the ΛCDM analysis.

FIG. 10. Mean mass estimates from the stacked weak lensing analysis of [50].

FIG. 11. Cosmological posteriors (68% and 95% C.L.) for the wCDM + mν model from the combination of DES-NC and SPT-OMR data (blue) and the full data combination (orange). For comparison we include in the figure the posteriors obtained from the Planck CMB (green), DES 3x2pt (pink) and SPT-SZ 2500 (black) analyses.

FIG. 12. Marginalized posterior distributions for the ΛCDM + mν model for a subset of the fitted parameters. The 2D contours correspond to the 68% and 95% confidence levels of the marginalized posterior distribution. The description of the model parameters along with their posteriors are listed in Table III.
Inset panel: Correlation matrix for the scaling relations and cosmological parameters derived from the DES-NC+SPT-OMR analysis.

FIG. 13. Marginalized posterior distributions for the wCDM + mν model. The 2D contours correspond to the 68% and 95% confidence levels of the marginalized posterior distribution. The description of the model parameters along with their posteriors are listed in Table IV.

TABLE I. Number of galaxy clusters in each richness and redshift bin for the DES Y1 redMaPPer catalog. Each entry takes the form N (N) ± ∆N stat ± ∆N sys. The first error bar is the statistical uncertainty in the number of galaxy clusters in that bin, given by the sum of a Poisson and a sample variance term. The number in parentheses and the second error bar correspond to the number counts corrected for the miscentering bias factors and the corresponding uncertainty (see section II A).

λ ob      | z ∈ [0.2, 0.35)          | z ∈ [0.35, 0.5)             | z ∈ [0.5, 0.65)
[20, 30)  | 762 (785.1) ± 54.9 ± 8.2 | 1549 (1596.0) ± 68.2 ± 16.6 | 1612 (1660.9) ± 67.4 ± 17.3
[30, 45)  | 376 (388.3) ± 32.1 ± 4.5 | 672 (694.0) ± 38.2 ± 8.0    | 687 (709.5) ± 36.9 ± 8.1
[45, 60)  | 123 (127.2) ± 15.2 ± 1.6 | 187 (193.4) ± 17.8 ± 2.4    | 205 (212.0) ± 17.1 ± 2.7
[60, ∞)   | 91 (93.9) ± 14.0 ± 1.3   | 148 (151.7) ± 15.7 ± 2.2    | 92 (94.9) ± 14.2 ± 1.4

TABLE II. Summary of the SPT-SZ cluster data used in this analysis, split into mass-calibration data (SPT-OMR) and abundance data (SPT-NC). For the SPT-OMR data we specify in the third column the number of clusters with a specific follow-up measurement (see section II B for details). Note that a cluster might have more than one follow-up measurement.

Data set | Number of Clusters | Follow-up | z-cut
SPT-OMR  | 187                | WL: 32    | z > 0.25
         |                    | λ: 129    | 0.25 < z < 0.65
         |                    | X-ray: 89 | z > 0.25
SPT-NC   | 141                |           | z > 0.65

with ξ > 5 and redshift z > 0.25 (footnote 8) (blue circles in figure 1).
Of these: 343 clusters are optically confirmed and have redshift measurements, 89 have X-ray follow-up measurements with Chandra [23, 24], and 32 have weak lensing shear profile measurements from ground-based observations with Magellan/Megacam [19 clusters; 25] and from space observations with the Hubble Space Telescope [13 clusters; 26].

TABLE III. Cosmological and model parameter posteriors: a range indicates a top-hat prior, while N(µ, σ) stands for a Gaussian prior with mean µ and variance σ².

Parameter | Description | Prior
Ω m    | Mean matter density | [0.1, 0.9]
A s    | Amplitude of the primordial curvature perturbations | [10⁻¹⁰, 10⁻⁸]
h      | Hubble rate | [0.55, 0.9]
Ω b h² | Baryon density | [0.020, 0.024]
Ω ν h² | Massive neutrino energy density | [0.0006, 0.01]
n s    | Spectral index | [0.94, 1.0]
w      | Dark energy equation of state | [−2.5, −0.33]
SZ scaling relation:
A SZ | Amplitude | [1, 10]
B SZ | Power-law index, mass dependence | [1, 2.5]
C SZ | Power-law index, redshift evolution | [−1, 2]
D SZ | Intrinsic scatter | [0.01, 0.5]
Richness scaling relation:
A λ | Amplitude | [20, 120]
B λ | Power-law index, mass dependence | [0.4, 2.0]
C λ | Power-law index, redshift evolution | [−1, 2]
D λ | Intrinsic scatter | [0.01, 0.7]

Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013), including ERC grant agreements 240672, 291329, and 306478. We acknowledge support from the Brazilian Instituto Nacional de Ciência e Tecnologia (INCT) do e-Universo (CNPq grant 465376/2014-2). This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This paper has gone through internal review by the DES collaboration.
1 https://www.sdss.org/
2 https://www.darkenergysurvey.org
3 https://www.cosmos.esa.int/web/planck
4 https://pole.uchicago.edu/
5 https://act.princeton.edu/
6 https://www.mpe.mpg.de/eROSITA
Below z = 0.25 the ξ-mass relation breaks due to confusion with the primary CMB fluctuations.
The corresponding mean richness-mass relations, ⟨λ ob |M 500,c, z⟩, for both scatter models are reported for completeness in appendix B.

ACKNOWLEDGEMENTS

Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science.

For completeness we report in figure 12 the posteriors obtained for the ΛCDM model including the Y x scaling relation parameters and the correlation coefficients. Also, to easily visualize the many degeneracies between the parameters constrained in the analysis, we show in the inset plot of figure 12 the correlation matrix obtained from the DES-NC+SPT-OMR data. The correlation matrices for the full data combination and/or including the PRJ model are qualitatively consistent with the one shown here. Depending only on the SPT-OMR data, the Y x posteriors are consistent among the different analyses, even though the correlations with the other scaling relations cause slight shifts of the slope and amplitude parameters and improve the constraint on the evolution parameter once we include the SPT-NC data.

Appendix B: Observed richness-mass scaling relations

To ease the comparison and use of our results, we report here the mean observed richness-mass scaling relations derived from the DES-NC+SPT-OMR data combination for the two scatter models adopted. The mean relations and uncertainties are derived from the appropriate model for P(λ ob |M) = ∫ dλ P(λ ob |λ, z) P(λ|M, z), sampling the posterior distributions of the richness-mass relation. Fitting the mean relation to a power law model we obtain for the BKG model:
S. W. Allen, A. E. Evrard, and A. B. Mantz, Cosmological Parameters from Observations of Galaxy Clusters, ARA&A 49, 409 (2011), arXiv:1103.4829 [astro-ph.CO].
A. V. Kravtsov and S. Borgani, Formation of Galaxy Clusters, ARA&A 50, 353 (2012), arXiv:1205.5556 [astro-ph.CO].
A. Vikhlinin, R. A. Burenin, H. Ebeling, W. R. Forman, A. Hornstrup, C. Jones, A. V. Kravtsov, S. S. Murray, D. Nagai, H. Quintana, and A. Voevodkin, Chandra Cluster Cosmology Project. II. Samples and X-Ray Data Reduction, ApJ 692, 1033 (2009), arXiv:0805.2207.
E. Rozo, R. H. Wechsler, E. S. Rykoff, J. T. Annis, M. R. Becker, A. E. Evrard, J. A. Frieman, S. M. Hansen, J. Hao, D. E. Johnston, B. P. Koester, T. A. McKay, E. S. Sheldon, and D. H. Weinberg, Cosmological Constraints from the Sloan Digital Sky Survey maxBCG Cluster Catalog, ApJ 708, 645 (2010), arXiv:0902.3702 [astro-ph.CO].
A. B. Mantz, A. von der Linden, S. W. Allen, D. E. Applegate, P. L. Kelly, R. G. Morris, D. A. Rapetti, R. W. Schmidt, S. Adhikari, M. T. Allen, P. R. Burchat, D. L. Burke, M. Cataneo, D. Donovan, H. Ebeling, S. Shandera, and A. Wright, Weighing the giants - IV. Cosmology and neutrino mass, MNRAS 446, 2205 (2015), arXiv:1407.4516.
Planck Collaboration, P. A. R. Ade, N. Aghanim, M. Arnaud, M. Ashdown, J. Aumont, C. Baccigalupi, A. J. Banday, R. B. Barreiro, J. G. Bartlett, et al., Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts, A&A 594, A24 (2016), arXiv:1502.01597.
S. Bocquet et al. (SPT), Cluster Cosmology Constraints from the 2500 deg² SPT-SZ Survey: Inclusion of Weak Gravitational Lensing Data from Magellan and the Hubble Space Telescope, Astrophys. J. 878, 55 (2019), arXiv:1812.01679 [astro-ph.CO].
M. Costanzi, E. Rozo, M. Simet, Y. Zhang, A. E. Evrard, A. Mantz, E. S. Rykoff, T. Jeltema, D. Gruen, S. Allen, et al., Methods for cluster cosmology and application to the SDSS in preparation for DES Year 1 release, MNRAS 488, 4779 (2019), arXiv:1810.09456 [astro-ph.CO].
DES Collaboration, T. M. C. Abbott, M. Aguena, A. Alarcon, S. Allam, S. Allen, J. Annis, S. Avila, D. Bacon, K. Bechtol, A. Bermeo, et al., Dark Energy Survey Year 1 Results: Cosmological constraints from cluster abundances and weak lensing, Phys. Rev. D 102, 023509 (2020), arXiv:2002.11124 [astro-ph.CO].
G. W. Pratt, M. Arnaud, A. Biviano, D. Eckert, S. Ettori, D. Nagai, N. Okabe, and T. H. Reiprich, The Galaxy Cluster Mass Scale and Its Impact on Cosmological Constraints from the Cluster Population, Space Sci. Rev. 215, 25 (2019), arXiv:1902.10837 [astro-ph.CO].
R. Capasso et al., Mass calibration of the CODEX cluster sample using SPIDERS spectroscopy - I. The richness-mass relation, Mon. Not. Roy. Astron. Soc. 486, 1594 (2019), arXiv:1812.06094 [astro-ph.CO].
R. Murata, M. Oguri, T. Nishimichi, M. Takada, R. Mandelbaum, S. More, M. Shirasaki, A. J. Nishizawa, and K. Osato, The mass-richness relation of optically selected clusters from weak gravitational lensing and abundance with Subaru HSC first-year data, PASJ 71, 107 (2019), arXiv:1904.07524 [astro-ph.CO].
T. de Haan, B. A. Benson, L. E. Bleem, S. W. Allen, D. E. Applegate, M. L. N. Ashby, M. Bautz, M. Bayliss, S. Bocquet, M. Brodwin, et al., Cosmological Constraints from Galaxy Clusters in the 2500 Square-degree SPT-SZ Survey, ApJ 832, 95 (2016), arXiv:1603.06522.
L. E. Bleem et al., Galaxy Clusters Discovered via the Sunyaev-Zel'dovich Effect in the 2500-Square-Degree SPT-SZ Survey, ApJS 216, 27 (2015), arXiv:1409.0850 [astro-ph.CO].
A. Drlica-Wagner, I. Sevilla-Noarbe, E. S. Rykoff, R. A. Gruendl, B. Yanny, D. L. Tucker, B. Hoyle, A. Carnero Rosell, G. M. Bernstein, K. Bechtol, et al., Dark Energy Survey Year 1 Results: The Photometric Data Set for Cosmology, ApJS 235, 33 (2018), arXiv:1708.01531.
E. S. Rykoff, E. Rozo, M. T. Busha, C. E. Cunha, A. Finoguenov, A. Evrard, J. Hao, B. P. Koester, A. Leauthaud, B. Nord, M. Pierre, R. Reddick, T. Sadibekova, E. S. Sheldon, and R. H. Wechsler, redMaPPer. I. Algorithm and SDSS DR8 Catalog, ApJ 785, 104 (2014), arXiv:1303.3562.
E. S. Rykoff, E. Rozo, D. Hollowood, A. Bermeo-Hernandez, T. Jeltema, J. Mayers, A. K. Romer, P. Rooney, A. Saro, C. Vergara Cervantes, R. H. Wechsler, H. Wilcox, et al., The RedMaPPer Galaxy Cluster Catalog From DES Science Verification Data, ApJS 224, 1 (2016), arXiv:1601.00621.
Y. Zhang, T. Jeltema, D. L. Hollowood, S. Everett, E. Rozo, A. Farahi, A. Bermeo, S. Bhargava, P. Giles, A. K. Romer, et al., Dark Energy Surveyed Year 1 results: calibration of cluster mis-centring in the redMaPPer catalogues, MNRAS 487, 2578 (2019), arXiv:1901.07119 [astro-ph.CO].
R. A. Sunyaev and Y. B. Zeldovich, The Observations of Relic Radiation as a Test of the Nature of X-Ray Radiation from the Clusters of Galaxies, Comments on Astrophysics and Space Physics 4, 173 (1972).
J.-B. Melin, J. G. Bartlett, and J. Delabrouille, Catalog extraction in SZ cluster surveys: a matched filter approach, A&A 459, 341 (2006).
R. Williamson, B. A. Benson, F. W. High, K. Vanderlinde, P. A. R. Ade, K. A. Aird, K. Andersson, R. Armstrong, M. L. N. Ashby, M. Bautz, et al., A Sunyaev-Zel'dovich-selected Sample of the Most Massive Galaxy Clusters in the 2500 deg² South Pole Telescope Survey, ApJ 738, 139 (2011), arXiv:1101.1290 [astro-ph.CO].
C. L. Reichardt, B. Stalder, L. E. Bleem, T. E. Montroy, K. A. Aird, K. Andersson, R. Armstrong, M. L. N. Ashby, M. Bautz, M. Bayliss, et al., Galaxy Clusters Discovered via the Sunyaev-Zel'dovich Effect in the First 720 Square Degrees of the South Pole Telescope Survey, ApJ 763, 127 (2013), arXiv:1203.5775 [astro-ph.CO].
M. McDonald, B. A. Benson, A. Vikhlinin, B. Stalder, L. E. Bleem, T. de Haan, H. W. Lin, K. A. Aird, M. L. N. Ashby, M. W. Bautz, et al., The Growth of Cool Cores and Evolution of Cooling Properties in a Sample of 83 Galaxy Clusters at 0.3 < z < 1.2 Selected from the SPT-SZ Survey, ApJ 774, 23 (2013), arXiv:1305.2915.
774astroph.COGalaxy Clusters at 0.3 ¡ z ¡ 1.2 Selected from the SPT- SZ Survey, ApJ 774, 23 (2013), arXiv:1305.2915 [astro- ph.CO]. M Mcdonald, S W Allen, M Bayliss, B A Benson, L E Bleem, M Brodwin, E Bulbul, J E Carlstrom, W R Forman, J Hlavacek-Larrondo, G P Garmire, M Gaspari, M D Gladders, A B Mantz, S S Murray, 10.3847/1538-4357/aa7740arXiv:1702.05094The Remarkable Similarity of Massive Galaxy Clusters from z 0 to z 1.9. 843astro-ph.COM. McDonald, S. W. Allen, M. Bayliss, B. A. Benson, L. E. Bleem, M. Brodwin, E. Bulbul, J. E. Carlstrom, W. R. Forman, J. Hlavacek-Larrondo, G. P. Garmire, M. Gaspari, M. D. Gladders, A. B. Mantz, and S. S. Murray, The Remarkable Similarity of Massive Galaxy Clusters from z 0 to z 1.9, ApJ 843, 28 (2017), arXiv:1702.05094 [astro-ph.CO]. Sunyaev-Zel'dovich effect and X-ray scaling relations from weak lensing mass calibration of 32 South Pole Telescope selected galaxy clusters. J P Dietrich, S Bocquet, T Schrabback, D Applegate, H Hoekstra, S Grandis, J J Mohr, S W Allen, M B Bayliss, B A Benson, 10.1093/mnras/sty3088arXiv:1711.05344MNRAS. 4832871astroph.COJ. P. Dietrich, S. Bocquet, T. Schrabback, D. Applegate, H. Hoekstra, S. Grandis, J. J. Mohr, S. W. Allen, M. B. Bayliss, B. A. Benson, et al., Sunyaev-Zel'dovich effect and X-ray scaling relations from weak lensing mass cali- bration of 32 South Pole Telescope selected galaxy clus- ters, MNRAS 483, 2871 (2019), arXiv:1711.05344 [astro- ph.CO]. Cluster mass calibration at high redshift: HST weak lensing analysis of 13 distant galaxy clusters from the South Pole Telescope Sunyaev-Zel'dovich Survey. T Schrabback, D Applegate, J P Dietrich, H Hoekstra, S Bocquet, A H Gonzalez, A Der Linden, M Mcdonald, C B Morrison, S F Raihan, 10.1093/mnras/stx2666arXiv:1611.03866MNRAS. 4742635astro-ph.COT. Schrabback, D. Applegate, J. P. Dietrich, H. Hoek- stra, S. Bocquet, A. H. Gonzalez, A. von der Linden, M. McDonald, C. B. Morrison, S. F. 
Raihan, et al., Clus- ter mass calibration at high redshift: HST weak lensing analysis of 13 distant galaxy clusters from the South Pole Telescope Sunyaev-Zel'dovich Survey, MNRAS 474, 2635 (2018), arXiv:1611.03866 [astro-ph.CO]. . L E Bleem, S Bocquet, B Stalder, M D Gladders, P A R Ade, S W Allen, A J Anderson, J Annis, M L N Ashby, J E Austermann, 10.3847/1538-4365/ab6993arXiv:1910.04121The SPTpol Extended Cluster Survey. 247ApJS. astro-ph.COL. E. Bleem, S. Bocquet, B. Stalder, M. D. Gladders, P. A. R. Ade, S. W. Allen, A. J. Anderson, J. An- nis, M. L. N. Ashby, J. E. Austermann, et al., The SPTpol Extended Cluster Survey, ApJS 247, 25 (2020), arXiv:1910.04121 [astro-ph.CO]. Constraints on the richness-mass relation and the optical-SZE positional offset distribution for SZE-selected clusters. A Saro, S Bocquet, E Rozo, B A Benson, J Mohr, E S Rykoff, M Soares-Santos, L Bleem, S Dodelson, P Melchior, 10.1093/mnras/stv2141arXiv:1506.07814MNRAS. 4542305A. Saro, S. Bocquet, E. Rozo, B. A. Benson, J. Mohr, E. S. Rykoff, M. Soares-Santos, L. Bleem, S. Dodel- son, P. Melchior, et al., Constraints on the richness-mass relation and the optical-SZE positional offset distribu- tion for SZE-selected clusters, MNRAS 454, 2305 (2015), arXiv:1506.07814. . J F Navarro, C S Frenk, S D M White, ApJ. 490493J. F. Navarro, C. S. Frenk, and S. D. M. White, ApJ 490, 493. (1997). Galaxy cluster mass estimation from stacked spectroscopic analysis. A Farahi, A E Evrard, E Rozo, E S Rykoff, R H Wechsler, 10.1093/mnras/stw1143arXiv:1601.05773MNRAS. 460A. Farahi, A. E. Evrard, E. Rozo, E. S. Rykoff, and R. H. Wechsler, Galaxy cluster mass estimation from stacked spectroscopic analysis, MNRAS 460, 3900 (2016), arXiv:1601.05773. On the level of cluster assembly bias in SDSS. Y Zu, R Mandelbaum, M Simet, E Rozo, E S Rykoff, 10.1093/mnras/stx1264arXiv:1611.00366MNRAS. 470551Y. Zu, R. Mandelbaum, M. Simet, E. Rozo, and E. S. 
Rykoff, On the level of cluster assembly bias in SDSS, MNRAS 470, 551 (2017), arXiv:1611.00366. Assembly bias and splashback in galaxy clusters. P Busch, S D M White, 10.1093/mnras/stx1584arXiv:1702.01682MNRAS. 470P. Busch and S. D. M. White, Assembly bias and splash- back in galaxy clusters, MNRAS 470, 4767 (2017), arXiv:1702.01682. Constraints on the mass-richness relation from the abundance and weak lensing of SDSS clusters. R Murata, T Nishimichi, M Takada, H Miyatake, M Shirasaki, S More, R Takahashi, K Osato, arXiv:1707.01907R. Murata, T. Nishimichi, M. Takada, H. Miyatake, M. Shirasaki, S. More, R. Takahashi, and K. Osato, Constraints on the mass-richness relation from the abun- dance and weak lensing of SDSS clusters, ArXiv e-prints (2017), arXiv:1707.01907. Understanding the effects of imperfect membership on cluster mass estimation. R Wojtak, L Old, G A Mamon, F R Pearce, R Carvalho, C Sifón, M E Gray, R A Skibba, D Croton, S Bamford, 10.1093/mnras/sty2257arXiv:1806.03199Galaxy Cluster Mass Reconstruction Project -IV. R. Wojtak, L. Old, G. A. Mamon, F. R. Pearce, R. de Carvalho, C. Sifón, M. E. Gray, R. A. Skibba, D. Cro- ton, S. Bamford, et al., Galaxy Cluster Mass Reconstruc- tion Project -IV. Understanding the effects of imper- fect membership on cluster mass estimation, MNRAS 10.1093/mnras/sty2257 (2018), arXiv:1806.03199. Modelling projection effects in optically selected cluster catalogues. M Costanzi, E Rozo, E S Rykoff, A Farahi, T Jeltema, A E Evrard, A Mantz, D Gruen, R Mandelbaum, J Derose, T Mcclintock, T N Varga, Y Zhang, J Weller, R H Wechsler, M Aguena, 10.1093/mnras/sty2665arXiv:1807.07072MNRAS. 482astro-ph.COM. Costanzi, E. Rozo, E. S. Rykoff, A. Farahi, T. Jel- tema, A. E. Evrard, A. Mantz, D. Gruen, R. Man- delbaum, J. DeRose, T. McClintock, T. N. Varga, Y. Zhang, J. Weller, R. H. Wechsler, and M. Aguena, Modelling projection effects in optically selected cluster catalogues, MNRAS 482, 490 (2019), arXiv:1807.07072 [astro-ph.CO]. 
Toward a Halo Mass Function for Precision Cosmology: The Limits of Universality. J Tinker, A V Kravtsov, A Klypin, K Abazajian, M Warren, G Yepes, S Gottlöber, D E Holz, 10.1086/591439arXiv:0803.2706ApJ. 688J. Tinker, A. V. Kravtsov, A. Klypin, K. Abazajian, M. Warren, G. Yepes, S. Gottlöber, and D. E. Holz, Toward a Halo Mass Function for Precision Cosmology: The Limits of Universality, ApJ 688, 709-728 (2008), arXiv:0803.2706. Sample Variance Considerations for Cluster Surveys. W Hu, A V Kravtsov, 10.1086/345846astro- ph/0203169ApJ. 584W. Hu and A. V. Kravtsov, Sample Variance Consider- ations for Cluster Surveys, ApJ 584, 702 (2003), astro- ph/0203169. Parameter estimation in astronomy through application of the likelihood ratio. W Cash, 10.1086/156922ApJ. 228939W. Cash, Parameter estimation in astronomy through application of the likelihood ratio., ApJ 228, 939 (1979). . A Fumagalli, A. Fumagalli et al., (to be published), . N Aghanim, Planck CollaborationY Akrami, Planck CollaborationM Ashdown, Planck CollaborationJ Aumont, Planck CollaborationC Baccigalupi, Planck CollaborationM Ballardini, Planck CollaborationA J Banday, Planck CollaborationR B Barreiro, Planck CollaborationN Bartolo, Planck Collaboration10.1051/0004-6361/201833910arXiv:1807.06209Planck 2018 results. VI. Cosmological parameters. 641astro-ph.COPlanck Collaboration, N. Aghanim, Y. Akrami, M. Ash- down, J. Aumont, C. Baccigalupi, M. Ballardini, A. J. Banday, R. B. Barreiro, N. Bartolo, et al., Planck 2018 results. VI. Cosmological parameters, A&A 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO]. . M Tanabashi, K Hagiwara, K Hikasa, K Nakamura, Y Sumino, F Takahashi, J Tanaka, K Agashe, G Aielli, C Amsler, 10.1103/PhysRevD.98.030001Review of Particle Physics *. 9830001Phys. Rev. DM. Tanabashi, K. Hagiwara, K. Hikasa, K. Naka- mura, Y. Sumino, F. Takahashi, J. Tanaka, K. Agashe, G. Aielli, C. Amsler, et al., Review of Particle Physics * , Phys. Rev. D 98, 030001 (2018). 
CosmoSIS: modular cosmological parameter estimation. J Zuntz, M Paterno, E Jennings, D Rudd, A Manzotti, S Dodelson, S Bridle, S Sehrish, J Kowalkowski, 10.1016/j.ascom.2015.05.005arXiv:1409.3409Astron. Comput. 12astro-ph.COJ. Zuntz, M. Paterno, E. Jennings, D. Rudd, A. Manzotti, S. Dodelson, S. Bridle, S. Sehrish, and J. Kowalkowski, CosmoSIS: modular cosmological pa- rameter estimation, Astron. Comput. 12, 45 (2015), arXiv:1409.3409 [astro-ph.CO]. Importance Nested Sampling and the MultiNest Algorithm. F Feroz, M P Hobson, E Cameron, A N Pettitt, 10.21105/astro.1306.2144arXiv:1306.2144The Open Journal of Astrophysics. 2astro-ph.IMF. Feroz, M. P. Hobson, E. Cameron, and A. N. Pettitt, Importance Nested Sampling and the MultiNest Algo- rithm, The Open Journal of Astrophysics 2, 10 (2019), arXiv:1306.2144 [astro-ph.IM]. Efficient computation of cosmic microwave background anisotropies in closed friedmann-robertson-walker models. A Lewis, A Challinor, A Lasenby, 10.1086/309179The Astrophysical Journal. 538473A. Lewis, A. Challinor, and A. Lasenby, Efficient com- putation of cosmic microwave background anisotropies in closed friedmann-robertson-walker models, The Astro- physical Journal 538, 473 (2000). Cosmology with massive neutrinos III: the halo mass function and an application to galaxy clusters. M Costanzi, F Villaescusa-Navarro, M Viel, J.-Q Xia, S Borgani, E Castorina, E Sefusatti, 10.1088/1475-7516/2013/12/012arXiv:1311.1514J. Cosmology Astropart. Phys. 1212astro-ph.COM. Costanzi, F. Villaescusa-Navarro, M. Viel, J.-Q. Xia, S. Borgani, E. Castorina, and E. Sefusatti, Cosmology with massive neutrinos III: the halo mass function and an application to galaxy clusters, J. Cosmology Astropart. Phys. 12, 012 (2013), arXiv:1311.1514 [astro-ph.CO]. Rubin, Bayesian Data Analysis. A Gelman, J B Carlin, H S Stern, D B , Chapman and Hall/CRC2nd ed.A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Ru- bin, Bayesian Data Analysis, 2nd ed. (Chapman and Hall/CRC, 2004). 
Bayesian measures of model complexity and fit. D J Spiegelhalter, N G Best, B P Carlin, A Van Der Linde, https:/arxiv.org/abs/https:/rss.onlinelibrary.wiley.com/doi/pdf/10.1111/1467-9868.00353Journal of the Royal Statistical Society: Series B (Statistical Methodology). 64D. J. Spiegelhalter, N. G. Best, B. P. Carlin, and A. Van Der Linde, Bayesian measures of model complexity and fit, Journal of the Royal Statistical Soci- ety: Series B (Statistical Methodology) 64, 583 (2002), https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/1467- 9868.00353. Dark Energy Survey year 1 results: Cosmological constraints from galaxy clustering and weak lensing. T M C Abbott, DES CollaborationF B Abdalla, DES CollaborationA Alarcon, DES CollaborationJ Aleksić, DES CollaborationS Allam, DES CollaborationS Allen, DES CollaborationA Amara, DES CollaborationJ Annis, DES CollaborationJ Asorey, DES CollaborationS Avila, DES Collaboration10.1103/PhysRevD.98.043526arXiv:1708.01530Phys. Rev. D. 9843526astro-ph.CODES Collaboration, T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, S. Allen, A. Amara, J. Annis, J. Asorey, S. Avila, et al., Dark Energy Sur- vey year 1 results: Cosmological constraints from galaxy clustering and weak lensing, Phys. Rev. D 98, 043526 (2018), arXiv:1708.01530 [astro-ph.CO]. Planck data versus large scale structure: Methods to quantify discordance. T Charnock, R A Battye, A Moss, 10.1103/PhysRevD.95.123535arXiv:1703.05959Phys. Rev. D. 95123535T. Charnock, R. A. Battye, and A. Moss, Planck data versus large scale structure: Methods to quan- tify discordance, Phys. Rev. D 95, 123535 (2017), arXiv:1703.05959. Dark Energy Survey Year 1 results: weak lensing mass calibration of redMaPPer galaxy clusters. T Mcclintock, T N Varga, D Gruen, E Rozo, E S Rykoff, T Shin, P Melchior, J Derose, S Seitz, J P Dietrich, E Sheldon, Y Zhang, A Der Linden, T Jeltema, A B Mantz, A K Romer, 10.1093/mnras/sty2711arXiv:1805.00039MNRAS. 4821352astro-ph.COT. McClintock, T. 
N. Varga, D. Gruen, E. Rozo, E. S. Rykoff, T. Shin, P. Melchior, J. DeRose, S. Seitz, J. P. Di- etrich, E. Sheldon, Y. Zhang, A. von der Linden, T. Jel- tema, A. B. Mantz, A. K. Romer, et al., Dark Energy Survey Year 1 results: weak lensing mass calibration of redMaPPer galaxy clusters, MNRAS 482, 1352 (2019), arXiv:1805.00039 [astro-ph.CO]. . S Grandis, S. Grandis et al., (to be published), . . H Wu, H. Wu et al., (to be published), . A B Mantz, S W Allen, R G Morris, A Der Linden, D E Applegate, P L Kelly, D L Burke, D Donovan, H Ebeling, 10.1093/mnras/stw2250arXiv:1606.03407Weighing the giants-V. Galaxy cluster scaling relations. 463A. B. Mantz, S. W. Allen, R. G. Morris, A. von der Linden, D. E. Applegate, P. L. Kelly, D. L. Burke, D. Donovan, and H. Ebeling, Weighing the giants-V. Galaxy cluster scaling relations, MNRAS 463, 3582 (2016), arXiv:1606.03407. Next generation cosmology: constraints from the Euclid galaxy cluster survey. B Sartoris, A Biviano, C Fedeli, J G Bartlett, S Borgani, M Costanzi, C Giocoli, L Moscardini, J Weller, B Ascaso, S Bardelli, S Maurogordato, P T P Viana, 10.1093/mnras/stw630arXiv:1505.02165MNRAS. 4591764astro-ph.COB. Sartoris, A. Biviano, C. Fedeli, J. G. Bartlett, S. Bor- gani, M. Costanzi, C. Giocoli, L. Moscardini, J. Weller, B. Ascaso, S. Bardelli, S. Maurogordato, and P. T. P. Viana, Next generation cosmology: constraints from the Euclid galaxy cluster survey, MNRAS 459, 1764 (2016), arXiv:1505.02165 [astro-ph.CO]. Impact of weak lensing mass calibration on eROSITA galaxy cluster cosmological studies -a forecast. S Grandis, J J Mohr, J P Dietrich, S Bocquet, A Saro, M Klein, M Paulus, R Capasso, 10.1093/mnras/stz1778arXiv:1810.10553MNRAS. 4882041astro-ph.COS. Grandis, J. J. Mohr, J. P. Dietrich, S. Bocquet, A. Saro, M. Klein, M. Paulus, and R. Capasso, Impact of weak lensing mass calibration on eROSITA galaxy clus- ter cosmological studies -a forecast, MNRAS 488, 2041 (2019), arXiv:1810.10553 [astro-ph.CO]. 
S Bertocco, D Goz, L Tornatore, A Ragagnin, G Maggio, F Gasparo, C Vuerli, G Taffoni, M Molinaro, arXiv:1912.05340INAF Trieste Astronomical Observatory Information Technology Framework. arXiv e-printsastro-ph.IMS. Bertocco, D. Goz, L. Tornatore, A. Ragagnin, G. Mag- gio, F. Gasparo, C. Vuerli, G. Taffoni, and M. Moli- naro, INAF Trieste Astronomical Observatory Infor- mation Technology Framework, arXiv e-prints (2019), arXiv:1912.05340 [astro-ph.IM]. G Taffoni, U Becciani, B Garilli, G Maggio, F Pasian, G Umana, R Smareglia, F Vitello, arXiv:2002.01283arXiv:2002.01283CHIPP: INAF pilot project for HTC, HPC and HPDA, arXiv e-prints. astro-ph.IMG. Taffoni, U. Becciani, B. Garilli, G. Maggio, F. Pasian, G. Umana, R. Smareglia, and F. Vitello, CHIPP: INAF pilot project for HTC, HPC and HPDA, arXiv e-prints , arXiv:2002.01283 (2020), arXiv:2002.01283 [astro-ph.IM].
[]
[ "Weak solutions to triangular cross diffusion systems modeling chemotaxis with local sensing" ]
[ "Laurent Desvillettes", "Philippe Laurençot", "Ariane Trescases", "Michael Winkler" ]
[ "Université de Paris, IUF and Sorbonne Université, CNRS, IMJ-PRG, F-75013 Paris, France", "Institut de Mathématiques de Toulouse, UMR 5219, Université de Toulouse, CNRS, F-31062 Toulouse Cedex 9, France", "Institut de Mathématiques de Toulouse, UMR 5219, Université de Toulouse, CNRS, F-31062 Toulouse Cedex 9, France", "Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany" ]
[]
New estimates and global existence results are provided for a class of systems of cross diffusion equations arising from the modeling of chemotaxis with local sensing, possibly featuring a growth term of logistic-type as well. For sublinear non-increasing motility functions, convergence to the spatially homogeneous steady state is shown, a dedicated Liapunov functional being constructed for that purpose.
10.1016/j.na.2022.113153
[ "https://arxiv.org/pdf/2202.10246v1.pdf" ]
247,011,878
2202.10246
5f89c5ad530d4511743c58c1feecf76f225809c4
Weak solutions to triangular cross diffusion systems modeling chemotaxis with local sensing

Laurent Desvillettes, Philippe Laurençot, Ariane Trescases, Michael Winkler

Université de Paris, IUF and Sorbonne Université, CNRS, IMJ-PRG, F-75013 Paris, France; Institut de Mathématiques de Toulouse, UMR 5219, Université de Toulouse, CNRS, F-31062 Toulouse Cedex 9, France; Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany

February 22, 2022

Abstract. New estimates and global existence results are provided for a class of systems of cross diffusion equations arising from the modeling of chemotaxis with local sensing, possibly featuring a growth term of logistic type as well. For sublinear non-increasing motility functions, convergence to the spatially homogeneous steady state is shown, a dedicated Lyapunov functional being constructed for that purpose.

Introduction

We consider a class of systems of two parabolic equations in which the first equation is a cross diffusion equation (that is, the diffusion rate in this equation depends on the solution of the second equation), while the second equation is a standard heat equation coupled to the first one only through its source term. Such systems are sometimes called triangular cross diffusion systems. We focus on the systems introduced in [4] to treat specific situations arising in the theory of chemotaxis. The quantity $u := u(t,x) \ge 0$ is then the cell density and $v := v(t,x) \ge 0$ is the concentration of chemoattractant. We refer to [4] for a discussion of the modeling assumptions underlying such systems.
Let us just say that, with respect to general systems appearing in the modeling of chemotaxis, where the dynamics of the density of cells is driven by the evolution equation $\partial_t u = \mathrm{div}(\nabla F + G)$, where $F$ and $G$ both depend on $u$ and $v$, the specificity of the system considered here is that it can be written in a form where $G = 0$; that is,
\[ \partial_t u - \Delta\big( u\gamma(v) \big) = 0, \qquad (t,x) \in (0,\infty)\times\Omega, \tag{1.1a} \]
\[ \varepsilon \partial_t v - \Delta v = u - v, \qquad (t,x) \in (0,\infty)\times\Omega, \tag{1.1b} \]
\[ \nabla\big( u\gamma(v) \big) \cdot n = \nabla v \cdot n = 0, \qquad (t,x) \in (0,\infty)\times\partial\Omega, \tag{1.1c} \]
\[ (u,v)(0,\cdot) = (u^{in}, v^{in}), \qquad x \in \Omega, \tag{1.1d} \]
where $\Omega$ is a smooth bounded domain of $\mathbb{R}^N$, with $N \ge 2$, and $\varepsilon > 0$. Here, $n$ is the outward unit normal vector at a point of $\partial\Omega$. The initial data $u^{in}$ and $v^{in}$ are given and nonnegative. Typical functions $\gamma$ are assumed to be bounded and strictly positive on $[0,\infty)$, and to decay at infinity (that is, when $v \to \infty$) typically like a power. In other words, they generalize the prototype (which makes sense from the point of view of modeling) given by
\[ \gamma(z) = (z+1)^{-k}, \qquad z > 0, \tag{1.2} \]
for some $k > 0$, but are not always assumed to be monotone decreasing. We also consider the counterpart of this system when the cell population has a logistic-type growth, that is,
\[ \partial_t u - \Delta\big( u\gamma(v) \big) = u\,h(u), \qquad (t,x) \in (0,\infty)\times\Omega, \tag{1.3a} \]
\[ \varepsilon \partial_t v = \Delta v - v + u, \qquad (t,x) \in (0,\infty)\times\Omega, \tag{1.3b} \]
\[ \nabla\big( u\gamma(v) \big) \cdot n = \nabla v \cdot n = 0, \qquad (t,x) \in (0,\infty)\times\partial\Omega, \tag{1.3c} \]
\[ (u,v)(0,\cdot) = (u^{in}, v^{in}), \qquad x \in \Omega, \tag{1.3d} \]
where $h$ is a continuous function. It can indeed be interesting to take into account cells' division, as well as their death due to the lack of resources.

Notation

We will sometimes denote the spaces $L^p(\Omega)$, $H^1(\Omega)$, and $(H^1(\Omega))'$ by $L^p$, $H^1$, and $(H^1)'$, respectively (with $p \in [1,\infty]$). Furthermore, for $w \in L^p$, we denote the $L^p$ norm of $w$ by $\|w\|_p$. Given $w \in (H^1)'(\Omega)$, we define $\langle w \rangle$ by
\[ \langle w \rangle := \frac{1}{|\Omega|} \langle w, 1 \rangle_{(H^1)',H^1}, \]
and note that $\langle w \rangle = \frac{1}{|\Omega|} \int_\Omega w(x)\,dx$ when $w \in (H^1)'(\Omega) \cap L^1(\Omega)$.
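To make the structure of (1.1) concrete before turning to the analysis, here is a minimal one-dimensional explicit finite-difference sketch (the grid, the time step, and the choice $k = 1$ in the prototype (1.2) are our own illustrative assumptions, not taken from the paper). Since (1.1a) is in divergence form and (1.1c) imposes no-flux boundary conditions, the discrete cell mass is conserved up to rounding, mirroring the conservation of mass exploited below.

```python
import math

def neumann_lap(w, dx):
    # Discrete Laplacian with homogeneous Neumann (reflecting ghost) boundaries.
    n = len(w)
    out = [0.0] * n
    for i in range(n):
        left = w[i - 1] if i > 0 else w[i]
        right = w[i + 1] if i < n - 1 else w[i]
        out[i] = (left - 2.0 * w[i] + right) / dx ** 2
    return out

def step(u, v, dx, dt, eps, gamma):
    # One explicit Euler step of (1.1a)-(1.1b).
    lap_ug = neumann_lap([ui * gamma(vi) for ui, vi in zip(u, v)], dx)
    lap_v = neumann_lap(v, dx)
    u_new = [ui + dt * li for ui, li in zip(u, lap_ug)]
    v_new = [vi + (dt / eps) * (lv - vi + ui) for ui, vi, lv in zip(u, v, lap_v)]
    return u_new, v_new

n, dx, dt, eps = 50, 0.02, 1e-5, 1.0          # illustrative values (ours)
gamma = lambda z: 1.0 / (1.0 + z)             # prototype (1.2) with k = 1
u = [1.0 + 0.5 * math.cos(math.pi * i * dx) for i in range(n)]
v = [1.0] * n
mass0 = sum(u) * dx                           # discrete analogue of the integral of u over Omega
for _ in range(2000):
    u, v = step(u, v, dx, dt, eps, gamma)
mass = sum(u) * dx
print(abs(mass - mass0))  # conserved up to floating-point rounding
```

The reflecting-ghost Laplacian makes the interior fluxes telescope when summed over the grid, which is the discrete counterpart of integrating (1.1a) over $\Omega$ and using the boundary conditions (1.1c).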
For $w \in (H^1)'(\Omega)$ such that $\langle w \rangle = 0$, we introduce $Kw \in H^1(\Omega)$ as the unique (variational) solution to
\[ -\Delta (Kw) = w \ \text{ in } \Omega, \qquad \nabla (Kw) \cdot n = 0 \ \text{ on } \partial\Omega, \qquad \langle Kw \rangle = 0. \tag{1.4} \]
The operator $K$ plays a significant role in the analysis of our system, in particular in view of the specific form of the cross diffusion in (1.1a). Indeed, for a solution $(u,v)$ regular enough, one expects that the conservation of mass holds for $u$, that is, $\langle u \rangle = \langle u^{in} \rangle$, and that consequently (1.1a) can be rewritten as
\[ \partial_t K\big( u - \langle u^{in} \rangle \big) = u\gamma(v) - \langle u\gamma(v) \rangle, \qquad (t,x) \in (0,\infty)\times\Omega. \tag{1.5} \]
For this reason, we choose the following norm on $(H^1)'(\Omega)$:
\[ w \in (H^1)'(\Omega) \mapsto \|w\|_{(H^1)'} := \big\| \nabla K( w - \langle w \rangle ) \big\|_2. \tag{1.6} \]
We note that the norm
\[ w \in (H^1)'(\Omega) \mapsto \big\| \nabla A^{-1} w \big\|_2 + \big\| A^{-1} w - \langle A^{-1} w \rangle \big\|_2 = \big\| \nabla A^{-1} w \big\|_2 + \big\| A^{-1} w - \langle w \rangle \big\|_2 \tag{1.9} \]
is equivalent to the $(H^1)'(\Omega)$-norm defined in (1.6).

Main results

We first propose a definition of very weak solutions associated with problem (1.1):

Definition 1.1. Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^N$, with $N \ge 2$, $\varepsilon > 0$, and $\gamma \in C([0,\infty);(0,\infty))$, and suppose that $u^{in} \in L^1(\Omega)$ and $v^{in} \in L^1(\Omega)$ are nonnegative. Consider functions $u$ and $v$ such that $u \ge 0$ and $v > 0$ a.e. in $(0,\infty)\times\Omega$ and such that, for all $T > 0$,
\[ u \in L^1((0,T)\times\Omega) \ \text{ and } \ v \in L^1((0,T)\times\Omega), \tag{1.10} \]
as well as
\[ u\gamma(v) \in L^1((0,T)\times\Omega). \tag{1.11} \]
Then $(u,v)$ is called a global very weak solution of (1.1) if
\[ - \int_0^\infty \int_\Omega u\, \partial_t \varphi \,dx\,dt - \int_\Omega u^{in} \varphi(0) \,dx = \int_0^\infty \int_\Omega u\gamma(v)\, \Delta\varphi \,dx\,dt, \]
and
\[ - \int_0^\infty \int_\Omega \varepsilon v\, \partial_t \varphi \,dx\,dt - \int_\Omega \varepsilon v^{in} \varphi(0) \,dx = \int_0^\infty \int_\Omega v \big( \Delta\varphi - \varphi \big) \,dx\,dt + \int_0^\infty \int_\Omega u \varphi \,dx\,dt, \]
hold for any $\varphi \in C_0^\infty([0,\infty)\times\Omega)$ such that $\nabla\varphi \cdot n = 0$ on $(0,\infty)\times\partial\Omega$.

We first show that an algebraic growth of $1/\gamma$ at infinity is sufficient to obtain the existence of a global very weak solution to the system (1.1) without imposing any smoothness assumption on $\gamma$.

Theorem 1.2. Let $N \ge 2$ and $\Omega \subset \mathbb{R}^N$ be a bounded domain with smooth boundary, and assume that $\varepsilon > 0$, that $\gamma$ is continuous and bounded on $[0,\infty)$, and that, for some $K_1 > 0$ and $k \ge 0$,
\[ \frac{1}{\gamma(z)} \le K_1 (z+1)^k, \qquad z > 0. \tag{1.12} \]
Then for any choice of $(u^{in}, v^{in})$ such that
\[ u^{in} \in L^{p_0}(\Omega), \ p_0 > N/2, \ \text{is nonnegative and} \ v^{in} \in L^\infty(\Omega) \ \text{is nonnegative}, \tag{1.13} \]
there exists a global very weak solution $(u,v)$ of (1.1) in the sense of Definition 1.1. Furthermore, for all $q \in (1,\infty)$, $p \in (1,2)$, and $T > 0$,
\[ u \in L^\infty((0,\infty);L^1(\Omega)) \cap L^p((0,T)\times\Omega) \cap L^\infty\big( (0,T); (H^1(\Omega))' \big), \]
\[ v \in L^\infty((0,\infty);L^2(\Omega)) \cap L^q((0,T)\times\Omega) \cap L^2((0,T);H^1(\Omega)), \]
\[ u\gamma(v) \in L^2((0,T)\times\Omega). \]
Furthermore, if $v^{in} \in W^{1,q}(\Omega)$ for some $q \in (1,2)$, then $v \in L^\infty((0,T);W^{1,q}(\Omega))$ for all $T > 0$.

Observe that, since $N \ge 2$, one has $N/2 \ge 2N/(N+2)$, so that $L^{p_0}(\Omega)$ is continuously embedded in $(H^1)'(\Omega)$ and it follows from (1.13) that
\[ u^{in} \in (H^1)'(\Omega). \tag{1.14} \]
When $\gamma \in C^3([0,\infty)) \cap L^\infty(0,\infty)$, existence of weak solutions to (1.1) is shown in [14] when $\varepsilon$ is sufficiently small, namely, $\varepsilon \|\gamma\|_{L^\infty(0,\infty)} < 1$. Theorem 1.2 relaxes this condition at the expense of the algebraic growth condition (1.12). Let us recall that existence of weak solutions is also obtained in [22] when $\min\{\gamma\} > 0$ and in [4, 26] when $\gamma(z) = 1/(c + z^k)$ for $c \ge 0$ and $k > 0$ sufficiently small, these conditions being removed in Theorem 1.2 as well. The constraint on $\varepsilon$ required in [14] and the algebraic growth (1.12) assumed in Theorem 1.2 are likely to be of a technical nature, as global existence of weak solutions is proved in [3] in the particular case $\gamma(z) = e^{-z}$. As for classical solutions, well-posedness in that setting is shown in [7, 8, 9, 10, 12, 18], provided that $\gamma' \le 0$ with $\gamma(z) \to 0$ as $z \to \infty$.

Remark 1.3. If $v^{in}$ is bounded below by a strictly positive constant, then it is easy to see that the same will remain true for $v(t,\cdot)$, with a constant that decays exponentially fast with time. The behavior of $\gamma$ at zero is then irrelevant, and one can relax the assumption that $\gamma$ is continuous on $[0,\infty)$ and replace it with continuity on $(0,\infty)$ only. This is true also for the next existence theorems.

Remark 1.4.
Since $u$ lies in $L^p((0,T)\times\Omega)$ for $p \in (1,2)$, $v$ is actually a strong solution to (1.1b): each term lies in $L^p((0,T)\times\Omega)$ for all $T > 0$ and the equation holds almost everywhere on $(0,\infty)\times\Omega$. One can furthermore show that the formula (1.5) holds in a strong sense: thanks to the boundedness of $\gamma$, each term lies in $L^2((0,T)\times\Omega)$ for all $T > 0$ and the equation holds almost everywhere on $(0,\infty)\times\Omega$.

We next turn to the large time behavior of solutions to (1.1). While a complete description of the dynamics for an arbitrary motility function $\gamma$ seems to be out of reach, it is shown in [1] that, when $\varepsilon = 0$ and $\gamma(z) = z^{-k}$ for some $k \in (0,1]$, solutions converge to spatially homogeneous steady states. A key ingredient in their proof is the construction of a Lyapunov functional, but this property breaks down when $\varepsilon > 0$. Nevertheless, we are able to prove that the system (1.1) admits a Lyapunov functional, which is different from that constructed in [1] but applies to the same class of motilities, and actually requires extra conditions on the monotonicity of $\gamma$, as described now.

Theorem 1.5. Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^N$, with $N \ge 2$. Assume that $\varepsilon > 0$, that $\gamma \in C([0,\infty)) \cap C^3((0,\infty))$ is positive, and that
\[ \gamma' \le 0, \qquad \big( z \mapsto z\gamma(z) \big)' \ge 0. \tag{1.15} \]
Consider nonnegative initial conditions $(u^{in}, v^{in}) \in W^{1,r}(\Omega;\mathbb{R}^2)$ for some $r > N$ and denote the corresponding global classical solution to (1.1) by $(u,v)$ [9, 10]. Setting $m := \langle u^{in} \rangle$, we define $G_0 \in C^1([0,\infty)) \cap C^4((0,\infty))$ by
\[ G_0'(z) := 2z\gamma(z) - m\gamma(z) - m\gamma(m), \quad z \ge 0, \qquad G_0(m) = 0. \tag{1.16} \]
Then $G_0$ is nonnegative and convex on $(0,\infty)$, and
\[ \frac{d}{dt} L_0(u(t),v(t)) + D_0(u(t),v(t)) = 0, \qquad t > 0, \tag{1.17} \]
where
\[ L_0(u,v) := \frac{1}{2} \big\| \nabla K(u - m) \big\|_2^2 + \varepsilon \int_\Omega G_0(v) \,dx \ge 0, \tag{1.18} \]
and
\[ D_0(u,v) := \int_\Omega G_0''(v) |\nabla v|^2 \,dx + \int_\Omega (u-v)^2 \gamma(v) \,dx + \int_\Omega (v-m)\big( v\gamma(v) - m\gamma(m) \big) \,dx \ge 0. \tag{1.19} \]
Moreover,
\[ \sup_{t \ge 0} \big\{ L_0(u(t),v(t)) + \|v(t)\|_{H^1}^2 \big\} + \int_0^\infty \big( D_0(u(s),v(s)) + \varepsilon \|\partial_t v(s)\|_2^2 \big) \,ds < \infty, \tag{1.20} \]
and
\[ \lim_{t\to\infty} \big\{ \| K(u(t) - m) \|_2 + \| v(t) - m \|_2 \big\} = 0. \]

The construction of the Lyapunov functional $L_0$ and its consequence with respect to the long-term behavior are actually the main contribution of Theorem 1.5, the existence and uniqueness of a global classical solution to (1.1) being granted by [9, Theorem 1.1] and [10, Remark 1.5]. As in [1], which is devoted to the parabolic-elliptic version of (1.1) corresponding to $\varepsilon = 0$, Theorem 1.5 applies to $\gamma(z) = z^{-k}$, $k \in (0,1]$, and we have thus constructed a Lyapunov functional in that case. A side remark is that pattern formation is excluded by Theorem 1.5, which is consistent with the outcome of [5], where the formation of stripes is observed for a motility $\gamma$ with a very fast decay at infinity. The following observation on the existence of nonconstant steady states indicates that the choice $k = 1$ in fact even corresponds to a critical nonlinearity in the family of such algebraic motility rates:

Proposition 1.6. Let $N \ge 2$, $k \in \big( 1, \frac{N+2}{(N-2)_+} \big)$ and $\gamma(z) := z^{-k}$ for $z > 0$. Then given any smooth bounded domain $\Omega_0 \subset \mathbb{R}^N$, one can find $R_0 > 0$ such that whenever $R > R_0$, defining $\Omega := R\Omega_0 = \{ Rx : x \in \Omega_0 \}$, there are positive nonconstant functions $u \in C^2(\overline{\Omega})$ and $v \in C^2(\overline{\Omega})$ satisfying
\[ \begin{cases} 0 = \Delta\big( u\gamma(v) \big) & \text{in } \Omega, \\ 0 = \Delta v - v + u & \text{in } \Omega, \\ 0 = \nabla u \cdot n = \nabla v \cdot n & \text{on } \partial\Omega. \end{cases} \]
Proposition 1.6 holds also in space dimension one, $N = 1$. We refer to [24] for a more complete description of steady states in that case.

In the final part of this manuscript, we aim at making sure that, in the presence of additional zero-order dissipative mechanisms in the flavor of logistic-type source and degradation terms, global solutions can be constructed without any substantial restriction on the strength of degeneracies in cell diffusion at large values of the signal. We work in a framework somewhat less relaxed than that considered above.
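The nonnegativity and convexity of $G_0$ claimed in Theorem 1.5 are easy to probe numerically for the motilities $\gamma(z) = z^{-k}$, $k \in (0,1]$, covered by (1.15). The sketch below (the values of $k$ and $m$ are arbitrary choices of ours, for illustration only) builds $G_0$ from (1.16) by quadrature, starting from $G_0(m) = 0$, and samples $G_0$ and $G_0''$ on a grid.

```python
k, m = 0.5, 1.0                 # sample parameters (ours); gamma' <= 0 and (z*gamma)' >= 0 hold
gamma = lambda z: z ** (-k)

def G0_prime(z):
    # (1.16): G_0'(z) = 2 z gamma(z) - m gamma(z) - m gamma(m)
    return 2.0 * z * gamma(z) - m * gamma(z) - m * gamma(m)

def G0(z, steps=2000):
    # G_0(m) = 0, so integrate G_0' from m to z (trapezoidal rule; h < 0 when z < m).
    h = (z - m) / steps
    if h == 0.0:
        return 0.0
    s = 0.5 * (G0_prime(m) + G0_prime(z))
    for i in range(1, steps):
        s += G0_prime(m + i * h)
    return s * h

def G0_second(z, h=1e-5):
    # Central difference of G_0'; analytically G_0''(z) = 2 gamma(z) + (2z - m) gamma'(z).
    return (G0_prime(z + h) - G0_prime(z - h)) / (2.0 * h)

zs = [0.1 + 0.05 * i for i in range(80)]
print(min(G0(z) for z in zs), min(G0_second(z) for z in zs))  # both nonnegative up to rounding
```

Analytically, (1.15) gives $G_0''(z) = 2\big( \gamma(z) + z\gamma'(z) \big) - m\gamma'(z) \ge 0$, and since $G_0'(m) = 0$ by (1.16), the convex function $G_0$ attains its minimum value $G_0(m) = 0$ at $z = m$, whence $G_0 \ge 0$.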
Typically, only one integration by parts is performed, so that the solutions considered here are sometimes called "weak solutions" instead of "very weak solutions". Definition 1.7. Let Ω be a smooth bounded domain of R N , with N ≥ 2, and ε > 0, γ ∈ C((0, ∞)) and h ∈ C([0, ∞)), and suppose that u in ∈ L 1 (Ω) and v in ∈ L 1 (Ω) are nonnegative. We then call a pair (u, v) of functions such that, for all T > 0, u ∈ L 1 ((0, T ) × Ω) and v ∈ L 1 ((0, T ); W 1,1 (Ω)), (1.21) a global weak solution of (1.3) if u ≥ 0 and v > 0 a.e. in (0, ∞) × Ω, if, for all T > 0, u h(u) ∈ L 1 ((0, T ) × Ω) and uγ(v) ∈ L 1 ((0, T ); W 1,1 (Ω)),(1.22) and if − ∞ 0 Ω u∂ t ϕ dx dt − Ω u in ϕ(0) dx = ∞ 0 Ω ∇ uγ(v) · ∇ϕ dx dt + ∞ 0 Ω u h(u) dx dt (1.23) as well as − ∞ 0 Ω εv∂ t ϕ dx dt − Ω εv in ϕ(0) dx = − ∞ 0 Ω ∇v · ∇ϕ dx dt + ∞ 0 Ω (−v + u)ϕ dx dt (1.24) hold for any ϕ ∈ C ∞ 0 ([0, ∞) × Ω). Our analysis in this direction will be based on a strategy quite independent from that pursued in the previous parts, focusing on the detection of entropy-like features enjoyed by functionals of the form Ω u ln(u + e) + ε|∇v| 2 dx. Accordingly, in its most straightforward version detailed in Lemma 4.1, this approach will presuppose regularity properties of u in and especially of v in which go somewhat beyond those from (1.13). In view of our principal intention described above, we will refrain from scrutinizing minimal requirements on initial regularity, and rather formulate our main result in this respect in the following form conveniently accessible to a fairly compact analysis: Theorem 1.8. Let N ≥ 2 and Ω ⊂ R N be a bounded domain with smooth boundary, and suppose that ε > 0, that γ ∈ C 3 ((0, ∞)) is such that γ > 0 in [0, ∞) and sup s>s 0 γ(s) + sγ ′2 (s) γ(s) < ∞ for all s 0 > 0, (1.25) and that h ∈ C([0, ∞)) satisfies lim s→∞ h(s) ln s s = −∞ . 
(1.26)
Then for any choice of u^{in} ∈ C(Ω) and v^{in} ∈ W^{1,∞}(Ω) such that u^{in} ≥ 0 and v^{in} > 0 in Ω, the problem (1.3) possesses at least one global weak solution in the sense of Definition 1.7. This solution has the additional properties that, for all T > 0,
\[
u \in L^\infty((0,T); L\log L(\Omega)) \cap L^2((0,T); L^2(\Omega)) \quad\text{and}\quad v \in L^\infty((0,T); H^1(\Omega)) \cap L^2((0,T); H^2(\Omega)) \tag{1.27}
\]
as well as
\[
u\,\gamma(v) \in L^{4/3}((0,T); W^{1,4/3}(\Omega)). \tag{1.28}
\]
The existence of classical bounded solutions to (1.3) is proven for growth functions h(z) = h_0(1 − z^l) when l = 1 and h_0 is large enough in [16,23], and when h_0 > 0 and l > max(2, (N+2)/2) in [17] (under additional assumptions on γ). The case where l = 1 and h_0 > 0 is treated in two-dimensional domains in [11]. Here, we obtain weak solutions for a growth function of this form with l ≥ 1 and h_0 > 0.

The structure of the paper is the following: Section 2 is devoted to the proof of Theorem 1.2. Then Theorem 1.5 and Proposition 1.6 are proven in Section 3. Finally, the results on system (1.3) are discussed in Section 4.

Existence in the absence of logistic-type growth

Let ψ ∈ C_0^∞(R) be such that ψ ≥ 0, supp ψ ⊂ (−1, 1), and ‖ψ‖_{L^1(R)} = 1, and let, for η > 0,
\[
\psi_\eta(z) := \frac{1}{\eta}\,\psi\Bigl(\frac{z}{\eta}\Bigr), \qquad z \in [0,\infty) .
\]
For η ∈ (0, 1), we define
\[
\gamma_\eta(z) := \eta + (\psi_\eta * \gamma)(z + \eta), \qquad z \in [0,\infty) ,
\]
where the symbol * indicates the convolution product on R (γ being extended to R by symmetry). We first obtain some properties of γ_η.

Lemma 2.1. For all η ∈ (0, 1), the function γ_η satisfies
\[
0 < \eta \le \gamma_\eta(z) \le K_0 + 1, \qquad z \in [0,\infty), \tag{2.1}
\]
\[
\frac{1}{\gamma_\eta(z)} \le 3^k K_1 (z+1)^k, \qquad z \in [0,\infty), \tag{2.2}
\]
where K_0 = ‖γ‖_{L^∞(0,∞)} and K_1 and k are defined in (1.12).

Proof. We first see that, thanks to the nonnegativity of γ and ψ, we have γ_η ≥ η and
\[
\|\gamma_\eta\|_{L^\infty(0,\infty)} \le \|\gamma\|_{L^\infty(0,\infty)}\,\|\psi_\eta\|_{L^1(\mathbb{R})} + \eta = K_0 + \eta \le K_0 + 1 .
\]
Then, for z > 0, we compute, using (1.12), γ η (z) = η + η −η ψ η (z)γ(z + η −z) dz ≥ η + η −η ψ η (z) K 1 (1 + z + η −z) k dz ≥ η + η −η ψ η (z) K 1 (1 + z + 2η) k dz ≥ η + 1 K 1 (1 + 2η) k (1 + z) k ≥ 1 3 k K 1 (1 + z) k , and the proof is complete. Next, let (u in η , v in η ) η be nonnegative functions in C(Ω) × W 1,∞ (Ω) such that u in η = u in =: m , v in η = v in , (2.3) and lim η→0 u in η − u in (H 1 ) ′ + u in η − u in p 0 + v in η − v in 2 = 0 , sup η u in η p 0 ≤ 1 + u in p 0 , sup η v in η ∞ ≤ 1 + v in ∞ . (2.4) Moreover, if v in ∈ W 1,q (Ω) for some q ∈ (1, 2), then (v in η ) η can be constructed so as to satisfy sup η v in η W 1,q ≤ 1 + v in W 1,q . (2.5) Thanks to the regularity of γ η , u in η , and v in η , we are in a position to apply [22, Theorem 1.2] to obtain the existence of a nonnegative global weak solution (u η , v η ) to the initial value problem ∂ t u η − ∆(u η γ η (v η )) = 0 , (t, x) ∈ (0, ∞) × Ω , (2.6a) ε∂ t v η − ∆v η + v η = u η , (t, x) ∈ (0, ∞) × Ω , (2.6b) ∇(u η γ η (v η )) · n = ∇v η · n = 0 , (t, x) ∈ (0, ∞) × ∂Ω , (2.6c) (u η , v η )(0, ·) = (u in η , v in η ) , x ∈ Ω , (2.6d) satisfying u η ∈ L 2 ((0, T ) × Ω) ∩ L (N +2)/(N +1) ((0, T ); W 1,(N +2)/(N +1) (Ω)) , (2.7) v η ∈ L ∞ ((0, T ); H 1 (Ω)) ∩ L 2 ((0, T ); H 2 (Ω)) , (2.8) for any T > 0. As a first consequence of (2.3) and (2.6), we identify the time evolution of the space averages of u η and v η . Lemma 2.2. For t ≥ 0, u η (t) = m , v η (t) = v in e −t/ε + m(1 − e −t/ε ) ≤ max{ v in , m} . (2.9) Proof. The first identity in (2.9) readily follows from (2.3), (2.6a), and (2.6c), after integration over Ω. We next integrate (2.6b) over Ω and use (2.3) and (2.6c) to obtain ε d dt v η 1 + v η 1 = u η 1 = m|Ω| , t ≥ 0 . (2.10) Integrating (2.10) completes the proof of Lemma 2.2. 
We define then w η = w η (t, x) as the unique nonnegative solution of the elliptic equation (in the x variable, for a given t) −∆w η + w η = u η , (t, x) ∈ (0, ∞) × Ω , (2.11a) ∇w η · n = 0 , (t, x) ∈ (0, ∞) × ∂Ω . (2.11b) Thanks to this auxiliary problem, we have the following lemma. Lemma 2.3. For t ∈ [0, T ], there exists some constant C(T ) ≥ 0 such that u η (t) 2 (H 1 ) ′ + ||v η (t)|| 2 2 + t 0 ||v η (s)|| 2 H 1 ds + t 0 Ω u η (s, ·) 2 γ η (v η (s, ·)) dx ds ≤ C(T ). (2.12) Proof. We observe that, by (2.1), (2.6a), and (2.11), 1 2 d dt ||w η || 2 H 1 = Ω w η ∆(u η γ η (v η )) dx = Ω u η γ η (v η ) (w η − u η ) dx ≤ (K 0 + 1) ||w η || 2 H 1 − Ω u 2 η γ η (v η ) dx . (2.13) Also, it follows from (2.6b) that ε 2 d dt ||v η || 2 2 + ||v η || 2 H 1 = Ω u η v η dx = Ω v η w η + ∇v η · ∇w η dx ≤ 1 2 ||v η || 2 H 1 + 1 2 ||w η || 2 H 1 ,(2.14) so that d dt 1 2 ||w η || 2 H 1 + ε ||v η || 2 2 + ||v η || 2 H 1 + Ω u 2 η γ η (v η ) dx ≤ (2 + K 0 ) ||w η || 2 H 1 . (2.15) After integration with respect to time, for all T > 0, there exists some constant C(T ) ≥ 0 (we emphasize the dependence with respect to T , but it also depends on the parameters of the problem but not on η ∈ (0, 1)) such that ||w η (t)|| 2 H 1 + ||v η (t)|| 2 2 + t 0 ||v η (s)|| 2 H 1 ds + t 0 Ω u η (s, ·) 2 γ η (v η (s, ·)) dx ds ≤ C(T ) , t ∈ [0, T ] . Recalling the definition of A −1 in (1.7), we note that w η = A −1 u η , and conclude the proof of the lemma by equivalence of the norms (1.6) and (1.9). Building upon (2.12), we derive additional estimates on (v η ) η with the help of a comparison argument introduced in [6] (and subsequently developed further in [7][8][9][10]14,18,19]) and parabolic maximal regularity. Proof. 
We start with the comparison argument introduced in [6] and deduce from (2.1), (2.6), (2.11), and the definition (1.7) of A that, for t ≥ 0, A ∂ t w η (t) + u η (t)γ η (v η (t)) = u η (t)γ η (v η (t)) ≤ (1 + K 0 )u η (t) = A (1 + K 0 )w η (t) in Ω , with ∇ ∂ t w η (t) + u η (t)γ η (v η (t)) − (1 + K 0 )w η (t) · n = 0 on ∂Ω. We then infer from the (elliptic) comparison principle that ∂ t w η + u η γ η (v η ) ≤ (1 + K 0 )w η in [0, ∞) × Ω . Hence, since u η and γ η are nonnegative, 0 ≤ w η (t, x) ≤ e (1+K 0 )t w η (0, x) , (t, x) ∈ [0, ∞) × Ω . In addition, recalling that p 0 > N/2, the continuous embedding of W 2,p 0 in L ∞ (Ω), elliptic regularity, and (2.4) imply that w η (0) ∞ ≤ C A −1 (u in η ) W 2,p 0 ≤ C u in η p 0 ≤ C . Combining the above two estimates leads us to sup t∈[0,T ] w η (t) ∞ ≤ C(T ) (2.17) for each T > 0. We next define z η := A −1 v η and deduce from (2.6) that z η solves ε∂ t z η − ∆z η + z η = w η , (t, x) ∈ (0, ∞) × Ω , ∇z η · n = 0 , (t, x) ∈ (0, ∞) × ∂Ω , z η (0, ·) = z in η := A −1 v in η , x ∈ Ω ,(2.18) Now, let q ∈ (1, ∞). We recall that the operator A ε defined by D(A ε ) := φ ∈ W 2,q (Ω) : ∇φ · n = 0 on ∂Ω , A ε φ := 1 ε − ∆φ + φ , φ ∈ D(A ε ) ,(2.19) generates an analytic semigroup e −tAε t≥0 of contractions on L q (Ω) [20, Theorem 7.3.5] (note that A 1 = A, see (1.8)). With this notation, a representation formula for z η can be derived from (2.18) which reads εz η (t) = εe −tAε z in η + t 0 e −(t−s)Aε w η (s) ds , t ≥ 0 . (2.20) On the one hand, we infer from (2.17) and [13, Théorème 1] that t → On the other hand, classical properties of semigroups, elliptic regularity, and (2.4) entail that e −tAε z in η L q ((0,T );W 2,q (Ω)) ≤ C(T, q) z in η W 2,q ≤ C(T, q) v in η q ≤ C(T, q) . Consequently, ε z η L q ((0,T );W 2,q (Ω)) ≤ C(T, q) , an estimate which completes the proof since v η = Az η . We next turn to (u η ) η and draw the following consequence of Lemma 2.3 and Lemma 2.4. Proof. 
By (2.2) and Hölder's inequality, u η p p = Ω u η γ η (v η ) p γ η (v η ) −p/2 dx ≤ Ω u 2 η γ η (v η ) dx p/2 Ω γ η (v η ) −p/(2−p) dx (2−p)/2 ≤ 3 k K 1 u η γ η (v η ) p 2 Ω (1 + v η ) pk/(2−p) dx (2−p)/2 ≤ C(p) u η γ η (v η ) p 2 1 + v η pk/2 pk/(2−p) . Integrating the above inequality with respect to time over (0, T ) and using Hölder's inequality gives T 0 u η p p dt ≤ C(p) T 0 u η γ η (v η ) p 2 1 + v η pk/2 pk/(2−p) dt ≤ C(p) T 0 u η γ η (v η ) 2 2 dt p/2 T 0 1 + v η pk/2 pk/(2−p) 2 (2−p) dt (2−p)/2 ≤ C(p) T 0 u η γ η (v η ) 2 2 dt p/2 T 0 1 + v η pk/(2−p) pk/(2−p) dt (2−p)/2 . Lemma 2.5 then readily follows from the above estimate due to (2.12) and Lemma 2.4 (with q = pk/(2 − p)). Exploiting the outcome of Lemma 2.5 provides additional estimates on (v η ) η . Lemma 2.6. Let q ∈ (1, 2) and T > 0 and assume that v in ∈ W 1,q (Ω). Then there exists some constant C(T, q) ≥ 0 such that sup t∈[0,T ] v η (t) W 1,q ≤ C(T, q) . Proof. Recalling the notation introduced in (2.19), it follows from (2.6) that εv η (t) = εe −tAε v in η + t 0 e −(t−s)Aε u η (s) ds , t ≥ 0 . Using classical properties of the semigroup (e −tAε) ) t≥0 , see [2, V.Theorem 2.1.3] for instance, along with (2.5), we obtain ε v η (t) W 1,q ≤ C(q) + C(q) t 0 (t − s) −1/2 u η (s) q ds , t ≥ 0 . (2.22) We next fix p ∈ (q, 2) and set ω := 1 + q(p − 1) p(q − 1) ∈ 2, 2q(p − 1) p(q − 1) . We infer from (2.3) and Hölder's inequality that u η q ≤ u η p(q−1) q(p−1) p u η p−q q(p−1) 1 ≤ C(p, q) u η 1 ω−1 p . Together with Hölder's inequality, the above inequality ensures that, for t > 0, t 0 (t − s) −1/2 u η (s) q ds ≤ t 0 (t − s) − ω 2(ω−1) ds (ω−1)/ω t 0 u η (s) ω q ds 1/ω ≤ C(p, q)t ω−2 2(ω−1) t 0 u η (s) ω ω−1 p ds 1/ω . Since both ω/(ω − 1) and p lie in (1, 2), we deduce from Lemma 2.5 that, for T > 0 and t ∈ [0, T ], t 0 (t − s) −1/2 u η (s) q ds ≤ C(T, p, q) . Inserting the above estimate in (2.22) completes the proof. 
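For the reader's convenience, the exponent bookkeeping behind the interpolation inequality used in the proof of Lemma 2.6 can be spelled out as follows; the intermediate exponent θ below is auxiliary notation introduced here, not used elsewhere in the text.

```latex
% Exponent bookkeeping for the interpolation step in the proof of Lemma 2.6.
% With p \in (q,2) and \omega := 1 + \frac{q(p-1)}{p(q-1)}, the Hölder
% interpolation condition  \frac{1}{q} = \frac{\theta}{p} + (1-\theta)  determines
\[
  \theta = \frac{p(q-1)}{q(p-1)} = \frac{1}{\omega - 1},
  \qquad
  1 - \theta = \frac{p-q}{q(p-1)},
\]
% so that
\[
  \|u_\eta\|_q
    \le \|u_\eta\|_p^{\theta}\,\|u_\eta\|_1^{1-\theta}
    = \|u_\eta\|_p^{1/(\omega-1)}\,\|u_\eta\|_1^{(p-q)/(q(p-1))}.
\]
% Note that p > q forces \omega > 2, whence \omega/(\omega-1) \in (1,2),
% which is exactly what allows Lemma 2.5 to be applied in the final step.
```

This verification is routine, but it explains both the choice of ω and the restriction p ∈ (q, 2) in the proof.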
Compactness and convergence We now collect the estimates that are uniform with respect to η ∈ (0, 1), which will prove useful when passing to the limit η → 0. Proposition 2.7. Let T > 0 , q ∈ (1, ∞), and p ∈ (2N/(N + 2), 2). There are C 0 (T ) > 0, C 1 (T, q) > 0, and C 2 (T, p) > 0 such that u η (t) = m , 0 ≤ v η (t) ≤ max{ v in , m} , t ∈ [0, T ] , (2.23a) ∇K(u η (t) − m) 2 2 ≤ 2C 0 (T ) , t ∈ [0, T ] , (2.23b) v η (t) 2 2 ≤ C 0 (T ) , t ∈ [0, T ] , (2.23c) T 0 v η (s) 2 H 1 ds ≤ C 0 (T ) , (2.23d) T 0 (u η (s) − v η (s)) γ η (v η (s)) 2 2 ds ≤ C 0 (T ) , (2.23e) T 0 u η (s) γ η (v η (s)) 2 2 ds ≤ C 0 (T ) , (2.23f) T 0 ∂ t K(u η (s) − m) 2 2 ds ≤ C 0 (T ) , (2.23g) T 0 v η (s) q q ds ≤ C 1 (T, q) , (2.23h) T 0 u η (s) p p ds ≤ C 2 (T, p) , (2.23i) T 0 ∂ t v η (s) p (H 1 ) ′ ds ≤ C 2 (T, p∂ t K(u η − m) + u η γ η (v η ) = u η γ η (v η ) , (t, x) ∈ (0, ∞) × Ω ,(2.24) so that estimate (2.23g) is a consequence of (2.23f) and the (uniform) upper bound (2.1) on γ η . It next follows from (2.6b), Hölder's inequality, and the continuous embedding of H 1 (Ω) in L p/(p−1) (Ω) that, for ψ ∈ H 1 (Ω), ε ∂ t v η , ψ (H 1 ) ′ ,H 1 = Ω ∇v η · ∇ψ dx + Ω v η ψ dx − Ω u η ψ dx ≤ v η H 1 ψ H 1 + u η p ψ p/(p−1) ≤ C(p) ( v η H 1 + u η p ) ψ H 1 . Estimate (2.23j) is then a consequence of (2.23d), (2.23i), and the above inequality by a duality argument. Finally, estimate (2.23e) is obtained thanks to (2.23c), (2.23f), and the upper bound (2.1) on γ η . We are now ready to pass to the limit η → 0 and therefore prove the existence of a global weak solution to the system (1.1). End of the proof of Theorem 1.2. 
Thanks to the uniform estimates collected in Proposition 2.7, we can extract a sequence (u ηn , v ηn ) n≥1 such that, for all T > 0, q ∈ (1, ∞), and p ∈ (2N/(N + 2), 2), u ηn ⇀ u in L p ((0, T ) × Ω), (2.25a) K(u ηn − m) * ⇀ K(u − u ) in L ∞ ((0, T ); H 1 (Ω)), (2.25b) ∂ t K(u ηn − m) ⇀ ∂ t K(u − u ) in L 2 ((0, T ) × Ω)), (2.25c) v ηn ⇀ v in L 2 ((0, T ); H 1 (Ω)) ∩ L q ((0, T ) × Ω), (2.25d) v ηn * ⇀ v in L ∞ ((0, T ); L 2 (Ω)), (2.25e) ∂ t v ηn ⇀ ∂ t v in L p ((0, T ); (H 1 ) ′ (Ω). (2.25f) Thanks to (2.23b), (2.23c), (2.23d), (2.23g), and (2.23j), we can furthermore apply Aubin-Lions-Simon Theorem (see [21,Corollary 4]), and extract further subsequences (K(u ηn − m)) n≥1 and (v ηn ) n≥1 that converges in a strong sense, T ) × Ω) and a.e. in (0, T ) × Ω . K(u ηn − m) → K(u − u ) in C([0, T ]; L 2 (Ω)), (2.26) v ηn → v in C([0, T ]; (H 1 ) ′ (Ω)) ∩ L 2 ((0, (2.27) Next, since ∂ t u ηn = ∆(u ηn γ ηn (v ηn )) by (2.6a), we infer from Lemma 2.1 and (2.23f) that ∂ t u ηn n≥1 is bounded in L 2 ((0, T ); (H 2 ) ′ (Ω)), while (2.23b) guarantees that (u ηn ) n≥1 is bounded in L ∞ ((0, T ); (H 1 ) ′ (Ω)). Another application of [21,Corollary 4] implies that, after possibly extracting another subsequence, u ηn → u in C([0, T ]; (H 2 ) ′ (Ω)). (2.28) Since x → 1 belongs to H 2 (Ω), an immediate consequence of (2.23a) and (2.28) is Recalling (2.25a)-(2.25f) and (2.31), we have thus shown that (u, v) satisfies the regularity properties required in Theorem 1.2. We next identify the weak limits of u ηn γ ηn (v ηn ) n≥1 and u ηn γ ηn (v ηn ) n≥1 . To this end, we note that the uniform convergence of (γ ηn ) n≥1 to γ on compact subsets of [0, ∞), the bound from Lemma 2.1, the a.e. convergence (2.27) of (v ηn ) n≥1 , and Lebesgue's convergence theorem imply that, for all q ∈ [1, ∞), T ) × Ω) and a.e. in (0, T ) × Ω . 
u(t), 1 (H 2 ) ′ ,H 2 = lim n→∞ u ηn (t), 1 (H 2 ) ′ ,H 2 = |Ω| lim n→∞ u ηn (t) = m|Ω| , t ∈ [0, T ] .γ ηn (v ηn ) → γ(v) in L q ((0, (2.32) It then follows from (2.25a) and (2.32) that u ηn γ ηn (v ηn ) ⇀ u γ(v) in L 1 ((0, T ) × Ω) , u ηn γ ηn (v ηn ) ⇀ uγ(v) in L 1 ((0, T ) × Ω) . Since these two sequences are bounded in L 2 ((0, T ) × Ω) by Lemma 2.1 and (2.23f), we conclude that u ηn γ ηn (v ηn ) ⇀ u γ(v) in L 2 ((0, T ) × Ω) , u ηn γ ηn (v ηn ) ⇀ uγ(v) in L 2 ((0, T ) × Ω) . (2.33) Writing now a very weak formulation of system (2.6) like in Definition 1.1, we can pass to the limit when η → 0 and get that (u, v) indeed are very weak solutions of system (1.1) in the sense of Definition 1.1. Finally, assuming additionally that v in ∈ W 1,q (Ω) for some q ∈ (1, 2), so that the family (v in η ) satisfies (2.5), we infer from (2.27) and Lemma 2.6 that v ∈ L ∞ ((0, T ); W 1,q (Ω)), which completes the proof. Long-term behavior and a Lyapunov functional This section is devoted to the existence of a Lyapunov functional and the proof of Theorem 1.5. Let Ω be a smooth bounded domain of R N , with N ≥ 2 and ε > 0. Assume that γ ∈ C([0, ∞)) ∩ C 3 ((0, ∞)) satisfies (1.15) and consider nonnegative initial conditions (u in , v in ) ∈ W 1,r (Ω; R 2 ) for some r > N . It then follows from [9,10] that there is a unique global classical solution (u, v) to (1.1). Setting m := u in , it readily follows from (1.1) and the nonnegativity of (u, v) that u(t) 1 = |Ω| u(t) = m|Ω| , t ≥ 0 , (3.1a) and ε d dt v(t) 1 + v 1 = m|Ω| , t ≥ 0 , (3.1b) from which we deduce that v(t) 1 = u in 1 1 − e −t/ε + v in 1 e −t/ε ≤ max u in 1 , v in 1 , t ≥ 0 . (3.1c) A Lyapunov functional d dt L 0 (u(t), v(t)) + D 0 (u(t), v(t)) = 0 , t > 0 , (3.2) recalling that L 0 and D 0 are both nonnegative and defined in (1.18) and (1.19), respectively. In particular, L 0 (u(t), v(t)) + t 0 D 0 (u(s), v(s)) ds ≤ L 0 (u in , v in ) , t ≥ 0 . (3.3) Proof of Lemma 3.1. 
Since G ′′ 0 (z) = 2zγ ′ (z) + 2γ(z) − mγ ′ (z) ≥ 0 for z > 0 by (1.15) and (1.16), the function G 0 is convex on (0, ∞). We then deduce from the convexity of G 0 and (1.16) that G 0 (z) ≥ G 0 (m) + G ′ 0 (m)(z − m) = 0; that is, G 0 is nonnegative on (0, ∞). We next infer from (1.1a), (1.4), and (3.1b) that 1b) and (1.1c), 1 2 d dt ∇K(u − m) 2 2 = − Ω K(u − m)∂ t ∆K(u − m) dx = Ω K(u − m)∂ t u dx = Ω K(u − m)∆(uγ(v)) dx = Ω uγ(v)∆K(u − m) dx = Ω (mu − u 2 )γ(v) dx = Ω (mu + v 2 − 2uv)γ(v) dx − Ω (u − v) 2 γ(v) dx .Ω (m − 2v)uγ(v) dx = Ω (m − 2v)(ε∂ t v − ∆v + v)γ(v) dx = −ε Ω (mγ(m) + G ′ 0 (v))∂ t v dx − Ω G ′′ 0 (v)|∇v| 2 dx + Ω (mv − 2v 2 )γ(v) dx . Combining the above identity with (3.4) gives 1 2 d dt ∇K(u − m) 2 2 = −ε d dt Ω (mγ(m)v + G 0 (v)) dx − Ω (u − v) 2 γ(v) dx − Ω G ′′ 0 (v)|∇v| 2 dx + Ω (mv − v 2 )γ(v) dx , while we deduce from (3.1b) that −εmγ(m) d dt v 1 − mγ(m) v 1 = −m 2 γ(m)|Ω| . Adding the previous two formulas leads us to 1 2 d dt ∇K(u − m) 2 2 + ε d dt Ω G 0 (v) dx = − Ω (u − v) 2 γ(v) dx − Ω G ′′ 0 (v)|∇v| 2 dx − Ω (v − m)(vγ(v) − mγ(m)) dx , and we have proved (3.2). Now, the nonnegativity of L 0 and D 0 is a consequence of the already established nonnegativity and convexity of G 0 and the monotonicity of z → zγ(z) which is due to (1.15). Finally, the bound (3.3) readily follows from (3.2) after integration with respect to time. We next supplement the bound (3.3) with additional estimates on v. Lemma 3.2. sup t≥0 v(t) 2 H 1 + ε ∞ 0 ∂ t v(s) 2 2 ds < ∞ ,(3. 5) Proof. We multiply (1.1b) by ∂ t v and integrate over Ω to obtain ε ∂ t v 2 2 + 1 2 d dt v 2 H 1 = Ω u∂ t v dx = d dt Ω uv dx − Ω v∂ t u dx . Then, using again (1.1), ε ∂ t v 2 2 + d dt v 2 H 1 2 − uv 1 = − Ω uγ(v)∆v dx = − Ω (u − v)γ(v)∆v dx − Ω vγ(v)∆v dx . (3.6) On the one hand, by (1.1c), (1.15), and the nonnegativity of m, − Ω vγ(v)∆v dx = Ω vγ ′ (v) + γ(v) |∇v| 2 dx = 1 2 Ω G ′′ 0 (v) + mγ ′ (v) |∇v| 2 dx ≤ 1 2 Ω G ′′ 0 (v)|∇v| 2 dx . 
(3.7) On the other hand, we infer from (1.1b), (1.15), and Young's inequality that − Ω (u − v)γ(v)∆v dx = Ω (u − v)γ(v) (u − v − ε∂ t v) dx ≤ Ω (u − v) 2 γ(v) dx + ε 2 ∂ t v 2 2 + ε 2 γ(0) Ω (u − v) 2 γ(v) dx ≤ 2 + εγ(0) 2 Ω (u − v) 2 γ(v) dx + ε 2 ∂ t v 2 2 . (3.8) Recalling the definition (1.19) of D 0 which is the sum of three nonnegative terms, it follows from (3.6), (3.7), and (3.8) that ε 2 ∂ t v 2 2 + d dt v 2 H 1 2 − uv 1 ≤ 2 + εγ(0) 2 D 0 (u, v) . Now, let t > 0. Integrating the above differential inequality with respect to time over (0, t) and using (3.3) give ε t 0 ∂ t v(s) 2 2 ds + v(t) 2 H 1 ≤ v in 2 H 1 + 2 u(t)v(t) 1 + (2 + εγ(0)) t 0 D 0 (u(s), v(s)) ds ≤ v in 2 H 1 + (2 + εγ(0))L 0 (u in , v in ) + 2 u(t)v(t) 1 .u(t)v(t) 1 = Ω (u(t) − m)v(t) dx + m v(t) 1 = − Ω v(t)∆K(u(t) − m) dx + m v(t) 1 ≤ Ω ∇v(t) · ∇K(u(t) − m) dx + m max{ v in 1 , u in 1 } ≤ 1 4 ∇v(t) 2 2 + ∇K(u(t) − m) 2 2 + m max{ u in 1 , v in 1 } ≤ 1 4 v(t) 2 H 1 + 2L 0 (u(t), v(t)) + m max{ u in 1 , v in 1 } ≤ 1 4 v(t) 2 H 1 + 2L 0 (u in , v in ) + m max{ u in 1 , v in 1 } . (3.10) Combining (3.9) and (3.10) leads us to ε t 0 ∂ t v(s) 2 2 ds + v(t) 2 H 1 2 ≤ v in 2 H 1 + (6 + εγ(0))L 0 (u in , v in ) + 2m max{ u in 1 , v in 1 } , and completes the proof. Convergence to spatially homogeneous steady states Collecting the outcome of Lemma 3.1 and Lemma 3.2, we have established the identity (1.17) and the estimates (1.20). We are left with the long-term convergence and begin with some properties of G 0 which we gather in the next lemma. Lemma 3.3. There is K 1 > 0 depending only on γ such that γ satisfies (1.12) with k = 1. Moreover, |zγ(z) − mγ(m)| ≤ γ(0) |z − m| , z ∈ [0, ∞) . (3.11) In addition, recalling that G 0 is defined in (1.16) and is convex on (0, ∞), the function G ′′ 0 ∈ L 1 (0, z) for any z > 0 and its indefinite integral g 0 (z) := z 0 G ′′ 0 (z * ) dz * , z ∈ [0, ∞) , is well-defined and belongs to C 0, 1 2 ([0, z]) for all z > 0. Proof. 
Since γ satisfies (1.15), the function γ satisfies zγ(z) ≥ γ(1) > 0 for z ≥ 1, while the positivity and monotonicity of γ on [0, 1] implies that min [0,1] γ = γ(1) > 0. Combining these two facts ensures that γ satisfies (1.12) with k = 1 and K 1 = 1/γ(1). It next follows from the monotonicity of γ that 0 ≤ d dz (zγ(z)) = zγ ′ (z) + γ(z) ≤ γ(0) , z ≥ 0 . (3.12) Integrating the above differential inequality gives (3.11). Next, the convexity of G 0 provided by Lemma 3.1 guarantees that G ′′ 0 is well-defined. Using Cauchy-Schwarz inequality, we obtain that, for z 2 > z 1 > 0, z 2 z 1 G ′′ 0 (z) dz ≤ √ z 2 − z 1 z 2 z 1 G ′′ 0 (z) dz 1/2 = √ z 2 − z 1 G ′ 0 (z 2 ) − G ′ 0 (z 1 ) ≤ √ z 2 − z 1 2γ(0)z 2 + mγ(0) . The stated properties of g 0 then readily follow from the above inequality, which concludes Lemma 3.3. Proof of Theorem 1.5. As already mentioned, the identity (1.17) and the estimates (1.20) are shown in Lemma 3.1 and Lemma 3.2, respectively, and we now turn to the large time behavior. Since the monotonicity properties of γ guarantee that all the terms in L 0 (u, v) and D 0 (u, v) are nonnegative, we infer from (1.20) and the Poincaré-Wirtinger inequality that K(u(t) − m) H 1 + v(t) H 1 ≤ C , t ≥ 0 , (3.13) ∞ 0 ∂ t v(s) 2 2 ds + ∞ 0 Ω γ(v)(u − v) 2 dx ds < ∞ , (3.14) ∞ 0 Ω G ′′ 0 (v)|∇v| 2 dx ds + ∞ 0 Ω (v − m) vγ(v) − mγ(m) dx ds < ∞ . (3.15) We readily infer from (3.13) and the compactness of the embedding of H 1 (Ω) in L 2 (Ω) that {K(u(t) − m) : t ≥ 0} and {v(t) : t ≥ 0} are compact in L 2 (Ω) (3.16) and there are a sequence (t j ) j≥1 of positive times, t j → ∞, and (U ∞ , v ∞ ) ∈ H 1 (Ω, R 2 ) such that lim j→∞ K(u(t j ) − m) − U ∞ 2 + v(t j ) − v ∞ 2 = 0 .v j (s) − v ∞ 2 ≤ v j (s) − v j (0) 2 + v j (0) − v ∞ 2 = v(s + t j ) − v(t j ) 2 + v(t j ) − v ∞ 2 ≤ s+t j t j ∂ t v(s) 2 ds + v(t j ) − v ∞ 2 ≤ t j +1 t j −1 ∂ t v(s) 2 2 ds 1/2 + v(t j ) − v ∞ 2 . 
Since the right-hand side of the above inequality does not depend on s ∈ [−1, 1] and converges to zero as j → ∞ according to (3.14) and (3.17), we conclude that lim j→∞ sup s∈[−1,1] v j (s) − v ∞ 2 = 0 . (3.19) An immediate consequence of (3.19) is that, up to the extraction of a subsequence, we may assume that lim j→∞ v j (s, x) = v ∞ (x) for a.e. (s, x) ∈ (−1, 1) × Ω . (3.20) It next follows from (3.15) and the monotonicity (1.15) of z → zγ(z) that lim j→∞ 1 −1 Ω (v j − m) v j γ(v j ) − mγ(m) dx ds = lim j→∞ t j +1 t j −1 Ω (v − m) vγ(v) − mγ(m) dx ds = 0 , which gives, together with (3.20) and Fatou's lemma , Or m i = m s , and the property zγ(z) = mγ(m) for z ∈ I entails that G ′′ 0 (z) = −mγ ′ (z) = m 2 γ(m)/z 2 for z ∈ I. Introducing 1 −1 Ω (v ∞ − m) v ∞ γ(v ∞ ) − mγ(m) dx ds = 0 .g(z) :=                    0 , z ∈ [0, m i ) z m i G ′′ 0 (z * ) dz * = m γ(m) ln (z/m i ) , z ∈ I , ms m i G ′′ 0 (z * ) dz * = m γ(m) ln (m s /m i ) , z ∈ (m s , ∞) ( when m s < ∞) , we infer from (3.15), (3.19), and the Lipschitz continuity of g that lim j→∞ 1 −1 ∇g(v j (s)) 2 2 ds ≤ lim j→∞ t j +1 t j −1 Ω G ′′ 0 (v)|∇v| 2 2 dx ds = 0,(3.g(v j (s)) − g(v ∞ ) 2 = 0 . (3.24) Combining (3.23) and (3.24) implies that ∇g(v ∞ ) = 0 a.e. in Ω and we deduce from (3.22), the connectedness of Ω, and the strict monotonicity of g on I that there is a unique µ ∈ I such that v ∞ = µ a.e. in Ω. Recalling that v ∞ = m|Ω| by (3.18), we conclude that necessarily µ = m. Consequently, v ∞ ≡ m in this case as well, so that, recalling (3.19), we have shown that lim j→∞ sup s∈[−1,1] v j (s) − m 2 = 0 . (3.25) We next turn to the behaviour of u and the identification of U ∞ in (3.17). 
On the one hand, for p ∈ [1, 4N/(3N − 2)] ∩ [1, 2), Hölder's inequality gives 1 −1 Ω |u j − v j | p dx ds = t j +1 t j −1 Ω |u − v| p γ(v) p/2 γ(v) −p/2 dx ds ≤ t j +1 t j −1 Ω |u − v| 2 γ(v) dx ds p/2 t j +1 t j −1 Ω γ(v) −p/(2−p) dx ds (2−p)/2 , and we infer from (3.13), Lemma 3.3, and the continuous embedding of H 1 (Ω) in L p/(2−p) (Ω) that t j +1 t j −1 Ω γ(v) −p/(2−p) dx ds ≤ C t j +1 t j −1 Ω (1 + v) p/(2−p) dx ds ≤ C(p) 1 + t j +1 t j −1 v(s) p/(2−p) p/(2−p) ds ≤ C(p) 1 + sup s≥0 v(s) p/(2−p) H 1 ≤ C(p) . Combining the above inequalities leads us to 1 −1 Ω |u j − v j | p dx ds ≤ C(p) t j +1 t j −1 Ω |u − v| 2 γ(v) dx ds p/2 , which gives, along with (3.14), lim j→∞ 1 −1 Ω |u j − v j | p dx ds = 0 . Recalling (3.25), we end up with lim j→∞ 1 −1 Ω |u j − m| p dx ds = 0 . (3.26) On the other hand, it follows from Hölder's inequality that K∂ t u 2 = (u − v)γ(v) + vγ(v) − mγ(m) + mγ(m) − vγ(v) + (v − u)γ(v) 2 ≤ |Ω| (u − v)γ(v) + vγ(v) − mγ(m) + mγ(m) − vγ(v) 2 + (v − u)γ(v) 2 ≤ (1 + |Ω|) vγ(v) − mγ(m) 2 + (v − u)γ(v) 2 . Since (with K 0 = ||γ|| ∞ ) (v − u)γ(v) 2 2 ≤ K 0 Ω (v − u) 2 γ(v) dx, and vγ(v) − mγ(m) 2 2 ≤ K 0 Ω vγ(v) − mγ(m) |v − m| dx = K 0 Ω vγ(v) − mγ(m) (v − m) dx by (1.15) and Lemma 3.3, we conclude that K∂ t u 2 2 ≤ 2K 0 (1 + |Ω|) 2 Ω (u − v) 2 γ(v) + (v − m) vγ(v) − mγ(m) dx . Hence, thanks to (3.14) and (3.15), ∞ 0 K∂ t u(s) 2 2 ds < ∞ , and, since K∂ t u = ∂ t K(u − m), we argue as in the proof of (3.19) to deduce from (3.17) and the above integrability property that lim j→∞ sup s∈[−1,1] K(u j (s) − m) − U ∞ 2 = 0 . (3.27) According to (3.13) and (3.27), we may also assume that, up to the extraction of a subsequence, K(u j − m) * ⇀ U ∞ in L ∞ ([−1, 1]; H 1 (Ω)) . (3.28) We then infer from (3.26) and (3.28) that, for any ϕ ∈ H 1 (Ω), 0 = lim j→∞ 1 −1 Ω (u j (s) − m)ϕ dx ds = lim j→∞ 1 −1 Ω ∇K(u j (s) − m) · ∇ϕ dx ds = 1 −1 Ω ∇U ∞ · ∇ϕ dx ds = 2 Ω ∇U ∞ · ∇ϕ dx , which entails, together with (3.18), that U ∞ ≡ 0. 
We have thus proved that (0, m) is the only cluster point as t → ∞ of { K(u(t) − m), v(t) t ≥ 0} in L 2 (Ω; R 2 ). Together with the already established compactness (3.16) of this set in L 2 (Ω; R 2 ), this property implies that K(u(t) − m), v(t) converges to (0, m) in L 2 (Ω; R 2 ) as t → ∞ and completes the proof of Theorem 1.5. Proof of Proposition 1.6. According to [15,Proposition 1.2], there is d 0 > 0 such that if d ∈ (0, d 0 ), then there exists a nonconstant positive solution w = w (d) ∈ C 2 (Ω 0 ) of 0 = d∆w − w + w k in Ω 0 , 0 = ∇w · n on ∂Ω 0 . Setting R 0 := 1 √ d 0 , and picking R > R 0 , we then obtain that d : = 1 R 2 satisfies d ∈ (0, d 0 ), and that for v(x) := w (d) ( √ dx) and u(x) := v k (x), x ∈ RΩ 0 , we have ∇u · n = ∇v · n = 0 on ∂(RΩ 0 ) as well as ∆v(x) − v(x) + u(x) = d∆w( √ dx) − w( √ dx) + w k ( √ dx) = 0 for all x ∈ RΩ 0 and ∆ uγ(v) = ∆(v k · v −k ) = 0 in RΩ 0 , as claimed. System with logistic-type growth In this final part we address the problem (1.3) involving logistic-type zero order degradation. As our approach in the present section will no longer make use of a comparison argument, we may here employ a somewhat simpler regularization which enforces global solvability at the respective approximate level by involving a suitably strong damping in the signal production mechanism. More precisely, assuming throughout this section that γ, h, u in and v in comply with the requirements from Theorem 1.8, for η ∈ (0, 1) we shall consider ∂ t u η − ∆(u η γ(v η )) = u η h(u η ), (t, x) ∈ (0, ∞) × Ω, (4.1a) ε∂ t v η = ∆v η − v η + u η 1 + ηu η , (t, x) ∈ (0, ∞) × Ω, (4.1b) ∇(u η γ(v η )) · n = ∇v η · n = 0, (t, x) ∈ (0, ∞) × ∂Ω, (4.1c) (u η , v η )(0, ·) = (u in , v in ) . 
x ∈ Ω, (4.1d) Indeed, by straightforward adaptation from standard arguments from the theory of Keller-Segel type crossdiffusion systems (see, e.g., [22]) it can be seen that each of these problems admits a global classical solution (u η , v η ) with 0 ≤ u η ∈ C([0, ∞)×Ω)∩C 1,2 ((0, ∞)×Ω) and 0 ≤ v η ∈ q>1 C([0, ∞); W 1,q (Ω))∩C 1,2 ((0, ∞)× Ω), and that v η (t, x) ≥ inf Ω v in e −t/ε for all (t, x) ∈ [0, ∞) × Ω. (4.2) Now the core of this section is contained in the following. (1.26). Then for all T > 0 there exists C(T ) > 0 such that for all η ∈ (0, 1), T 0 Ω u η ln(u η + e)|h(u η )| + u 2 η + γ(v η ) |∇u η | 2 u η + e + |∆v η | 2 + |∇v η | 4 v 2 η dx dt ≤ C(T ), (4.3) as well as Ω u η (t) ln u η (t) + e dx + Ω |∇v η (t)| 2 dx ≤ C(T ) for all t ∈ (0, T ). and use (4.1) along with Young's inequality and (4.5) to see that whenever η ∈ (0, 1), d dt Ω (u η + e) ln(u η + e) − 1 dx + 1 2 Ω u η ln(u η + e)|h(u η )| dx + Ω u 2 η dx = Ω ∆(u η γ(v η )) + u η h(u η ) ln(u η + e) dx + 1 2 Ω u η ln(u η + e)|h(u η )| dx + Ω u 2 η dx = − Ω γ(v η ) |∇u η | 2 u η + e dx − Ω u η u η + e γ ′ (v η )∇u η · ∇v η dx + 3 2 Ω u η ln(u η + e)h + (u η ) dx − 1 2 Ω u η (u η + 1)µ(u η ) dx + Ω u 2 η dx ≤ − 1 2 Ω γ(v η ) |∇u η | 2 u η + e dx + 1 2 Ω u 2 η u η + e γ ′2 (v η ) γ(v η ) |∇v η | 2 dx + 3 2 Ω u η ln(u η + e)h + (u η ) dx − 1 2 Ω u η (u η + 1)µ(u η ) dx + Ω u 2 η dx and thereby implies (4.4), while (4.3) can be derived by direct integration in (4.10). An immediate consequence of (4.3) reveals some integrability feature of expressions related to the fluxes appearing in the first equation from (4.1), here slightly generalized by involving an exponent θ which can actually be an arbitrary element of [ 1 2 , ∞). for all η ∈ (0, 1). Proof. Let η ∈ (0, 1) and T > 0. Then in (v η )|γ ′ (v η )| 4/3 v 2/3 η dx dt ≤ T 0 Ω |∇v η | 4 v 2 η dx dt + T 0 Ω u 2 η v η γ ′2 (v η ) γ(v η ) γ 2θ−1 (v η ) dx dt. Collecting the above inequalities and using (4.3), (4.5) and θ ≥ 1/2 gives (4.11). 
For later reference, let us briefly note some basic information on mass control in the two components. With the regularity requirements in Definition 1.7 hence being asserted in view of (4.20), (4.21), (4.24) and (4.26), the derivation of the identities in (1.23) and (1.24) can be achieved by taking η = η j ց 0 in the corresponding weak formulation of (4.1) and using the convergence properties in (4.21), (4.22), (4.25) and (4.26). To finally verify the claimed additional regularity features, we observe that (1.27) follows from v ∈ L 2 (H 2 ) and (4.24) upon observing that the inclusions u ∈ L ∞ ((0, T ); L log L(Ω)) and v ∈ L ∞ ((0, T ); H 1 (Ω)) for all T > 0 are immediate consequences of (4.4). The coupled weak differentiability property in (1.28) can be concluded from Lemma 4.2 when applied to θ = 1 2 and combined with the fact that u η j γ(v η j ) → u γ(v) a.e. in (0, ∞) × Ω as j → ∞, the latter resulting from (4.22), (4.23) and the continuity of γ. w ∈ (H 1 ) ′ (Ω), not necessarily with zero average, we also define A −1 w ∈ H 1 (Ω) as the unique (variational) solution to− ∆A −1 w + A −1 w = w in Ω , ∇A −1 w · n = 0 on ∂Ω .(1.7) Clearly, A −1 is the extension to (H 1 ) ′ (Ω) of the inverse of the unbounded linear operator A on L 2 (Ω) with domain D(A) := {z ∈ H 2 (Ω) : ∇z · n = 0 on ∂Ω} , Az := −∆z + z for z ∈ D(A) . (1.8) Lemma 2 . 4 . 24For all q ∈ (1, ∞) and T > 0, there exists some constant C(T, q) ≥ 0 such that t 0 e 0−(t−s)Aε w η (s) ds L q ((0,T );W 2,q (Ω)) ≤ C(q) w η L q ((0,T )×Ω) ≤ C(T, q) . Lemma 2. 5 . 
5For all p ∈ (1, 2) and T > 0, there exists some constant C(T, p) ≥ 0 such that space C([0, T ]; w − (H 1 ) ′ (Ω)) of functions from [0, T ] to (H 1 ) ′ (Ω) which are continuous with respect to time for the weak topology of (H 1 ) ′ (Ω), we recall thatL ∞ ((0, T ); (H 1 ) ′ (Ω)) ∩ C([0, T ]; (H 2 ) ′ (Ω)) ⊂ C([0, T ]; w − (H 1 ) ′ (Ω)) ,and deduce from (2.25b) and (2.29) thatu(t) = m , t ∈ [0, T ] .(2.30) Furthermore, u ∈ L 1 ((0, T ) × Ω) by (2.25a). Consequently, u(t) belongs to L 1 (Ω) for a.e. t ∈ [0, T ] which ensures, together with (2.30) and the nonnegativity of u, that u ∈ L ∞ ((0, T ); L 1 (Ω)) . (2.31) Lemma 3. 1 . 1The function G 0 defined in (1.16) is nonnegative and convex on (0, ∞), and 3.1c), a straightforward consequence of (3.17) and the definition of K is thatU ∞ = 0 and v ∞ = m|Ω| .(3.18)For j ≥ 1 and s ∈ [−1, 1], we set (u j , v j )(s) := (u, v)(s + t j ) and first observe that i := inf{z ∈ (0, ∞) : zγ(z) = mγ(m)} , m s := sup{z ∈ (0, ∞) : zγ(z) = mγ(m)} ∈ [m, ∞] , I := {z ∈ (0, ∞) : zγ(z) = mγ(m)} , we infer from the boundedness of γ and the monotonicity (1.15) of z → zγ(z) that m i ∈ (0, m] and I =    [m i , m s ] , m s < ∞ , [m i , ∞) , m s = ∞ .Combining this property with (3.21) implies in particular that v ∞ (x) ∈ I for a.e. x ∈ Ω . (3.22) At this point, either m i = m s = m and it readily follows from (3.22) that v ∞ ≡ m. From Assumption (1.25) and (4.2), we know that there exists c 1(T ) > 1 such that v η ≥ 1 c 1 (T ) , γ(v η ) ≤ c 1 (T ) and γ ′2 (v η ) γ(v η ) ≤ c 1 (T ) v η in (0, T ) × Ω for all η ∈ (0, 1). (4.5) We furthermore combine [25, Lemma 3.3] (with h(s) = e −s and with e −ϕ replaced by ϕ) with elliptic regularity theory to find c 2 for all ϕ ∈ C 2 (Ω) such that ϕ > 0 in Ω and ∇ϕ · n = 0 on ∂Ω. − (s) := max (−h(s), 0) , h + (s) := h(s) + h − (s), s ≥ 0, (4.7b) Lemma 4 . 2 . 
42If (1.25) and (1.26) hold, then for all θ ≥ 1 2 and each T > 0, there exists C(θ, T ) v η )|γ ′ (v η )| 4/3 |∇v η | 4/3 dx dt,we twice use Young's inequality to estimate v η )|γ ′ (v η )| 4/3 |∇v η | Lemma 4. 3 . 3Assume (1.25) and (1.26). Then there exists C > 0 such that Ω u η (t) dx ≤ C and Ω v η (t) dx ≤ C for all t > 0 and η ∈ ( AcknowledgmentsThe fourth author acknowledges support of the Deutsche Forschungsgemeinschaft in the context of the project Emergence of structures and advantages in cross-diffusion systems (No. 411007140, GZ: WI 3707/5-1).η dx for all t ∈ (0, T ),(4.8)as well asSince (4.7), together with (1.26), ensures thatand thus there exists c 3 (T ) > 0 such thatCombining (4.8) with (4.9) and (4.6), we conclude that forfor all t > 0 and η ∈ (0, 1), (4.10)which after a time integration shows that y η (t) ≤ Ω (u in + e) ln(u in + e) − 1 dx + ε ∇v in 2 2 + c 3 (T )T |Ω| for all t ∈ (0, T ) and η ∈ (0, 1),Proof. Since (1.26) particularly entails the existence of s 1 > 0 such that h(s) ≤ −1 for all s > s 1 , it follows from (4.1) that|h| for all t > 0 and η ∈ (0, 1), and that thus, by a simple comparison argument,|h| for all t > 0 and η ∈ (0, 1).From the second equation in (4.1) we therefore obtain thatfor all t > 0 and η ∈ (0, 1), and a simple time integration completes the proof.for all η ∈ (0, 1).Proof. Since p > N and p ≥ 4, we can pick c 5 > 0 such that ψ ∞ + ∇ψ 4 ≤ c 5 for all ψ ∈ C 1 (Ω) fulfilling ψ W 1,p ≤ 1. Fixing any such ψ, from (4.1) we obtain that for all t > 0 and η ∈ (0, 1),where According to our choice of c 5 and Hölder and Young's inequalities, in (4.14) we can therefore estimateand−as well asfor all t ∈ (0, T ) and η ∈ (0, 1), where by (4.5), (4.15), (4.16) and Young's inequality,for all t ∈ (0, T ) and η ∈ (0, 1). 
Together with (4.17)-(4.19) inserted into (4.14), this shows that, with some c_7(T) > 0, for all t ∈ (0, T) and η ∈ (0, 1), we have

is bounded in L^1((0, T); (W^{1,p}(Ω))') for all T > 0. Apart from that, Lemma 4.1 in conjunction with Lemma 4.3 warrants that (v_η)_{η∈(0,1)} is bounded in L^2((0, T); H^2(Ω)) for all T > 0, whereas in view of (4.1) it is obvious that Lemma 4.1 (with Lemma 4.3) moreover entails that

Owing to the compactness of the embeddings of W^{1,4/3}(Ω) and H^2(Ω), respectively, in L^{4/3}(Ω) and H^1(Ω), respectively, and the continuity of the embeddings of L^{4/3}(Ω) and H^1(Ω), respectively, in (W^{1,p}(Ω))' and L^2(Ω), respectively, two applications of the Aubin-Lions-Simon lemma [21, Corollary 4] thus provide (η_j)_{j∈N} ⊂ (0, 1) such that η_j ↓ 0 as j → ∞, and also provide that for all T > 0,

a.e. in (0, ∞) × Ω and in L^{4/3}((0, T) × Ω), (4.20)

in L^2((0, T); H^1(Ω)) and a.e. in (0, ∞) × Ω, (4.22)

for some nonnegative z ∈ L^{4/3}((0, T); W^{1,4/3}(Ω)), and some v ∈ L^2((0, T); H^2(Ω)) which satisfies v > 0 a.e. in (0, ∞) × Ω, according to (4.22) and (4.2). Now from (4.2), (4.20), (4.22) and the positivity of γ, it is evident that

a.e. in (0, ∞) × Ω, (4.23)

so that, since (u_η)_{η∈(0,1)} is bounded in L^2((0, T) × Ω) for all T > 0 by Lemma 4.1, we infer that

u ∈ L^2((0, T) × Ω), (4.24)

and that, thanks to the Vitali convergence theorem,

u_{η_j} → u in L^1((0, T) × Ω) for all T > 0 as j → ∞. (4.25)

Similarly, the L^1 estimate for (u_η ln(u_η + e) h(u_η))_{η∈(0,1)} contained in (4.3) can readily be seen to entail that (u_η h(u_η))_{η∈(0,1)} is uniformly integrable over (0, T) × Ω for all T > 0, while the continuity of h and (4.23) guarantees that u_{η_j} h(u_{η_j}) → uh(u) a.e. in (0, ∞) × Ω as j → ∞.

[1] J. Ahn and C. Yoon, Global well-posedness and stability of constant equilibria in parabolic-elliptic chemotaxis systems without gradient sensing, Nonlinearity, 32 (2019), pp. 1327-1351.
[2] H. Amann, Linear and quasilinear parabolic problems. Vol. I, vol. 89 of Monographs in Mathematics, Birkhäuser Boston, Inc., Boston, MA, 1995. Abstract linear theory.
[3] M. Burger, Ph. Laurençot, and A. Trescases, Delayed blow-up for chemotaxis models with local sensing, J. Lond. Math. Soc. (2), 103 (2021), pp. 1596-1617.
[4] L. Desvillettes, Y.-J. Kim, A. Trescases, and C. Yoon, A logarithmic chemotaxis model featuring global existence and aggregation, Nonlinear Anal. Real World Appl., 50 (2019), pp. 562-582.
[5] X. Fu, L. H. Tang, C. Liu, J. D. Huang, T. Hwa, and P. Lenz, Stripe formation in bacterial systems with density-suppressed motility, Phys. Rev. Lett., 108 (2012), pp. 1981-1988.
[6] K. Fujie and J. Jiang, Global existence for a kinetic model of pattern formation with density-suppressed motilities, J. Differential Equations, 269 (2020), pp. 5338-5378.
[7] K. Fujie and J. Jiang, Boundedness of classical solutions to a degenerate Keller-Segel type model with signal-dependent motilities, Acta Appl. Math., 176 (2021), Paper No. 3.
[8] K. Fujie and J. Jiang, Comparison methods for a Keller-Segel-type model of pattern formations with density-suppressed motilities, Calc. Var. Partial Differential Equations, 60 (2021), pp. 1-37, Id/No 92.
[9] K. Fujie and T. Senba, Global existence and infinite time blow-up of classical solutions to chemotaxis systems of local sensing in higher dimensions, 2021. arXiv:2102.12080.
[10] J. Jiang, Ph. Laurençot, and Y. Zhang, Global existence, uniform boundedness, and stabilization in a chemotaxis system with density-suppressed motility and nutrient consumption, Comm. Partial Differential Equations (2022), pp. 1-46.
[11] H.-Y. Jin, Y.-J. Kim, and Z.-A. Wang, Boundedness, stabilization, and pattern formation driven by density-suppressed motility, SIAM J. Appl. Math., 78 (2018), pp. 1632-1657.
[12] H.-Y. Jin and Z.-A. Wang, Critical mass on the Keller-Segel system with signal-dependent motility, Proc. Amer. Math. Soc., 148 (2020), pp. 4855-4873.
[13] D. Lamberton, Équations d'évolution linéaires associées à des semi-groupes de contractions dans les espaces L^p (Evolution equations associated to contraction semigroups in L^p spaces), J. Funct. Anal., 72 (1987), pp. 252-262.
[14] H. Li and J. Jiang, Global existence of weak solutions to a signal-dependent Keller-Segel model for local sensing chemotaxis, Nonlinear Anal. Real World Appl., 61 (2021), Paper No. 103338.
[15] C.-S. Lin, W.-M. Ni, and I. Takagi, Large amplitude stationary solutions to a chemotaxis system, J. Differential Equations, 72 (1988), pp. 1-27.
[16] Z. Liu and J. Xu, Large time behavior of solutions for density-suppressed motility system in higher dimensions, J. Math. Anal. Appl., 475 (2019), pp. 1596-1613.
[17] W. Lv and Q. Wang, An n-dimensional chemotaxis system with signal-dependent motility and generalized logistic source: global existence and asymptotic stabilization, Proc. Roy. Soc. Edinburgh Sect. A, 151 (2021), pp. 821-841.
[18] W. Lyu and Z.-A. Wang, Global classical solutions for a class of reaction-diffusion system with density-suppressed motility, 2021. arXiv:2102.08042.
[19] W. Lyu and Z.-A. Wang, Logistic damping effect in chemotaxis models with density-suppressed motility, 2021. arXiv:2111.11669.
[20] A. Pazy, Semigroups of linear operators and applications to partial differential equations, vol. 44, Springer, Cham, 1983.
[21] J. Simon, Compact sets in the space L^p(0, T; B), Ann. Mat. Pura Appl. (4), 146 (1987), pp. 65-96.
[22] Y. Tao and M. Winkler, Effects of signal-dependent motilities in a Keller-Segel-type reaction-diffusion system, Math. Models Methods Appl. Sci., 27 (2017), pp. 1645-1683.
[23] J.-P. Wang and M.-X. Wang, Boundedness in the higher-dimensional Keller-Segel model with signal-dependent motility and logistic growth, J. Math. Phys., 60 (2019), 011507, 14 pp.
[24] Z.-A. Wang and X. Xu, Steady states and pattern formation of the density-suppressed motility model, IMA J. Appl. Math., 86 (2021), pp. 577-603.
[25] M. Winkler, Global large-data solutions in a chemotaxis-(Navier-)Stokes system modeling cellular swimming in fluid drops, Comm. Partial Differential Equations, 37 (2012), pp. 319-351.
[26] C. Yoon and Y.-J. Kim, Global existence and aggregation in a Keller-Segel model with Fokker-Planck diffusion, Acta Appl. Math., 149 (2017), pp. 101-123.
[]
[ "Dynamic Response of Adhesion Complexes: Beyond the Single-Path Picture", "Dynamic Response of Adhesion Complexes: Beyond the Single-Path Picture" ]
[ "Denis Bartolo \nLaboratoire de Physico-Chimie Théorique\nCNRS\n7083esa\n\nESPCI\n10 rue VauquelinF-75231, Cédex 05ParisFrance\n", "Imre Derényi \nInstitut Curie\nUMR 168\n26 rue d'UlmF-75248, Cédex 05ParisFrance\n", "Armand Ajdari \nLaboratoire de Physico-Chimie Théorique\nCNRS\n7083esa\n\nESPCI\n10 rue VauquelinF-75231, Cédex 05ParisFrance\n" ]
[ "Laboratoire de Physico-Chimie Théorique\nCNRS\n7083esa", "ESPCI\n10 rue VauquelinF-75231, Cédex 05ParisFrance", "Institut Curie\nUMR 168\n26 rue d'UlmF-75248, Cédex 05ParisFrance", "Laboratoire de Physico-Chimie Théorique\nCNRS\n7083esa", "ESPCI\n10 rue VauquelinF-75231, Cédex 05ParisFrance" ]
[]
We analyze the response of molecular adhesion complexes to increasing pulling forces (dynamic force spectroscopy) when dissociation can occur along either one of two alternative trajectories in the underlying multidimensional energy landscape. A great diversity of behaviors (e.g. nonmonotonicity) is found for the unbinding force and time as a function of the rate at which the pulling force is increased. We highlight an intrinsic difficulty in unambiguously determining the features of the energy landscape from single-molecule pulling experiments. We also suggest a class of "harpoon" stickers that bind easily but resist strong pulling efficiently.
10.1103/physreve.65.051910
[ "https://arxiv.org/pdf/cond-mat/0106543v1.pdf" ]
9,596,777
cond-mat/0106543
44bb6fc969660ae23a6a29efe604a18d50a0cb83
Dynamic Response of Adhesion Complexes: Beyond the Single-Path Picture

26 Jun 2001 (October 29, 2018)

Denis Bartolo, Laboratoire de Physico-Chimie Théorique, esa CNRS 7083, ESPCI, 10 rue Vauquelin, F-75231 Paris Cédex 05, France
Imre Derényi, Institut Curie, UMR 168, 26 rue d'Ulm, F-75248 Paris Cédex 05, France
Armand Ajdari, Laboratoire de Physico-Chimie Théorique, esa CNRS 7083, ESPCI, 10 rue Vauquelin, F-75231 Paris Cédex 05, France

We analyze the response of molecular adhesion complexes to increasing pulling forces (dynamic force spectroscopy) when dissociation can occur along either one of two alternative trajectories in the underlying multidimensional energy landscape. A great diversity of behaviors (e.g. nonmonotonicity) is found for the unbinding force and time as a function of the rate at which the pulling force is increased. We highlight an intrinsic difficulty in unambiguously determining the features of the energy landscape from single-molecule pulling experiments. We also suggest a class of "harpoon" stickers that bind easily but resist strong pulling efficiently.

PACS numbers: 82.37.-j, 87.15.-v, 82.20.Kh, 33.15.Fm

The last decades have witnessed a remarkable development of physical investigation methods to probe single molecules or complexes by various micromanipulation means. New techniques have been put forward to probe the unfolding of proteins and to quantify the strength of adhesion structures [1-5]. An important step in this direction is the proposal of the group of Evans to use soft structures to pull on adhesion complexes or molecules at various loading rates (dynamic force spectroscopy) [6]. Moving the other end of the soft structure at constant velocity induces on the complex a pulling force that increases linearly in time, f = rt. Measuring the typical rupture time t_typ yields a typical rupture force f_typ = r t_typ that depends on the pulling rate r. This provides information as to the energy landscape of the bound complex. Indeed, in many situations one observes a linear increase of f_typ with log(r), which can be understood within a simple adiabatic Kramers picture for the escape from a well (bound/attached state) over a barrier of height E located at a projected distance x from the well along the pulling direction. The progressive increase of the force results in a corresponding increase of the escape rate, so that, in agreement with some experiments [6], the typical rupture force increases logarithmically with r: f_typ ≃ (k_B T/x) ln[rx/(k_B T ω)], where ω is the escape rate in the absence of force. The rupture time on the other hand decreases with r. The occurrence in some cases of two successive straight lines in a [f_typ, log(r)] plot has been argued to be the consequence of having two successive barriers along the 1D escape path, the intermediate one showing up in the response at fast pulling rates [6] (Figs. 1a and 2). Other theories have tried to back up more complete information as to the overall effective 1D potential landscape by an analysis of the probability distribution for rupture time and of the statistics of trajectories before rupture [7,8]. Assemblies in series and in parallel of such 1D bonds have also been considered [9-11].

In this Letter we point out limitations arising from the a priori assumption of a single-path topology of the energy landscape for the interpretation of such experiments. From the analysis of simple examples with a two-path topology, we draw three conclusions: (i) first, the dependence of the rupture force and rupture time on the pulling rate can take various forms, including nonmonotonic behavior (see e.g. Figs. 3 to 5). (ii) Second, the main features of the energy landscape can not be unambiguously deduced from a [f_typ, log(r)] plot, as very different landscapes can yield similar curves (Fig. 6). (iii) Third, we propose simple "harpoon" designs (Fig. 1c and d) for functionally efficient stickers that can bind easily but resist strongly in a range of pulling forces (Fig. 4).

FIG. 1. Sketch of the topology of the main valley of the energy landscape for a few examples: (a) classical single-path scheme, (b) switch, (c) harpoon, (d) combo. 0 denotes the fundamental bound state, A and B are local minima, and a, a', b and b' are passes to overcome. To the right (increasing values of x) of the last passes is the continuum that describes unbound states. In (b,c,d) unbinding can occur through two alternative routes α and β.

FIG. 2. Classical picture for a single-path energy landscape [6] (Fig. 1a): the probability density P(f) for unbinding at force f is plotted in grey-scale as a function of the pulling rate r. The typical force f_typ (locus of the maximum of P) is highlighted with a dashed line. Plotted curves correspond to E_a' = 12, x_a' = 0.5, E_A = 9, x_A = 1, and E_a = 20, x_a = 2. At very low pulling rates unbinding is not affected by the pulling and proceeds over barrier a with a "spontaneous" rate ω_0 exp(−E_a). For larger pulling rates the typical unbinding force f_typ increases linearly with log(r), with a slope proportional to 1/x_a. Increasing further the pulling rate can lead to a steeper slope ∝ 1/x_a' corresponding to escape over the inner barrier a'. These asymptotes are depicted with solid lines. The dashed arrows along the drawings indicate which pairs of energy well and barrier are probed in these asymptotic limits. Inset: mean rupture time against pulling rate.

Obviously for real binding/adhesion complexes, there are numerous (conformational) degrees of freedom, and the configurational space is clearly multidimensional. This allows for complex energy landscapes and various topologies for the structure of their valleys and passes [12]. Only the probing (pulling) is unidirectional. We note in passing that even for more macroscopic sticky systems, usual adhesion tests for soft adhesives often show up hysteresis loops associated with the existence of more than one degree of freedom [13]. We do not attempt here an exhaustive exploration of effects allowed by the multidimensionality of the phase space, but rather focus on a few simple two-path topologies (Fig. 1), to argue for the three points mentioned above. The three examples we consider, sketched in Figure 1 b, c, and d, correspond to simple hairpin schemes whereby detachment can proceed through two alternative routes α and β.
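Before turning to the two-path cases, the single-path log-law quoted above, f_typ ≃ (k_B T/x) ln[rx/(k_B T ω)], can be made concrete with a short numerical sketch (the values x = 2 nm, E_a = 20 and ω_0 = 10^8 s^-1 follow the Fig. 2 caption; the loading-rate grid is an illustrative assumption). It shows that each decade in r adds (k_B T/x) ln 10 to the typical force:

```python
import math

# Single-path asymptote from the text:
#   f_typ ≈ (kBT/x) * ln( r*x / (kBT * ω) )
# Units as in the paper: energies in kBT (so kBT = 1), distances in nm,
# forces in kBT/nm (≈ 4 pN).  ω = ω0*exp(-Ea) is the zero-force escape
# rate; x = 2 nm, Ea = 20, ω0 = 1e8/s are taken from the Fig. 2 caption.

def f_typ(r, x=2.0, omega=1e8 * math.exp(-20.0), kBT=1.0):
    """Typical rupture force for loading rate r (in kBT/nm per second)."""
    return (kBT / x) * math.log(r * x / (kBT * omega))

# f_typ grows linearly in log(r): each decade in r adds (kBT/x)*ln(10).
rates = [10.0 ** k for k in range(0, 6)]
forces = [f_typ(r) for r in rates]
slope = forces[1] - forces[0]          # force increment per decade of r
print(slope, (1.0 / 2.0) * math.log(10.0))
```

With these numbers the increment per decade is (1/2) ln 10 ≈ 1.15 in units of kBT/nm, i.e. about 4.6 pN, which sets the scale of the straight lines in Fig. 2.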
These simple quasi 1D schematic situations can be conveniently dealt with using an adiabatic Kramers theory, which has been shown to be an efficient way of obtaining semi-quantitatively correct answers [14]. A common set of notations can be ascribed for all cases (Fig. 1). From the fundamental bound state "0", the route α for escape (detachment) is over barrier a, of height E_a located at a projected distance x_a from "0". Alternatively, escape can occur through branch β, over barrier b, of height E_b and projected distance x_b. All energies and projected distances are measured relative to the state "0" (i.e. E_0 = 0 and x_0 = 0). Intermediate barriers a', b' and local minima A and B may exist, with energies E_a', E_b', E_A, E_B (all positive), and projected distances x_a', x_b', x_A, x_B. In line with typical values from experiments, we choose to write energies in units of k_B T ≃ 4 pN nm and distances in nm. Practically, we describe the time evolution of the probabilities of being in the potential minima (bound states) using "chemical" transition rates over the barriers as given by the Kramers formula. We furthermore assume the attempt frequencies to be constant and all equal to ω_0, which provides the only intrinsic time-scale in the problem, so that the transition rate from minimum I over the neighboring barrier i is ω_0 exp[−(E_i − E_I) + f(t)(x_i − x_I)]. For the plots of Figures 2 to 6 we take arbitrarily ω_0 = 10^8 s^−1. A jump over the rightmost barrier (a or b) of either path corresponds to rupture, leading to escape to x → ∞. We focus on the case where either E_b' or E_b is larger than E_a, so that α is the "natural" route by which attachment and detachment proceed in the absence of pulling. We also limit ourselves to simple scenarios in which the force is linearly increased in time, f = rt. For further reference we recall the classical single-path scenario (Fig. 1a) in Figure 2 for a typical set of parameters, and then we turn to a brief analysis of the three two-path geometries we have introduced (Fig. 1 b, c, and d).

First case: switch - Topology as in Figure 1b. Escape occurs through either barrier a or barrier b, both located downwards in the pulling direction (x_a, x_b > 0). The escape proceeds through path α at weak pulling rates as E_a < E_b, but if x_b > x_a it can switch to path β for pulling forces f large enough such that E_a − f x_a > E_b − f x_b. The result (see Fig. 3) is then a succession of two straight lines of decreasing slopes in the [f_typ, log(r)] plot, the first one (slope ∝ 1/x_a) characteristic of the spontaneous route α while the second (slope ∝ 1/x_b) provides information on the alternative route β. In the trivial case x_a > x_b, route β is never explored, so that the classical single-path picture applies. To clarify the calculation leading to the plot in Figure 3, we describe the evolution of the probability of attachment p(t) at time t of the system initially attached at time t = 0 [p(0) = 1] by

∂_t p(t) = −ω_0 (e^{−E_a + f(t) x_a} + e^{−E_b + f(t) x_b}) p(t).   (1)

Solving (1) numerically with f = rt yields p(t) and therefore the probability density P(f) = −(1/r) ∂_t p(f/r) for the unbinding force. The typical values of f are highlighted in the plots, with the whole distribution P(f) suggested through a grey-scale. Similar procedures will be used in the following examples, with thermal equilibrium between the bound states assumed as initial conditions.
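At fixed loading rate, Eq. (1) can be integrated in closed form in the force variable, which makes the switch-geometry behavior of Fig. 3 easy to reproduce. The sketch below (the barrier parameters follow the Fig. 3 caption; the force cutoff and grid size are arbitrary numerical choices, not from the paper) locates the maximum of P(f) and recovers the high-rate slope ∝ 1/x_b:

```python
import math

# Sketch of Eq. (1) for the "switch" geometry (Fig. 1b):
#   dp/dt = -ω0 (e^{-Ea + f xa} + e^{-Eb + f xb}) p,  f = r t,  p(0) = 1.
# Changing variables from t to f = r t gives the survival probability
#   p(f) = exp(-(ω0/r) [ e^{-Ea}(e^{f xa}-1)/xa + e^{-Eb}(e^{f xb}-1)/xb ]),
# and P(f) = -dp/df = rate(f)/r * p(f) is the rupture-force density.
# Parameters follow the Fig. 3 caption (energies in kBT, distances in nm).

W0, EA, XA, EB, XB = 1e8, 20.0, 0.5, 30.0, 2.0

def rate(f):
    """Total escape rate at force f (sum of the two Kramers channels)."""
    return W0 * (math.exp(-EA + f * XA) + math.exp(-EB + f * XB))

def survival(f, r):
    """p(f): probability of still being bound when the force reaches f."""
    integral = (W0 / r) * (math.exp(-EA) * (math.exp(f * XA) - 1.0) / XA
                           + math.exp(-EB) * (math.exp(f * XB) - 1.0) / XB)
    return math.exp(-integral)

def typical_force(r, fmax=40.0, n=40000):
    """argmax of P(f) = rate(f)/r * p(f) over a uniform force grid."""
    fs = [fmax * i / n for i in range(1, n + 1)]
    return max(fs, key=lambda f: rate(f) / r * survival(f, r))

# The typical force increases with r; at high rates escape goes over b,
# so consecutive decades of r are spaced by ln(10)/xb, as in Fig. 3.
for r in (1e2, 1e6, 1e12):
    print(r, typical_force(r))
```

The maximum of P(f) sits where the instantaneous escape rate matches r x_b, which is how the logarithmic spacing between decades of r arises.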
Second case: harpoon - Topology similar to the previous one but with x_a < 0 (Fig. 1c). The main feature here is that as the pulling force increases, the probability to escape over a decreases. Therefore the system gets "stuck" in route β. If the barrier E_b is infinite (left side of Fig. 1c), there is a finite probability p_∞ = exp(−ω_0 e^{−E_a}/(r|x_a|)) that unbinding never occurs. For a finite but high barrier E_b, pulling eventually results in unbinding but at high rupture forces (see Fig. 4). The topology thus allows here to form "easily" (i.e. over barrier a) a "harpoon" sticker that can resist strong pulling. Correspondingly the mean unbinding time increases first with pulling rate (a phenomenology connected to the negative resistance analyzed in Ref. [15]), before decreasing for larger values when activated escape over b dominates. Note that the probability distribution P(f) now consists of two separate ensembles, which coexist over a narrow region of pulling rates. This is in contrast with Figure 3 where there is a continuous evolution of a single cloud.

FIG. 5. "Selective harpoon" from the combo topology (Fig. 1d): same quantities as in Figure 2, for E_a = 20, x_a = 2, E_b' = 10, x_b' = 0.5, E_B = 5, x_B = 1.5, and E_b = 27, x_b = 2.5. At low pulling rates the spontaneous path α is used. Upon increase of r, larger forces are employed and the minimum B becomes favorable as compared to 0. As E_b' is not too large, equilibration of population then empties 0 into B, so that escape eventually occurs from B over b, resulting in a higher straight line of slope ∝ 1/(x_b − x_B). At even higher pulling rates, because x_a > x_b', the escape over a becomes faster than this equilibration, and therefore path α is used again. Barrier a controls the behavior at low and high rates, but in an intermediate window, a stronger bonding is provided by barrier b. The typical (dashed line) or average unbinding force is non-monotonic.
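The trapping probability p_∞ = exp(−ω_0 e^{−E_a}/(r|x_a|)) quoted for the harpoon case follows from integrating the single-channel version of Eq. (1) with x_a < 0 and E_b = ∞. The sketch below (parameters from the Fig. 4 caption; the force cutoff and quadrature grid are arbitrary numerical choices) checks the closed form against a direct quadrature of the decaying escape rate:

```python
import math

# "Harpoon" limit (Fig. 1c with E_b = ∞): pulling with xa < 0 suppresses
# the only escape channel, so a finite fraction never unbinds:
#   p_inf = exp( - ω0 e^{-Ea} / (r |xa|) ).
# Parameters as in the Fig. 4 caption: Ea = 20, xa = -2, ω0 = 1e8 /s.

W0, EA, XA = 1e8, 20.0, -2.0

def p_inf_exact(r):
    return math.exp(-W0 * math.exp(-EA) / (r * abs(XA)))

def p_inf_numeric(r, fmax=30.0, n=300000):
    """Integrate dp/df = -(ω0/r) e^{-Ea + f xa} p with the trapezoid rule."""
    df = fmax / n
    integral = 0.0
    prev = math.exp(-EA)            # integrand at f = 0
    for i in range(1, n + 1):
        cur = math.exp(-EA + (i * df) * XA)
        integral += 0.5 * (prev + cur) * df
        prev = cur
    return math.exp(-(W0 / r) * integral)

# A fast pull (large r) traps almost everything in the harpoon state;
# a slow pull leaves time for spontaneous escape over barrier a.
for r in (0.05, 0.5, 5.0):
    print(r, p_inf_exact(r), p_inf_numeric(r))
```

Since p_∞ → 1 as r grows, faster pulling makes the bond harder to break, which is the negative-resistance phenomenology mentioned above.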
Third case: combo - The alternative route consists of two barriers and a local minimum B (Fig. 1d), and we focus on the case where E_b' is smaller than the two others. Thanks to the increased complexity and number of parameters, in this case many scenarios can occur, covering features already unveiled in Figures 2 to 4 (e.g. switch and harpoon). More intricate pictures can also show up, as depicted in Figure 5. An explanation of this example is given in the caption, illuminating how for low or high pulling rates barrier a controls the behavior, whereas for intermediate values, the secondary and stronger barrier b limits unbinding. Two features are striking. First, the unbinding force (typical or average) is no more monotonic. Second, branch β results in a strengthening of the binding complex for a given window of pulling rates r (selective harpoon).

FIG. 6(c): combo two-path geometry (Fig. 1d), with E_a = 20, x_a = 2, E_b' = 18, x_b' = 2.5, E_B = 15, x_B = 3, and E_b = 27, x_b = 3.5. In all cases the straight part for weak r corresponds to escape over a from 0. The second steeper slope corresponds to escape over a' from 0 in case (a), over a from A in case (b), over b from B in case (c). Neither the topology, nor the location of the probed segment of the energy landscape can be asserted from such data sets.

Discussion - With the three simple examples above, we have clearly enlarged the number of behaviors that one may obtain from a classical dynamic force spectroscopy method (see Figs. 3 to 5). Conversely, we also want to stress the second point (ii) mentioned in the introduction: simple patterns (e.g. the succession of two lines of increasing slopes) can be the outcome of many diverse landscapes. For example, Figure 6 displays force-rate curves similar to that of Figure 2, but that correspond to sensibly different landscapes. Not only are the typical and average unbinding forces very similar in the three cases, but so are the probability distributions for most values of r.
Only close to the cross-over between the two straight lines can slight differences be detected. To distinguish more selectively between possible landscapes, it may be necessary to use temporal sequences other than the simple f = rt, e.g. to reveal equilibration processes between local minima. Finally, we would like to emphasize that the harpoon geometries proposed here constitute a very obvious paradigm for efficient stickers. Attachment of the sticker can proceed through route α with a possibly not too high barrier E_a. The harpoon configuration then allows one to benefit from the much stronger barrier b for a given window of pulling forces, making the sticker more efficient under these conditions. This "hook" design is obviously a favorable strategy for adhesion complexes, whose function is to maintain adhesion under the action of well-defined tearing stresses. It would be surprising if advantage were not taken of this by some biological systems.

PACS numbers: 82.37.-j, 87.15.-v, 82.20.Kh, 33.15.Fm

FIG. 1. Sketch of the topology of the main valley of the energy landscape for a few examples. 0 denotes the fundamental bound state, A and B are local minima, and a, a′, b and b′ are passes to overcome. To the right (increasing values of x) of the last passes is the continuum that describes unbound states. (a) classical single-path scheme. (b,c,d) unbinding can occur through two alternative routes α and β.

FIG. 3. Switch geometry (Fig. 1b): plot of the same quantities as in Figure 2, for E_a = 20, x_a = 0.5, and E_b = 30, x_b = 2. At low pulling rates unbinding is controlled by escapes over a, whereas for large values of r it occurs mostly over b: the slope of the unbinding force (average or typical) decreases from 1/x_a to 1/x_b.

FIG. 4. "Harpoon" geometry (Fig. 1c): plot of the same quantities as in Figure 2, for E_a = 20, x_a = -2, and E_b = 40, x_b = 2.
Pulling here impedes unbinding through the "spontaneous" route α, so that as soon as the rate is strong enough for pulling to affect unbinding, the escape is controlled by the larger barrier b, resulting in an upward jump of the typical unbinding force and time. Inset: the average unbinding time is here non-monotonic.

FIG. 6. Similar curves obtained from significantly different energy landscapes. (a) classical single-path of Figure 1a, data of Figure 2. (b) classical single-path of Figure 1a, with E_a′ = 11, x_a′ = 1, E_A = 8, x_A = 1.5, and E_a = 20, x_a = 2.
[]
[ "An acoustically-driven biochip -Impact of flow on the cell-association of targeted drug carriers Graphical contents entry", "An acoustically-driven biochip -Impact of flow on the cell-association of targeted drug carriers Graphical contents entry" ]
[ "Christian Fillafer \nDepartment of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria\n", "Gerda Ratzinger \nDepartment of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria\n", "Jürgen Neumann \nExperimentalphysik I -Biological Physics Group\nUniversity of Augsburg\n86135AugsburgGermany\n", "Zeno Guttenberg \nOlympus Life Science Research Europa GmbH\n81377MunichGermany\n", "Silke Dissauer \nDepartment of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria\n", "Irene K Lichtscheidl \nFaculty of Life Sciences\nCell Imaging and Ultrastructure Research\nUniversity of Vienna\nA-1090ViennaAustria\n", "Michael Wirth \nDepartment of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria\n", "Franz Gabor [email protected] \nDepartment of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria\n", "Matthias F Schneider [email protected] \nExperimentalphysik I -Biological Physics Group\nUniversity of Augsburg\n86135AugsburgGermany\n" ]
[ "Department of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria", "Department of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria", "Experimentalphysik I -Biological Physics Group\nUniversity of Augsburg\n86135AugsburgGermany", "Olympus Life Science Research Europa GmbH\n81377MunichGermany", "Department of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria", "Faculty of Life Sciences\nCell Imaging and Ultrastructure Research\nUniversity of Vienna\nA-1090ViennaAustria", "Department of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria", "Department of Pharmaceutical Technology and Biopharmaceutics\nFaculty of Life Sciences\nUniversity of Vienna\nA-1090ViennaAustria", "Experimentalphysik I -Biological Physics Group\nUniversity of Augsburg\n86135AugsburgGermany" ]
[]
mimic a wide range of physiological flow conditions on a thumbnail-format cell-chip. This acoustically-driven microfluidic system was used to study the interaction characteristics of protein-coated particles with cells. Poly(D,L-lactide-co-glycolide) (PLGA) microparticles (2.86 ± 0.95 µm) were conjugated with wheat germ agglutinin (WGA-MP, cytoadhesive protein) or bovine serum albumin (BSA-MP, nonspecific protein) and their binding to epithelial cell monolayers was investigated under stationary and flow conditions. While mean numbers of 1500 ± 307 mm -2 WGA-MP and 94 ± 64 mm -2 BSA-MP respectively were detected to be cell-bound in the stationary setup, incubation at increasing flow velocities increasingly antagonized the attachment of both types of surface-modified particles. However, while binding of BSA-MP was totally inhibited by flow, grafting with WGA resulted in a pronounced anchoring effect. This was indicated by a mean number of 747 ± 241 mm -2 and 104 ± 44 mm -2 attached particles at shear rates of 0.2 s -1 and 1 s -1 respectively. Due to the compactness of the fluidic chip which favours parallelization, this setup represents a highly promising approach towards a screening platform for the performance of drug delivery vehicles under physiological flow conditions. In this regard, the flow-chip is expected to provide substantial information for the successful design and development of targeted microand nanoparticulate drug carrier systems.
10.1039/b906006e
[ "https://arxiv.org/pdf/1107.5206v1.pdf" ]
18,875,198
1107.5206
25c1179f3e1ca5375b8d72d4795726f80902e94c
An acoustically-driven biochip - Impact of flow on the cell-association of targeted drug carriers

Graphical contents entry

The accumulation of targeted drug carriers in the diseased tissue is substantially affected by the locally acting hydrodynamic drag forces. A thumbnail-sized microfluidic chip incorporating a surface acoustic wave pump can be used to study the adhesion of particles to cells under physiological flow conditions.

Summary

The interaction of targeted drug carriers with epithelial and endothelial barriers in vivo is largely determined by the dynamics of the body fluids. To simulate these conditions in binding assays, a fully biocompatible in vitro model was developed which can accurately mimic a wide range of physiological flow conditions on a thumbnail-format cell-chip.

Introduction

Physiological and pathological processes such as the site-specific adhesion of platelets, leukocytes or metastasizing cancer cells underlie sophisticated mechanisms in order to function efficiently in the presence of hydrodynamic flow.
Substantial knowledge about these processes has been generated by simulating physiological shear conditions with in vitro fluidic systems such as the parallel plate flow chamber (PPFC) and the radial flow detachment assay (RFDA). [1][2][3][4] Although primarily aimed at understanding physiology, these fundamental studies also bear essential implications for the development of targeted colloidal drug carriers. [1,3,5] Presently, the target effect of site-specific drug delivery systems is by default determined with in vitro cell binding assays under stationary conditions. However, regarding the extent and specificity of particle binding, recent reports have identified clear discrepancies between the results obtained from stationary and more realistic, dynamic models of the in vivo environment. [6][7][8] In particular, the presence of substantial hydrodynamic drag forces upon application in vivo is expected to explicitly affect the deposition characteristics of ligand-coated particles. [7,9,10] The peristaltic motion in the gastrointestinal tract, for example, leads to streaming velocities of ~85 µm s⁻¹ in the jejunum and ~55 µm s⁻¹ in the ileum. [11] The variation of flow conditions in the circulatory system is even more pronounced, as illustrated by effective shear rates ranging from 1 s⁻¹ in wide vessels to 10⁵ s⁻¹ in small arteries. [12] To attain preferential binding of the carrier to the diseased tissue in this environment, the size and ligand coating density of the colloids have to be adjusted according to the flow conditions as well as the expected receptor density and affinity at the target tissue. [1,2,5] In order to be able to practically optimize these parameters of potential drug delivery vehicles, parallelisable in vitro bioassays have to be developed which offer the possibility of controllable flow generation.
Here, increased experimental throughput is highly necessary in order to cope with the extensive number of samples which have to be processed to generate sufficient data sets. Although well suited for specific questions, the PPFC and RFDA are of limited use for such applications. When multiple experiments are demanded at high reproducibility, the complexity of these methods' tubing and chambers becomes cumbersome to handle. Microfluidic pumping techniques might allow for approaching these shortcomings successfully. The most common methods to generate flow in miniaturised systems are thermophoresis (via Marangoni forces), electrochemical reactions, surface patterning methods, electro-osmosis, induced charge electro-osmosis as well as several mechanical pumping strategies. [13,14] However, these techniques' suitability is limited for systems which are required to comply with cell growth and cell viability. For such applications a micropumping technology based on acoustic streaming might represent a highly promising alternative. This technique generates locally defined surface acoustic waves (SAWs) which, when coupled into liquid, result in a defined pressure gradient that can be utilized for streaming (Figure 1A). In the work presented, the applicability of this technology to controllably produce fluid flow in a miniaturized, pharmaceutically relevant in vitro bioassay was tested. In particular, the effect of shearing on the association of protein-decorated particles with epithelial cell monolayers was investigated. Biocompatible poly(D,L-lactide-co-glycolide) (PLGA) microparticles (MP) conjugated with fluorescence-labelled wheat germ agglutinin (WGA) and bovine serum albumin (BSA) served as representative targeted and non-targeted model drug carrier systems. This model system was chosen, since studies have shown that decoration with WGA mediates binding to Caco-2 cells under stationary conditions.
[15,16] The extent of a targeting effect in the presence of hydrodynamic drag was investigated using the micropumping SAW-chip (Figure 1B) to simulate physiological flow conditions.

Results and discussion

The acoustically-driven microfluidic system

Similar to conventional flow assays such as the PPFC and RFDA, the acoustically-driven flow chip is based on a fluidic channel which is directly accessible during the experiment via microscopy. However, in contrast to the previously mentioned assays that are driven by external mechanical pumps, the developed microfluidic chip uses a planar, non-invasive pumping principle for flow generation. This technique realizes the contamination-free pumping of liquid volumes as small as a few microliters by surface acoustic streaming, which can generate shear rates between 0.01 s⁻¹ and 1000 s⁻¹ depending on the system's geometry. As a consequence of the high-frequency-signal-based generation of the SAW, the pumping performance is continuously actuated and varied via the parameters of a conventional signal generator (see Video S1 of the Supporting Information). [17] By channelizing the streamed liquid into 3D-microchannels, which are readily fabricated in almost arbitrary geometries by elastomeric molding, a flow chamber can be created. In this study, a rather simple 44 mm × 4 mm × 3 mm (length × width × height) racetrack-format poly(dimethylsiloxane) (PDMS) cast was used (Figure 1B). The channel was dimensioned in such a manner that sufficient medium could be included for uncomplicated cell culturing of epithelial Caco-2 monolayers. Thus, the usually necessary transfer of a cell-covered substrate to the flow chamber or alignment of an elastomeric cast on the cultured cells can be avoided. However, the pumping technique is not limited to rectangular casts of this size, but can be easily adapted to drive flow in smaller structures with constrictions or bifurcations.
[17] Channels including the latter geometries could be exceptionally insightful tools for a realistic simulation of the complex vessel-flow in the circulatory system. [9] In this regard, it is additionally helpful that not only continuous flow modes can be accurately mimicked on the SAW-chip, but that precisely defined pulsed pumping is also possible. While being hardly achievable with mechanical pumps, this streaming mode is controllably induced by amplitude modulation of the high frequency signal (see Video S2 of the Supporting Information). Thereby, the effects of pulsating flow combined with defined shear rates could be collectively investigated regarding their impact on distribution and adhesion processes in the branched circulatory system. [9] Besides its compliance with microfluidic channels, which allows for reducing the necessary amount of reagents to a minimum, the chip-based setup additionally bears the advantage of omitting tubing and connectors which are otherwise necessary to pipeline liquid within the system. Consequently, the dead volume associated with mechanical pumps is avoidable and the entire device can be downsized notably (Figure 1B). This feature is essential, as it permits parallelization of several SAW-chips in one platform, thus increasing the number of samples which can be processed simultaneously. Even a setup with multiple SAW-pumps and channels in a microplate format is realizable, entailing the ability to use microplate readers for analysis. Such an extension to more sensitive analytical methods could be a powerful tool to gain information on adhesion phenomena involving nanoparticles or proteins, since analyses based on microscopic imaging reach their limits in this size regime.
Cell-binding of surface-modified microparticles under stationary conditions

To establish reference values for the experiments involving fluid flow, the extent of binding of BSA- and WGA-MP to Caco-2 monolayers was primarily determined in a stationary setup (flow velocity_max = 0 µm s⁻¹). For the used microparticles with a mean size of 2.86 ± 0.95 µm and a density of ~1.28 g cm⁻³, a sedimentation-controlled particle deposition is expected. [18] In the equilibrium state between frictional force and gravitation, settling of the colloids occurs with a constant z-velocity of 1.4 µm s⁻¹. Consequently, 83% of the homogeneously distributed particles are assumed to deposit on the surface within 30 min, leading to a maximum coverage of 3500 mm⁻². In practice, however, only 94 ± 64 mm⁻² BSA-MP were detected after stationary loading (30 min), washing and stationary chase-incubation (30 min) (Figure 2, stars). The rather low number of cell-associated particles, which corresponds to 3% of the theoretical load, can be explained by the reportedly low affinity of albumin-coated PLGA particles to Caco-2 cells. [6,15] Due to the lack of interactive strength, the washing steps resulted in the detachment of all except a few non-specifically bound particles. In contrast, conjugation with f-WGA led to a clearly enhanced adhesion of colloids to the monolayer. This is in line with previous studies which have identified WGA as an agent for enhancing adhesion to Caco-2 cells. [19,20] Following incubation under stationary conditions, a mean number of 1500 ± 307 mm⁻² cell-associated particles, which corresponds to 43% of the maximum load, were detected. This 16-fold increased interaction as compared to BSA-MP is explicitly illustrated in Figure 3 and can be attributed to specific binding of the lectin to membrane-associated N-acetyl-D-glucosamine and N-acetyl-neuraminic-acid residues. [15,21] WGA-MP which had not bound to the cell membrane were removed in the course of the washing steps.
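The quoted settling velocity (~1.4 µm s⁻¹) and the ~83% deposition within 30 min can be cross-checked with Stokes' law. The sketch below does this in Python, assuming a water-like medium (density 1000 kg m⁻³, viscosity 1 mPa·s — assumptions not stated in the text); it reproduces the right order of magnitude, with the exact figures depending on the medium properties the authors used.

```python
# Sketch: Stokes settling of the PLGA microparticles described in the text.
# Medium properties (rho_f, mu) are assumed water-like; particle values are
# taken from the text.
g = 9.81            # gravitational acceleration, m/s^2
d = 2.86e-6         # particle diameter, m
rho_p = 1280.0      # particle density, kg/m^3 (~1.28 g/cm^3)
rho_f = 1000.0      # medium density, kg/m^3 (assumption)
mu = 1.0e-3         # dynamic viscosity, Pa*s (assumption)

# Terminal settling velocity from the Stokes drag / buoyant-weight balance
v_sed = (rho_p - rho_f) * g * d**2 / (18.0 * mu)   # m/s

# Fraction of homogeneously suspended particles reaching the bottom of an
# h = 3 mm deep channel within 30 min (plug estimate, no flow)
h = 3.0e-3          # channel height, m
t = 30 * 60         # incubation time, s
fraction = min(1.0, v_sed * t / h)

print(f"v_sed = {v_sed * 1e6:.2f} um/s, deposited fraction = {fraction:.0%}")
```

With these assumed fluid properties the script gives roughly 1.2 µm s⁻¹ and ~75%, close to the 1.4 µm s⁻¹ and 83% quoted above.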
Impact of flow on cell-bound particles

To investigate the effect of different shear rates on cell-associated BSA- and WGA-MP, stationary particle-loaded monolayers were chase-incubated under flow conditions. Regarding reproducibility of the binding assays on the SAW-chip, it should be highlighted that the three data series for each particle type plotted in Figure 2 were obtained from separate experiments. The low deviation between the curves underlines the SAW-pump's ability to controllably and reproducibly generate flow in the 3D-microchannels, which is a crucial prerequisite that determines the practical usability of the system. The standard deviation of each data point is comparable with those in similar studies and is very likely a result of the image-based quantification combined with the inherent characteristics of the cellular substrate. [7] To minimize this, homogeneous receptor-coated surfaces could be used, albeit at the cost of setting aside the complexly constituted cell membrane. Hence, when probing the interaction of targeted drug carriers with biological barriers, a substitution of the cell monolayer by an artificial substrate is not expedient, since it reduces the relevance for comparisons with the in vivo conditions. [3,7,8] When studying isolated adhesion phenomena, however, the incorporation of precisely surface-engineered substrates is certainly realisable as well as preferable in terms of analytical accuracy. [5,12,22]

Cell-binding of surface-modified microparticles under flow conditions

Impact of flow on particle deposition in the channel. To estimate the effect of flow on the deposition rate of particles in the 3D-microchannels, the stationary condition was compared with streaming. On each circulation around the racetrack channel, particles below a critical height z_crit settle out on the cell monolayer. For a flow velocity v_max of 1700 µm s⁻¹ the critical height z_crit is determined to be ~224 µm. Taking into account that the particle density in the channel decreases with the duration of the experiment, ~56% of the microparticles are expected to have deposited on the cell monolayer after 30 min.
This corresponds to a theoretical surface coverage of 2400 mm⁻². In the case of lower flow velocities, the increased critical height z_crit interestingly does not lead to a higher deposition rate, since its effect is widely levelled out by the proportionally lowered particle flux density. Consequently, almost equal particle deposition rates on the cell monolayer were to be expected under all investigated flow conditions.

Binding of BSA- and WGA-MP to a Caco-2 monolayer under flow conditions. In order to investigate the binding of BSA- and WGA-MP from a streaming medium to Caco-2 monolayers, the particle suspension was transferred to the 3D-microchannels and fluid flow was instantly engaged. As illustrated in Figure 4, flow conditions led to a clearly decreased cell binding of the ligand-conjugated colloids. Partially, this effect is system inherent and can be attributed to the previously discussed reduction of particle deposition due to the parabolic flow in the channel. Taking this into account and considering the binding potential of 3% and 43% as determined in the absence of hydrodynamic forces, a maximal surface coverage of ~70 mm⁻² and ~1000 mm⁻² is expected for BSA- and WGA-MP respectively under flow conditions. This estimate seems quite realistic, as illustrated by 77 ± 31 mm⁻² and 747 ± 241 mm⁻² respectively bound particles in the case of the lowest shear force. However, aside from the generally lowered base deposition, the number of cell-bound colloids was additionally diminished by the antagonizing effect of flow on particle-surface bonds. [4,5] For BSA-MP this increased hydrodynamic drag resulted in a nearly complete inhibition of the association with the Caco-2 monolayer. This is explicated by a marginal surface coverage of 34 ± 16 mm⁻² cell-associated particles at the highest shear rate. Apparently, albumin-modified particles which are deposited on the monolayer under flow conditions do not notably interact with the cell membrane.
Similar conclusions have been described in a recent study where a gas-lift-driven Ussing chamber setup was used to simulate the effect of gastrointestinal flow on particle binding. [6] In contrast to albumin-conjugated colloids, WGA-MP exhibited higher binding under all investigated flow conditions. The advantage of lectin conjugation was clearest at low shear rates. However, even at the highest flow velocities studied, at least threefold more particles associated with the cell monolayer as compared with BSA-MP. These observations lead to the conclusion that colloids lacking appropriate surface chemistry do not notably attach to Caco-2 cells at relatively moderate shear rates ranging from 0.2 s⁻¹ to 1 s⁻¹. When considering this combined with the very low cytoadhesion under stationary conditions, the use of BSA-MP as drug carriers is limited. The use of plain colloids for most pharmaceutically relevant applications has to be relativized even more due to the reportedly marginal cytoadhesion of unconjugated PLGA particles. [15] To efficiently complement the advantageous biocompatibility and biodegradability of particles made from PLGA and similar polymers, surface modification with targeting ligands is essential. In this regard, however, based on stationary studies, wheat germ agglutinin has been proposed as a promising candidate to serve this purpose for peroral applications. Due to its carbohydrate-binding properties, which mediate adhesion to enterocytes as well as mucus, a prolonged gastrointestinal residence time of sustained-release drug carriers might be achieved. [6,15,23] Using the SAW-driven microfluidic device, it was possible to show that the interaction between WGA and the Caco-2 cell's glycocalyx is indeed sufficient to mediate the binding of 3 µm-sized particles under physiological flow conditions. This observation further underlines the potential of wheat germ agglutinin for peroral delivery, where low shear rates act on the carrier.
However, in the presence of higher flow velocities, the studied WGA-MP exhibit a propensity to detach from the cell layer. This could be counteracted by using smaller particles, possibly in the nanometer size range, since the effective hydrodynamic drag forces decrease with particle diameter. [1] Moreover, conjugation of particles with alternative targeting moieties could lead to enhanced cytoadhesion under the shear rates encountered in the circulatory system. In this regard, the underlying mechanisms of shear-activated proteinic ligands like the FimH subunit of Type-1 fimbriae of E. coli or von Willebrand factor might lead the way for the engineering of site-specific drug carriers that efficiently adhere under flow conditions. [12,24]

Experimental

Preparation of functionalized microparticles

PLGA microparticles with a mean diameter of 2.86 ± 0.95 µm were prepared by spray drying of a 6.5% (w/v) solution of PLGA in dichloromethane with a Buechi Mini Spray Dryer B-191 (Buechi, Flawil, Switzerland), as previously described. [19] For surface modification, 100 mg of the PLGA microparticles were suspended in 20 mM HEPES/NaOH pH 7.0 (10 mL) and …

Microparticle-cell interaction studies at stationary and flow conditions

For the interaction studies, the BSA-MP and WGA-MP were suspended in isotonic 20 mM HEPES/NaOH pH 7.4 at a concentration of 7.5 × 10⁵ particles per 500 µL. Shortly before the experiment, the cell culture medium was removed from the 3D-microchannels and the monolayers were washed once with isotonic 20 mM HEPES/NaOH pH 7.4 (500 µL). Subsequently, the microparticle suspension was added and the channels were covered with a glass cover slip. In order to grant efficient transmission of the SAWs, 50 µL of water were pipetted on the piezoelectric chip as a coupling fluid before the 3D-microchannel was placed on top of it.
This setup was mounted on a fluorescence microscope and connected to the high frequency generator, which had been configured to supply preset energy inputs corresponding to flow velocities of 0 µm s⁻¹, 300 µm s⁻¹, 800 µm s⁻¹, 1200 µm s⁻¹ and 1700 µm s⁻¹ respectively. Following incubation at either stationary or flow conditions for 30 min, the monolayers were washed three times with isotonic 20 mM HEPES/NaOH pH 7.4 (500 µL) and embedded as described below. For an alternative set of experiments, monolayers loaded for 30 min under stationary conditions were washed and subsequently subjected to stationary or flow chase-incubation. After two additional washing steps with isotonic 20 mM HEPES/NaOH (500 µL), the PDMS structure was peeled off these channels as well. The glass cover slips with the adherent monolayers were embedded in a drop of FluorSave™ (Calbiochem®, USA and Canada) and were stored at 4°C for 12 hours prior to further analyses.

Fluorescence microscopy and software-based analysis

The embedded Caco-2 monolayers were analysed on a Nikon Eclipse 50i microscope (Nikon Corp., Japan) equipped with an EXFO X-Cite 120 fluorescence illumination system. A random series of non-overlapping fluorescence microscopic images (n = 6) was acquired over the channel area, whereby care was taken to analyze monolayer parts located in the centre of the 3D-microchannels. To grant comparability, the settings of the fluorescence lamp and the exposure time were left constant during the data acquisition process. Finally, the number of cell-associated microparticles in every image was determined with the threshold-dependent automated particle analysis of ImageJ (NIH, USA). The number of cell-associated particles represents the mean value which was calculated from the images acquired in independent channels (n = 2).
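The threshold-based particle counting performed here with ImageJ can be mimicked in a few lines of NumPy/SciPy. The sketch below is an illustrative re-implementation on a synthetic fluorescence image; the threshold, minimum-area filter, and test image are invented for illustration and are not the authors' settings.

```python
# Sketch: threshold + connected-component particle counting on a synthetic
# image, analogous to ImageJ's automated particle analysis. All numbers
# (noise level, threshold, minimum area) are illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, size=(200, 200))   # background noise

# Paint four bright 5x5 "particles" at well-separated positions
for y, x in [(30, 40), (80, 120), (150, 60), (170, 170)]:
    img[y - 2:y + 3, x - 2:x + 3] += 100.0

mask = img > 50.0                          # intensity threshold
labels, n_particles = ndimage.label(mask)  # connected-component labeling

# Per-label pixel areas; discard specks below a minimum area,
# as a size filter would in ImageJ
areas = ndimage.sum(mask, labels, index=range(1, n_particles + 1))
n_kept = int(np.sum(areas >= 4))

print(f"components: {n_particles}, kept after size filter: {n_kept}")
```

Dividing the kept count by the imaged area (in mm²) would then yield the surface coverage in mm⁻² as reported above.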
Antibody staining

After incubation at stationary and flow conditions respectively, the monolayers were fixed …

Conclusion

Stationary binding assays have become the current standard in preclinical biopharmaceutical testing due to the rather simple handling and the lack of alternative in vitro models. To approach this shortcoming, an acoustically-driven thumbnail-sized microfluidic chip was developed that can controllably and reproducibly generate flow in 3D-microchannels which are compatible with cell culture. This SAW-chip was used to investigate the binding properties of albumin- and wheat germ agglutinin-conjugated microparticles to an epithelial cell layer under flow conditions. As illustrated in the present work, the results obtained from binding assays under stationary conditions notably differ from those obtained in systems simulating the dynamic in vivo environment. It was found that non-targeted microparticles possess a very low propensity to bind and are detached in the presence of flow, while conjugation with WGA led to a distinctly improved adhesion of particles to the cell layer at shear rates ranging from 0.2 s⁻¹ to 1 s⁻¹. In conclusion, these results clearly underline the importance of surface functionalization for the design of nano- and microparticulate drug delivery vehicles. Elaborate models of the in vivo conditions are necessary in order to realistically predict and optimize the performance of such systems prior to animal studies. In this context, the developed microfluidic SAW-chip is expected to provide a highly versatile platform for the investigation of flow-associated effects on particle-cell adhesion processes. Fundamental information distilled from studies using this technology will deeply benefit the engineering of artificial drug carrier systems which perform with high efficiency in the dynamically complex physiological environment.

Figure 1.
SAWs are generated on a piezoelectric chip and lead to fluid streaming with a parabolic flow profile when coupled into a liquid-filled channel (A). Dimensions of the SAW-chip with a positioned 3D-microchannel (B). Cross section of the flow velocity profile generated by acoustic streaming in a 3D-microchannel of 4 mm x 3 mm (width x height) (C). Horizontal (D, solid line) and vertical cut (D, dashed line) through (C).

Figure 2. Mean number of WGA-MP (squares) and BSA-MP (stars) associated with Caco-2 monolayers after loading for 30 min at stationary conditions, washing and chase-incubation under stationary or flow conditions. Each set of data points was obtained from independent experimental series.

Figure 3. Cell-associated WGA-MP (A) and BSA-MP (C) respectively after loading for 30 min, washing and chase-incubation under stationary conditions (flow velocity max = 0 µm s -1 ). Chase-incubation performed under flow conditions (flow velocity max = 1700 µm s -1 ; B and D respectively). Microparticles (green) and tight junction associated protein ZO-1 (red). Bar represents 20 µm.

Stationary particle-loaded Caco-2 monolayers were chase-incubated under flow conditions. At this, acoustic streaming induces laminar fluid flow in the channel and thereby generates hydrodynamic forces acting on the attached microparticles. If the adhesive bonds break and do not re-establish elsewhere, the fluid flow transports the colloids to the coupling region of the SAW, where the considerable lift forces redisperse them in the channel cross-section. Consequently, most of these particles are not available for reattachment to the cells. Using this setup, a shear rate dependent reduction of the number of cell-bound particles was monitored for both types of surface-modified colloids (Figure 2).
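The quoted shear-rate range of roughly 0.2–1 s⁻¹ can be recovered from the preset maximal flow velocities with a simple wall-to-centre estimate, γ̇ ≈ v_max/(h/2). Treating the preset velocities as mid-channel maxima and using the 3 mm channel height are assumptions of this back-of-the-envelope sketch; a profile-derived wall shear rate would differ by an O(1) factor:

```python
# Back-of-the-envelope shear-rate estimate gamma ~ v_max / (h/2) for the
# 3 mm high channel; the velocities are the preset maxima quoted in the text.
h = 3e-3                                            # channel height [m] (assumed)
for v_max in (300e-6, 800e-6, 1200e-6, 1700e-6):    # flow velocities [m/s]
    gamma = v_max / (h / 2)                         # shear-rate estimate [1/s]
    print(f"v_max = {v_max * 1e6:6.0f} um/s  ->  gamma ~ {gamma:.2f} 1/s")
```

The estimate spans about 0.2 s⁻¹ to 1.1 s⁻¹, consistent with the range stated in the text.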
While low shear rates (flow velocity max = 300 µm s -1 and 800 µm s -1 ) led to the detachment of ~30% and 50% respectively of the initially cell-associated BSA-MP, incubation at higher shear rates (flow velocity max = 1200 µm s -1 and 1700 µm s -1 ) almost completely removed the albumin-conjugated colloids from the cell-surface (Figure 3D). Obviously, the few BSA-MP, which were associated with the Caco-2 monolayer after stationary loading and washing, are characterized by low adhesivity which is not sufficient to anchor the particles in the presence of shear forces. In contrast, conjugation of microparticles with carbohydrate-binding protein not only led to higher cell-binding under stationary conditions but also enhanced retention on the Caco-2 monolayer in the presence of flow(Figure 2, squares). This is illustrated by a mean number of 1057 ± 351 mm -2 and 400 ± 168 mm -2 monolayer-associated colloids at the lowest and highest flow velocity studied. Consequently, as compared with BSA-MP, WGA-MP were characterized by at least 17-fold increased retention over the whole range of flow velocities investigated. Interestingly, in case of the highest shear rates, the effect of the lectin-corona was even more pronounced as exemplified by 39-fold and 44-fold improved adhesion over albumin-conjugated colloids ( stationary condition was compared with streaming. The experimental microfluidic setup is characterized by Reynolds numbers of Re = 1 and Re = 6 for the lowest and highest velocity respectively which indicates laminar flow. As determined by computational fluid dynamics simulations the flow profile in the channel is parabolic (Figure 1C, D). Considering these conditions and that the images used for quantification of the adherent particles were taken in a small central region of the channel, the lateral y-component of the velocity was assumed to be widely independent from the vertical z-component. 
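The Reynolds numbers Re = 1 and Re = 6 quoted above follow from the channel geometry once fluid properties are fixed; the water-like density and viscosity and the use of the hydraulic diameter as length scale are assumptions of this sketch:

```python
# Re = rho * v * D_h / mu for the 4 mm x 3 mm rectangular channel,
# assuming water-like density and viscosity.
rho, mu = 1000.0, 1.0e-3          # [kg/m^3], [Pa s] (assumed)
w, h = 4e-3, 3e-3                 # channel width and height [m]
D_h = 2 * w * h / (w + h)         # hydraulic diameter of a rectangular duct

for v in (300e-6, 1700e-6):       # lowest and highest flow velocity [m/s]
    print(f"v = {v * 1e6:5.0f} um/s  ->  Re ~ {rho * v * D_h / mu:.1f}")
```

This gives Re ≈ 1.0 and Re ≈ 5.8, i.e. the Re = 1 and Re = 6 stated in the text, well inside the laminar regime.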
Therefore, a particle's trajectory in the flowing fluid contains components in the direction of flow (x) as well as in the vertical direction (z), with the former one depending on the latter. At the start of the experiment, the 3D-microchannels are filled with buffer containing 7.5 x 10 5 homogenously distributed microparticles which sediment with a constant z-velocity. Upon engaging the flow, these particles are travelling for one channel length until again reaching the coupling zone of the SAW-pump where they are homogenously redistributed over the cross-section of the channel.

Figure 4. Mean number of WGA-MP (squares) and BSA-MP (stars) associated with Caco-2 monolayers upon incubation under stationary and flow conditions. Monolayers pre-loaded with microparticles for 30 min under stationary conditions (filled symbols). Direct incubation of microparticles with monolayers under stationary and flow conditions (open symbols).

Materials

Sylgard ® 184 Silicone Elastomer Kit was purchased from Baltres (Baden, Austria). Resomer ® RG502H (PLGA, lactide/glycolide ratio 50:50, inherent viscosity 0.22 dL g -1 , acid number 9 mg KOH g -1 ) was obtained from Boehringer Ingelheim (Ingelheim, Germany). Fluorescein-labeled wheat germ agglutinin (molar ratio fluorescein/protein (F/P) = 2.9) from Triticum vulgare was bought from Vector laboratories (Burlingame, USA). FITC-labeled bovine serum albumin (F/P = 12), N-(3-Dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDAC), N-hydroxysuccinimide (NHS), and Pluronic ® F-68 were purchased from Sigma Aldrich (Vienna, Austria). All other chemicals used were of analytical purity.

The microparticles were activated with solutions of EDAC (360 mg in 1.5 mL) and NHS (15 mg in 1 mL) in the same buffer for 2 h under end-over-end rotation at room temperature. In order to remove excess coupling reagent, the suspension was diluted threefold with 20 mM HEPES/NaOH pH 7.4 and centrifuged (10 min, 2500 rpm, 4°C).
The resulting pellet was resuspended in 20 mM HEPES/NaOH pH 7.4 (10 mL). Upon addition of F-WGA (1.00 mg) and F-BSA (1.83 mg) respectively, end-over-end incubation was performed overnight at room temperature. Remaining active ester intermediates were saturated by addition of glycine (450 mg) in 20 mM HEPES/NaOH pH 7.4 (6 mL) and further incubation for 30 min. Subsequently, the microparticles were washed three times by centrifugation (10 min, 3200 rpm, 4°C) and resuspension in 20 mM HEPES/NaOH pH 7.4 (30 mL). After the last centrifugation step, the particles were suspended in a solution of 0.1% Pluronic F-68 in isotonic 20 mM HEPES/NaOH pH 7.4 (10 mL).

Fabrication of sterile 3D-microchannels

Base (10 g) and curing agent (1 g) of the silicone elastomer kit were mixed in a test tube, vigorously stirred and evacuated for 30 min to remove gas bubbles. After pouring the liquid prepolymer into pre-structured aluminium molds and hardening overnight at 70°C, the PDMS replicas were peeled from the master and placed on 24 x 24 mm (length x width) glass cover slips. Following assembly, the 3D-microchannels dimensioned 44 x 4 x 3 mm (length x width x height) were transferred to glass Petri dishes and autoclaved for 50 min at 121°C (1 bar).

Cell Culture in 3D-microchannels

The Caco-2 cell line was purchased from the German collection of microorganisms and cell culture (DSMZ, Braunschweig, Germany). Tissue culture reagents were obtained from Sigma (St. Louis, USA) and Gibco Life Technologies Ltd. (Invitrogen Corp., Carlsbad, USA). Cells were cultivated in RPMI 1640 cell-culture medium containing 10% fetal calf serum, 4 mM L-glutamine and 150 µg mL -1 gentamycine in a humidified 5% CO 2 /95% air atmosphere at 37°C and subcultured with Tryple Select from Gibco (Lofer, Austria).
For the microfluidic experiments, each sterile 3D-microchannel was filled with 500 µL of Caco-2 single cell suspension (1.36 x 10 5 cells mL -1 ) and cultivated under standard cell culture conditions until a confluent cell monolayer had formed.

Preparation of SAW-chip

LiNbO 3 slides (128°-cut x-propagation) dimensioned 15 x 15 x 0.4 mm (length x width x height) were used as piezoelectric substrates. Interdigital metal structures (IDTs) were structured on these slides by standard lithographic processes in order to predominately generate (Rayleigh-mode) SAWs. [25] The used IDTs had 42 fingerpairs, an aperture of 600 µm and a periodicity of 26 µm, resulting in a resonance frequency of about 153 MHz. To enhance the resistance against mechanical cleaning procedures, the fingers were additionally coated with a radio frequency (RF)-sputtered SiO 2 protective coating.

The monolayers were fixed with a 2% solution of paraformaldehyde for 15 min at room temperature and were washed with phosphate buffered saline pH 7.4 (PBS; 500 µL). Upon treatment with a 50 mM solution of NH 4 Cl for 15 min and with a 0.1% solution of Triton X-100 for 10 min, the cells were washed again with PBS (500 µL). The tight junction associated protein ZO-1 was stained for 1 h at 37°C with a primary antibody (BD Biosciences, San Jose, USA) diluted 1:100 in a 1% solution of BSA in PBS. Upon washing thrice with a 1% solution of BSA in PBS, the monolayers were incubated with a 1:100 dilution of a secondary Anti-Mouse Immunoglobulin-RPE antibody (Dako Denmark A/S, Glostrup, Denmark) for 30 min at 37°C. Finally, the cell layer was washed three times with a 1% solution of BSA in PBS and mounted in a drop of FluorSave TM .

Acknowledgements

The authors thank U. Länger and Y.X. Wang for help with preparation of the microspheres, Rohde&Schwarz GmbH for assistance with the high frequency generator and K. Sritharan, S.

References

[1] V. R. Shinde Patil, C. J. Campbell, Y. H. Yun, S. M. Slack, D. J. Goetz, Biophys. J., 2001, 80, 1733-1743.
[2] J. B. Dickerson, J. E. Blackwell, J. J. Ou, V. R. Shinde Patil, D. J. Goetz, Biotechnol. Bioeng., 2001, 73, 500-509.
[3] H. S. Sakhalkar, M. K. Dalal, A. K. Salem, R. Ansari, J. Fu, M. F. Kiani, D. T. Kurjiaka, J. Hanes, K. M. Shakesheff, D. J. Goetz, Proc. Natl. Acad. Sci. U. S. A., 2003, 100, 15895-15900.
[4] C. Cozens-Roberts, J. A. Quinn, D. A. Lauffenberger, Biophys. J., 1990, 58, 107-125.
[5] A. O. Eniola, D. A. Hammer, J. Controlled Release, 2003, 87, 15-22.
[6] A. Weissenboeck, E. Bogner, M. Wirth, F. Gabor, Pharm. Res., 2004, 21, 1917-1923.
[7] O. C. Farokhzad, A. Khademhosseini, S. Jon, A. Hermmann, J. Cheng, C. Chin, A. Kiselyuk, B. Teply, G. Eng, R. Langer, Anal. Chem., 2005, 77, 5453-5459.
[8] E. Mennesson, P. Erbacher, M. Kuzak, C. Kieda, P. Midoux, C. Pichon, J. Controlled Release, 2006, 114, 389-397.
[9] A. T. Florence, in Nanoparticles as drug carriers, ed. V. P. Torchilin, Imperial College Press, London, 2006, pp 9.
[10] P. Decuzzi, M. Ferrari, Biomaterials, 2008, 29, 377-384.
[11] J. B. Dressman, G. L. Amidon, C. Reppas, V. P. Shah, Pharm. Res., 1998, 15, 11-22.
[12] S. W. Schneider, S. Nuschele, A. Wixforth, C. Gorzelanny, A. Alexander-Katz, R. R. Netz, M. F. Schneider, Proc. Natl. Acad. Sci. U. S. A., 2007, 104, 7899-7903.
[13] T. M. Squires, S. R. Quake, Rev. Mod. Phys., 2005, 77, 977-1026.
[14] N. Lion, T. C. Rohner, L. Dayon, I. L. Arnaud, E. Damoc, N. Youhnovski, Z. Y. Wu, C. Roussel, J. Josserand, H. Jensen, J. S. Rossier, M. Przybylski, H. H. Girault, Electrophoresis, 2003, 24, 3533-3562.
[15] C. Fillafer, D. S. Friedl, M. Wirth, F. Gabor, Small, 2008, 4, 627-633.
[16] Y. Mo, L. Y. Lim, J. Controlled Release, 2005, 107, 30-42.
[17] M. F. Schneider, Z. Guttenberg, S. W. Schneider, K. Sritharan, V. M. Myles, U. Pamukci, A. Wixforth, ChemPhysChem, 2008, 9, 641-645.
[18] C. Vauthier, C. Schmidt, P. Couvreur, J. Nanoparticle Res., 1999, 1, 411-418.
[19] B. Ertl, F. Heigl, M. Wirth, F. Gabor, J. Drug Targeting, 2000, 8, 173-184.
[20] A. P. Gunning, S. Chambers, C. Pin, A. L. Man, V. J. Morris, C. Nicoletti, FASEB J., 2008, 22, 2331-2339.
[21] M. Monsigny, A. C. Roche, C. Sene, R. Magetdana, F. Delmotte, Eur. J. Biochemistry, 1980, 104, 147-153.
[22] B. Prabhakarpandian, K. Pant, R. C. Scott, C. B. Patillo, D. Irimia, M. F. Kiani, S. Sundaram, Biomed. Microdevices, 2008, 10, 585-595.
[23] F. Gabor, E. Bogner, A. Weissenboeck, M. Wirth, Adv. Drug Delivery Rev., 2004, 56, 459-480.
[24] W. E. Thomas, E. Trintchina, M. Forero, V. Vogel, E. V. Sokurenko, Cell, 2002, 109, 913-923.
[25] T. Frommelt, M. Kostur, M. Wenzel-Schäfer, P. Talkner, P. Hänggi, A. Wixforth, Phys. Rev. Lett., 2008, 100, art. no. 034502.
arXiv:1210.3340 · DOI: 10.1007/jhep11(2012)148 · https://arxiv.org/pdf/1210.3340v2.pdf
Thermodynamical second-order hydrodynamic coefficients

Guy D. Moore and Kiyoumars A. Sohrabi
Physics Department, McGill University, 3600 rue University, Montréal QC H3A 2T8, Canada

3 Dec 2012. Prepared for submission to JHEP.

Abstract: Transport coefficients in non-conformal second-order hydrodynamics can be classified as either dynamical or thermodynamical. We derive Kubo formulae for the thermodynamical coefficients and compute them at leading perturbative order in a theory with general matter content. We also discuss how to approach their evaluation on the lattice.

Introduction

The theory of QCD is weakly coupled at short distances or high temperatures, but strongly coupled at long distances or low temperatures. One of the major goals of both the experimental and theoretical programs in QCD has been to understand how quickly this transition occurs and at what energy. A major purpose of the heavy ion collision program was to see if weak-coupling behavior emerges at available energies. Similarly, lattice studies have investigated how close thermodynamic properties come to their weak-coupling values as a function of the temperature.

Broadly speaking, we can divide properties of thermal QCD into two categories: dynamical and thermodynamical. Most of our information on dynamical properties is from experiment. Experiments show [1] that at available temperatures, QCD displays excellent fluid behavior with remarkably low viscosity [2]. This is very different from weak-coupling behavior [3,4], but roughly consistent with strong-coupling behavior in similar theories which we can solve [5,6]. The story for thermodynamic properties, where most of our information is from the lattice, is more complex.
At T ∼ 150 to 200 MeV, thermodynamic properties such as pressure, baryon susceptibility, and ψ̄ψ show strong temperature dependence and are far from their weak-coupling values [7]. As temperature rises most thermodynamic quantities approach weak-coupling behavior, but at different rates. Quark number susceptibilities come close to weak-coupling behavior already at a few T c [7,8]. Cross-correlations between strange and light quark numbers transition from the expected behavior in a hadron gas to the behavior of a weakly coupled plasma over this same temperature range. The pressure takes rather longer to approach weak coupling behavior [9].

We feel that the more dynamical and thermodynamical quantities we have available, the more complete and nuanced a picture of the strong to weak coupling transition we can obtain. With this in mind, we advocate investigating the so-called second order hydrodynamic coefficients and their coupling dependence. As we will argue, some of these coefficients are thermodynamical and can be computed on the lattice. They also have simple weak-coupling behavior. Indeed, the main goal of this paper will be to compute their leading-order weak-coupling behavior in a general theory. We will also discuss what would be involved in evaluating them on the lattice.

In the next section we will review second-order hydrodynamics and explain how some of the coefficients of this theory are thermodynamical. Since at least the mid-rapidity regions in high-energy heavy ion collisions deal with QCD at small quark-number chemical potentials, and since the lattice can only deal well with the case where chemical potentials vanish, we will assume vanishing quark-number chemical potentials. We also ignore magnetic fields. However, QCD is far from a conformal theory in the interesting energy regime, so we will not assume conformal symmetry.
In this case there are three independent second-order hydrodynamic coefficients which are thermodynamical in nature [10,12]. In the notation of Romatschke [13,14] these are κ, λ 3 and λ 4 . We compute their values at weak coupling and vanishing masses in Section 3; see particularly Eq. (3.38) and Eq. (3.39). As we remarked, it would be interesting to evaluate these coefficients for QCD on the lattice. This can be done because these coefficients all have Kubo relations directly in terms of finite-temperature, Euclidean correlation functions of the sort which can be evaluated on the lattice (without the need for analytic continuation). This is possible precisely because these transport coefficients are thermodynamical in nature. Explicit expressions for the Kubo relations are found in the next section, see Eq. (2.10) to Eq. (2.12). In Section 4, we present a brief discussion of how these Kubo relations might be applied on the lattice. In particular, we discuss operator normalization and contact terms. Based on this discussion, we think the evaluation of κ should be feasible with existing techniques, at least for pure-glue QCD [15]. The evaluation of λ 3 and λ 4 may be prohibitively difficult since computation of three-point functions are always much harder than two-point functions on lattice. Hydrodynamics Hydrodynamics is a general theoretical framework for describing the behavior of fluids locally near equilibrium. It is organized as an expansion in gradients of the fluid properties. (For a recent review of relativistic hydrodynamics see [16,17]). At lowest (zero) order in gradients, hydrodynamics is determined by equilibrium thermodynamics. The state of the fluid at each point is determined by the values of all conserved charge densities. For QCD, these are the momentum density P µ associated with the stress tensor T µν and the charge densities Q a , a = u, d, s, . . . associated with the conserved 4-currents J µ a . 
Assuming local equilibrium and an equation of state for the pressure P in terms of the energy density and charge densities, P = P (ǫ, n a ), these currents can be determined in terms of the conserved charge densities. In practice one uses a slightly more convenient set of variables; 1 the energy density and flow 4-velocity, P µ = ǫu µ with g µν u µ u ν = −1 (where g µν is the metric tensor, in flat space g µν = η µν = Diag [−1, 1, 1, 1]) and the number densities n a ≡ g µν u µ J ν a . In terms of these the stress tensor and current at lowest order are T µν (ǫ, u, n) = (ǫ+P )u µ u ν + P g µν = ǫu µ u ν + P ∆ µν , (2.1) J µ a (ǫ, u, n) = u µ n a . (2.2) Here ∆ µν ≡ g µν + u µ u ν is a projection operator onto the local spatial directions. When T µν and J µ satisfy these expressions, then stress conservation, ∇ µ T µν and current conservation, ∇ µ J µ a , close and completely determine the fluid dynamics. The relevant quantities at this order -the pressure P and its various derivatives which give the entropy density s, quark number susceptibilities χ ab , the speed of sound c s , and so forth -have been extensively studied on the lattice [7,8]. At first order in gradients there are two independent terms one can add to the righthand side of Eq. (2.1): T µν = RHS of Eq. (2.1) − ησ µν − ζ∆ µν ∇ α u α , (2.3) σ µν ≡ ∆ µα ∆ νβ ∇ α u β + ∇ β u α − 2 3 ∆ αβ ∇ γ u γ . (2.4) Here η, ζ are the shear and bulk viscosities respectively. These new coefficients η, ζ, and the diffusion coefficient which can be added to Eq. (2.2), are all dynamical quantities. If the coefficients and the terms they multiply are nonzero then entropy increases. They can be determined, via Kubo relations, from equilibrium correlation functions of stress tensor or current operators, but the relations involve evaluating these correlation functions at nonzero frequency, which makes a direct evaluation on the lattice impossible and an indirect evaluation at best very challenging [18][19][20]. 
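The projector identities behind the shear tensor can be checked numerically. The sketch below (assuming a fluid at rest, u^µ = (1, 0, 0, 0) in a mostly-plus Minkowski metric, and purely spatial velocity gradients with made-up numerical values) verifies that σ^{µν} built from Eq. (2.4) is traceless and transverse to u:

```python
import numpy as np

# Mostly-plus Minkowski metric; fluid at rest, u^mu = (1,0,0,0) with u.u = -1.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
u_up = np.array([1.0, 0.0, 0.0, 0.0])
u_dn = eta @ u_up                       # u_mu = (-1, 0, 0, 0)

# Delta^{mu nu} = g^{mu nu} + u^mu u^nu (eta is its own inverse in flat space).
Delta_up = eta + np.outer(u_up, u_up)
Delta_dn = eta + np.outer(u_dn, u_dn)

# Purely spatial velocity-gradient matrix du[a, b] = partial_a u_b.
rng = np.random.default_rng(0)
du = np.zeros((4, 4))
du[1:, 1:] = rng.normal(size=(3, 3))

div_u = np.einsum('ab,ab->', np.linalg.inv(eta), du)   # nabla_gamma u^gamma

# sigma^{mu nu} = Delta^{mu a} Delta^{nu b} [d_a u_b + d_b u_a - (2/3) Delta_{ab} div u]
inner = du + du.T - (2.0 / 3.0) * Delta_dn * div_u
sigma = np.einsum('ma,nb,ab->mn', Delta_up, Delta_up, inner)

print(abs(np.einsum('mn,mn->', eta, sigma)))   # trace g_{mu nu} sigma^{mu nu} ~ 0
print(np.max(np.abs(sigma @ u_dn)))            # transversality sigma^{mu nu} u_nu ~ 0
```

Transversality follows from Δ^{µν}u_ν = 0, and tracelessness from the subtracted (2/3)Δ_{αβ}∇·u term; both printed residuals are at machine precision.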
However there is significant progress in determining them from experiment [21]. At second order there are a host of terms which can be added. The situation improves somewhat if we assume that charge densities are small, so J µ a terms can be neglected. If in addition we assume that the theory under consideration is conformally invariant, then there are 5 additional terms which must be included [14]. However, since we are interested in QCD at finite coupling and potentially in making contact with the lattice, we cannot assume conformal invariance. In this case, after reducing the number of terms by applying equations of motion and other interrelations, there are 15 independent terms which appear at second order, which have been enumerated by Romatschke [13]. To write these terms explicitly it is convenient to introduce the vorticity tensor Ω µν , 2Ω µν ≡ ∆ µα ∆ νβ (∇ α u β − ∇ β u α ) ,(2.5) as well as the curvature tensor R µναβ and Ricci tensor R µν = R µ α να and scalar R = R µ µ . And we will write R µ να β ≡ 1 2 R µκσβ ∆ ν κ ∆ α σ + ∆ ν σ ∆ α κ − 2 3 ∆ να ∆ κσ (2.6) and similarly for R µν . That is, the indices enclosed in angle brackets are spaceprojected, symmetrized, and trace-subtracted. Using all of this notation, the possible second-order terms, according to Romatschke [13], are T µν = Eq. (2.3) + ητ π u · ∇σ µν + ∇ · u 3 σ µν +κ R µν − 2u α u β R α µν β +λ 1 σ λ µ σ ν λ + λ 2 σ λ µ Ω ν λ − λ 3 Ω λ µ Ω ν λ +ητ * π ∇ · u 3 σ µν + λ 4 ∇ µ ln s∇ ν ln s + 2κ * u α u β R α µν β +∆ µν −ζτ Π u · ∇∇ · u + ξ 1 σ αβ σ αβ + ξ 2 (∇ · u) 2 +ξ 4 ∇ α⊥ ln s∇ α ⊥ ln s + ξ 3 Ω αβ Ω αβ + ξ 5 R + ξ 6 u α u β R αβ . (2.7) There are several ways to categorize these terms. Some are only relevant in curved space; κ, κ * , ξ 5 and ξ 6 . The others are relevant in flat or curved space. (Even though κ etc. only play a role in curved space, they mix with the other terms when we find Kubo relations in Eqs. (2.10-2.12), so they should generally be considered anyway [14].) 
We can also divide the terms into linear and nonlinear terms. Linear terms affect small fluctuations and can, for instance, influence their dispersion; nonlinear terms are only relevant at second order in small fluctuations about equilibrium and flat space. The linear terms are τ π , κ, κ * , τ Π , ξ 5 , and ξ 6 . The other terms, λ 1...4 , τ * π , ξ 1...4 are nonlinear. We can also group these terms into those which are thermodynamical in nature, and those which are dynamical. We call a term thermodynamical if it can give a nonzero contribution to T µν when the geometry and density matrix are fully timeindependent and the system is therefore in equilibrium. No term involving the shear tensor σ µν is thermodynamical because a system under shear flow is changing with time and is producing entropy. In nonconformal theories, the same is true of bulk flow ∇·u. However, it is completely consistent to have a system which is in equilibrium in a curved (but time-independent) geometry. Similarly, a time-independent but spacevarying g 00 (gravitational potential) makes ∇ µ⊥ s nonzero without any departure from equilibrium. Similarly, it is possible (in a curved geometry) to establish persistent vorticity which is sustained forever. 2 The system will be fully in equilibrium in the presence of this vorticity. Hence, the coefficients κ, κ * , λ 3 , λ 4 , ξ 3 , ξ 4 , ξ 5 , ξ 6 represent thermodynamical quantities. In Ref. [22] we showed how to derive Kubo relations for second-order hydro coefficients. There we did so only for conformal theories, but it is straightforward to do so for nonconformal theories as well. Doing so, we find that the Kubo relations for the thermodynamical coefficients can all be expressed in terms of retarded correlation functions evaluated directly at zero frequency. Up to powers of i, zero-frequency retarded correlators equal zero-frequency Euclidean correlators. 
In fact, we can derive (Kubo) relations between the thermodynamic coefficients and Euclidean correlators by working directly in Euclidean space. In particular, defining the Euclidean n-point function as

$$G_E^{\mu_1\nu_1\ldots\mu_n\nu_n}(p_1,\ldots,p_{n-1},-p_1-\ldots-p_{n-1}) \equiv \int d^4x_1 \ldots d^4x_{n-1}\, e^{-i(p_1\cdot x_1+\ldots+p_{n-1}\cdot x_{n-1})} \times \left.\frac{2^n\,\partial^n \ln Z}{\partial g_{\mu_1\nu_1}(x_1)\ldots\partial g_{\mu_n\nu_n}(0)}\right|_{g_{\mu\nu}=\delta_{\mu\nu}} \qquad (2.8)$$

with

$$Z[g_{\mu\nu}] = \int D\phi\, \exp\{-S_E[\phi,g_{\mu\nu}]\}, \qquad (2.9)$$

we find

$$\kappa = \lim_{k_z\to 0} \frac{\partial^2}{\partial k_z^2}\, G_E^{xy,xy}(k)\Big|_{k^0=0}, \qquad (2.10)$$

$$\lambda_3 = 2\kappa^* - 4 \lim_{p_z,q_z\to 0} \frac{\partial^2}{\partial p_z\,\partial q_z}\, G_E^{xt,yt,xy}(p,q)\Big|_{p^0,q^0=0}, \qquad (2.11)$$

$$\lambda_4 = -2\kappa^* + \kappa - \frac{c_s^4}{2} \lim_{p_x,q_y\to 0} \frac{\partial^2}{\partial p_x\,\partial q_y}\, G_E^{tt,tt,xy}(p,q)\Big|_{p^0,q^0=0}. \qquad (2.12)$$

The remaining transport coefficients, including κ* which appears above, are not independent but are determined in terms of these three via five independent conditions. Two conditions were found by Romatschke [16], by demanding that the entropy current have non-negative divergence. His calculation was limited to second order in gradients; but a treatment to third order in gradients by Bhattacharyya [10] and Jensen et al [11] found three more constraints on second order transport coefficients. For an interesting physical interpretation of these constraints, see [12]. The five constraints found by Bhattacharyya, in our notation 3, are

$$\kappa^* = \kappa - \frac{T}{2}\frac{d\kappa}{dT}, \qquad (2.13)$$

ξ 5 = 1 2 c 2 s T dκ dT − c 2 s κ − κ 3 , (2.14)

ξ 6 = c 2 s 3T dκ dT − 2T dκ * dT + 2κ * − 3κ − κ + 4κ * 3 + λ 4 c 2 s , (2.15)

ξ 3 = 3c 2 s T 2 dκ * dT − dκ dT + 3 (c 2 s − 1) 2 (κ * − κ) − λ 4 c 2 s + 1 4 c 2 s T dλ 3 dT − 3c 2 s λ 3 + λ 3 3 , (2.16)

ξ 4 = − λ 4 6 − c 2 s 2 λ 4 + T dλ 4 dT + c 4 s 1 − 3c 2 s T dκ dT − T dκ * dT + κ * − κ − c 6 s T 3 d 2 dT 2 κ − κ * T . (2.17)

We take these constraints to determine all other coefficients in terms of κ, λ 3 , and λ 4 . In Appendix A we give a detailed derivation of Eqs. (2.10-2.12), and we find Kubo relations for the dependent transport coefficients mentioned in Eqs.
(2.13-2.17) for completeness. Euclidean correlation functions have well behaved perturbative expansions at finite temperature (at least at low order), therefore it should be possible to evaluate these correlators perturbatively in a weakly coupled theory. We present the derivation at lowest order in a general massless theory for particles of spin zero, half and one in the next section. Evaluation at Weak Coupling To carry out the calculation of these transport coefficients, first we have to clarify the nature of the correlation functions that are derived by differentiating the curved space partition function. The definition for the n-point Green function established in Eq. (2.8) involves multiple derivatives acting on the energy functional. Each 3 Labeling the coefficients of [12] with a prime, the relations between their coefficients κ ′ 1 , κ ′ 2 , λ ′ 3 , λ ′ 4 , ζ ′ 2 , ζ ′ 3 , ξ ′ 3 , and ξ ′ 4 and our coefficients are: T κ ′ 1 = κ, T κ ′ 2 = 2κ − 2κ * , −T λ ′ 3 = λ 3 , c 4 s T λ ′ 4 = λ 4 , T ζ ′ 2 = ξ 5 , T ζ ′ 3 = ξ 6 , −T ξ ′ 3 = ξ 3 and c 4 s T ξ ′ 4 = ξ 4 . Moreover, unlike [12] our convention for R ρσµν is R ρ σµν = ∂ µ Γ ρ νσ − ∂ ν Γ ρ µσ + Γ ρ µλ Γ λ νσ − Γ ρ νλ Γ λ µσ . derivative can pull down a factor of −2∂L E /∂g µν = T µν , giving a conventional npoint stress-tensor correlator; but the g µν derivatives can also act on T αβ factors pulled down by previous g αβ derivatives, leading to contact terms. (In intermediate steps our T µν is really the stress tensor density √ g T µν ; the distinction is irrelevant in final expressions since in the end we evaluate correlators in flat space.) So in terms of the usual n-point stress tensor correlators, G µν,...,αβ E defined in Eq. 
(2.8) is G µν,αβ E (0, x) = T µν (0)T αβ (x) gµν =ηµν + 2 ∂T µν (0) ∂g αβ (x) gµν =ηµν , (3.1) G µν,αβ,γρ E (0, x, y) = T µν (0)T αβ (x)T γρ (y) gµν =ηµν + 2 ∂T µν (0) ∂g αβ (x) T γρ (y) gµν =ηµν (3.2) +2 ∂T µν (0) ∂g γρ (y) T αβ (x) gµν =ηµν + 2 T µν (0) ∂T αβ (x) ∂g γρ (y) gµν =ηµν +4 ∂ 2 T µν (0) ∂g αβ (x)∂g γρ (y) gµν =ηµν . The terms involving derivatives of the stress tensor are called contact terms, and are discussed in some detail in Ref. [23]. Since ∂T µν (x)/∂g αβ (y) ∝ δ 4 (x − y) they have very simple momentum dependence. In particular, the contact term in G µν,αβ E (x) is ∝ δ 4 (x); so its contribution to G µν,αβ E (k) is k-independent. Therefore it does not contribute to Eq. (2.10). Now consider the four contact terms in Eq. (3.2) and their contribution to Eq. (2.11). Defining 2 ∂T µν (x) ∂g αβ (y) ≡ X µναβ δ 4 (x − y) ,(3.3) we find three X-type contact terms, involving δ 4 (x), δ 4 (y), and δ 4 (x−y) respectively. The first gives a contribution which is independent of p and so does not contribute to Eq. (2.11); similarly the second is independent of q and also does not contribute. But the third term does contribute; λ 3 = −4 lim pz,qz→0 ∂ pz ∂ qz T xt (p)T yt (q)T xy (−p − q) −4 lim pz,qz→0 ∂ pz ∂ qz X xtyt (p + q)T xy (−p − q) . (3.4) In order to calculate these transport coefficients for a generic field theory, we need to find the explicit form of both the stress tensor T µν and of the contact term X µναβ by differentiating the action S = d 4 x (L scalar + L spinor + L vector ) (3.5) with respect to the metric. Since we only attempt a leading-order calculation here, it is sufficient to consider the free-theory action in curved space, L scalar = √ g 2 g µν ∂ ν φ∂ µ φ , (3.6) L spinor = |e|ψγ c e λ c ∂ λ + 1 2 G ab ω ab λ ψ , (3.7) L vector = √ g 4 F µν F ρτ g µρ g ντ . 
(3.8) Here g^{µν} is the inverse of g_{µν}, F_{µν} = ∂_µ A_ν − ∂_ν A_µ is the field-strength tensor, e^a_µ is the vierbein related to g_{µν} by η_{ab} e^a_µ e^b_ν = g_{µν}, and |e| = det e^a_µ is its determinant. Finally, ω^{ab}_λ is the spin connection and G^{ab} = (1/4)[γ^a, γ^b]. Actually, more generally the scalar Lagrangian density should read

L_scalar = (√g/2) [ g^{µν} ∂_ν φ ∂_µ φ − ξ R φ² ] , (3.9)

where ξ is a dimensionless constant and R is the Ricci scalar introduced earlier. The action is conformal for the choice ξ = 1/6 and is called minimally coupled if ξ = 0 [24]. We will consider general ξ, but in the end our results for κ, λ_3 are ξ independent. Throughout the remainder of this section we will compute κ and λ_3 using Eq. (2.10) and Eq. (3.4), applying the action in Eq. (3.5). We also note that in the leading-order calculation λ_4 = 0, due to conformal symmetry. Since at this level the effect of additional degrees of freedom N_0, N_{1/2} and N_1 is simply multiplicative, each can be counted in the final result.

Scalars

In carrying out the variation of Eq. (3.9) with respect to g_{µν}, we must consider both the explicit dependence and the implicit dependence via the Ricci scalar R. The resulting stress tensor is

T^{µν} = [ (1 − 2ξ) g^{µα} g^{νβ} + ((4ξ − 1)/2) g^{µν} g^{αβ} ] ∂_α φ ∂_β φ + 2ξ [ g^{µν} g^{αβ} − g^{µα} g^{νβ} ] φ ∂_α ∂_β φ , (3.10)

plus terms which vanish in flat space. With T^{µν} in hand, we can compute the scalar contribution to κ. The lowest-order diagram is shown in Figure 1. The momentum k enters at one T^{xy} insertion and exits at the other, so the scalar propagators carry different momenta, p and p+k ≡ q. T^{xy} inserted between these lines obeys the Feynman rule (note directions of momentum flow)

Figure 1. Leading-order scalar diagram contributing to ⟨T^{xy}(−k) T^{xy}(k)⟩, necessary for evaluation of κ. The crosses are T insertions, the solid lines are scalar propagators, and the arrows indicate the momenta flowing on lines and entering or leaving T insertions.
❣ × p q T xy ✲ ✲ = (1 − 2ξ) p x q y + q x p y + 2ξ p x p y + q x q y . (3.11) ❣ × ❣ × ✬ ✩ ✫ ✪ ✲ ✲ ✲ ✛ k k p+k p Since we are only differentiating with respect to k z , we may set k x = 0 = k y from the outset, in which case p x = q x and p y = q y . Therefore the ξ terms cancel 4 and the diagram evaluates to κ = ∂ 2 kz T xy (−k)T xy (k) = 1 2 ∂ 2 kz p (2p x p y ) 2 p 2 (p+k) 2 k=0 = −4T p 1 p 6 − 4p 2 z p 8 p 2 x p 2 y = − T 2 72 , (3.12) where 1 2 is the symmetry factor of the diagram, and the integration-summation symbol is defined as p = T p 0 =2πnT d 3 p (2π) 3 (3.13) and n runs over the integers. In evaluating this and related sum-integrals we use the result p ( p 2 ) n (p 2 ) n+1 = (2n + 1)! 2 2n (n!) 2 T 2 12 . (3.14) Expressions with powers of (p 0 ) 2 in the numerator can be handled by rewriting (p 0 ) 2 = p 2 − p 2 and using this relation repeatedly; for instance, p (p 0 ) 6 (p 2 ) 4 = −1 16 T 2 12 . We handle p 2 x p 2 y p 2 z by angular averaging, p 2 x p 2 y p 2 z angle = p 6 /105. Our leading-order result for κ agrees with the result in Ref. [23]. The above result shows that the weak coupling expansion of κ starts at α 0 . Now we turn to the computation of λ 3 . The first term appearing in Eq. (3.4) is represented by the diagram shown in Figure 2. Once again, p, q only need have nonvanishing z-components, but the T operators only return x, y, 0 components, which simplifies the evaluation of the diagram and ensures that the result is ξ independent. Figure 2. Three-point correlation function T xt (p)T yt (q)T xy (−p − q) that contributes to the Kubo formula of λ 3 ; the leftmost vertex is T xy , the other vertices are T xt and T yt . p+q ✲ ✲ ✲ p q ❄ ✒ ❅ ■ k k−q k+p ❣ × ❣ × ❣ × ❅ ❅ ❅ The diagram evaluates to −4∂ pz ∂ qz T xt (p)T yt (q)T xy (−p − q) = − 4∂ pz ∂ qz k (2k x k y )(2k t k x )(2k t k y ) (k+p) 2 (k−q) 2 k 2 p,q=0 = 128 k k 2 t k 2 x k 2 y k 2 z k 10 = − T 2 36 . (3.15) For the contact term in Eq. 
(3.4), we need to calculate X^{ytxt} as defined in Eq. (3.3). Variation of Eq. (3.10) with respect to g_{xt} gives

X^{ytxt} = −∂^y φ ∂^x φ , (3.16)

in flat space and for ξ = 0. In addition there are terms proportional to ξ, but they again always involve the combination ∂_α ∂_β φ². These terms do not contribute to the correlation function we need, for the same reason the ξ-proportional terms above did not contribute: the incoming and outgoing momenta are equal for the components which make up the indices of X^{txty}. Therefore the result is again ξ independent. The contribution from the ⟨T X⟩ correlator to λ_3 is

−4 ∂_{p_z} ∂_{q_z} ⟨X^{xtyt}(p + q) T^{xy}(−p − q)⟩ = 2 ∂_{p_z} ∂_{q_z} Σ∫_k (2k_x k_y)² / [(k−p)² (k+q)²] = −32 Σ∫_k k_x² k_y² k_z² / (k²)⁴ = −T²/18 . (3.17)

This diagram is actually the same as the diagram which determines κ; shifting the integration variable used above by p, the integral becomes the same one needed in evaluating κ, except for the overall factor of 4. Summing Eq. (3.15) and Eq. (3.17), we get

λ_3 = −T²/12 (1 real scalar field) . (3.18)

We follow a similar calculation for gauge and fermion fields in the next sections.

Gauge fields

The gauge-field stress tensor, derived by g_{µν} variation using the gauge-field action, Eq. (3.8), is

T^{µν} = F^{µα} F^ν{}_α − (1/4) g^{µν} F^{αβ} F_{αβ} , (3.19)

and from the above relation we derive the Feynman rule for the vertex [gauge legs (µ, a) with momentum p and (ν, b) with momentum k attached to a T^{αβ} insertion]:

δ^{ab} { (p^α g^{µγ} − p^γ g^{µα})(k^β g_γ{}^ν − k_γ g^{νβ}) + (µ ↔ ν) − g^{αβ} (p·k g^{µν} − k^µ p^ν) } . (3.20)

The expression for X^{µναβ} is rather long, but for the case of interest, X^{txty}, it is quite simple:

X^{txty} = −F^{xz} F^y{}_z , (3.21)

leading to the Feynman rule [gauge legs (µ, a) with momentum p and (ν, b) with momentum k attached to an X^{txty} insertion]:

−δ^{ab} { (p^x g^{µz} − p^z g^{µx})(k^y g^{νz} − k^z g^{νy}) + (x ↔ y) } . (3.22)

The calculation of κ and λ_3 then proceeds via the same diagrams as in the scalar case, but with these somewhat more complicated Feynman rules for the vertices, and with gauge propagators.
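Before evaluating these diagrams, we note that the sum-integral arithmetic of the scalar section, Eq. (3.12), can be reproduced mechanically from the master formula Eq. (3.14) together with the standard angular averages ⟨p_x²p_y²⟩ = (p⃗²)²/15 and ⟨p_x²p_y²p_z²⟩ = (p⃗²)³/105. A minimal sketch in exact rational arithmetic (the helper names are ours, not part of the original calculation):

```python
from fractions import Fraction as F
from math import factorial

# Master sum-integral, Eq. (3.14):
#   sum-int_p (vec{p}^2)^n / (p^2)^(n+1) = c(n) * T^2/12,
#   with c(n) = (2n+1)! / (2^(2n) (n!)^2).
def c(n):
    return F(factorial(2 * n + 1), 2 ** (2 * n) * factorial(n) ** 2)

# Angular averages over the direction of vec{p} (standard results):
#   <p_x^2 p_y^2>       = (vec{p}^2)^2 / 15
#   <p_x^2 p_y^2 p_z^2> = (vec{p}^2)^3 / 105
# Scalar kappa, Eq. (3.12):
#   kappa = -4 sum-int_p (1/p^6 - 4 p_z^2/p^8) p_x^2 p_y^2
kappa_over_T2 = -4 * (F(1, 15) * c(2) - F(4, 105) * c(3)) * F(1, 12)

assert kappa_over_T2 == F(-1, 72)  # kappa = -T^2/72 per real scalar
```

The same two ingredients suffice for every sum-integral quoted in this section.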
Note that, because T µν is built from field strengths, it applies a transverse projector onto the incoming gauge field index; contracting Eq. (3.20) with p µ or k ν gives zero. Therefore the result is gauge parameter independent within covariant gauges (and all linear gauges). After significant algebra we find that κ = T 2 18 for a single color , (3.23) while the two diagrams contributing to λ 3 give − 4∂ pz ∂ qz T xt (p)T yt (q)T xy (−p − q) = 2T 2 9 (3.24) and − 4∂ pz ∂ qz X xtyt (p + q)T xy (−p − q) = T 2 9 . (3.25) Therefore, the gauge field contribution to λ 3 is λ 3 = T 2 3 for a single color . Since each color possesses two spin states, we need to divide these results by 2 to get expressions per degree of freedom. Fermions The treatment of fermions in curved space requires the introduction of the vierbein (also called the frame vector or tetrad) e a µ (for a review and a treatment of their application to the stress tensor see Ref. [25]). The vierbein relates a local orthonormal coordinate system on the tangent space, with indices a and metric η ab (which is δ ab in Euclidean space) to the metric, via η ab e a µ e b ν = g µν ; (3.27) in a sense it is the square root of the metric. The Dirac action is e µ aψ γ a ∇ µ ψ, where the action of ∇ µ on a spinorial object is determined by the spin connection ω ab µ : ∇ µ ψ = ∂ µ ψ + 1 2 G [ab] ω ab µ ψ (3.28) where G [ab] = 1 4 [γ a , γ b ] , and the spin connection is related to the vierbein via ω ab µ = 1 2 e aν (∂ µ e b ν − ∂ ν e b µ ) − 1 2 e bν (∂ µ e a ν − ∂ ν e a µ ) + 1 2 e aν e bσ (∂ σ e c ν − ∂ ν e c σ )e cµ . (3.29) Because the action depends on the local frame components, the stress-tensor for fermions cannot be obtained by functional differentiation with respect to the metric tensor; instead one must use the more general expression T µν (x) = e ν a ∂Z ∂eaµ(x) , which reduces to T µν (x) = 2 ∂Z ∂gµν (x) for any terms which depend only on g µν because of Eq. (3.27). 
Applying this relation to the fermionic action, noting that δe aµ ∂e bν = −η ab g µν (since g µν is the inverse of g µν ; alternatively, because the variation of g µν g µν should vanish), and specializing to the non-diagonal entries in T µν , after some work one obtains [25] T µν = 1 4 ψ γ µ ∇ ν ψ − ∇ µψ γ ν ψ +ψγ ν ∇ µ ψ − ∇ νψ γ µ ψ . (3.30) The relevant Feynman rule is ✲ ❣ × ✲ p q T xy ✲ ✲ i 4 (γ x (p y + q y ) + γ y (p x + q x )) ,(3.31) leading to an expression for κ, κ = −∂ 2 kz ′ p i 2 (−i) 2 Tr ([(2p+k) x γ y + (2p+k) y γ x ]p /[(2p+k) x γ y + (2p+k) y γ x ][p /+/ k]) 16p 2 (p+k) 2 (3.32) where prime on the sum-integral indicates that the frequencies are (2n + 1)πT . In terms of this fermionic sum-integral, the equivalent of Eq. (3.14) is ′ p ( p 2 ) n (p 2 ) n+1 = (2n + 1)! 2 2n (n!) 2 (−T 2 ) 24 . (3.33) The rest of the evaluation is straightforward, yielding κ = T 2 72 for a single flavor (3.34) Since a Dirac fermion has 4 degrees of freedom, this should be divided by 4 to get the contribution per degree of freedom. Next we calculate λ 3 from Eq. (3.4). The three point diagram still looks like Figure 2, but with two terms depending on whether the fermion number follows or opposes the indicated momentum flow. A straightforward evaluation yields a contribution to λ 3 of T 2 /24. We specialize immediately to the contact term needed in the calculation; the general expression is not simple. The formula for X txty is Using the techniques already introduced, we find after some work that X txty = −3 16 ψ γ x ∇ y ψ +ψγ y ∇ x ψ − ∇ xψ γ y ψ − ∇ yψ γ x ψ = −3 4 T xy . (3.36) The T X diagram contributing to λ 3 therefore gives 3 times the contribution we found for the diagram contributing to κ, that is, T 2 /24. Adding these two terms, we find λ 3 = T 2 12 for a single flavor. (3.37) Once again, we need to divide by 4 to get the contribution per degree of freedom. 
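The frequency-moment reduction used throughout these evaluations, rewriting (p⁰)² = p² − p⃗² repeatedly and applying Eq. (3.14) term by term, can also be automated; e.g. it reproduces the quoted Σ∫ (p⁰)⁶/(p²)⁴ = −(1/16) T²/12. A sketch (function names are ours):

```python
from fractions import Fraction as F
from math import comb, factorial

def c(n):
    # Coefficient in the bosonic master formula, Eq. (3.14).
    return F(factorial(2 * n + 1), 2 ** (2 * n) * factorial(n) ** 2)

def p0_moment(m):
    """sum-int_p (p0)^(2m) / (p^2)^(m+1), in units of T^2/12.

    Expands (p0^2)^m = (p^2 - vec{p}^2)^m binomially and applies
    Eq. (3.14) to each term.
    """
    return sum(F((-1) ** k * comb(m, k)) * c(k) for k in range(m + 1))

assert p0_moment(0) == F(1)       # sum-int 1/p^2 = T^2/12
assert p0_moment(3) == F(-1, 16)  # sum-int (p0)^6/(p^2)^4 = -(1/16) T^2/12
```

For fermionic frequencies the same expansion applies with T²/12 replaced by the −T²/24 of Eq. (3.33).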
Results Since we work, so far, at the free theory level, the result is a function only of the number of scalar, spinor, and vector degrees of freedom, which we will write as N 0 , N1 2 , N 1 . 5 Combining the results of the previous subsections, we find κ = T 2 288 −4N 0 + N1 2 + 8N 1 + O( √ α) , (3.38) λ 3 = T 2 48 −4N 0 + N1 2 + 8N 1 + O( √ α) . (3.39) The other coefficients vanish because the theory is conformal at this order. Curiously, at the free level λ 3 = 6κ regardless of the matter content. We have computed only the leading, coupling-independent contributions. We expect the first corrections to κ, λ 3 to arise at O(α Lattice implementation Here we will briefly discuss some of the challenges associated with evaluating the second-order coefficients on the lattice. One challenge we foresee is choosing and correctly normalizing the operators to use on the lattice. Another challenge is dealing with (incorrect or divergent) short-distance behavior of the correlators. We will not discuss the issue of overcoming fluctuations to achieve good statistics; instead we hope that existing techniques [26] will prove sufficient. In general, an operator written in terms of lattice variables will not correspond to the continuum operator of interest, but will renormalize and mix with all operators with the same symmetry properties. For instance, a proposed lattice implementation of T xy will generically be expressed in terms of the true T xy as T xy latt = Z T T xy contin + n c n O xy n , (4.1) where O xy n are all other operators with the same symmetries as T xy under the lattice symmetry group, and Z T , c n are some coefficients. Generally the operators O n are higher dimension than T xy and so the c n will carry positive powers of the lattice spacing. Therefore, to the extent that we can take the continuum limit the O n should be harmless except that they can introduce short-range contributions to correlators. 
However, both the operation of vacuum subtraction and the small momentum limits associated with any lattice implementation of ∂ 2 kz G(k)| kz→0 tend to remove sensitivity to short distance contributions to the correlators, so we expect this issue to be under control. 6 The problem is the renormalization constant Z T , which in general must be determined nonperturbatively. To evaluate Z T it is useful to recall the physical interpretation of the stress tensor. If we make a small change to the geometry, changing g µν = η µν to g µν = η µν + h µν , the action changes from S g=η = d 4 x L 0 to S g=η+h = d 4 x L 0 − 1 2 h µν T µν + O(h 2 ) , (4.2) where L 0 is the Lagrangian density evaluated assuming h µν = 0. For instance, if we modify the lattice action such that the lattice spacing in the x-direction increases, the change in the action, to leading order, is − 1 2 h xx T xx . Similarly, if −h xy T xy is added to the action, the geometry becomes skewed such that the separation between the point (0, 0, 0, 0) and the point (0, x, y, 0) is no longer x 2 + y 2 but is x 2 + 2xyh xy + y 2 . 6 The short distance behavior of the stress tensor two-point function is T xy (x)T xy (0) ∼ x −8 . For a dimension-6 operator O n , the correlator is T xy (x)O n (0) ∼ a 2 x −10 . The vacuum subtracted value at short distances is O(T 4 ) by OPE arguments, see Ref. [27]; hence T xy (x)O n (0) T ∼ a 2 T 4 x −6 . The short distance contribution to ∂ 2 k T O n (k) is ∼ x x 2 T (x)O n (0) ∼ x a 2 T 4 x −4 which is O(a 2 ) and at worst log UV divergent. Higher dimension contaminants carry more powers of (a/x) and also contribute at order a 2 . The general strategy for determining the normalization constant Z T on the stress tensor is then to include the proposed stress tensor, with small coefficient (−c/2), in the action, and to see how much it changes the effective lattice spacing. 
For instance, for a diagonal component such as T xx , one can measure correlation lengths along the x-axis and along other lattice axes, or measure the string tension in the xy and yz planes. The change in distance determines h xx , and the relation between the proposed T xx and the true one is −(c/2)T xx proposed = (−h xx /2)T xx true (unless the change also modifies other axis lengths, in which case the proposed T xx is a mixture of T xx , T yy etc). This technique has been well developed for diagonal components of the stress tensor, see for instance [28][29][30][31]. To our knowledge it is not as well developed for the off-diagonal components. Unfortunately, all of the Kubo relations we have found, specifically Eq. (2.10), Eq. (2.11) and Eq. (2.12), involve correlators of off-diagonal components of T µν . But this is easily fixed by performing rotations in our choice of axes. For the case of κ, we make a θ = π 4 rotation in the (x, y)− plane, which transforms Eq. (2.10) to κ = 1 4 lim kz→0 ∂ 2 ∂k 2 z G xx,xx E (k) − 2G xx,yy E (k) + G yy,yy E (k) . (4.3) Of course G xx,xx E = G yy,yy E at vanishing k x,y by lattice symmetries, so only one needs to be evaluated. The correlation function found in [22] involved all off-diagonal stress tensors. Arnold et al found an expression involving only T yt (z), T xt (z) [32], and we extend it to the nonconformal case in the Appendix, see Eq. (A.35), which we reproduce here: λ 3 = 2κ * − 2 lim p z ,q z →0 ∂ p z ∂ q z G yy,tx,tx E (p, q) . (4.4) Our expression for λ 3 still involves the non-diagonal stress tensor T xt . To reexpress it in terms of diagonal terms we must perform a 45 • rotation between the x and time axes. This requires a change in the implementation of the periodic boundary conditions in Euclidean space, as illustrated in Figure 4. 
As the figure shows, we typically consider a lattice with principal domain running from t = 0 to t = β; equivalently we can say that we consider the field theory over the whole x, t plane, but with an identification map which equates every point (x, t) with the point (x, t + β). Introducing rotated coordinates x ′ = (x + t)/ √ 2 and t ′ = (t − x)/ √ 2, the identification map equates a point at (x ′ , t ′ ) with a point at (x ′ +β/ √ 2, t ′ +β/ √ 2). We then choose to work on a lattice grid with principal axes along the x ′ and t ′ directions. We have to pick a principal domain, that is, a region of the plane holding exactly one copy of each equivalence class of points under the periodic identification. One choice is to stick with the band of points with t ∈ [0, β); as illustrated in the figure, the points labeled 1 are identified, as are the points labeled 2, 3 etc. This choice is to consider these points as the boundary points on the lattice, which are identified with each other. But the points labeled 2 ′ are also identified, as are the points labeled 3 ′ etc. Another choice, as illustrated in the figure, is to choose the band with t ′ ∈ [0, β/ √ 2), which has these points as the periodically identified boundary. The identification map relates a point on the bottom edge of this band with a point on the top edge, but shifted over a distance β/ √ 2 in the x ′ direction. Note that this band is also narrower than the original band; the inverse temperature β corresponds to the space separation of the identified points, not the extent of the new "time" coordinate t ′ . Therefore, implementing lattice gauge theory on a space where the periodic identification has a spatial shift corresponds to choosing axes which lie at an angle with respect to the (x, t) axes. In this way we can perform the required (x, t) rotation to make the stress tensor operators needed in the evaluation of λ 3 correspond to diagonal components of the stress tensor. 
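The coordinate bookkeeping of this construction is elementary and can be sanity-checked numerically; the sketch below (variable names are ours) verifies that the identification (x, t) ~ (x, t + β) becomes a shift by β/√2 along both primed axes:

```python
import math

beta = 1.0  # inverse temperature, arbitrary units

def rotate(x, t):
    # 45-degree rotation to the primed axes: x' = (x+t)/sqrt(2), t' = (t-x)/sqrt(2)
    s = math.sqrt(2.0)
    return ((x + t) / s, (t - x) / s)

x, t = 0.7, 0.3
xp1, tp1 = rotate(x, t)
xp2, tp2 = rotate(x, t + beta)  # the Euclidean-identified partner point

# In the primed frame the identification is a shift by beta/sqrt(2)
# along BOTH axes: (x', t') ~ (x' + beta/sqrt(2), t' + beta/sqrt(2)).
assert abs((xp2 - xp1) - beta / math.sqrt(2)) < 1e-12
assert abs((tp2 - tp1) - beta / math.sqrt(2)) < 1e-12
```

This is exactly the shifted ("helical") periodic boundary condition described above, with temporal extent β/√2 along t′.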
Specifically, in terms of the x ′ , t ′ coordinates after this final rotation, λ 3 is determined by λ 3 = 2κ * − 1 2 lim pz,qz→0 ∂ 2 ∂p z ∂q z G tt,tt,yy E (p, q) − 2G tt,xx,yy E (p, q) + G xx,xx,yy E (p, q) . (4.5) In a completely analogous way, we find that λ 4 = κ − 2κ * − c 4 s 4 lim p,q→0 ∂ 2 ∂p x ∂q x G tt, Discussion Our central results are presented in Eq. (3.38) and Eq. (3.39). The thermodynamic coefficients, unlike the entropy-generating coefficients η etc, do not diverge in the weak coupling limit, but remain finite. Both κ and λ 3 are in general nonzero. In particular, the previous observation that λ 3 vanishes in strongly-coupled N =4 Super-Yang-Mills theory [14,35] appears to be an accident. Curiously, our results for κ and λ 3 , Eq. (3.38) and Eq. (3.39), yield zero when we insert the matter content of N =4 Super-Yang-Mills theory: N 0 = 3N 1 and N1 2 = 4N 1 . Therefore, both coefficients vanish in the weak-coupling limit. As we just mentioned, λ 3 also vanishes in this theory in the strong coupling limit. The fact that λ 3 vanishes in both limits is suggestive that it is strictly zero, but this is not the case; it has been shown [36] that λ 3 is nonzero at subleading order in the large-coupling expansion. And it is easy to find other examples of conformal theories where λ 3 is nonzero. For instance, SU(N c ) gauge theories with fundamental vectorlike matter and with the number of flavors N f slightly below 11 2 N c have weakly coupled, conformal fixed points [37]. Since such theories are weakly coupled, Eq. (3.39) applies. Both terms are of the same sign, so λ 3 is certainly not zero for these conformal gauge theories. Acknowledgments We thank Simon Caron-Huot and Harvey Meyer for illuminating discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. 
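The matter-content arithmetic quoted in the Discussion is easy to verify directly from Eqs. (3.38)-(3.39) and the counting in footnote 5; a small sketch (function names are ours):

```python
from fractions import Fraction as F

def kappa(N0, Nhalf, N1):
    # Eq. (3.38), leading order, in units of T^2
    return F(-4 * N0 + Nhalf + 8 * N1, 288)

def lambda3(N0, Nhalf, N1):
    # Eq. (3.39), leading order, in units of T^2
    return F(-4 * N0 + Nhalf + 8 * N1, 48)

# N=4 SYM with gauge group U(N): N0 = 6 N^2, Nhalf = 8 N^2, N1 = 2 N^2
# (footnote 5); the combination -4 N0 + Nhalf + 8 N1 vanishes identically.
assert kappa(6, 8, 2) == 0 and lambda3(6, 8, 2) == 0

# 3-flavor QCD: N0 = 0, Nhalf = 36, N1 = 16 -- both coefficients are nonzero,
# and at free level lambda3 = 6 kappa for any matter content.
assert kappa(0, 36, 16) == F(41, 72)
assert lambda3(0, 36, 16) == 6 * kappa(0, 36, 16)
```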
A Non-conformal hydrodynamics In this section we derive Kubo formulae for all the second-order, thermodynamical transport coefficients of all non-conformal hydrodynamics. We work in Minkowski space and analytically continue to Euclidean space. Consider a hydrodynamic system in equilibrium with some arbitrary background of form ds 2 = δ µν dx µ dx ν + h 00 ( x)dt 2 + h 0i ( x)dtdx i + h ij ( x)dx i dx j . (A.1) In this curved background the expectation value of the stress tensor can be expanded about flat space in powers of h µν : T µν E h = G µν E + d 4 xG αβ,(A.3) where here the retarded Green functions are the correlation functions of a T r with one or two T a , as is explained in detail in [22]. We can shift from each of these two signatures to the other accordingly by noticing that t indices of the Green function are multiplied by a factor of i and also for each a index we get an extra minus sign. The following expressions for the curvature tensors will be quite handy for the derivations of Kubo formulae. To first and second order of perturbations in the metric of the form, g µν (x) = η µν + h µν (x), we have (borrowing the result from [38]) R αβγδ = S σ[δ,βγ] − h λ σ,[γ S λδ],β + 1 2 η µρ S σ[γ,µ S ρδ],β + O(h 3 µν ) (A.4) R βδ = 1 2 η µν (h µδ,βν − h βδ,µν − h µν,βδ + h βν,µδ ) − h µν ,µ − 1 2 h ,ν h ν(δ,β) + 1 2 (h µν h βδ,ν ) ,µ + 1 4 h µν ,δ h µν,β − 1 4 h ,µ h βδ,µ + h µδ,ν h [µ,ν] β + 1 2 h µν h µν,βδ −h µν h µ(β,δ)ν + O(h 3 µν ) R = h αβ ,αβ − h α ,α − h µν ,µ h ,α να + h µν ,µ h ,ν + 3 4 h µν,α h µν,α − 1 4 h ,µ h ,µ − 1 2 h µν,α h αµ,ν − 2h µν h α µ,να + h µν h ,µν + h µν h µν,α α + O(h 3 µν ) and in the above expressions, we have S λδ,β = h λδ,β + h βλ,δ − h βδ,λ . We will write thermodynamic variables u µ , ǫ, P in an expansion about h = 0; u µ (x) =ū µ (x) + u µ h (x) + u µ h 2 (x) and similarly for ǫ, P (the barred variables are the h = 0 values). 
It is important to understand the role of the rest frame; u µ need not equalū µ = (1, 0, 0, 0); rather we must determine u µ h by solving conservation equations, ∇ µ T µν = 0, consistently and truncating the expansion. Finally the fluid vector in equilibrium must satisfy both σ µν = 0 and ∇ · u = 0. We also find it useful to take the trace of the energy-momentum tensor in the case of non-conformal transport coefficients, T µ µ = P 3 − 1 c 2 s + 3Π . (A.5) To find the pressure in terms of the background source, we use equations of motion, in equilibrium we have 0 = ∇ ν ∇ µ T µν (A.6) = ∇ ν ∇ µ T µν ideal + ∇ ν ∇ µ π µν + ∇ ν ∇ µ (∆ µν Π) where for the ideal fluid, we have ∇ ν ∇ µ T µν ideal = u µ u ν ∇ ν ∇ µ (ǫ + P ) + (ǫ + P )∇ ν ∇ µ (u µ u ν ) (A.7) +∇ µ (ǫ + P )∇ ν (u µ u ν ) + ∇ ν (ǫ + P )∇ µ (u µ u ν ) + P . Since u i h , u i h 2 → 0 for ω → 0, the first term is identically zero, for the second term we get (ǫ+P )∇ ν ∇ µ (u µ u ν ) = (ǫ + P ) (∇ ν u ν ∇ µ u µ + u ν ∇ ν ∇ µ u µ + ∇ ν u µ ∇ µ u ν + u µ ∇ ν ∇ µ u ν ) ≃ (ǫ + P ) (Γ ν νλ u λ ) 2 + Γ µ να Γ ν µβ u α u β + R σµ u σ u µ + O(ω, h 3 µν ) and we used [∇ µ , ∇ ν ]u ρ = R ρ σµν u σ . Similarly, for the third and fourth term in Eq. (A.7), we get ∇ µ (ǫ + P )∇ ν (u µ u ν ) + ∇ ν (ǫ + P )∇ µ (u µ u ν ) ≃ 2∂ α (ǫ + P )Γ α βγū βūγ + O(ω, h 3 µν ) . (A.8) Adding up previous results, finally the pressure reads P =P − ∇ µ ∇ ν π µν + R σµū σūµ (ǫ + P + Π) + χ − Π + O(ω, h 3 µν ) (A.9) where = 3 i=1 ∂ 2 i and χ = (ǫ + P ) (Γ ν νλ u λ ) 2 + Γ µ να Γ ν µβ u α u β + R σµ u σ u µ + 2∂ α (ǫ + P )Γ α βγū βūγ . (A.10) This is the generalization of the result in [32]. As pointed out in the main text, some Kubo formulae don't directly relate a transport coefficient to the zero frequency and momentum limit of Green's functions but they mix these parameters. Throughout the next section we'll try to find the simplest setup that can give rise to Green's functions manageable for lattice calculations. 
A.1 Kubo relation for κ and ξ 5 We will start the calculation with ξ 5 , by turning on an h xy (x, y) perturbation and evaluating T tt . But this is ǫ, the energy density of the fluid, we can find it at different orders of h µν from Eq. (A.9) or by solving the equations of motion directly. For illustrative reasons, we do the second approach. Since only h xy (x, y) is nonzero we can assume u µ h is also a function only of x, y. The viscous tensor is here and through the following sections we neglect dissipative terms since they will be proportional to the time derivatives of h µν or fluid vector in general. Solving the equations of motion for ∇ µ T µx = 0, ∇ µ T µy = 0, ∇ µ T µz = 0, ∇ µ T µt = 0, we get accordingly, Since we are interested in the zero frequency limit all time derivatives are zero. As we can see in the above equations terms proportional to the metric perturbation appear, which act as a source for P and for u µ . Higher order terms that include the interaction of the fluid vector with background perturbation have been neglected. From the first two equations we obtain Π xx = 1 3 ∂ 2 h xy (x,∂P h ∂x + ∂u x h ∂t (ǭ +P ) + ∂ 3 h xy ∂y∂x 2 κ 3 + 2ξ 5 − ζ ∂ 2 u y h ∂x∂y + ∂ 2 u x h ∂x 2 −η 4 3 ∂ 2 u x h ∂x 2 + ∂ 2 u x h ∂y 2 + 1 3 ∂ 2 u y h ∂x∂y = 0 ∂P h ∂y + ∂u y h ∂t ǭ +P + ∂ 3 h xy ∂x∂y 2 κ 3 + 2ξ 5 − ζ ∂ 2 u x h ∂x∂y + ∂ 2 uP h = − κ 3 + 2ξ 5 ∂ 2 h xy ∂y∂x + O(∂ t ) (A.13) and we know that pressure and energy density are related through P = c 2 s ǫ. Similarly we get u x h = 1 3 ωh xy q 2 x q y (6ξ 5 + κ) c 2 s (q 2 x + q 2 y ) (ǭ+P )+O(ω 2 ) , u y h = 1 3 ωh xy q 2 y q x (6ξ 5 + κ) c 2 s (q 2 x + q 2 y ) (ǭ+P )+O(ω 2 ) . Similarly the Kubo relation for κ in terms of off-diagonal components of stresstensors is given by κ = −∂ 2 kz G xy,xy ar (k). In terms of the Euclidean correlator this is κ = ∂ 2 kz G xy,xy This formula determines a linear combination of ξ 6 and κ * . To get both coefficients separately we need to look for another relation for κ * . 
We do so by investigating an off-diagonal component of energy-momentum tensor. We have T xy = (ǫ + P )u x u y + P g xy + Π xy . (A.25) Since κ * is a coefficient involving the curvature tensor, which first arises at linear order in h, we need only work to this order, in which case the first term is zero; and if we choose h xy nonzero then P g xy and ∆ xy Π are also zero. Therefore we consider h tt (x, y). Then the only contribution comes from π xy , after expansion in the orders of h tt and u µ (x) =ū µ (x) + u µ h (x) + O(h 2 ), we find Now we turn to nonlinear transport coefficients, where we must work to second order in h. We begin with λ 3 , which is the traceless contribution arising at second order in vorticity. Vorticity is generated by a nonvanishing value of h ti,j ; specifically the vorticity term for which λ 3 is a parameter (see Eq. (2.7)) is Ω i λ Ω j λ = 1 12 δ ij δ mn − 3δ i m δ j n ǫ mkl ǫ nrs ∂ k h lt ∂ r h st . (A.29) The easiest way to proceed [22,32] is to consider an off-diagonal component of T , such as T xy ; then complications involving the pressure (such as those of the last subsections) do not arise. However as we have discussed it is most convenient on the lattice to use a relation involving a diagonal component of the stress tensor. Therefore we will instead consider the vorticity-related contributions to T xx : T xx = (ǫ + p)u x u x + pg xx + Π xx . (A.30) Since we have to keep all terms to the order of O(h 2 ), we need to know u x h , P h 2 , and Π xx h 2 . We will consider nonvanishing h ty (z), which is general enough for Π xx to contain a λ 3 dependent term. For this choice u x h vanishes. To find the contribution 1 2 1), and the first contributions to the nonconformal coefficients to be O(α). 
[Footnote 5: N_0 is 1 per real scalar field; N_{1/2} is 2 per Weyl spinor field, or 4 per Dirac field; and N_1 is 2 per massless spin-1 field (one per spin state). For 3-flavor QCD, N_0 = 0, N_{1/2} = 4 × 3 × 3 = 36 [4 for a Dirac spinor, times three colors times three flavors], and N_1 = 16 [2 spin states times 8 color combinations]. For U(N) N=4 SYM theory, N_0 = 6N², N_{1/2} = 8N², and N_1 = 2N².]

Figure 3. How to handle a rotation which mixes a time and a space direction, on the lattice. Points carrying the same number are to be identified; the temporal direction is periodic and the extension of the lattice along spatial directions goes to infinity.

... ∫ d⁴x G^{αβ,tt}_{ar}(x, 0) h_{αβ}(x) . (A.15) Fourier transforming and taking the variation of both sides with respect to h_{xy}, G^{xy,tt}_{ar}(k_0, k) = +G^{xy,tt}_E(k_0 = 0, k) ... ∂k_x ∂k_y G^{xy,tt}_E(k) . (A.27) Eq. (A.27) and Eq. (A.18) involve the same Green function, so we find a relation between ξ_5 and κ*, specifically

A.3 Kubo relation for λ_3 and ξ_3

Note that we are using the Landau-Lifshitz frame. For instance, consider a spacetime which is S² × R², with time in one of the flat directions. The fluid can spin about the equator of the S² and this flow will persist forever, and will therefore reach equilibrium. The same will not happen if we compute non-conformal coefficients.

A.2 Kubo relation for κ* and ξ_6

To find a Kubo relation for ξ_6, we use the perturbation h_tt(z). This choice shifts the local rest frame by u^t = 1 + (1/2) h_tt + O(h²). Expanding the fluid vector in terms of the metric perturbation h_{µν}, we find the following viscous tensors. If we assume that the hydrodynamic waves are functions only of z, then for ∇_µ T^{µz} = 0 we have ... (A.20) and once again the last term acts as a source for pressure. For T^{tt} we have ... The first term in the above relation is a pure gauge.
For the second term from linear-response we haveThe corresponding Kubo formula for ξ 6 will beand accordingly in Euclidean space using lim kt→0 G tt,tt ar (k t , k) = −G tt,tt E (k t = 0, k), it can be rewritten asof P h 2 , we look into Eq. (A.9). Since Γ ν tt is zero for our metric perturbation, the only possible contributions come from ∂ µ ∂ ν π µν and Π. All derivatives other than ∂ z are zero, so finally we haveCalculating Π zz , Π xx and recalling that u µ h = 0, the final result readsNow for Eq. (A.30) we can write T xx = P + Π xx =P − Π zz + Π xx , which reads The coefficients λ 4 and ξ 4 arise when the entropy, and therefore temperature, vary in space. For this to occur in equilibrium, the gravitational potential h tt must vary in space. Therefore we consider perturbations of h tt (x, y) to second order. The off-diagonal component of the stress-tensor, T xy , isthat after analytic continuation reduces toTo find ξ 4 we evaluate T µ µ for a perturbation of h tt (z). ¿From Eq. (A.5), we find the pressure through Eq. (A.9). We have Γ t tz = −1/2∂ z h tt , Γ z tt = −1/2∂ z h, andfrom Eq. (A.21), andAfter Fourier transforming and straightforward simplifications, the pressure reads . K Adcox, PHENIX CollaborationNucl. Phys. A. 757184K. Adcox et al. [PHENIX Collaboration], Nucl. Phys. A 757 (2005) 184; . B B Back, PHOBOS CollaborationNucl. Phys. A. 75728B. B. Back et al. [PHOBOS Collaboration], Nucl. Phys. A 757 (2005) 28; . I Arsene, BRAHMS CollaborationNucl. Phys. A. 7571I. Arsene et al. [BRAHMS Collaboration], Nucl. Phys. A 757 (2005) 1; . J Adams, STAR CollaborationNucl. Phys. A. 757102J. Adams et al. [STAR Collaboration], Nucl. Phys. A 757 (2005) 102. . D Teaney, J Lauret, E V Shuryak, Phys. Rev. Lett. 864783D. Teaney, J. Lauret and E. V. Shuryak, Phys. Rev. Lett. 86 (2001) 4783; . P Huovinen, P F Kolb, U W Heinz, P V Ruuskanen, S A Voloshin, Phys. Lett. B. 50358P. Huovinen, P. F. Kolb, U. W. Heinz, P. V. Ruuskanen and S. A. Voloshin, Phys. Lett. 
Categorical and Geographical Separation in Science

Julian Sienkiewicz (1), Krzysztof Soja (1), Janusz A. Hołyst (1,2) and Peter M. A. Sloot (2,3,4)

(1) Faculty of Physics, Center of Excellence for Complex Systems Research, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland
(2) National Research University ITMO, 49 Kronverkskiy av., 197101 Saint Petersburg, Russia
(3) Institute for Advanced Study, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, The Netherlands
(4) Complexity Institute, Nanyang Technological University, 61 Nanyang Drive, 637335 Singapore

Scientific Reports (2018) 8:8253. DOI: 10.1038/s41598-018-26511-4. Received: 20 July 2017; Accepted: 9 May 2018. Correspondence and requests for materials should be addressed to P.M.A.S. (email: [email protected]).

Abstract. We study scientific collaboration at the level of universities. The scope of this study is to answer two fundamental questions: (i) can one indicate a category (i.e., a scientific discipline) that has the greatest impact on the rank of the university, and (ii) do the best universities collaborate with the best ones only? Restricting ourselves to the 100 best universities from the year 2009, we show how the number of publications in certain categories correlates with the university rank. Strikingly, the expected negative trend is not observed in all cases: for some categories even positive values are obtained. After applying Principal Component Analysis we observe a clear categorical separation of scientific disciplines, dividing the papers into almost separate clusters connected to natural sciences, medicine, and arts and humanities. Moreover, using complex network analysis, we give hints that scientific collaboration is still embedded in physical space and that the number of common papers decays with the geographical distance between universities.

The idea of a so-called science of science is not entirely new: the 20th century is well known for the critical works of Kuhn 1, Popper 2, Lakatos 3 and Feyerabend 4, who tried to build models describing how science should work or, which is far more important, to show how it in fact does work. However, it is only in recent times, owing to the start of the era of overwhelming data, that it has become possible to track this problem quantitatively 5,6. Several studies are on a journey to answer such intriguing questions as "Who is the best scientist?", "What makes the best university?", etc. 7-14.

There are at least three separate factors that can be regarded as key components of today's science and the way it is recognized: papers, citations and rankings. The last one applies rather to whole entities such as universities or departments, although recent studies consider it also in the scope of individuals 14. It has been argued that rankings can still be perceived as insufficiently deep measures, "providing finalized, seemingly unrelated indicator values" 15. On the other hand, it is well known that scientific impact is a multi-dimensional construct and that using a single measure is not advisable 16. Nonetheless, rankings are clearly a derivative of the number of published papers. However, apart from just raw numbers, the quality of science often comes with two additional factors: specialization and collaboration. Interestingly, the type of scientific category can dramatically change both the way a paper is written and how it is received; e.g., in the case of simple lexical factors such as title length, the impact on the acquired citations changes significantly from one category to another 17. In the same manner it is possible to spot that the number of citations per paper can vary by several orders of magnitude: it is highest in multidisciplinary sciences, general internal medicine, and biochemistry, and lowest in literature, poetry, and dance 18. These studies can go even as deep as the fascinating notion of scientific memes propagating along the citation graph 19,20.

Collaboration has been in the scope of interest for a long time 21,22 and it is generally considered that it leads to high-impact publications 23. One of the recognized factors affecting the level of collaboration is undoubtedly geographic proximity: usually one expects to find a decaying probability of citation, as well as of common papers, with distance 24,25; however, it can also be connected to such features as ethnicity or level of economic development 26.

In this study we perform an investigation for a selected group of the 100 best universities to unravel how the scientific productivity, measured in the number of published papers per scientific category (e.g., physics, art, etc.), correlates with the rank of the university. Using Principal Component Analysis (PCA) we study whether scientific categories coming from different areas (natural sciences, humanities, etc.) tend to stick together. In the second part of the paper we examine the complex network 27 of scientific collaboration among the 100 best universities and study the properties of such a network using the concept of a weight threshold 28.

Results

We use the QS World University Ranking and the Web of Science service datasets to examine patterns of categorical and geographical separation (see Methods for details).
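The rank correlations used throughout this section are Spearman's ρ, i.e., the Pearson correlation of the rank vectors. A minimal pure-Python sketch on toy data (the numbers are illustrative, not taken from the actual dataset):

```python
from statistics import mean

def rank(values):
    """Assign ranks 1..n; ties receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Walk over a block of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Toy data: university ranks 1..5 and paper counts in one category;
# more papers at a better (lower) rank gives a perfectly monotone decrease.
university_rank = [1, 2, 3, 4, 5]
papers_in_category = [420, 380, 300, 150, 90]
print(spearman(university_rank, papers_in_category))  # -> -1.0
```

For the real 100 x 181 matrix one would apply `spearman` column by column against the ranking; in practice a statistics library would also report the p-values discussed in the text.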
The data describe the 100 best universities in the form of two matrices, P_ij (100 universities by 181 categories) and C_ij (100 by 100 universities). The first matrix contains information about the number of papers published by a specific university i in a given scientific category j, while the second one stores the total number of common papers of universities i and j (regardless of the category). The main text of this paper concerns absolute numbers of the quantities P_ij and C_ij, while the Supplementary Information contains some results for the scaled cases.

Rank-number correlations for categories. It is interesting to understand how the university rank correlates with the number of scientific publications and, which is even far more intriguing, to split these relations according to different scientific categories. Naively one would expect a strong negative correlation between these quantities, as a larger number of papers should be reflected in acquiring a higher rank (thus a smaller number). The results of our data analysis are shown in Tables 1, 2 and Fig. 1, where we plot the correlation coefficient ρ against the total number of papers N published in the given category (an alternative and much more straightforward method would be to use regression analysis; however, in this case it brings unreliable results, see SI for details). In each case ρ was obtained by taking one of the columns j of matrix P_ij, ranking it and correlating it with the university rank, thus calculating Spearman's rank correlation coefficient. The outcome clearly suggests that there are categories for which we observe even a positive correlation coefficient. On the other hand, one has to take into account the fact that in these cases the statistical significance of such results is usually very low (p-value > 0.05), as depicted in Fig. 1. When treated as a whole, the data points give evidence of a log-linear relationship ρ = a + b log N (blue solid line in Fig. 1) between the correlation coefficient and the number of papers, with a = 0.098 ± 0.056 (p = 0.08) and b = −0.0415 ± 0.0068 (p < 0.001). A similar fit performed only for the highly significant categories (red solid line in Fig. 1) yields a = −0.285 ± 0.072 (p < 0.001) and b = −0.0127 ± 0.0081 (p = 0.13). An insignificant value of b in this case means that the level of correlations for the selected group of categories is in fact constant, contrary to the previous situation, where we observe a significant decrease with N. It is worth mentioning here that using not absolute but relative numbers of papers (i.e., dividing by the total number of papers from a given university) leads to different results, where positive correlations for certain categories are significant (see Fig. S1 in the Supplementary Information). Interestingly, the category of Multidisciplinary Sciences seems to be unexpectedly robust; regardless of the method used (cf. Fig. 1 and Fig. S1 in the SI), it yields the highest correlation value, which might suggest that interdisciplinary research has a substantial influence on university ranking.

Categorical separation. As the next step of our analysis, we check the hypothesis of a categorical separation of science. In order to test this assumption we perform a Principal Component Analysis (PCA) of matrix P_ij, where we restrict ourselves to those categories that were identified as highly correlated ones (see Fig. 1). Figure 2 presents the results of this PCA: the main panel (Fig. 2a) shows a 3D projection of the original 44 categories onto the first three principal components. As can be seen in Fig. 2d, the first three principal components explain around 75% of the data variability.
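The statement that a few principal components "explain around 75% of data variability" refers to the ratio of the leading covariance eigenvalues to the total variance. A self-contained sketch of that computation (power iteration with deflation, pure Python; the toy matrix is purely illustrative):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def top_eigen(A, iters=500):
    """Dominant eigenvalue/eigenvector of a symmetric matrix (power iteration)."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        n = sum(x * x for x in w) ** 0.5
        v = [x / n for x in w]
    lam = sum(x * y for x, y in zip(v, matvec(A, v)))  # Rayleigh quotient
    return lam, v

def covariance(X):
    """Sample covariance matrix; rows of X are observations."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        c = [row[j] - means[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                C[i][j] += c[i] * c[j] / (n - 1)
    return C

def explained_variance(X, k):
    """Fraction of total variance captured by the first k principal components."""
    C = covariance(X)
    total = sum(C[i][i] for i in range(len(C)))
    captured = 0.0
    for _ in range(k):
        lam, v = top_eigen(C)
        captured += lam
        # Deflate: subtract the component just found.
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(len(C))]
             for i in range(len(C))]
    return captured / total

# Toy "universities x categories" counts; the third column copies the first,
# so two components capture all of the variance.
X = [[10, 2, 10], [8, 5, 8], [6, 9, 6], [4, 1, 4], [2, 7, 2]]
print(round(explained_variance(X, 2), 6))  # -> 1.0
```

In practice one would use a linear-algebra library; the point here is only that the cumulative eigenvalue fraction is the quantity reported by a plot like Fig. 2d.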
Each category was marked with a color connected to its OECD classification 29, which contains six different areas: Natural Sciences, Engineering and Technology, Medical & Health Sciences, Agricultural Sciences, Social Sciences and Humanities, with a separate color for the scientific category Multidisciplinary Sciences. The 3D plot suggests two separate bundles of categories: one connected to medical sciences combined with complementary natural sciences (such as Virology or Cell Biology), and a second identified as mainly social sciences and humanities. Interestingly, such core natural sciences as Physics and Mathematics tend to point in directions separated from these two bundles. Another intriguing fact is the almost complete absence of agricultural and engineering sciences (except for one category) in this scheme. Another typical way to present the results of a PCA is in the form of so-called bi-plots, i.e., two-dimensional projections onto consecutive PCs. Figure 2b,c provides this additional information: the values of the first PC are of the same sign, while the 2nd PC differentiates between natural sciences and the others. It is Fig. 2c that uncovers a very clear distinction among natural sciences, medical sciences, and social sciences with humanities. This distinction also comes out clearly in the cluster analysis: Fig. 2e provides results from the k-means algorithm applied to the outcomes of the PCA. When searching for three clusters, we obtain an almost perfect separation among natural sciences, medicine, and humanities and social sciences.

Network analysis. Apart from the categorical point of view, we can also consider university quality by analyzing the direct connections between universities i and j on the basis of the collaboration matrix C_ij, where the element C_ij gives the number of common publications of institutions i and j. The structure of such a collaboration network is depicted in Fig. 3a, where each node (vertex) is a university and links (edges) show the connections between them. The width of each link corresponds to the number of common publications between the universities. The algorithm used to obtain this structure is the following. Using the 100 highest ranked universities, for each of them (u1, u2, …, u100) we search for its publications p1, p2, …, pM(u1). Then, if among the co-authors of p1 there is any that comes from one of the universities u2, …, u100, a link of weight w = 1 between those universities (e.g., u1 and u2) is established. The weight is increased by one each time u2 is found among the following publications of u1. Finally, the weight of the link between nodes u1 and u2 is just the number of their common publications (as seen in the database).

Weights probability distribution. In order to examine the fundamental properties of the weighted network of collaboration we need to compute the link weight probability distribution function (PDF), which can give an idea about the diversity of the number of publications between universities. Figure 3b presents the link weight PDF, suggesting a fat-tailed distribution where the majority of link weights can be found between w = 1 and w = 10.

Weight threshold. In the following analysis we will use the concept of a weight threshold 28, depicted in Fig. 4. Let us take the original network of 5 fully connected universities seen in Fig. 4a and assume now that we are interested in constructing an unweighted network that takes into account only the connections with weight higher than a certain threshold weight w_T (w > w_T). A possible outcome of this procedure is presented in Fig. 4b: all the links with w < w_T are omitted and as a result we obtain a network where links indicate only connections between nodes (i.e., they do not carry any value).
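The thresholding procedure described above (keep only links with w > w_T, then study the surviving unweighted graph) can be sketched in a few lines of pure Python; the toy weights are illustrative:

```python
from collections import defaultdict

def threshold_graph(weighted_edges, w_t):
    """Keep only links with weight strictly above w_t; drop isolated nodes."""
    edges = [(u, v) for u, v, w in weighted_edges if w > w_t]
    nodes = {u for e in edges for u in e}
    return nodes, edges

def components(nodes, edges):
    """Sizes of connected components (largest first), via graph traversal."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, sizes = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        sizes.append(len(comp))
    return sorted(sizes, reverse=True)

# Toy collaboration network: (university_i, university_j, common papers).
collab = [("A", "B", 500), ("B", "C", 250), ("C", "D", 120),
          ("D", "E", 80), ("A", "C", 300)]
for w_t in (100, 200, 400):
    nodes, edges = threshold_graph(collab, w_t)
    print(w_t, len(nodes), len(edges), components(nodes, edges))
```

Sweeping w_T over 〈w_min; w_max〉 and recording the number of surviving nodes, edges and the component sizes reproduces the kind of curves discussed for Fig. 5.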
Using the weight threshold as a parameter, it is possible to obtain several unweighted networks: for each value of w_T in the range 〈w_min; w_max〉 we get a different network NT(w_T) whose structure is determined only by w_T. Then, for each of these networks it is possible to compute standard network quantities: (i) the number of nodes N that have at least one link (i.e., nodes with degree k_i = 0 are not taken into account), (ii) the number of edges (links) E between the nodes, (iii) the average shortest path 〈l〉, (iv) the clustering coefficient C, (v) the assortativity coefficient r, and (vi) the size S of the largest connected component together with the number n of components (see Materials and Methods for details).

Network observables as a function of weight threshold. Figure 5 depicts the above described network parameters as a function of the weight threshold w_T. First, as can be seen in Fig. 5a, the number of nodes N is a linearly decreasing function of the weight threshold w_T. The number of edges E decreases faster, following an exponential function (Fig. 5b). On the other hand, the average shortest path 〈l〉 (Fig. 5c) is a non-monotonic function of the weight threshold, reaching its peak for w_T ≈ 200. The clustering coefficient C (Fig. 5d) decreases with the weight threshold up to the point w_T ≈ 500, where it rapidly drops down to 0. The most interesting is the behavior of r(w_T), shown in Fig. 5e: the coefficient starts with r < 0, while for larger thresholds it crosses r = 0 and for w_T ≈ 200 it takes its maximal value. Then once again it drops below zero, reaching r ≈ −0.4 for w_T around 500. Finally, it increases toward zero for larger w_T. In the case of the largest connected component S (Fig. 5f) we observe a series of rapid decreases, e.g., for w_T ≈ 100, where S drops down by 20%. These results are quantitatively different from the ones obtained by randomly reshuffling the weights of the network (see SI for details).

Network visualisation. The above described non-trivial behavior of the quantities r, C, 〈l〉 and S cannot be solely caused by the relations presented in Fig. 3b, although a high number of points with w_T ≈ 100 can be responsible for some of these effects. It seems that there has to be another phenomenon leading to such an effect. Using R's 30 package igraph 31 we visualize the connections between universities and the community structure (denoted by color) for different values of w_T. The results for w_T = 100, 200, 300 and w_T = 400, 500, 1000 are shown in Figs 6 and 7, providing an input for further analysis. For w_T = 100 (Fig. 6a) the network is still percolated, i.e., it is possible to reach any node from any other one; above that value a separation occurs: Chinese, Australian and Singapore, Japanese, Danish and Swedish, as well as Swiss universities all form separate clusters. This observation is connected with the large loss of S in Fig. 5f. The remaining giant cluster is built out of American, Canadian, British, Dutch, and German universities (Fig. 6b). This is the area where both the average path length 〈l〉 and the assortativity r take their maximal values. For w_T = 300 we witness the separation between US and British universities, and from then on (with small exceptions) different clusters can be described as connected to different countries (or even smaller administrative units, as English and Scottish universities are separated). Further plots depict the progressing decay of connections between the universities, which form either star-like structures (Japanese, Canadian, English and American in Fig. 7a,b) or ultimately chains (Fig. 7c). A possible explanation of this phenomenon lies in the geographical distance between the universities. In fact, Fig. 8 partially supports this assumption. The number of publications between universities i and j can be fitted with a decreasing power-law function of the geographical distance between them. The gap around d = 5000 is most probably caused by the presence of continents.
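The power-law fit of common papers versus distance is a gravity-model relation C(d) ~ d^(-γ). A small sketch, assuming great-circle (haversine) distances between campuses; the coordinates are approximate and purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * R * asin(sqrt(a))

def gravity_weight(c0, d, gamma):
    """Power-law decay of collaboration with distance: C(d) = c0 * d**(-gamma)."""
    return c0 * d ** (-gamma)

# Approximate, purely illustrative campus coordinates:
# Harvard vs. MIT (same city) and Harvard vs. Oxford (transatlantic).
d_near = haversine_km(42.3770, -71.1167, 42.3601, -71.0942)
d_far = haversine_km(42.3770, -71.1167, 51.7548, -1.2544)
print(round(d_near, 1), "km vs.", round(d_far, 1), "km")
# With equal prefactors, the gravity model predicts far fewer common papers
# at transatlantic distance than across town.
print(gravity_weight(1000.0, d_near, 1.0) > gravity_weight(1000.0, d_far, 1.0))  # True
```

Fitting γ and the prefactor to the observed (distance, common-papers) pairs on log-log axes is what a plot like Fig. 8 summarizes.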
Similar results regarding the role of geographical distance in science were obtained in previous studies 25,32. On the other hand, the error bars in Fig. 8 give evidence that for relatively short distances (d ∈ [1; 300] km) the number of common papers can be considered constant. This in turn would support the hypothesis of country-driven rather than geographically-driven collaboration. A lower than expected value of collaboration for shorter distances could also have its origin in the fact that usually there is a lack of universities of the same scientific profile in the direct vicinity.

Conclusions

Our results indicate that even such a fundamental and straightforward analysis as the calculation of the correlation coefficient between the position of a university in the ranking and the number of papers published by its employees may reveal some non-trivial relationships. Although it would be natural to expect a strictly negative correlation (i.e., the more you publish, the higher rank you acquire), our analysis shows several scientific disciplines, such as Agricultural Engineering, Horticulture or Hospitality, Leisure, Sport & Tourism, where this is not the case. For the whole set of examined scientific categories we found a log-linear relationship between the correlation and the number of papers. Intriguingly, this relation breaks down when the most reliable correlations (i.e., the most statistically significant ones) are selected. This study also underlines the differences among specific science areas: our PCA results give a clear picture that the separation between natural, medical and social sciences really takes place.

Figure 1. Correlation coefficients. Each data point represents a separate scientific category and gives the Spearman's correlation coefficient ρ between the rank of the university and the ranked number of papers N in this category (shown on the X-axis). The colors reflect the statistical significance of the measure (see legend) and category names are shown only for the most significant points (p-value < 0.001). Solid lines represent log-linear fits to all points (blue) and to the most significant points (p-value < 0.001, red). Shades surrounding the lines represent the 95% confidence interval.

The second part of the paper is devoted to a network analysis of the collaboration among the 100 best universities. We used the concept of a weight threshold to obtain several slices of the original weighted network at different levels of collaboration intensity. Treating the threshold as a control parameter, we were able to track such network observables as assortativity, revealing its rich behavior. Our analysis shows that scientific collaboration is highly embedded in physical space: it seems that the key aspect that governs the number of common publications is the geographical vicinity of the universities, which confirms previous observations 25,32. On the other hand, the dependence of the network properties on the weight threshold cannot be explained just by a geographical distance rationale, suggesting rather country-driven collaboration.

Discussion

The problem of the role of scientific categories and the relations among them has intrigued the greatest minds of the past century. Lately, Dias et al. 33 have explicitly quoted Karl Popper's The nature of philosophical problems and their roots in science 34, where this great philosopher questioned the traditional identification of scientific disciplines, convinced instead that one should rather look at their cognitive and social aspects. Dias et al. follow this trail by comparing coincidences among disciplines retrieved by (i) a classification given by experts 29, (ii) a Jaccard-like coefficient for citations and (iii) a language-based Jensen-Shannon measure of dissimilarity 35,36 in articles' abstracts.
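The Jensen-Shannon measure mentioned in (iii) is a symmetrized, bounded variant of the Kullback-Leibler divergence between word-frequency distributions. A minimal sketch (base 2, so values lie in [0, 1]; the toy distributions are illustrative):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence in bits; assumes p[i] > 0 implies q[i] > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Symmetric dissimilarity of two distributions, bounded in [0, 1] (base 2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy word-frequency distributions over a shared four-word vocabulary.
physics_terms = [0.5, 0.3, 0.2, 0.0]
history_terms = [0.0, 0.2, 0.3, 0.5]
print(jensen_shannon(physics_terms, physics_terms))  # identical texts -> 0.0
print(round(jensen_shannon(physics_terms, history_terms), 3))
```

Note that the mixture m keeps kl well defined even when the two vocabularies only partially overlap, which is exactly the situation for abstracts from distant disciplines.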
The same aspect, although in a much more indirect way, has been lately addressed by one of us, arguing that scientific segregation is visible even while examining relations between text length (or emotional content) and citation patterns 17. While these considerations may seem to be academic (e.g., detecting similarities among disciplines that are "obviously" similar) they earn an additional dimension when treated as a dynamical process. Given the masses of data, the use of unsupervised methods that require no manual classification of documents is the best choice to track the evolution of science. In this way such phenomena as the convergence and divergence of specific disciplines 33, life cycles of paradigms 37 or inheritance of scientific memes 20 can be instantly spotted. When used for temporal data, our analysis of principal components based on the number of published papers could also serve as an index for changing relations among disciplines. In particular, one may use it as an indicator of the interest a certain scientific area gains over the years. It is possible to spot the emergence of certain trends in science and, in effect, react by, for example, establishing a new direction of research in the university. Geographical distances among the nodes of a network usually come in the form of Tinbergen's gravity model 38. Manifestations of the spatial embedding of networks 39 are truly ubiquitous, ranging from the original inter-country trade 40,41 through inter-city telecommunication flows 42 and online friendship 43 to active protesters 44. In the case of scientific collaboration, Pan et al. show a clear preference for researchers to seek partners in their geographical proximity 25, however underlining that the very form of the gravity model (i.e., a power law) does not forbid long-distance interactions. In this study we restricted ourselves to only top universities, showing which particular links break up first.
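A gravity-model style power law such as the w = Ad^α fit reported for Fig. 8 is typically obtained by ordinary least squares in log-log space. The sketch below recovers the exponent from noiseless synthetic data; the function name and the sample distances are ours, for illustration only.

```python
import math

def fit_power_law(ds, ws):
    """Least-squares fit of w = A * d**alpha in log-log space:
    ln w = ln A + alpha * ln d."""
    xs = [math.log(d) for d in ds]
    ys = [math.log(w) for w in ws]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    amp = math.exp(my - alpha * mx)
    return amp, alpha

# Noiseless synthetic data following the fitted law w = 461 * d**-0.364
distances = [1, 10, 50, 100, 500, 1000, 5000, 10000]  # km
weights = [461.0 * d ** -0.364 for d in distances]
amp, alpha = fit_power_law(distances, weights)
print(round(amp, 1), round(alpha, 3))  # → 461.0 -0.364
```

On real, noisy link weights one would first apply the logarithmic binning mentioned in the Fig. 8 caption before fitting.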
Although geographical proximity is an important factor, the results clearly show that in the case of small distances the connections are formed not distance-wise but rather country-wise. Moreover, it also seems that the choice of data handling method (absolute vs. normalized values) can play a crucial role: the description as well as Figs S2 and S3 in the Supplementary Material reveal strong clustering between continents for the normalized data.

Methods

Dataset. We used two prominent data providers: the QS World University Ranking 45 and the Web of Science 46 service. The first dataset consisted of the 100 best universities ranked in the year 2009. The second dataset was obtained by querying the database for the years 2008-2009 for publications coming from one of the above-mentioned universities and storing information about the so-called subject category (i.e., the scientific category) and the affiliations of co-authors. The obtained matrices P_ij (100 universities by 181 categories) and C_ij (100 by 100 universities), which were created on-the-fly without physically saving partial data, contain, respectively, 1363821 and 496684 papers.

Abbreviations. The seemingly straightforward procedure of querying for a specific university name encounters some problems that could have a strong impact on the further results. Web of Science has a set of abbreviations commonly used for searching, such as Univ for "University" or Coll for "College". Moreover, it is essential to notice that one has to form a very specific query in order to avoid severe mistakes. Table 3 shows an exemplary list of the searched universities together with the exact search phrase that had to be used.

Ambiguity of queries. The 'Search' field is a search key that we use to associate with the authors of the publications; it can contain the following operators: + stands for the AND operator in Boolean logic and | stands for the NOT operator in Boolean logic. These operators are used to clearly assess the origin of the publication.
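As a sketch of how such search keys can be composed from the + (AND) and | (NOT) operators described above, consider the following helper. The function build_query is our own illustrative code, not a Web of Science API; the two sample queries mirror cases listed in Table 3.

```python
def build_query(include, exclude=()):
    """Compose a search key in the style described above:
    '+' joins required phrases (Boolean AND), '|' appends phrases
    whose hits should be excluded (Boolean NOT)."""
    query = " + ".join(include)
    for phrase in exclude:
        query += " | " + phrase
    return query

# The Munich case: keep Univ Munich hits, exclude the technical university.
print(build_query(["Univ Munich"], exclude=["Tech Univ Munich"]))
# → Univ Munich | Tech Univ Munich

# The Washington case: 'Washington Univ' alone is ambiguous, so the
# location must be required as well.
print(build_query(["Washington Univ", "St Louis"]))
# → Washington Univ + St Louis
```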
Table 2 shows that using just the names of universities from the list (first column) would, in the case of number 98, lead to obtaining publications of both the Technical University in Munich and the University of Munich, instead of just the latter. To avoid this problem one has to insert the query Univ Munich | Tech Univ Munich, which ensures proper results. On the other hand, for instance in the case shown as number 78, it was not sufficient to enter Washington Univ, as there are many universities with such an abbreviation; it was necessary to add St. Louis to the query text.

Network analysis. The clustering coefficient C_i for node i is defined as the number e_i of existing links among its nearest neighbors (i.e., the nodes to which it has links) divided by the total number of possible links among them, k_i(k_i − 1)/2:

C_i = 2e_i / (k_i(k_i − 1))    (1)

The total clustering coefficient for the whole network is calculated as the average over all C_i. The assortativity coefficient r is defined by

r = [M^-1 Σ_i j_i k_i − (M^-1 Σ_i (j_i + k_i)/2)^2] / [M^-1 Σ_i (j_i^2 + k_i^2)/2 − (M^-1 Σ_i (j_i + k_i)/2)^2]    (2)

where i goes over all edges in the network, j_i and k_i are the degrees of the nodes at the two ends of edge i, and M is the total number of edges. The coefficient is in the range [−1; 1]: r = 1 means that highly connected nodes have an affinity to connect to other nodes with high k_i, while r = −1 occurs when highly connected nodes tend to link to nodes with very low k_i. The average shortest path 〈l〉 is calculated as the average value of the shortest distance (measured in the number of steps) between all pairs of nodes i, j in the network.

Table 3. University names and search queries.

SCiEnTiFiC REPORtS | (2018) 8:8253 | DOI:10.1038/s41598-018-26511-4

Figure 2. Principal Component Analysis (PCA) of scientific category data.
Given the number of papers each of the 100 universities published in 44 different scientific categories (chosen according to the results obtained in Fig. 1) we perform Principal Component Analysis. Panel (a) presents the outcome for the three most important principal components: each arrow represents the position of an original category (e.g., Physics, Multidisciplinary Sciences) in the new set of coordinates. The colors of the arrows are connected to the OECD classification 29 (see legend). Panels (b) and (c) show the projection of the PCA results onto, respectively, the 2nd PC - 1st PC and the 3rd PC - 2nd PC planes. Panel (d) presents the cumulative value of variance explained by the consecutive PCs. Panel (e) shows the outcome of cluster analysis (k-means algorithm) for the results obtained by PCA (we set the number of clusters to 3).

Figure 3. (a) Representation of the university collaboration network. Each node is a university and links show the connections between them. The width of each link corresponds to the number of common publications between the nodes in question. (b) Link weight probability distribution function (PDF).

Figure 4. Illustration of the weight threshold concept: (a) a weighted university network with weights proportional to the number of common publications, (b) an unweighted network constructed from the weighted network of panel (a) by imposing a weight threshold: only links with weights w > w_T are kept.

Figure 5. Comparison of collaboration network observables as functions of the weight threshold w_T: (a) number of nodes N, (b) number of edges E, (c) average shortest path 〈l〉, (d) clustering coefficient C, (e) assortativity coefficient r, (f) size of the largest connected component S (red points) and number of components n (grey points).

Figure 6. Snapshots of network topology for different thresholds: (a) w_T = 100, (b) w_T = 200 and (c) w_T = 300.

Figure 7.
Snapshots of network topology for different thresholds: (a) w_T = 400, (b) w_T = 500 and (c) w_T = 1000. The colors of vertices correspond to the assignment from a community detection algorithm (fast greedy modularity optimization algorithm 47) and therefore they can change from one panel to another. Plots were created combining the open-source packages igraph 31 (nodes and links) and maps 48 (world map) for the R language 30.

Figure 8. Link weight w vs the geographical distance d between universities in a double logarithmic scale. Orange-gray circles are raw data, while the blue circles with error bars come from logarithmic binning of the data. The dashed line is a power-law fit w = Ad^α with A = 461.0 ± 1.4 and α = −0.364 ± 0.058.

Table 1. Correlation coefficients in categories.

Category N ρ
Acoustics 2997 −0.183
Agricultural Economics and Policy 262 −0.221*
Agricultural Engineering 480 0.177
Agriculture 2921 0.044
Agronomy 1267 0.015
Allergy 2539 −0.191
Anatomy and Morphology 1096 −0.231*
Andrology 257 −0.301**
Anesthesiology 2602 −0.249*
Anthropology 3535 −0.297**
Archaeology 1341 −0.207*
Architecture 616 −0.356***
Area Studies 2337 −0.371***
Art 775 −0.325***
Asian Studies 869 −0.403***
Astronomy and Astrophysics 23507 −0.458***
Automation and Control Systems 5809 −0.238*
Behavioral Sciences 5393 −0.345***
Biochemical Research Methods 8789 −0.390***
Biochemistry and Molecular Biology 39647 −0.442***
Biodiversity Conservation 1509 −0.247*
Biology 6769 −0.501***
Biophysics 8981 −0.356***
Biotechnology and Applied Microbiology 11698 −0.344***
Business 6739 −0.313**
Cardiac and Cardiovascular Systems 17817 −0.287**
Cell Biology 20596 −0.470***
Cell and Tissue Engineering 1738 −0.358***
Chemistry 65996 −0.174.
Classics 745 −0.141
Clinical Neurology 24176 −0.339***
Communication 1558 −0.105
Computer Science 53600 −0.243*
Construction and Building Technology 2157 −0.098
Criminology and Penology 748 −0.219*
Critical Care Medicine 3945 −0.269**
Crystallography 2690 0.062
Dance 17 −0.072
Demography 614 −0.287**
Dentistry 4079 −0.042
Dermatology 5267 −0.232*
Developmental Biology 5417 −0.468***
Ecology 9358 −0.217*
Economics 12516 −0.449***
Education 2488 −0.238*
Education and Educational Research 4373 −0.178
Electrochemistry 2876 −0.109
Emergency Medicine 2003 −0.214*
Endocrinology and Metabolism 15241 −0.334***
Energy and Fuels 4709 −0.081
Engineering 82305 −0.182
Entomology 1348 −0.000
Environmental Sciences 12350 −0.274**
Environmental Studies 3078 −0.294**
Ergonomics 634 0.024
Ethics 1325 −0.347***
Ethnic Studies 483 −0.151
Evolutionary Biology 5809 −0.283**
Family Studies 1198 −0.265**
Film 376 −0.246*
Fisheries 1122 0.074
Folklore 91 −0.114
Food Science and Technology 4087 −0.027
Forestry 1299 −0.076
Gastroenterology and Hepatology 9901 −0.323**
Genetics and Heredity 17932 −0.430***
Geochemistry and Geophysics 9285 −0.295**
Geography 4426 −0.060
Geology 1719 −0.080
Geosciences 10126 −0.185
Geriatrics and Gerontology 3801 −0.430***
Gerontology 4331 −0.328***
Health Care Sciences and Services 6751 −0.311**
Health Policy and Services 4840 −0.307**
Hematology 18635 −0.301**
History 7000 −0.249*
History Of Social Sciences 852 −0.255*
History and Philosophy Of Science 2196 −0.434***
Horticulture 755 0.088
Hospitality 740 0.113
Humanities 3110 −0.317**
Imaging Science and Photographic Technology 2152 −0.234*
Immunology 18895 −0.392***
Industrial Relations and Labor 664 −0.227*
Infectious Diseases 8625 −0.373***
Information Science and Library Science 2132 −0.201*
Instruments and Instrumentation 5474 −0.168
Integrative and Complementary Medicine 634 −0.223*
International Relations 1983 −0.342***
Language and Linguistics 2253 −0.148
Law 2684 −0.343***
Limnology 1012 −0.113
Linguistics 2670 −0.220*
Literary Reviews 633 −0.264**

Table 3 (fragment). University names and search queries (rank, university, search phrase):
1 Harvard University: Harvard Univ
2 University of Cambridge: Univ Cambridge
4 UCL University College London: UCL
10 California Institute of Technology: Caltech
73 Washington University in St. Louis: Washington Univ + St Louis
98 Ludwig-Maximilians-Universität München: Univ Munich | Tech Univ Munich

© The Author(s) 2018

Acknowledgements

Author Contributions
J.A.H. and P.M.A.S. conceived the study, K.S. collected the data, K.S. and J.S. analyzed the data, J.S. wrote the manuscript. All authors reviewed the manuscript.

Additional Information
Supplementary information accompanies this paper at https://doi.org/10.1038/s41598-018-26511-4.

Competing Interests: The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

References

Kuhn, T. S.
The Structure of Scientific Revolutions (University of Chicago Press, 1996).
Popper, K. The Logic of Scientific Discovery (Routledge, 2002).
Lakatos, I. The Methodology of Scientific Research Programmes (Cambridge University Press, 1980).
Feyerabend, P. Against method (Verso, 2010).
Merton, R. K. The matthew effect in science. Science 159, 56-63 (1968).
King, D. A. The scientific impact of nations. Nature 430, 311 (2004).
Hirsh, J. E. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102, 16569 (2005).
Radicchi, F., Fortunato, S. & Castellano, C. Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences of the United States of America 105, 17268 (2008).
Radicchi, F., Fortunato, S., Markines, B. & Vespignani, A. Diffusion of scientific credits and the ranking of scientists. Physical Review E 80, 056103 (2009).
Petersen, A. M., Wang, F.
& Stanley, H. E. Methods for measuring the citations and productivity of scientists across time and discipline. Physical Review E 81, 036114 (2010).
Radicchi, F. & Castellano, C. Rescaling citations of publications in physics. Physical Review E 83, 046116 (2011).
Mzaloumian, A., Young-Ho, E., Helbing, D., Lozano, S. & Fortunato, S. How citation boosts promote scientific paradigm shift and Nobel Prizes. PLoS One 6, e18975 (2011).
Fronczak, P., Fronczak, A. & Hołyst, J. A. Analysis of scientific productivity using maximum entropy principle and fluctuation-dissipation theorem. Physical Review E 75, 026103 (2007).
Sinatra, R., Wang, D., Deville, P., Song, C. & Barabási, A.-L. Quantifying the evolution of individual scientific impact. Science 354, 6312 (2016).
Moed, H. F. A critical comparative analysis of five world university rankings. Scientometrics 110, 967-990 (2017).
Bollen, J., Van de Sompel, H., Hagberg, A. & Chute, R. A principal component analysis of 39 scientific impact measures. PLoS One 4, 1-11 (2009).
Sienkiewicz, J. & Altmann, E. G.
Impact of lexical and sentiment factors on the popularity of scientific papers. Royal Society Open Science 3, 160140 (2016).
Patience, G. S., Patience, C. A., Blais, B. & Bertrand, F. Citation analysis of scientific categories. Heliyon 3, e00300 (2017).
Perc, M. Self-organization of progress across the century of physics. Scientific Reports 3, 1720 (2013).
Kuhn, T., Perc, M. & Helbing, D. Inheritance patterns in citation networks reveal scientific memes. Physical Review X 4, 041036 (2014).
Narin, F., Stevens, K. & Whitlow, E. S. Scientific co-operation in europe and the citation of multinationally authored papers. Scientometrics 21, 313-323 (1991).
Glänzel, W., Schubert, A. & Czerwon, H. J. A bibliometric analysis of international scientific cooperation of the European Union (1985-1995). Scientometrics 45, 185-202 (1999).
Jones, B. F., Wuchty, S. & Uzzi, B. Multi-university research teams: Shifting impact, geography, and stratification in science. Science 322, 1259-1262 (2008).
Börner, K., Penumarthy, S., Meiss, M. & Ke, W. Mapping the diffusion of scholarly knowledge among major US research institutions. Scientometrics 68, 415-426 (2006).
Pan, R. K., Kaski, K. & Fortunato, S. World citation and collaboration networks: uncovering the role of geography in science. Scientific Reports 2, 902 (2012).
Chen, R. H.-G. & Chen, C.-M. Visualizing the world's scientific publications. Journal of the Association for Information Science and Technology 67, 2477-2488 (2016).
Barabási, A. L. & Albert, R. Statistical mechanics of complex networks. Reviews of Modern Physics 74, 47 (2002).
Chmiel, A., Sienkiewicz, J., Suchecki, K. & Hołyst, J. A. Networks of companies and branches in Poland. Physica A 383, 134 (2007).
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/ (2017).
Csardi, G. & Nepusz, T. The igraph software package for complex network research. InterJournal Complex Systems, 1695, http://igraph.org (2006).
Hennemann, S., Rybski, D. & Liefner, I.
The myth of global science collaboration patterns in epistemic communities. Journal of Informetrics 6, 217-225 (2012).
Dias, L., Gerlach, M., Scharloth, J. & Altmann, E. G. Using text analysis to quantify the similarity and evolution of scientific disciplines. Royal Society Open Science 5, 171545 (2018).
Popper, K. R. The nature of philosophical problems and their roots in science. The British Journal for the Philosophy of Science 3, 124 (1952).
Gerlach, M., Font-Clos, F. & Altmann, E. G. Similarity of symbol frequency distributions with heavy tails. Physical Review X 6, 021009 (2016).
Altmann, E. G., Dias, L. & Gerlach, M. Generalized entropies and the similarity of texts. Journal of Statistical Mechanics: Theory and Experiment 2017, 014002 (2017).
Chavalarias, D. & Cointet, J.-P. Phylomemetic patterns in science evolution-the rise and fall of scientific fields. PLoS One 8, 1-11 (2013).
Squartini, T. & Garlaschelli, D. Jan Tinbergen's Legacy for Economic Networks: From the Gravity Model to Quantum Statistics, 161-186 (Springer International Publishing, Cham, 2014).
Barthélemy, M.
Spatial networks. Physics Reports 499, 1-101 (2011).
Kaluza, P., Kölzsch, A., Gastner, M. T. & Blasius, B. The complex network of global cargo ship movements. Journal of The Royal Society Interface 7, 1093-1103 (2010).
Karpiarz, M., Fronczak, P. & Fronczak, A. International trade network: Fractal properties and globalization puzzle. Physical Review Letters 113, 248701 (2014).
Krings, G., Calabrese, F., Ratti, C. & Blondel, V. D. Urban gravity: a model for inter-city telecommunication flows. Journal of Statistical Mechanics: Theory and Experiment 2009, L07003 (2009).
Liben-Nowell, D., Novak, J., Kumar, R., Raghavan, P. & Tomkins, A. Geographic routing in social networks. Proceedings of the National Academy of Sciences of the United States of America 102, 11623-11628 (2005).
Traag, V., Quax, R. & Sloot, P. Modelling the distance impedance of protest attendance. Physica A 468, 171-182 (2017).
Clauset, A., Newman, M. E. J. & Moore, C. Finding community structure in very large networks. Physical Review E 70, 066111 (2004).
code by Richard, A.
& Becker, O. S. version by Ray Brownrigg. Enhancements by Thomas P. Minka, A. R. W. R. & Deckmyn, A. maps: Draw Geographical Maps, https://CRAN.R-project.org/package=maps, R package version 3.2.0 (2017).
On the Cost of Concurrency in Transactional Memory

Srivatsan Ravi (M.E.)
Technische Universität Berlin, Fakultät IV - Elektrotechnik und Informatik, Lehrstuhl für Intelligente Netze

Dissertation committee: Prof. Uwe Nestmann (TU Berlin), Prof. Anja Feldmann (TU Berlin), Prof. Petr Kuznetsov (Télécom ParisTech), Prof. Hagit Attiya (The Technion), Prof. Rachid Guerraoui, Prof. Michel Raynal
Eidesstattliche Erklärung (declaration of authorship): I affirm in lieu of oath that I wrote this dissertation independently and used only the sources and aids indicated. Date. Srivatsan Ravi (M.E.)

Abstract

Current general-purpose CPUs are multicores, offering multiple computing units within a single chip. The performance of programs on these architectures, however, does not necessarily increase proportionally with the number of cores. Designing concurrent programs to exploit these multicores emphasizes the need for achieving efficient synchronization among threads of computation. When several threads conflict on the same data, the threads need to coordinate their actions to ensure correct program behaviour.

Traditional techniques for synchronization are based on locking, which provides threads with exclusive access to shared data. Coarse-grained locking typically forces threads to access large amounts of data sequentially and, thus, does not fully exploit hardware concurrency. Program-specific fine-grained locking or non-blocking (i.e., not using locks) synchronization, on the other hand, is a dark art to most programmers and trusted to the wisdom of a few computing experts. Thus, it is appealing to seek a middle ground between these two extremes: a synchronization mechanism that relieves the programmer of the overhead of reasoning about data conflicts that may arise from concurrent operations without severely limiting the program's performance. The Transactional Memory (TM) abstraction is proposed as such a mechanism: it intends to combine an easy-to-use programming interface with an efficient utilization of the concurrent-computing abilities provided by multicore architectures.
TM allows the programmer to speculatively execute sequences of shared-memory operations as atomic transactions with all-or-nothing semantics: the transaction can either commit, in which case it appears as executed sequentially, or abort, in which case its update operations do not take effect. Thus, the programmer can design software having only sequential semantics in mind and let TM take care, at run-time, of resolving the conflicts in concurrent executions.

Intuitively, we want TMs to allow for as much concurrency as possible: in the absence of severe data conflicts, transactions should be able to progress in parallel. But what are the inherent costs associated with providing high degrees of concurrency in TMs? This is the central question of the thesis.

To address this question, we first focus on the consistency criteria that must be satisfied by a TM implementation. We precisely characterize what it means for a TM implementation to be safe, i.e., to ensure that the view of every transaction could have been observed in some sequential execution. We then present several lower and upper bounds on the complexity of three classes of safe TMs: blocking TMs that allow transactions to delay or abort due to overlapping transactions, non-blocking TMs which adapt to step contention by ensuring that a transaction not encountering steps of overlapping transactions must commit, and partially non-blocking TMs that provide strong non-blocking guarantees (wait-freedom) to only a subset of transactions. We then propose a model for hybrid TM implementations that complement hardware transactions with software transactions. We prove that there is an inherent trade-off between the degree of concurrency allowed between hardware and software transactions and the costs introduced on the hardware.
Finally, we show that optimistic synchronization techniques based on speculative executions are, in a precise sense, better equipped to exploit concurrency than inherently pessimistic techniques based on locking.
10.1007/978-3-642-25873-2_9
[ "https://arxiv.org/pdf/1511.01779v1.pdf" ]
38,739
1511.01779
8a8351c2e0a361975e5358a193ab74983c5d2ca5
On the Cost of Concurrency in Transactional Memory

5 Nov 2015

Fakultät für Elektrotechnik und Informatik, Lehrstuhl für Intelligente Netze und Management Verteilter Systeme, Technische Universität Berlin

Dissertation submitted by Srivatsan Ravi (M.E.) to Fakultät IV - Elektrotechnik und Informatik, Technische Universität Berlin, for the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.).

Promotionsausschuss (doctoral committee): Prof. Uwe Nestmann, Ph.D. (TU Berlin); Prof. Anja Feldmann, Ph.D. (TU Berlin); Prof. Petr Kuznetsov (Télécom ParisTech); Gutachterin (reviewer): Prof. Hagit Attiya (The Technion); Prof. Rachid Guerraoui; Prof. Michel Raynal.

Tag der wissenschaftlichen Aussprache (date of defense): 18 June 2015. Berlin 2015, D 83.
Acknowledgements

In his wonderfully sarcastic critique of the scientific community in His Master's Voice, the great Polish writer Stanisław Lem refers to a specialist as a barbarian whose ignorance is not well-rounded. Writing a Ph.D. thesis is essentially an attempt at becoming a specialist on some topic; whether this thesis on Transactional Memory makes me one is a questionable claim, but I am, culturedly, not totally ignorant, I think. The thesis itself was a long time in the making and would not have been possible without the wonderful support I have received these past four years.

My advisors Anja Feldmann and Petr Kuznetsov guided me throughout my Ph.D. term. A fair amount of whatever good I have learnt these past few years, both scientifically and metascientifically, I owe to Petr. He taught me, by example, what it takes to achieve nontrivial scientific results. He spent several hours schooling me when I had misunderstood some topic and as such suffered the worst of my writing, especially in the first couple of years. He was hard on me when I did badly, but always happy for me when I did well. Apart from being a deep thinker and a brilliant researcher, his scientific integrity and mental discipline have indelibly made me a better human being and student of science. At a personal level, I wish to thank him and his family for undeserved kindness shown to me over the years. I would like to thank Anja for the extraordinary amount of freedom she gave me to pursue my own research and the trust she placed in me, as she does in all her students.
I am especially grateful to Robbert Van Renesse and Bryan Ford, who gave me a taste for independent research and in many ways, helped shape the course of my graduate career. I am of course extremely grateful to all my co-authors who allowed me to include content, written in conjunction with them, in the thesis. So special thanks to Dan Alistarh, Hagit Attiya, Vincent Gramoli, Sandeep Hans, Justin Kopinsky, Petr Kuznetsov and Nir Shavit. The results in Chapter 3 were initiated during a memorable visit to the Technion, Haifa in the Spring of '12. I am thankful to have been hosted by Hagit Attiya and to have had the chance to work with her and Sandeep Hans. I am also very grateful to David Sainz for taking time off and introducing me to some beautiful parts of Israel. Chapter 7 essentially stemmed from a visit to MIT in the summer of '13. I am very grateful to Dan Alistarh and Nir Shavit for hosting me. Special thanks to Justin Kopinsky, who helped keep our discussions alive during our lengthy dry spells when we were seemingly spending all our time thinking about the problem, but without producing any tangible results. Chapter 8 represents the most excruciatingly painful part of the thesis purely in terms of the number of iterations the paper based on this chapter went through. Yet, it was a procedure from which I learnt a lot and I am very thankful to Vincent Gramoli, who initiated the topic during my visit to EPFL in Spring '11. In general, I have benefitted immensely from just talking to researchers in distributed computing during conferences, workshops and research visits. These include Yehuda Afek, Panagiota Fatourou, Rachid Guerraoui, Maurice Herlihy, Victor Luchangco, Adam Morrison and Michel Raynal. Also, great thanks to the anonymous reviewers of my paper submissions whose critiques and comments helped improve the contents of the thesis. 
Back here in Berlin, so many of my INET colleagues have shaped my thought processes and enriched my experience in grad school. Thanks to Stefan Schmid, whose ability to execute several tasks concurrently with minimal synchronization overhead never ceases to amaze anyone in this group. I am also very grateful to Anja, Petr and Stefan for allowing me to be a Teaching Assistant in their respective courses. Apart from being great friends, Felix Poloczek and Matthias Rost have been wonderful office mates and indulged my random discussions about life and research. Arne Ludwig and Carlo Fürst have been great friends; Arne, thanks for all the football discussions and Carlo, for exposing me to some social life. Thanks to Dan Levin, who has been a great friend and has always been there to motivate me and give me a fillip whenever I needed it. Ingmar Poese has been a wonderful friend as well as a constant companion to the

1 Introduction

While the performance of programs would increase proportionally with the performance of a single-core CPU, the performance of programs on multicore CPU architectures does not necessarily increase proportionally with the number of cores. In order to exploit these multicores, the amount of concurrency provided by programs will need to increase as well. Designing concurrent programs that exploit the hardware concurrency provided by modern multicore CPU architectures requires achieving efficient synchronization among threads of computation. However, due to the asynchrony resulting from the CPU's context switching and scheduling policies, it is hard to specify reasonable bounds on relative thread speeds. This makes the design of efficient and correct concurrent programs a difficult task.
The Transactional Memory (TM) abstraction [80,117] is a synchronization mechanism proposed as a solution to this problem: it combines an easy-to-use programming interface with an efficient utilization of the concurrent-computing abilities provided by multicore architectures. This chapter introduces the TM abstraction and presents an overview of the thesis.

Concurrency and synchronization

In this section, we introduce the challenges of concurrent computing and overview the drawbacks associated with traditional synchronization techniques.

Concurrent computing overview

Shared memory model. A process represents a thread of computation that is provided with its own private memory which cannot be accessed by other processes. However, these independent processes will have to synchronize their actions in an asynchronous environment in order to implement a user application, which they do by communicating via the CPU's shared memory. In the shared memory model of computation, processes communicate by reading and writing to a fragment of the shared memory, referred to as a base object, in a single atomic (i.e., indivisible) instruction. Modern CPU architectures additionally allow processes to invoke certain powerful atomic read-modify-write (rmw) instructions [75], which allow processes to write to a base object subject to the check of an invariant. For example, the compare-and-swap instruction is a rmw instruction that is supported by most modern architectures: it takes as input old, new and atomically updates the value of a base object to new and returns true iff its value prior to applying the instruction is equal to old; otherwise it returns false.

Concurrent implementations. A concurrent implementation provides each process with an algorithm to apply CPU instructions on the shared base objects for the operations of the user application. For example, consider the problem of implementing a concurrent list-based set [81].
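The base-object primitives just described can be made concrete. The following is a minimal Python sketch (our illustration, not the thesis's formalism): a `BaseObject` class whose internal lock merely stands in for the atomicity that hardware guarantees for each primitive.

```python
import threading

class BaseObject:
    """Toy model of a shared base object. The internal lock models the
    atomicity that hardware provides for each primitive; it is not part
    of the abstraction itself."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        # Trivial rmw primitive: never changes the state.
        with self._lock:
            return self._value

    def write(self, new):
        # Nontrivial, unconditional primitive.
        with self._lock:
            self._value = new
            return "ok"

    def compare_and_swap(self, old, new):
        # Atomically: update to `new` and return True iff value == old;
        # otherwise leave the value unchanged and return False.
        with self._lock:
            if self._value == old:
                self._value = new
                return True
            return False

x = BaseObject(0)
assert x.compare_and_swap(0, 1)      # succeeds: value was 0
assert not x.compare_and_swap(0, 2)  # fails: value is now 1
assert x.read() == 1
```

The conditional nature of compare-and-swap (write subject to an invariant check) is what makes it powerful enough to build the non-blocking implementations discussed below.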
The list-based set is the set abstraction implemented as a sorted linked list supporting operations insert(v), remove(v) and contains(v); v ∈ Z. The set abstraction stores a set of integer values, initially empty. The update operations, insert(v) and remove(v), return a boolean response: true if and only if v is absent (for insert(v)) or present (for remove(v)) in the list. After insert(v) is complete, v is present in the list, and after remove(v) is complete, v is absent in the list. The contains(v) operation returns a boolean that is true if and only if v is present in the list. A concurrent implementation of the list-based set is simply an emulation of the set abstraction that is realized by processes applying the available CPU instructions on the underlying base objects.

Safety and liveness. What does it mean for a concurrent implementation to be correct? Firstly, the implementation must satisfy a safety property: there are no bad reachable states in any execution of the implementation. Intuitively, we characterize safety for a concurrent implementation of a data abstraction by verifying whether the responses returned in the concurrent execution may have been observed in a sequential execution of the same abstraction. For example, the safety property for a concurrent list-based set implementation stipulates that the responses of the set operations in a concurrent execution are consistent with some sequential execution of the list-based set. However, a concurrent set implementation that does not return any response trivially ensures safety; thus, the implementation must satisfy some liveness property specifying the conditions under which the operations must return a response. For example, one liveness property we may wish to impose on the concurrent list-based set is wait-freedom: every process completes the operations it invokes irrespective of the behaviour of other processes. As another example, consider the mutual exclusion problem [41], which involves sharing some critical data resource among processes.
The safety property for mutual exclusion stipulates that at most one process has access to the resource in any execution, in which case we say that the process is inside the critical section. However, one may notice that an implementation which ensures that no process ever enters the critical section is trivially safe, but not very useful. Thus, the mutual exclusion implementation must satisfy some liveness property specifying the conditions under which the processes must eventually enter the critical section. For example, we expect that the implementation is deadlock-free: if every process is given the chance to execute its algorithm, some process will enter the critical section. In contrast to safety, a liveness property can be violated only in an infinite execution, e.g., by no process ever entering the critical section. In shared memory computing, we are concerned with deriving concurrent implementations with strong safety and liveness properties, thus emphasizing the need for efficient synchronization among processes.

Synchronization using locks

A lock is a concurrency abstraction that implements mutual exclusion and is the traditional solution for achieving synchronization among processes. Processes acquire a lock prior to executing code inside the critical section and release the lock afterwards, thereby allowing other processes to modify the data accessed by the code within the critical section. In essence, after acquiring the lock, the code within the critical section can be executed atomically. However, lock-based implementations suffer from some fundamental drawbacks.

Ease of designing lock-based programs. Ideally, to reduce the programmer's burden, we would like to take any sequential implementation and transform it to an efficient concurrent one with minimal effort. Consider a simple locking protocol that works for most applications: coarse-grained locking, which typically serializes access to a large amount of data.
Although trivial for the programmer to implement, it does not exploit hardware concurrency. In contrast, fine-grained locking may exploit concurrency better, but requires the programmer to have a good understanding of the data-flow relationships in the application and precisely specify which locks provide exclusive access to which data. For example, consider the problem of implementing a concurrent list-based set. The sequential implementation of the list-based set uses a sorted linked list data structure in which each data item (except the tail of the list) maintains a next field to provide a pointer to the successor. Every operation (insert, remove and contains) invoked with a parameter v ∈ Z traverses the list starting from the head up to the data item storing value v′ ≥ v. If v′ = v, then contains returns true, remove(v) unlinks the corresponding element and returns true, and insert(v) returns false. Otherwise, contains(v) and remove(v) return false, while insert(v) adds a new data item with value v to the list and returns true. Given such a sequential implementation, we may derive a coarse-grained implementation of the list-based set by having processes acquire a lock on the head of the list, thus forcing one operation to complete before the next starts. Alternatively, a fine-grained protocol may involve acquiring locks hand-over-hand [29]: a process holds the lock on at most two adjacent data items of the list. Yet, while such a protocol produces a correct set implementation [26], it is not a universal strategy that applies to other popular data abstractions like queues and stacks.

Composing lock-based programs. It is hard to compose smaller atomic operations based on locks to produce a larger atomic operation without affecting safety [81,117]. Consider the fifo queue abstraction supporting the enqueue(v); v ∈ Z and dequeue operations.
Suppose that we wish to solve the problem of atomically dequeuing from a queue Q 1 and enqueuing the item returned, say v, to a queue Q 2 . While the individual actions of dequeuing from Q 1 and enqueuing v to Q 2 may be atomic, we wish to ensure that the combined action is atomic: no process must observe the absence of v or that it is present in both Q 1 and Q 2 . A possible solution to this specific problem is to force a process attempting atomic modification of Q 1 and Q 2 to acquire a lock. Firstly, this requires prior knowledge of the identities of the two queue instances. Secondly, this solution does not exploit hardware concurrency since the lock itself becomes a concurrency bottleneck. Moreover, imagine that processes p 1 and p 2 need to acquire two locks L 1 and L 2 in order to atomically modify a set of queue instances. Without imposing a pre-agreed upon order on such lock acquisitions, there is the possibility of introducing deadlocks where processes wait infinitely long without completing their operations. For example, imagine the following concurrency scenario: process p 1 (and resp., p 2 ) holds the lock L 1 (and resp., L 2 ) and attempts to acquire the lock L 2 (and resp., L 1 ). Thus, process p 1 (and resp., p 2 ) waits infinitely long for p 2 (and resp., p 1 ) to complete its operation.

Non-blocking synchronization

It is impossible to derive lock-based implementations that provide non-blocking liveness guarantees, i.e., some process completes its operations irrespective of the behaviour of other processes. In fact, even the weak non-blocking liveness property of obstruction-freedom [16] cannot be satisfied by lock-based implementations: a process must complete its operation if it eventually runs solo without interleaving events of other processes.
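The circular-wait scenario above can be avoided by the pre-agreed acquisition order the text mentions. The following Python sketch (our illustration; the helper name `atomic_transfer` and the ordering by object id are our assumptions) always acquires locks in one global order, so the p1/p2 deadlock cannot arise even when callers pass the locks in opposite orders.

```python
import threading

L1, L2 = threading.Lock(), threading.Lock()
q1, q2 = [1, 2, 3], []

def atomic_transfer(locks, action):
    """Acquire all needed locks in a globally agreed order (here: by id),
    run the action, then release in reverse order. The consistent total
    order on acquisitions rules out circular waits, hence deadlock."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        return action()
    finally:
        for lock in reversed(ordered):
            lock.release()

def move(locks):
    # Dequeue from q1 and enqueue to q2 as one combined atomic action.
    def action():
        if q1:
            q2.append(q1.pop(0))
    atomic_transfer(locks, action)

# The two threads request the locks in opposite orders -- exactly the
# scenario that deadlocks without a pre-agreed order.
t1 = threading.Thread(target=move, args=([L1, L2],))
t2 = threading.Thread(target=move, args=([L2, L1],))
t1.start(); t2.start()
t1.join(); t2.join()
assert q1 == [3] and q2 == [1, 2]
```

Note that this fix requires knowing all lock identities up front, which is precisely the kind of global knowledge the text argues is hard to assume in composed programs.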
Concurrent implementations providing non-blocking liveness properties are appealing in practice since they overcome problems like deadlocks and priority inversions [30] inherent to lock-based implementations. Thus, non-blocking (without using locks) solutions using conditional rmw instructions like compare-and-swap have been proposed as an alternative to lock-based implementations. However, as with fine-grained locking, implementing correct non-blocking algorithms can be hard and requires handcrafted problem-specific strategies. For example, the state-of-the-art list-based set implementation of [81,105] is non-blocking: the insert and remove operations, as they traverse the list, help concurrent operations to physically remove data items (using compare-and-swap) that are logically deleted, i.e., "marked for deletion". But one cannot employ an identical algorithmic technique for implementing a non-blocking queue [106], whose semantics is orthogonal to that of the set abstraction. Moreover, addressing the compositionality issue, as with lock-based solutions, requires ad-hoc strategies that are not easy to realize [81].
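To make the CAS-with-retry pattern of non-blocking algorithms concrete, here is a Python sketch of a Treiber-style lock-free stack (a classic example; it is not the list-based set algorithm of [81,105]). The lock inside `AtomicRef` only models the atomicity of a single compare-and-swap instruction; no operation ever blocks while holding it across application steps.

```python
import threading

class AtomicRef:
    """Models one base object supporting atomic read and compare-and-swap
    (by identity, so None and node objects compare safely)."""
    def __init__(self, value=None):
        self._value, self._lock = value, threading.Lock()
    def read(self):
        with self._lock:
            return self._value
    def compare_and_swap(self, old, new):
        with self._lock:
            if self._value is old:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, val, nxt):
        self.val, self.next = val, nxt

class TreiberStack:
    """Lock-free stack: each operation retries its CAS until it succeeds
    against the current head, so a halted process cannot block others."""
    def __init__(self):
        self.head = AtomicRef(None)
    def push(self, v):
        while True:  # retry loop: the hallmark of non-blocking algorithms
            top = self.head.read()
            if self.head.compare_and_swap(top, Node(v, top)):
                return
    def pop(self):
        while True:
            top = self.head.read()
            if top is None:
                return None
            if self.head.compare_and_swap(top, top.next):
                return top.val
```

The retry loop illustrates why such designs are handcrafted: the correctness argument hinges on the specific invariant checked by each CAS, and it does not transfer directly to abstractions with different semantics, such as queues.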
Algorithm 1.1 Sequential implementation of remove operation of list-based set
 1: remove(v):
 2:   prev ← head
 3:   curr ← read(prev.next)
 4:   while (tval ← read(curr.val)) < v do
 5:     prev ← curr
 6:     curr ← read(curr.next)
 7:   end while
 8:   if tval = v then
 9:     tnext ← read(curr.next)
10:     write(prev.next, tnext)
11:   Return (tval = v)

Algorithm 1.2 Using TM to implement remove operation of list-based set
 1: remove(v): atomic {
 2:   prev ← head
 3:   curr ← tx-read(prev.next)
 4:   while (tval ← tx-read(curr.val)) < v do
 5:     prev ← curr
 6:     curr ← tx-read(curr.next)
 7:   end while
 8:   if tval = v then
 9:     tnext ← tx-read(curr.next)
10:     tx-write(prev.next, tnext)
11:   tryCommit()
12:   Return (tval = v) } catch {AbortException a}
13:   { Return ⊥ }

Transactional Memory (TM)

Transactional Memory (TM) [80,117] addresses the challenge of resolving conflicts (concurrent reading and writing to the same data) in an efficient and safe manner by offering a simple interface in which sequences of shared memory operations on data items can be declared as optimistic transactions. The underlying idea of TM, inspired by databases [58], is to treat each transaction as atomic: a transaction may either commit, in which case it appears as executed sequentially, or abort, in which case none of its update operations take effect. Thus, it enables the programmer to design software applications having only sequential semantics in mind and let TM take care of dynamically handling the conflicts resulting from concurrent executions at run-time. A TM implementation provides processes with algorithms for implementing transactional operations such as read, write, tryCommit and tryAbort on data items using base objects. TM implementations typically ensure that all committed transactions appear to execute sequentially in some total order respecting the timing of non-overlapping transactions. Moreover, unlike database transactions, intermediate states witnessed by the read operations of an incomplete transaction may affect the user application.
Thus, to ensure that the TM implementation does not export any pathological executions, it is additionally expected that every transaction (including aborted and incomplete ones) must return responses that are consistent with some sequential execution of the TM implementation. In general, given a sequential implementation of a data abstraction, a corresponding TM-based concurrent one encapsulates the sequential (high-level) operations within a transaction. Then, the TM-based concurrent implementation of the data abstraction replaces each read and write of a data item with the transactional read and write implementations, respectively. If the transaction commits, then the result of the high-level operation is returned to the application. Otherwise, one of the transactional operations may be aborted, in which case the write operations performed by the transaction do not take effect and the high-level operation is typically re-started with a new transaction. To illustrate this, we refer to the sequential implementation of the remove operation of the list-based set depicted in Algorithm 1.1 of Figure 1.1. In a TM-based concurrent implementation of the list-based set (Algorithm 1.2), each read (and resp., write) operation performed by remove(v) on a data item X of the list is replaced with tx-read(X) (and resp., tx-write(X, arg)). tx-read(X) returns the value of the data item X or aborts the transaction, while tx-write(X, arg) writes the value arg to X or aborts the transaction. Finally, the process attempts to commit the transaction by invoking the tryCommit operation. If tryCommit is successful, the response of remove(v) is returned; otherwise a failed response (denoted ⊥) is returned, in which case the write operations performed by the transaction are "rolled back". Intuitively, it is easy to understand how TM simplifies concurrent programming.
Deriving a TM-based concurrent implementation of the list-based set simply requires encapsulating the operations to be executed atomically within a transaction using an atomic delimiter. The underlying TM implementation endeavours to dynamically execute the transactions by resolving the conflicts that might arise from processes reading and writing to the same data item at run-time. Intuitively, since the TM implementation enforces a strong safety property, the resulting list-based implementation is also safe: the responses of its operations are consistent with some sequential execution of the list-based set. One may view TM as a universal construction [13,28,45,49,75] that accepts as input the operations of a sequential implementation and strives to execute them concurrently. Specifically, TM is designed to work in a dynamic environment where neither the sequence of operations nor the data items accessed by a transaction are known a priori. Thus, the response of a read operation performed by a transaction is returned immediately to the application, and the application determines the next data item that must be accessed by the transaction. TM-based implementations overcome the drawbacks of traditional synchronization techniques based on locks and compare-and-swap. Firstly, the TM interface places minimal overhead on the programmer: using TM only requires encapsulating the sequential operations within transactions and handling an exception should the transaction be aborted. Secondly, the ability to execute multiple operations atomically allows TM-based implementations to seamlessly compose smaller atomic operations to produce larger ones. For example, suppose that we wish to atomically dequeue from a queue Q 1 and enqueue(v) in queue Q 2 , where v is the value returned by Q 1 .dequeue. Solving this problem using TM simply requires encapsulating the sequential implementation of Q 1 .dequeue, followed by Q 2 .enqueue(v), within a transaction.
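The composability of the atomic-block interface can be sketched as follows. This deliberately simplistic Python "TM" serializes every atomic block under one global lock; it shows only the programming interface and its composition guarantee, not the optimistic abort/retry machinery of a real TM implementation (the names `atomic` and `_global_lock` are ours).

```python
import threading
from contextlib import contextmanager

_global_lock = threading.Lock()  # stand-in for a real (optimistic) TM runtime

@contextmanager
def atomic():
    """Toy 'transaction': everything inside appears to execute atomically.
    A real TM would instead execute speculatively and abort on conflict."""
    with _global_lock:
        yield

q1, q2 = [10, 20], []

def move():
    # Composing two operations atomically: no thread can observe the
    # dequeued item absent from both queues, or present in both.
    with atomic():
        if q1:
            v = q1.pop(0)   # Q1.dequeue
            q2.append(v)    # Q2.enqueue(v)

move()
assert q1 == [20] and q2 == [10]
```

Unlike the ordered-locking fix for the two-queue problem, the caller needs no prior knowledge of which objects the block touches; the delimiter alone expresses the required atomicity.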
Note that a TM implementation may internally employ locks or conditional rmw instructions like compare-and-swap. However, TM raises the level of abstraction by exposing an easy-to-use compositional transactional interface for the user application that is oblivious to the specifics of the implementation and the semantics of the user application.

Summary of the thesis

TM allows the programmer to speculatively execute sequences of shared-memory operations as atomic transactions: if the transaction commits, the operations appear as executed sequentially, or if the transaction aborts, the update operations do not take effect. The combination of speculation and the simple programming interface provided by TM seemingly overcomes the problems associated with traditional synchronization techniques based on locks and compare-and-swap. But are there some fundamental drawbacks associated with the TM abstraction? Does providing high degrees of concurrency in TMs come with inherent costs? This is the central question of the thesis. In the rest of this introductory chapter, we provide a summary of the results in the thesis that give some answers to this question.

Safety for transactional memory

We first need to define the consistency criteria that must be satisfied by a TM implementation. We formalize the semantics of a safe TM: every transaction, including aborted and incomplete ones, must observe a view that is consistent with some sequential execution. This is important, since if the intermediate view is not consistent with any sequential execution, the application may experience a fatal irrevocable error or enter an infinite loop. Additionally, the response of a transaction's read should not depend on an ongoing transaction that has not started committing yet. This restriction, referred to as deferred-update semantics, appears desirable, since the ongoing transaction may still abort, thus rendering the read inconsistent.
We define the notion of deferred-update semantics formally and apply it to several TM consistency criteria proposed in literature. We then verify whether the resulting TM consistency criterion is a safety property [11,100,108] in the formal sense, i.e., whether the set of histories (interleavings of invocations and responses of transactional operations) is prefix-closed and limit-closed. We first consider the popular consistency criterion of opacity [64]. Opacity requires the states observed by all transactions, including uncommitted ones, to be consistent with a global serialization, i.e., a serial execution constituted by committed transactions. Moreover, the serialization should respect the real-time order: a transaction that completed before (in real time) another transaction started should appear first in the serialization. By definition, opacity reduces correctness of a TM history to correctness of all its prefixes, and thus is prefix-closed and limit-closed. Thus, to verify that a history is opaque, one needs to verify that each of its prefixes is consistent with some global serialization. To simplify verification and explicitly introduce deferred-update semantics into a TM correctness criterion, we specify a general criterion of du-opacity [18], which requires the global serial execution to respect the deferred-update property. Informally, a du-opaque history must be indistinguishable from a totally-ordered history with respect to which no transaction reads from a transaction that has not started committing. Assuming that in an infinite history every transaction completes each of the operations it invoked, we prove that du-opacity is a safety property. One may notice that the intended safety semantics does not require, as opacity does, that all transactions observe the same serial execution.
As long as committed transactions constitute a serial execution and every transaction witnesses a consistent state, the execution can be considered "safe": no run-time error that cannot occur in a serial execution can happen. Two definitions in literature have adopted this approach [43,85]. We introduce "deferred-update" versions of these properties and discuss how the resulting properties relate to du-opacity.

Complexity of transactional memory

One may observe that a TM implementation that aborts or never commits any transaction is trivially safe, but not very useful. Thus, the TM implementation must satisfy some nontrivial liveness property specifying the conditions under which the transactional operations must return some response, and a progress property specifying the conditions under which the transaction is allowed to abort. Two properties considered important for TM performance are read invisibility [23] and disjoint-access parallelism [86]. Read invisibility may boost the concurrency of a TM implementation by ensuring that no reading transaction can cause any other transaction to abort. The idea of disjoint-access parallelism is to allow transactions that do not access the same data item to proceed independently of each other without memory contention. We investigate the inherent complexities in terms of time and memory resources associated with implementing safe TMs that provide strong liveness and progress properties, possibly combined with attractive requirements like read invisibility and disjoint-access parallelism. Which classes of TM implementations are (im)possible to implement efficiently?

Blocking TMs. We begin by studying TM implementations that are blocking, in the sense that a transaction may be delayed or aborted due to concurrent transactions.
• We prove that even inherently sequential TMs, which allow a transaction to be aborted due to a concurrent transaction, incur significant complexity costs when combined with read invisibility and disjoint-access parallelism.

• We prove that progressive TMs, which allow a transaction to be aborted only if it encounters a read-write or write-write conflict with a concurrent transaction [62], may need to exclusively control a linear number of data items at some point in the execution.

• We then turn our focus to strongly progressive TMs [64] that, in addition to progressiveness, ensure that not all concurrent transactions conflicting over a single data item abort. We prove that in any strongly progressive TM implementation that accesses the shared memory with read, write and conditional primitives, such as compare-and-swap, the total number of remote memory references [14,22] (RMRs) that take place in an execution in which n concurrent processes perform transactions on a single data item might reach Ω(n log n) in the worst case.

• We show that, with respect to the amount of expensive synchronization patterns like compare-and-swap instructions and memory barriers [17,103], progressive implementations are asymptotically optimal. We use this result to establish a linear (in the transaction's data set size) separation between the worst-case transaction expensive synchronization complexity of progressive TMs and permissive TMs that allow a transaction to abort only if committing it would violate opacity.

Non-blocking TMs. Next, we focus on TMs that avoid using locks and rely on non-blocking synchronization: a prematurely halted transaction cannot prevent other transactions from committing. Possibly the weakest non-blocking progress condition is obstruction-freedom [78,82], stipulating that every transaction running in the absence of step contention, i.e., not encountering steps of concurrent transactions, must commit.
In fact, several early TM implementations [52,79,101,117,120] satisfied obstruction-freedom. However, circa 2005, several papers presented the case for a shift from TMs that provide obstruction-free TM-progress to lock-based progressive TMs [39,40,48]. They argued that lock-based TMs tend to outperform obstruction-free ones by allowing for simpler algorithms with lower complexity overheads. We prove the following lower bounds for obstruction-free TMs.

• Combining invisible reads with even weak forms of disjoint-access parallelism [24] in obstruction-free TMs is impossible.

• A read operation in an n-process obstruction-free TM implementation incurs Ω(n) memory stalls [16,46].

• A read-only transaction may need to perform a linear (in n) number of expensive synchronization patterns.

We then present a progressive TM implementation that beats all of these lower bounds, thus suggesting that the course correction from non-blocking (obstruction-free) TMs to blocking (progressive) TMs was indeed justified.

Partially non-blocking TMs. Lastly, we explore the costs of providing non-blocking progress to only a subset of transactions. Specifically, we require read-only transactions to commit wait-free, i.e., every such transaction commits within a finite number of its steps, but updating transactions are guaranteed to commit only if they run in the absence of concurrency. We show that combining this kind of partial wait-freedom with read invisibility or disjoint-access parallelism comes with inherent costs. Specifically, we establish the following lower bounds for TMs that provide this kind of partial wait-freedom.

• This kind of partial wait-freedom equipped with invisible reads results in maintaining unbounded sets of versions for every data item.

• It is impossible to implement a strict form of disjoint-access parallelism [60].
• Combining with the weak form of disjoint-access parallelism means that a read-only transaction (with an arbitrarily large read set) must sometimes perform at least one expensive synchronization pattern per read operation in some executions.

Hybrid transactional memory

We turn our focus to Hybrid transactional memory (HyTM) [35,37,88,99]. The TM abstraction, in its original manifestation, augmented the processor's cache-coherence protocol and extended the CPU's instruction set with instructions to indicate which memory accesses must be transactional [80]. Most popular TM designs subsequent to the original proposal in [80] have implemented all the functionality in software [36,52,79,101,117]. More recently, CPUs have included hardware extensions to support small transactions [1,107,111]. Hardware transactions may be spuriously aborted for several reasons: cache capacity overflow, interrupts etc. This has led to proposals for best-effort HyTMs in which the fast, but potentially unreliable hardware transactions are complemented with slower, but more reliable software transactions. However, the fundamental limitations of building a HyTM with nontrivial concurrency between hardware and software transactions are not well understood. Typically, hardware transactions employ code instrumentation techniques to detect concurrency scenarios and abort in the case of contention. But are there inherent instrumentation costs of implementing a HyTM, and what are the trade-offs between these costs and the provided concurrency, i.e., the ability of the HyTM to execute hardware and software transactions in parallel? The thesis makes the following contributions which help determine the cost of concurrency in HyTMs.

• We propose a general model for HyTM implementations, which captures the notion of cached accesses as performed by hardware transactions, and precisely defines instrumentation costs in a quantifiable way.
• We derive lower and upper bounds in this model, which capture for the first time an inherent trade-off between the degree of concurrency allowed between hardware and software transactions and the instrumentation overhead introduced on the hardware.

Optimism for boosting concurrency

Lock-based implementations are conventionally pessimistic in nature: the operations invoked by processes are not "abortable" and return only after they are successfully completed. The TM abstraction is a realization of optimistic concurrency control: speculatively execute transactions, abort and roll back on dynamically detected conflicts. But are optimistic implementations fundamentally better equipped to exploit concurrency than pessimistic ones? We compare the amount of concurrency one can obtain by converting a sequential implementation of a data abstraction into a concurrent one using optimistic or pessimistic synchronization techniques. To establish a fair comparison of such implementations, we introduce a new correctness criterion for concurrent implementations, called locally serializable linearizability, defined independently of the synchronization techniques they use. We treat an implementation's concurrency as its ability to accept schedules of sequential operations from different processes. More specifically, we assume an external scheduler that defines which processes execute which steps of the corresponding sequential implementation in a dynamic and unpredictable fashion. This allows us to define the concurrency provided by an implementation as the set of interleavings of steps of sequential operations (or schedules) it accepts, i.e., is able to effectively process. Then, the more schedules the implementation accepts without hampering correctness, the more concurrent it is. The thesis makes the following contributions.
• We provide a framework to analytically capture the inherent concurrency provided by two broad classes of synchronization techniques: pessimistic implementations that implement some form of mutual exclusion, and optimistic implementations based on speculative executions.

• We show that implementations based on pessimistic synchronization and "semantics-oblivious" TMs are suboptimal, in the sense that there exist simple schedules of the list-based set which cannot be accepted by any pessimistic or TM-based implementation. Specifically, we prove that TM-based implementations accept schedules of the list-based set that cannot be accepted by any pessimistic implementation. However, we also show pessimistic implementations of the list-based set which accept schedules that cannot be accepted by any TM-based implementation.

• We show that there exists an optimistic implementation of the list-based set that is concurrency optimal, i.e., it accepts all correct schedules.

Our results suggest that "semantics-aware" optimistic implementations may be better suited to exploiting concurrency than their pessimistic counterparts. We first define the TM model, the TM properties proposed in literature and the complexity metrics we consider in Chapter 2. Chapter 3 is on safety for TMs. Chapter 4 is on the complexity of blocking TMs, non-blocking TMs that satisfy obstruction-freedom are covered in Chapter 5, and we present lower bounds for partially non-blocking TMs in Chapter 6. Chapter 7 is devoted to the study of hybrid TMs. In Chapter 8, we compare the relative abilities of optimistic and pessimistic synchronization techniques in exploiting concurrency. Chapter 9 presents closing comments and future directions. Viewed collectively, the results hopefully shine light on the foundations of the TM abstraction that is widely expected to be the Zeitgeist of the concurrent computational model.

2 Transactional Memory model

All models are wrong, but some models are useful.
— George Edward Pelham Box

In this chapter, we formalize the TM model and discuss some important TM properties proposed in literature. In Section 2.1, we formalize the specification of TMs. In Section 2.2, we introduce the basic TM-correctness property of strict serializability that we consider in the thesis. Sections 2.3 and 2.4 overview progress and liveness properties for TMs, respectively, and identify the relations between them. Section 2.5 defines the notion of invisible reads while Section 2.6 is on disjoint-access parallelism. Finally, in Section 2.7, we introduce some of the complexity metrics considered in the thesis.

TM interface and TM implementations

In this section, we first describe the shared memory model of computation and then introduce the TM abstraction.

The shared memory model. The thesis considers the standard asynchronous shared memory model of computation in which a set of n ∈ N processes (that may fail by crashing) communicate by applying operations on shared objects [16]. An object is an instance of an abstract data type. An abstract data type τ is a Mealy machine that is specified as a tuple (Φ, Γ, Q, q 0 , δ), where Φ is a set of operations, Γ is a set of responses, Q is a set of states, q 0 ∈ Q is an initial state and δ ⊆ Q × Φ × Q × Γ is a transition relation that determines, for each state and each operation, the set of possible resulting states and produced responses [6]. Here, (q, π, q′, r) ∈ δ implies that when an operation π ∈ Φ is applied on an object of type τ in state q, the object moves to state q′ and returns response r. An implementation of an object type τ provides a specific data-representation of τ that is realized by processes applying primitives on shared base objects, each of which is assigned an initial value. In order to implement an object, processes are provided with an algorithm, which is a set of deterministic state machines, one for each process.
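The tuple (Φ, Γ, Q, q 0 , δ) can be concretized for the set abstraction from Chapter 1. In this Python sketch (our illustration; class and function names are ours), δ is a function from a state and an operation to the new state and the response, with Q the family of integer sets, q 0 the empty set, Φ = {insert, remove, contains} and Γ = {true, false}. For simplicity the transition relation here is deterministic, i.e., a function.

```python
class ADT:
    """An abstract data type (Phi, Gamma, Q, q0, delta): delta maps
    (state, operation, arguments) to (new state, response)."""
    def __init__(self, q0, delta):
        self.state, self.delta = q0, delta

    def apply(self, op, *args):
        # One transition of the Mealy machine.
        self.state, resp = self.delta(self.state, op, *args)
        return resp

def set_delta(q, op, v):
    # Sequential specification of the set abstraction.
    if op == "insert":
        return (q | {v}, v not in q)   # true iff v was absent
    if op == "remove":
        return (q - {v}, v in q)       # true iff v was present
    if op == "contains":
        return (q, v in q)
    raise ValueError(op)

s = ADT(frozenset(), set_delta)
assert s.apply("insert", 5) is True
assert s.apply("insert", 5) is False   # already present
assert s.apply("contains", 5) is True
assert s.apply("remove", 5) is True
```

States are immutable (`frozenset`), matching the view of δ as a relation over states rather than an in-place mutation.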
In the thesis, we use the term primitive to refer to operations on base objects and reserve the term operation for the object that is implemented from the base objects. A primitive is a generic atomic read-modify-write (rmw) procedure applied to a base object [46,75]. It is characterized by a pair of functions ⟨g, h⟩: given the current state of the base object, g is an update function that computes its state after the primitive is applied, while h is a response function that specifies the outcome of the primitive returned to the process. A rmw primitive is trivial if it never changes the value of the base object to which it is applied. Otherwise, it is nontrivial. A rmw primitive ⟨g, h⟩ is conditional if there exist v, w such that g(v, w) = v and there exist v, w such that g(v, w) ≠ v [51]. Read is an example of a trivial rmw primitive that takes no input arguments: when applied to a base object with value v, the update function leaves the state of the base object unchanged and the response function returns the value v. Write is an example of a nontrivial rmw primitive that takes an input argument v′: when applied to a base object with value v, its update function changes the value of the base object to v′ and its response function returns ok. Compare-and-swap is an example of a nontrivial conditional rmw primitive: its update function receives an input argument ⟨old, new⟩ and changes the value v of the base object to which it is applied iff v = old. Load-linked/store-conditional is another example of a nontrivial conditional rmw primitive: the load-linked primitive executed by some process p i returns the value of the base object to which it is applied and the store-conditional primitive's update function receives an input new and atomically changes the value of the base object to new iff the base object has not been updated by any other process since the load-linked event by p i .
Fetch-and-add is an example of a nontrivial rmw primitive that is not conditional: its update function applied to a base object with an integer value v takes an integer w as input and changes the value of the base object to v + w.

Transactional memory (TM). Transactional memory allows a set of data items (called t-objects) to be accessed via transactions. Every transaction T k has a unique identifier k. We make no assumptions on the size of a t-object, i.e., the cardinality of the set V of possible values a t-object can store. A transaction T k may contain the following t-operations, each being a matching pair of an invocation and a response: read k (X) returns a value in V , denoted read k (X) → v, or a special value A k ∉ V (abort); write k (X, v), for a value v ∈ V , returns ok or A k ; tryC k returns C k ∉ V (commit) or A k . As we show in the subsequent Section 2.2, we can specify TM as an abstract data type. Note that a TM interface may additionally provide a start k t-operation that returns ok or A k , which is the first t-operation transaction T k must invoke, or a tryA k t-operation that returns A k . However, the actions performed inside the start k may be performed as part of the first t-operation performed by the transaction. The tryA k t-operation allows the user application to explicitly abort a transaction and can be useful, but since each individual t-read or t-write is allowed to abort, the tryA k t-operation provides no additional expressive power to the TM interface. Thus, for simplicity, we do not incorporate these t-operations in our TM specification.

TM implementations. A TM implementation provides processes with algorithms for implementing read k , write k and tryC k () of a transaction T k by applying primitives from a set of shared base objects, each of which is assigned an initial value. We assume that a process starts a new transaction only after its previous transaction has committed or aborted.
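The ⟨g, h⟩ characterization of rmw primitives can be made concrete with a small model. The following is an illustrative sketch, not code from the thesis; `apply_rmw` and the primitive encodings are our own:

```python
# Illustrative model of read-modify-write (rmw) primitives as pairs of an
# update function g and a response function h, applied atomically to a base object.

def apply_rmw(state, g, h, arg=None):
    """Atomically apply an rmw primitive: return (new_state, response)."""
    return g(state, arg), h(state, arg)

# read: trivial primitive -- g leaves the state unchanged, h returns it.
read = (lambda v, _: v, lambda v, _: v)

# write(v'): nontrivial -- g replaces the state, h returns "ok".
write = (lambda v, new: new, lambda v, new: "ok")

# compare-and-swap(old, new): nontrivial and conditional -- updates iff v == old.
cas = (lambda v, a: a[1] if v == a[0] else v,
       lambda v, a: v == a[0])

# fetch-and-add(w): nontrivial but not conditional -- always updates.
faa = (lambda v, w: v + w, lambda v, w: v)

state = 0
state, r = apply_rmw(state, *cas, (0, 5))   # succeeds: 0 == 0, returns True
state, r2 = apply_rmw(state, *cas, (0, 7))  # fails: 5 != 0, state unchanged
state, old = apply_rmw(state, *faa, 3)      # adds 3, returns previous value 5
state, cur = apply_rmw(state, *read)        # trivial: state unchanged
```

The trivial/conditional distinctions are visible directly: `read`'s update function never changes the state, while `cas` changes it only under its condition.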
In the rest of this section, we define the terms specifically in the context of TM implementations, but they may be used analogously in the context of any concurrent implementation of an abstract data type.

Executions and configurations. An event of a process p i (sometimes we say step of p i ) is an invocation or response of an operation performed by p i or a rmw primitive ⟨g, h⟩ applied by p i to a base object b along with its response r (we call it a rmw event and write (b, ⟨g, h⟩, r, i)). A configuration (of an implementation) specifies the value of each base object and the state of each process. The initial configuration is the configuration in which all base objects have their initial values and all processes are in their initial states. An execution fragment is a (finite or infinite) sequence of events. An execution of an implementation M is an execution fragment where, starting from the initial configuration, each event is issued according to M and each response of a rmw event (b, ⟨g, h⟩, r, i) matches the state of b resulting from all preceding events. An execution E · E′ denotes the concatenation of E and execution fragment E′, and we say that E · E′ is an extension of E or that E′ extends E.

Let E be an execution fragment. For every transaction (resp., process) identifier k, E|k denotes the subsequence of E restricted to events of transaction T k (resp., process p k ). If E|k is non-empty, we say that T k (resp., p k ) participates in E, else we say E is T k -free (resp., p k -free). Two executions E and E′ are indistinguishable to a set T of transactions, if for each transaction T k ∈ T , E|k = E′|k. A TM history is the subsequence of an execution consisting of the invocation and response events of t-operations. Two histories H and H′ are equivalent if txns(H) = txns(H′) and for every transaction T k ∈ txns(H), H|k = H′|k.

Data sets of transactions.
The read set (resp., the write set) of a transaction T k in an execution E, denoted Rset E (T k ) (resp., Wset E (T k )), is the set of t-objects that T k reads (resp., writes to) in E. More specifically, if E contains an invocation of read k (X) (resp., write k (X, v)), we say that X ∈ Rset E (T k ) (resp., Wset E (T k )) (for brevity, we sometimes omit the subscript E from the notation). The data set of T k is Dset(T k ) = Rset(T k ) ∪ Wset(T k ). A transaction is called read-only if Rset(T k ) ≠ ∅ ∧ Wset(T k ) = ∅; write-only if Wset(T k ) ≠ ∅ ∧ Rset(T k ) = ∅ and updating if Wset(T k ) ≠ ∅. Note that, in our TM model, the data set of a transaction is not known a priori, i.e., at the start of the transaction, and it is identifiable only by the set of data items the transaction has invoked a read or write on in the given execution.

Transaction orders. Let txns(E) denote the set of transactions that participate in E. In an infinite history H, we assume that for each T k ∈ txns(H), H|k is finite; i.e., transactions do not issue an infinite number of t-operations. An execution E is sequential if every invocation of a t-operation is either the last event in the history H exported by E or is immediately followed by a matching response. We assume that executions are well-formed, i.e., for all T k , E|k begins with the invocation of a t-operation, is sequential and has no events after A k or C k . A transaction T k ∈ txns(E) is complete in E if E|k ends with a response event. The execution E is complete if all transactions in txns(E) are complete in E. A transaction T k ∈ txns(E) is t-complete if E|k ends with A k or C k ; otherwise, T k is t-incomplete. T k is committed (resp., aborted) in E if the last event of T k is C k (resp., A k ). The execution E is t-complete if all transactions in txns(E) are t-complete.
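The read/write/data-set definitions above are easy to operationalize. The following sketch is our own illustration (the helper names are hypothetical): it derives the sets from a log of t-operation invocations and classifies each transaction:

```python
# Illustrative sketch (not from the thesis): deriving Rset/Wset from the
# t-operations a transaction invokes in an execution, and classifying the
# transaction as read-only, write-only or updating.

def data_sets(ops):
    """ops: list of (txn_id, kind, t_object) with kind in {'read', 'write'}."""
    rset, wset = {}, {}
    for k, kind, x in ops:
        target = rset if kind == "read" else wset
        target.setdefault(k, set()).add(x)
    return rset, wset

def classify(k, rset, wset):
    r, w = rset.get(k, set()), wset.get(k, set())
    if r and not w:
        return "read-only"
    if w and not r:
        return "write-only"
    return "updating" if w else "empty"

ops = [(1, "read", "X"), (1, "write", "Y"),   # T1 reads X and writes Y
       (2, "read", "X"),                      # T2 only reads
       (3, "write", "Z")]                     # T3 only writes
rset, wset = data_sets(ops)
```

As in the model above, the sets are derived from the given execution rather than declared up front.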
For transactions T k , T m ∈ txns(E), we say that T k precedes T m in the real-time order of E, denoted T k ≺ RT E T m , if T k is t-complete in E and the last event of T k precedes the first event of T m in E. If neither T k ≺ RT E T m nor T m ≺ RT E T k , then T k and T m are concurrent in E. An execution E is t-sequential if there are no concurrent transactions in E.

Latest written value and legality. Let H be a t-sequential history. For every operation read k (X) in H, we define the latest written value of X as follows: if T k contains a write k (X, v) preceding read k (X), then the latest written value of X is the value of the latest such write to X. Otherwise, the latest written value of X is the value of the argument of the latest write m (X, v) that precedes read k (X) and belongs to a committed transaction in H. (This write is well-defined since H starts with T 0 writing to all t-objects.) We say that read k (X) is legal in a t-sequential history H if it returns the latest written value of X, and H is legal if every read k (X) in H that does not return A k is legal in H. We also assume, for simplicity, that the user application invokes a read k (X) at most once within a transaction T k . This assumption incurs no loss of generality, since a repeated read can be assigned to return a previously returned value without affecting the history's legality.

Contention. We say that a configuration C after an execution E is quiescent (resp., t-quiescent) if every transaction T k ∈ txns(E) is complete (resp., t-complete) in C. If a transaction T is incomplete in an execution E, it has exactly one enabled event, which is the next event the transaction will perform according to the TM implementation. Events e and e′ of an execution E contend on a base object b if they are both events on b in E and at least one of them is nontrivial (an event is trivial (resp., nontrivial) if it is the application of a trivial (resp., nontrivial) primitive).
We say that T is poised to apply an event e after E if e is the next enabled event for T in E. We say that transactions T and T′ concurrently contend on b in E if they are poised to apply contending events on b after E. We say that an execution fragment E is step contention-free for t-operation op k if the events of E|op k are contiguous in E. We say that an execution fragment E is step contention-free for T k if the events of E|k are contiguous in E. We say that E is step contention-free if E is step contention-free for all transactions that participate in E.

TM-correctness

Correctness for TMs is specified as a safety property on TM histories [11,100,108]. In this section, we introduce the popular TM-correctness condition strict serializability [109]: all committed transactions appear to execute sequentially in some total order respecting the real-time transaction orders. We then explain how strict serializability is related to linearizability [83]. In the thesis, we only consider TM-correctness conditions like strict serializability and its restrictions. We formally define strict serializability below, but other TM-correctness conditions studied in the thesis can be found in Chapter 3.

First, we define how to derive a t-complete history from a t-incomplete one:

- First, for every transaction T k ∈ txns(H) with an incomplete t-operation op k in H, if op k = read k ∨ write k , insert A k somewhere after the invocation of op k ; otherwise, if op k = tryC k (), insert C k or A k somewhere after the last event of T k .
- After all transactions are complete, for every transaction T k that is not t-complete, insert tryC k · A k after the last event of transaction T k .

We refer to any legal t-complete t-sequential history S that is equivalent to a t-complete history derived from H in this way as a serialization of H. We say that H is strictly serializable if there exists a serialization S of H such that for any two transactions T k , T m ∈ txns(H), if T k ≺ RT H T m , then T k precedes T m in S.
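A toy witness checker for the definition above can be sketched as follows. This is our own illustration, not the thesis's formalism; all names are hypothetical. Given a proposed t-sequential serialization, it verifies that the serialization is legal (every read returns the latest written value) and that it respects the real-time order of the original history:

```python
# Sketch: checking a proposed serialization S of a history. S is a list of
# committed transactions in serial order, each as (txn_id, ops) with ops of the
# form ('read', X, v) or ('write', X, v). rt_pairs lists real-time precedences
# (k, m) meaning T_k precedes T_m in the real-time order of the history.

def is_legal_serialization(S, init):
    state = dict(init)                     # latest committed value of each t-object
    for _, ops in S:
        local = {}                         # the transaction's own earlier writes
        for op in ops:
            if op[0] == "write":
                local[op[1]] = op[2]
            elif op[2] != local.get(op[1], state[op[1]]):
                return False               # read does not return the latest written value
        state.update(local)                # writes take effect at commit
    return True

def respects_real_time(S, rt_pairs):
    pos = {k: i for i, (k, _) in enumerate(S)}
    return all(pos[k] < pos[m] for k, m in rt_pairs)

S = [(1, [("write", "X", 5)]),
     (2, [("read", "X", 5), ("write", "X", 7)])]
legal = is_legal_serialization(S, {"X": 0})
ordered = respects_real_time(S, [(1, 2)])      # T1 precedes T2 in real time
```

A history is then strictly serializable when some serial order passes both checks.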
In general, given a TM-correctness condition C, we say that a TM implementation M satisfies C if every execution of M satisfies C.

Strict serializability as linearizability. We now show how we can specify TM as an abstract data type. The sequential specification of a TM is specified as follows:

1. Φ is the set of all transactions {T i } i∈N
2. Γ is the set of incommensurate vectors {[r 1 , . . . , r i ]}; i ∈ N; where each r j ; 1 ≤ j ≤ i − 1 ∈ {v ∈ V, A, ok} and r i ∈ {A, C}
3. The state of the TM is a vector of the state of each t-object X m . The state of a t-object X m is a value v m ∈ V of X m . Thus, Q ⊆ {[v i 1 , . . . , v i m , . . .]}; where each v i m ∈ V
4. q 0 ∈ Q = [ov 1 , . . . , ov m , . . .], where each ov m ∈ V
5. δ is defined as follows: Let T k be a transaction applied to the TM in state q = [v 1 , . . . , v m , . . .].
   • For every X ∈ Rset(T k ), the response of read k (X) is defined as follows: If T k contains a write k (X, v) prior to read k (X), then the response is v; else the response is the current state of X.
   • For every X ∈ Wset(T k ), the response of write k (X, v) is ok.
   • Transaction T k returns the response C, in which case the TM moves to state q′ defined as follows: for every X j ∈ Wset(T k ) to which T k writes value nv j , q′[j] = nv j ; if X j ∉ Wset(T k ), q′[j] is unchanged. Otherwise, T k returns the response A, in which case q′ = q.

In general, the correctness of an implementation of a data type is commonly captured by the criterion of linearizability. In the TM context, a t-complete history H is linearizable with respect to the TM type if there exists a t-sequential history S equivalent to H such that (1) S respects the real-time ordering of transactions in H and (2) S is consistent with the sequential specification of TM. The following lemma, which illustrates the similarity between strict serializability and linearizability with respect to the TM type, is now immediate.
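The transition relation δ above can be simulated directly. The sketch below is our own illustration (it uses a dict keyed by t-object name instead of a state vector): a committed transaction's reads see its own earlier writes, else the current state, and its writes take effect atomically at commit:

```python
# Sketch of the sequential TM specification: the TM state maps t-objects to
# values; applying a transaction yields per-t-operation responses plus C or A,
# and only a commit changes the state.

def apply_transaction(state, ops, commit=True):
    """ops: list of ('read', X) or ('write', X, v). Returns (new_state, responses)."""
    local, responses = {}, []
    for op in ops:
        if op[0] == "read":
            x = op[1]
            # A read returns the transaction's own prior write, else the state.
            responses.append(local.get(x, state[x]))
        else:
            _, x, v = op
            local[x] = v
            responses.append("ok")
    if commit:
        new_state = dict(state)
        new_state.update(local)            # write set applied atomically
        return new_state, responses + ["C"]
    return dict(state), responses + ["A"]  # aborted: state unchanged

q0 = {"X": 0, "Y": 0}
q1, r1 = apply_transaction(q0, [("write", "X", 1), ("read", "X"), ("read", "Y")])
```

Here the read of X returns the transaction's own pending write (1), the read of Y returns the initial state (0), and the new state reflects the write set only after C.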
TM-progress

One may notice that a TM implementation that, in every execution, aborts every transaction is trivially strictly serializable, but not very useful. A TM-progress condition specifies the conditions under which a transaction is allowed to abort. Technically, a TM-progress condition specified this way is a safety property since it can be violated in a finite execution (cf. Chapter 3 for details on safety properties). Ideally, a TM-progress condition must provide non-blocking progress, in the sense that a prematurely halted transaction cannot prevent all other transactions from committing. Such a TM-progress condition is also said to be lock-free since it cannot be achieved by use of locks and mutual exclusion. A non-blocking TM-progress condition is considered useful in asynchronous systems with process failures since it prevents the TM implementation from deadlocking (processes wait infinitely long without committing their transactions).

Obstruction-freedom. Perhaps the weakest non-blocking TM-progress condition is obstruction-freedom, which stipulates that a transaction may be aborted only if it encounters steps of a concurrent transaction [82].

Definition 2.3 (Obstruction-free (OF) TM-progress). We say that a TM implementation M provides obstruction-free (OF) TM-progress if for every execution E of M , if any transaction T k ∈ txns(E) returns A k in E, then E is not step contention-free for T k .

We now survey the popular blocking TM-progress properties proposed in literature. Intuitively, unlike non-blocking TM-progress conditions that adapt to step contention, a blocking TM-progress condition allows a transaction to be aborted due to overlap contention.

Minimal progressiveness. Intuitively, the most basic TM-progress condition is one which provides only sequential TM-progress, i.e., a transaction may be aborted due to a concurrent transaction. In literature, this is referred to as minimal progressiveness [64].
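The minimal progressiveness condition just described, that an abort is excused only by the presence of a concurrent transaction, can be checked on a toy schedule. This is an illustrative sketch of our own; the interval encoding of transactions is an assumption, not the thesis's notation:

```python
# Sketch: checking whether the aborts in an execution are permitted by minimal
# progressiveness. Each transaction is (id, first, last, outcome), where first
# and last are the positions of its first and last events and outcome is 'C'
# (committed) or 'A' (aborted).

def concurrent(t1, t2):
    _, f1, l1, _ = t1
    _, f2, l2, _ = t2
    return not (l1 < f2 or l2 < f1)   # intervals overlap

def minimally_progressive(txns):
    for t in txns:
        # An aborted transaction must overlap some other transaction.
        if t[3] == "A" and not any(concurrent(t, u) for u in txns if u[0] != t[0]):
            return False
    return True

ok = minimally_progressive([(1, 0, 4, "A"), (2, 2, 6, "C")])    # overlapping: allowed
bad = minimally_progressive([(1, 0, 4, "A"), (2, 6, 9, "C")])   # sequential abort: forbidden
```

The second schedule is rejected because T1 runs t-sequentially and still aborts, which minimal progressiveness forbids.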
Definition 2.4 (Minimal progressiveness). We say that a TM implementation M provides minimal progressive TM-progress (or minimal progressiveness) if for every execution E of M and every transaction T k ∈ txns(E) that returns A k in E, there exists a transaction T m ∈ txns(E) that is concurrent to T k in E [64].

Given TM conditions C 1 and C 2 , if every TM implementation that satisfies C 1 also satisfies C 2 , but the converse is not true, we say that C 2 ≺ C 1 .

Observation 2.2. Minimal progressiveness ≺ Obstruction-freedom.

Mv-permissiveness. Perelman et al. introduced the notion of mv-permissiveness, a TM-progress property designed to prevent read-only transactions from being aborted.

Definition 2.8 (Mv-permissiveness). A TM implementation M is mv-permissive if for every execution E of M and for every transaction T k ∈ txns(E) that returns A k in E, we have that Wset(T k ) ≠ ∅ and there exists an updating transaction T m ∈ txns(E) such that T k and T m conflict in E.

We observe that mv-permissiveness is strictly stronger than progressiveness, but incomparable to strong progressiveness.

Observation 2.5. Progressiveness ≺ Mv-permissiveness.

Proof. Since mv-permissive TMs allow a transaction to be aborted only on read-write conflicts, they also satisfy progressiveness. But the converse is not true. Consider an execution in which a read-only transaction T i runs concurrently with a conflicting updating transaction T j . By the definition of progressiveness, both T i and T j may be aborted in such an execution. However, a mv-permissive TM would not allow T i to be aborted since it is read-only.

Observation 2.6. Strong progressiveness ⊀ Mv-permissiveness and Mv-permissiveness ⊀ Strong progressiveness.

Proof. Consider an execution in which a read-only transaction T i runs concurrently with an updating transaction T j such that T i and T j conflict on at least two t-objects.
By the definition of strong progressiveness, both T i and T j may be aborted in such an execution. However, a mv-permissive TM would not allow T i to be aborted since it is read-only. On the other hand, consider an execution in which two updating transactions T i and T j conflict on a single t-object. A mv-permissive TM allows both T i and T j to be aborted, but strong progressiveness ensures that at least one of T i or T j is not aborted in such an execution.

TM-liveness

Observe that a TM-progress condition only specifies the conditions under which a transaction is aborted, but does not specify the conditions under which it must commit. For instance, the OF TM-progress condition specifies that a transaction T may be aborted only in executions that are not step contention-free for T , but does not guarantee that T is committed in a step contention-free execution. Thus, in addition to a progress condition, we must stipulate a liveness [11,100] condition. We now define the TM-liveness conditions considered in the thesis.

Definition 2.9 (Sequential TM-liveness). A TM implementation M provides sequential TM-liveness if for every finite execution E of M , and every transaction T k that runs t-sequentially and applies the invocation of a t-operation op k immediately after E, the finite step contention-free extension for op k contains a response.

Definition 2.10 (Interval contention-free (ICF) TM-liveness). A TM implementation M provides interval contention-free (ICF) TM-liveness if for every finite execution E of M such that the configuration after E is quiescent, and every transaction T k that applies the invocation of a t-operation op k immediately after E, the finite step contention-free extension for op k contains a response.

Definition 2.11 (Starvation-free TM-liveness).
A TM implementation M provides starvation-free TM-liveness if in every execution of M , each t-operation eventually returns a matching response, assuming that no concurrent t-operation stops indefinitely before returning.

Definition 2.12 (Obstruction-free (OF) TM-liveness). A TM implementation M provides obstruction-free (OF) TM-liveness if for every finite execution E of M , and every transaction T k that applies the invocation of a t-operation op k immediately after E, the finite step contention-free extension for op k contains a matching response.

Definition 2.13 (Wait-free (WF) TM-liveness). A TM implementation M provides wait-free (WF) TM-liveness if in every execution of M , every t-operation returns a response in a finite number of its steps.

The following observations are immediate from the definitions:

Observation 2.7. Sequential TM-liveness ≺ ICF TM-liveness ≺ OF TM-liveness ≺ WF TM-liveness and Starvation-free TM-liveness ≺ WF TM-liveness.

Since ICF TM-liveness guarantees that a t-operation returns a response if there is no other concurrent t-operation, we have:

Observation 2.8. ICF TM-liveness ≺ Starvation-free TM-liveness.

However, we observe that OF TM-liveness and starvation-free TM-liveness are incomparable.

Observation 2.9. Starvation-free TM-liveness ⊀ OF TM-liveness and OF TM-liveness ⊀ Starvation-free TM-liveness.

Proof. Consider the step contention-free execution of t-operation op k concurrent with t-operation op m : op k must return a matching response within a finite number of its steps, but this is not necessarily ensured by starvation-free TM-liveness (op m may be delayed indefinitely). On the other hand, in executions where two concurrent t-operations op k and op m encounter step contention, but neither stalls indefinitely, both must return matching responses. But this is not guaranteed by OF TM-liveness.
Invisible reads

In this section, we introduce the notion of invisible reads that intuitively ensures that a reading transaction does not cause a concurrent transaction to abort. Since most TM workloads are believed to be read-dominated, this is considered to be an important TM property for performance [25,65].

Invisible reads. Informally, in a TM using invisible reads, a transaction cannot reveal any information about its read set to other transactions. Thus, given an execution E and some transaction T k with a non-empty read set, transactions other than T k cannot distinguish E from an execution in which T k 's read set is empty. This prevents TMs from applying nontrivial primitives during t-read operations and from announcing read sets of transactions during tryCommit. Most popular TM implementations like TL2 [39] and NOrec [36] satisfy this property.

Definition 2.14 (Invisible reads [23]). We say that a TM implementation M uses invisible reads if for every execution E of M :

• for every read-only transaction T k ∈ txns(E), no event of E|k is nontrivial in E,
• for every updating transaction T k ∈ txns(E) with Rset E (T k ) ≠ ∅, there exists an execution E′ of M such that
  - Rset E′ (T k ) = ∅, txns(E) = txns(E′) and ∀T m ∈ txns(E) \ {T k }: E|m = E′|m,
  - for any two transactions T i , T j ∈ txns(E), if the last event of T i precedes the first event of T j in E, then the last event of T i precedes the first event of T j in E′.

Weak invisible reads. We introduce the notion of weak invisible reads that prevents t-read operations from applying nontrivial primitives only in the absence of concurrent transactions. Specifically, weak read invisibility allows t-read operations of a transaction T to be "visible", i.e., write to base objects, only if T is concurrent with another transaction.

Definition 2.15 (Weak invisible reads).
For any execution E and any t-operation π k invoked by some transaction T k ∈ txns(E), let E|π k denote the subsequence of E restricted to events of π k in E. We say that a TM implementation M satisfies weak invisible reads if for any execution E of M and every transaction T k ∈ txns(E) with Rset(T k ) ≠ ∅ that is not concurrent with any transaction T m ∈ txns(E), E|π k does not contain any nontrivial events, where π k is any t-read operation invoked by T k in E.

For example, the popular TM implementation DSTM [79] satisfies weak invisible reads, but not invisible reads. Algorithm 5.1 in Chapter 4 depicts a TM implementation that is based on DSTM satisfying weak invisible reads, but not the stronger definition of invisible reads.

Disjoint-access parallelism (DAP)

The notion of disjoint-access parallelism (DAP) [86] is considered important in the TM context since it allows two transactions accessing unrelated t-objects to execute without memory contention. In this section, we review the DAP definitions proposed in literature and identify the relations between them.

Strict data-partitioning. Let E|X denote the subsequence of the execution E derived by removing all events associated with t-object X. A TM implementation M is strict data-partitioned [64], if for every t-object X, there exists a set of base objects Base M (X) such that

• for any two t-objects X 1 , X 2 : Base M (X 1 ) ∩ Base M (X 2 ) = ∅,
• for every execution E of M and every transaction T ∈ txns(E), every base object accessed by T in E is contained in Base M (X) for some X ∈ Dset(T ),
• for all executions E and E′ of M , if E|X = E′|X for some t-object X, then the configurations after E and E′ only differ in the states of the base objects in Base M (X).

Proof. Let M be any strict data-partitioned TM implementation. Then, M is also strict DAP.
Indeed, since any two transactions accessing mutually disjoint data sets in a strict data-partitioned implementation cannot access a common base object in any execution E of M , M also ensures that any two transactions contend on the same base object in E only if they access a common t-object. Consider the following execution E of a strict DAP TM implementation M that begins with two transactions T 1 and T 2 that access disjoint data sets in E. A strict data-partitioned TM implementation would preclude transactions T 1 and T 2 from accessing the same base object, but a strict DAP TM implementation does not preclude this possibility.

We now describe two relaxations of strict DAP. For the formal definitions, we introduce the notion of a conflict graph which captures the dependency relation among t-objects accessed by transactions.

Read-write (RW) disjoint-access parallelism. Informally, read-write (RW) DAP means that two transactions can contend on a common base object only if their data sets are connected in the conflict graph, capturing write-set overlaps among all concurrent transactions. We denote by τ E (T i , T j ) the set of transactions (T i and T j included) that are concurrent to at least one of T i and T j in an execution E. Let G̃(T i , T j , E) be an undirected graph whose vertex set is ∪ T ∈τ E (T i ,T j ) Dset(T ) and there is an edge between t-objects X and Y iff there exists T ∈ τ E (T i , T j ) such that {X, Y } ⊆ Wset(T ). We say that T i and T j are read-write disjoint-access in E if there is no path between a t-object in Dset(T i ) and a t-object in Dset(T j ) in G̃(T i , T j , E). A TM implementation M is read-write disjoint-access parallel (RW DAP) if, for all executions E of M , transactions T i and T j contend on the same base object in E only if T i and T j are not read-write disjoint-access in E or there exists a t-object X ∈ Dset(T i ) ∩ Dset(T j ).

Proposition 2.2. RW DAP ≺ Strict DAP.

Proof.
From the definitions, it is immediate that every strict DAP TM implementation satisfies RW DAP. But the converse is not true (Algorithm 5.1 describes a TM implementation that satisfies RW and weak DAP, but not strict DAP). Consider the following execution E of a weak DAP or RW DAP TM implementation M that begins with the t-incomplete execution of a transaction T 0 that accesses t-objects X and Y , followed by the step contention-free executions of two transactions T 1 and T 2 which access X and Y respectively. Transactions T 1 and T 2 may contend on a base object since there is a path between X and Y in G(T 1 , T 2 , E). However, a strict DAP TM implementation would preclude transactions T 1 and T 2 from contending on the same base object since Dset(T 1 ) ∩ Dset(T 2 ) = ∅ in E.

Weak disjoint-access parallelism. Informally, weak DAP means that two transactions can concurrently contend on a common base object only if their data sets are connected in the conflict graph, capturing data-set overlaps among all concurrent transactions. Let G(T i , T j , E) be an undirected graph whose vertex set is ∪ T ∈τ E (T i ,T j ) Dset(T ) and there is an edge between t-objects X and Y iff there exists T ∈ τ E (T i , T j ) such that {X, Y } ⊆ Dset(T ). We say that T i and T j are disjoint-access in E if there is no path between a t-object in Dset(T i ) and a t-object in Dset(T j ) in G(T i , T j , E). A TM implementation M is weak disjoint-access parallel (weak DAP) if, for all executions E of M , transactions T i and T j concurrently contend on the same base object in E only if T i and T j are not disjoint-access in E or there exists a t-object X ∈ Dset(T i ) ∩ Dset(T j ) [24,110].

We now prove an auxiliary lemma, inspired by [24], concerning weak DAP TM implementations that will be useful in subsequent proofs.
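The conflict-graph construction just defined lends itself to a direct sketch (ours, not the thesis's): each concurrent transaction's data set contributes a clique of edges, and the disjoint-access test reduces to graph reachability between the two data sets:

```python
# Sketch: the weak-DAP conflict graph G(Ti, Tj, E) and the disjoint-access test.
# Vertices are the t-objects in the data sets of transactions concurrent with
# Ti or Tj (Ti and Tj included); each such data set forms a clique. Ti and Tj
# are disjoint-access iff no path connects their data sets.

from collections import deque

def disjoint_access(dset_i, dset_j, concurrent_dsets):
    adj = {}
    for dset in concurrent_dsets:          # each data set contributes a clique
        for x in dset:
            adj.setdefault(x, set()).update(dset - {x})
    seen, frontier = set(dset_i), deque(dset_i)
    while frontier:                        # BFS from Ti's data set
        x = frontier.popleft()
        if x in dset_j:
            return False                   # reached Tj's data set: not disjoint
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return True

# T0 accesses {X, Y}, linking the otherwise-disjoint data sets of T1 and T2:
linked = disjoint_access({"X"}, {"Y"}, [{"X"}, {"Y"}, {"X", "Y"}])
free = disjoint_access({"X"}, {"Y"}, [{"X"}, {"Y"}])
```

Restricting the cliques to write sets instead of data sets would give the RW DAP graph G̃ under the same reachability test.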
Intuitively, the lemma states that two transactions that are disjoint-access and running one after the other in an execution of a weak DAP TM cannot contend on the same base object.

Lemma 2.10. Let M be any weak DAP TM implementation. Let α · ρ 1 · ρ 2 be any execution of M where ρ 1 (resp., ρ 2 ) is the step contention-free execution fragment of transaction T 1 ∉ txns(α) (resp., T 2 ∉ txns(α)) and transactions T 1 , T 2 are disjoint-access in α · ρ 1 · ρ 2 . Then, T 1 and T 2 do not contend on any base object in α · ρ 1 · ρ 2 .

Proof. Suppose, by contradiction, that T 1 and T 2 contend on the same base object in α · ρ 1 · ρ 2 . If in ρ 1 , T 1 performs a nontrivial event on a base object on which they contend, let e 1 be the last event in ρ 1 in which T 1 performs such an event to some base object b and e 2 , the first event in ρ 2 that accesses b. Otherwise, T 1 only performs trivial events in ρ 1 to base objects on which it contends with T 2 in α · ρ 1 · ρ 2 : let e 2 be the first event in ρ 2 in which T 2 performs a nontrivial event to some base object b on which they contend and e 1 , the last event of ρ 1 that accesses b. Let ρ′ 1 (resp., ρ′ 2 ) be the longest prefix of ρ 1 (resp., ρ 2 ) that does not include e 1 (resp., e 2 ). Since before accessing b, the execution is step contention-free for T 1 , α · ρ′ 1 · ρ′ 2 is an execution of M . By construction, T 1 and T 2 are disjoint-access in α · ρ′ 1 · ρ′ 2 and α · ρ′ 1 · ρ′ 2 is indistinguishable to T 2 from α · ρ 1 · ρ′ 2 . Hence, T 1 and T 2 are poised to apply contending events e 1 and e 2 on b in the configuration after α · ρ′ 1 · ρ′ 2 , a contradiction since T 1 and T 2 cannot concurrently contend on the same base object.

We now show that weak DAP is a weaker property than RW DAP.

Proof. Clearly, every implementation that satisfies RW DAP also satisfies weak DAP since the conflict graph G̃(T i , T j , E) (for RW DAP) is a subgraph of G(T i , T j , E) (for weak DAP).
However, the converse is not true (Algorithm 5.2 describes a TM implementation that satisfies weak DAP, but not RW DAP). Consider the following execution E of a weak DAP TM implementation M that begins with the t-incomplete execution of a transaction T 0 that reads X and writes to Y , followed by the step contention-free executions of two transactions T 1 and T 2 which write to X and read Y respectively. Transactions T 1 and T 2 may contend on a base object since there is a path between X and Y in G(T 1 , T 2 , E). However, a RW DAP TM implementation would preclude transactions T 1 and T 2 from contending on the same base object: there is no edge between t-objects X and Y in the corresponding conflict graph G̃(T 1 , T 2 , E) because X and Y are not contained in the write set of T 0 .

Thus, the above propositions imply:

Corollary 2.11. Weak DAP ≺ RW DAP ≺ Strict DAP ≺ Strict data-partitioning.

TM complexity metrics

We now present an overview of some of the TM complexity metrics we consider in the thesis.

Step complexity. The step complexity metric is the total number of events that a process performs on the shared memory, in the worst case, in order to complete its operation on the implementation.

RAW/AWAR patterns. Attiya et al. identified two common expensive synchronization patterns that frequently arise in the design of concurrent algorithms: read-after-write (RAW) or atomic write-after-read (AWAR) [17,103] and showed that it is impossible to derive RAW/AWAR-free implementations of a wide class of data types that include sets, queues and deadlock-free mutual exclusion. Note that the shared memory model in the thesis makes the assumption that CPU events are performed atomically: every "read" of a base object returns the value of the "latest write" to the base object. In practice however, the CPU architecture's memory model [3] that specifies the outcome of CPU instructions is relaxed, without enforcing a strict order among the shared memory instructions.
Intuitively, RAW (read-after-write) or AWAR (atomic-write-after-read) patterns [17] capture the amount of "expensive synchronization", i.e., the number of costly memory barriers or conditional primitives [3] incurred by the implementation in relaxed CPU architectures. The metric appears to be more practically relevant than simply counting the number of steps performed by a process, as it accounts for expensive cache-coherence operations or instructions like compare-and-swap. Detailed coverage on memory fences and the RAW/AWAR metric can be found in [103].

Definition 2.16 (Read-after-write metric). A RAW (read-after-write) pattern performed by a transaction T_k in an execution π is a pair of its events e and e', such that: (1) e is a write to a base object b by T_k, (2) e' is a subsequent read of a base object b' ≠ b by T_k, and (3) no event on b by T_k takes place between e and e'. In the thesis, we are concerned only with non-overlapping RAWs, i.e., the read performed by one RAW precedes the write performed by the other RAW.

Definition 2.17 (Atomic write-after-read metric). An AWAR (atomic-write-after-read) pattern e in an execution π · e is a nontrivial rmw event on an object b which atomically returns the value of b (resulting after π) and updates b with a new value. For example, consider the execution π · e where e is the application of a compare-and-swap rmw primitive that returns true.

Stall complexity. Intuitively, the stall metric captures the fact that the time a process might have to spend before it applies a primitive on a base object can be proportional to the number of processes that try to update the object concurrently. Let M be any TM implementation. Let e be an event applied by process p to a base object b as it performs a transaction T during an execution E of M.
Let E = α · e_1 · · · e_m · e · β be an execution of M, where α and β are execution fragments and e_1 · · · e_m is a maximal sequence of m ≥ 1 consecutive nontrivial events by distinct processes other than p that access b. Then, we say that T incurs m memory stalls in E on account of e. The number of memory stalls incurred by T in E is the sum of memory stalls incurred by all events of T in E [16, 46].

In the thesis, we adopt the following definition of a k-stall execution from [16, 46].

Definition 2.18. An execution α · σ_1 · · · σ_i is a k-stall execution for t-operation op executed by process p if

• α is p-free,
• there are distinct base objects b_1, . . . , b_i and disjoint sets of processes S_1, . . . , S_i whose union does not include p and has cardinality k such that, for j = 1, . . . , i, each process in S_j has an enabled nontrivial event about to access base object b_j after α, and in σ_j, p applies events by itself until it is the first about to apply an event to b_j, then each of the processes in S_j applies an event that accesses b_j, and finally, p applies an event that accesses b_j,
• p invokes exactly one t-operation op in the execution fragment σ_1 · · · σ_i,
• σ_1 · · · σ_i contains no events of processes not in ({p} ∪ S_1 ∪ · · · ∪ S_i),
• in every ({p} ∪ S_1 ∪ · · · ∪ S_i)-free execution fragment that extends α, no process applies a nontrivial event to any base object accessed in σ_1 · · · σ_i.

Observe that in a k-stall execution E for t-operation op, the number of memory stalls incurred by op in E is k. The following lemma will be of use in our proofs.

Lemma 2.12. Let α · σ_1 · · · σ_i be a k-stall execution for t-operation op executed by process p. Then, α · σ_1 · · · σ_i is indistinguishable to p from a step contention-free execution [16].

Remote memory references (RMR) [22]. Modern shared memory CPU architectures employ a memory hierarchy [73]: a hierarchy of memory devices with different capacities and costs.
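The memory-stall definition above admits a direct computation on finite executions: for each event of process p, walk backwards over the maximal run of immediately preceding nontrivial accesses to the same base object by distinct processes other than p. The sketch below is illustrative only; the (process, kind, base_object) encoding is an assumption.

```python
def memory_stalls(execution, p):
    """Sum of memory stalls incurred by process p's events. Each event
    is (process, kind, base_object) with kind in {'trivial', 'nontrivial'}."""
    stalls = 0
    for i, (proc, _, b) in enumerate(execution):
        if proc != p:
            continue
        # Walk back over the maximal run of consecutive nontrivial
        # events on b by distinct processes other than p.
        seen = set()
        j = i - 1
        while j >= 0:
            q, kind, b2 = execution[j]
            if q == p or kind != 'nontrivial' or b2 != b or q in seen:
                break
            seen.add(q)
            j -= 1
        stalls += len(seen)
    return stalls
```

For instance, two distinct processes with pending nontrivial events on b immediately before p's access of b charge p two stalls, matching the m ≥ 1 clause of the definition.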
Some of the memory is local to a given process while the rest of the memory is remote. Accesses to memory locations (i.e., base objects) that are remote to a given process are often orders of magnitude slower than a local access of the base object. Thus, the performance of concurrent implementations in the shared memory model may depend on the number of remote memory references made to base objects [14].

In the cache-coherent (CC) shared memory, each process maintains local copies of shared base objects inside its cache, whose consistency is ensured by a coherence protocol. Informally, we say that an access to a base object b is remote to a process p and causes a remote memory reference (RMR) if p's cache contains a cached copy of the object that is out of date or invalidated; otherwise the access is local.

In the write-through (CC) protocol, to read a base object b, process p must have a cached copy of b that has not been invalidated since its previous read. Otherwise, p incurs a RMR. To write to b, p causes a RMR that invalidates all cached copies of b and writes to the main memory.

In the write-back (CC) protocol, p reads a base object b without causing a RMR if it holds a cached copy of b in shared or exclusive mode; otherwise the access of b causes a RMR that (1) invalidates all copies of b held in exclusive mode, writing b back to the main memory, and (2) creates a cached copy of b in shared mode. Process p can write to b without causing a RMR if it holds a copy of b in exclusive mode; otherwise p causes a RMR that invalidates all cached copies of b and creates a cached copy of b in exclusive mode.

In the distributed shared memory (DSM), each base object is forever assigned to a single process and it is remote to the others. Any access of a remote register causes a RMR.
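The write-through rules above are simple enough to simulate. The following sketch is illustrative, not from the thesis: it replays a flat execution and counts RMRs per process, under the stated assumption that a write leaves no process, including the writer, with a valid cached copy.

```python
def write_through_rmrs(execution):
    """Replay (process, op, base_object) events, op in {'read', 'write'},
    under a write-through CC model, and count RMRs per process."""
    valid = {}   # base object -> set of processes holding a valid cached copy
    rmrs = {}
    for p, op, b in execution:
        cached = valid.setdefault(b, set())
        if op == 'read':
            if p not in cached:
                # No valid cached copy: the read goes to memory (RMR)
                # and installs a copy in p's cache.
                rmrs[p] = rmrs.get(p, 0) + 1
                cached.add(p)
        else:
            # A write always causes a RMR that invalidates all cached
            # copies of b (assumption: the writer keeps no copy either).
            rmrs[p] = rmrs.get(p, 0) + 1
            cached.clear()
    return rmrs
```

Replaying p read b, q write b, p read b charges p two RMRs: its cached copy of b is invalidated by q's write, so the second read goes back to memory.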
Overview

In the context of transactional memory, intermediate states witnessed by the read operations of an incomplete transaction may affect the user application through the outcome of its read operations. If the intermediate state is not consistent with any sequential execution, the application may experience a fatal irrecoverable error or enter an infinite loop. Thus, it is important that each transaction, including aborted ones, observes a consistent state, so that the implementation does not export any pathological executions.

A state should be considered consistent if it could result from a serial application of the transactions observed in the current execution. In this sense, every transaction should witness a state that could have been observed in some execution of the sequential code placed by the programmer within the transactions. Additionally, a consistent state should not depend on a transaction that has not started committing yet (referred to as deferred-update semantics). This restriction appears desirable, since the ongoing transaction may still abort (explicitly by the user or because of consistency reasons) and, thus, render the read inconsistent. Further, the set of histories specified by the consistency criterion must constitute a safety property, as defined by Owicki and Lamport [108], Alpern and Schneider [11] and refined by Lynch [100]: it must be non-empty, prefix-closed and limit-closed.

In this chapter, we formally define the notion of deferred-update semantics, which we then apply to a spectrum of TM consistency criteria. Additionally, we verify whether the resulting TM consistency criterion is a safety property, as defined by Lynch [100]. We begin by considering the popular criterion of opacity [64], which was the first TM consistency criterion proposed to grasp this semantics formally.
Opacity requires the states observed by all transactions, including uncommitted ones, to be consistent with a global serialization, i.e., a serial execution constituted by committed transactions. Moreover, the serialization should respect the real-time order: a transaction that completed before (in real time) another transaction started should appear first in the serialization. Opacity reduces correctness of a history to correctness of all its prefixes, and thus is prefix-closed and limit-closed by definition. Thus, to verify that a history is opaque, one needs to verify that each of its prefixes is consistent with some global serialization.

To simplify verification and explicitly introduce deferred-update semantics into a TM correctness criterion, we specify a general criterion of du-opacity [18], which requires the global serial execution to respect the deferred-update property. Informally, a du-opaque history must be indistinguishable from a totally-ordered history with respect to which no transaction reads from a transaction that has not started committing. We show that du-opacity is prefix-closed, that is, every prefix of a du-opaque history is also du-opaque. We then show that extending opacity (and du-opacity) to infinite histories in a non-trivial way (i.e., requiring that even infinite histories should have proper serializations) does not result in a limit-closed property. However, under certain restrictions, we show that du-opacity is limit-closed. In particular, assuming that in an infinite history every transaction completes each of the operations it invoked, the limit of any sequence of ever-extending du-opaque histories is also du-opaque. Therefore, under this assumption, du-opacity is a safety property [11, 100, 108], and to prove that a TM implementation that complies with the assumption is du-opaque, it suffices to prove that all its finite histories are du-opaque.
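Prefix-closure, which recurs throughout this chapter, can be illustrated on toy properties where histories are modelled simply as strings of events. This is a deliberate simplification for illustration, not the formal history model of the thesis:

```python
def is_prefix_closed(P):
    """P is a finite set of histories, modelled here as strings of
    events; P is prefix-closed if every prefix of every member of P
    is also a member of P."""
    return all(h[:i] in P for h in P for i in range(len(h) + 1))

# "Only 'a' events so far" is prefix-closed; "some 'b' has occurred"
# is not: the prefix 'a' of 'ab' does not satisfy it.
assert is_prefix_closed({'', 'a', 'aa'})
assert not is_prefix_closed({'', 'b', 'ab'})
```

Limit-closure, the second half of the safety definition, cannot be tested this way on finite sets; it is exactly the condition analysed for du-opacity below.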
One may notice that the intended safety semantics does not require that all transactions observe the same serial execution. Intuitively, to avoid pathological executions, we only need that every transaction witnesses some consistent state; the views of different aborted and incomplete transactions do not have to be consistent with the same serial execution. As long as committed transactions constitute a serial execution and every transaction witnesses a consistent state, the execution can be considered "safe": no run-time error that cannot occur in a serial execution can happen. Several definitions like virtual-world consistency (VWC) [85] and Transactional Memory Specification 1 (TMS1) [43] have adopted this approach. We introduce "deferred-update" versions of these properties and discuss how the resulting properties relate to du-opacity. Finally, we also study the consistency criterion Transactional Memory Specification 2 (TMS2) [43, 98], which was proposed as a restriction of opacity, and verify whether it is a safety property.

Roadmap of Chapter 3. In Section 3.2 of this chapter, we formally define safety properties. In Section 3.3, we introduce the notion of deferred-update semantics and apply it to the correctness criteria of opacity and strict serializability in Sections 3.4 and 3.5, respectively. Section 3.6 studies two relaxations of opacity, VWC and TMS1, and a restriction of opacity, TMS2. Section 3.7 summarizes the relations between the TM correctness properties proposed in the thesis and presents our concluding remarks.

Safety properties

A property P is a set of (transactional) histories. Intuitively, a safety property says that "no bad thing ever happens".

Definition 3.1 (Lynch [100]). A property P is a safety property if it satisfies the following two conditions:

Prefix-closure: For every history H ∈ P, every prefix H' of H (i.e., every prefix of the sequence of the events in H) is also in P.
Limit-closure: For every infinite sequence of finite histories H_0, H_1, . . . such that for every i, H_i ∈ P and H_i is a prefix of H_{i+1}, the limit of the sequence is also in P.

Notice that the set of histories produced by a TM implementation M is, by construction, prefix-closed. Therefore, every infinite history of M is the limit of an infinite sequence of ever-extending finite histories of M. Thus, to prove that M satisfies a safety property P, it is enough to show that all finite histories of M are in P. Indeed, limit-closure of P then implies that every infinite history of M is also in P.

Opacity and deferred-update (DU) semantics

In this section, we formalize the notion of deferred-update semantics and apply it to the TM correctness condition of opacity [64]. We say that S is a final-state serialization of H.

Final-state opacity is not prefix-closed. Figure 3.1 depicts a t-complete sequential history H that is final-state opaque, with T_1 · T_2 being a legal t-complete t-sequential history equivalent to H. Let H' = write_1(X, 1), read_2(X) be a prefix of H in which T_1 and T_2 are t-incomplete. Transaction T_i (i = 1, 2) is completed by inserting tryC_i · A_i immediately after the last event of T_i in H'. Observe that neither T_1 · T_2 nor T_2 · T_1 allow us to derive a serialization of H' (we assume that the initial value of X is 0).

A restriction of final-state opacity, which we refer to as opacity [64], explicitly filters out histories that are not prefix-closed. It can be easily seen that opacity is prefix- and limit-closed, and, thus, it is a safety property.

[Figure 3.1: history H, in which T_1 performs W_1(X, 1) · tryC_1 → C_1 and T_2 performs R_2(X) → 1 · tryC_2 → C_2, and its prefix H'.]

We now give a formal definition of opacity with deferred-update semantics. Then we show that the property is prefix-closed and, under certain liveness restrictions, limit-closed. Let H be any history and let S be a legal t-complete t-sequential history that is equivalent to some completion of H.
Let <_S be the total order on transactions in S.

Definition 3.4 (Local serialization). For any read_k(X) that does not return A_k, let S^{k,X} be the prefix of S up to the response of read_k(X) and H^{k,X} be the prefix of H up to the response of read_k(X). S^{k,X}_H, the local serialization of read_k(X) with respect to H and S, is the subsequence of S^{k,X} derived by removing from S^{k,X} the events of all transactions T_m ∈ txns(H) \ {T_k} such that H^{k,X} does not contain an invocation of tryC_m().

We are now ready to present our correctness condition, du-opacity.

Definition 3.5 (Du-opacity). A history H is du-opaque if there is a legal t-complete t-sequential history S such that

1. there is a completion of H that is equivalent to S, and
2. for every pair of transactions T_k, T_m ∈ txns(H), if T_k ≺^RT_H T_m, then T_k <_S T_m, i.e., S respects the real-time ordering of transactions in H, and
3. each read_k(X) in S that does not return A_k is legal in S^{k,X}_H.

We then say that S is a (du-opaque) serialization of H. Informally, a history H is du-opaque if there is a legal t-sequential history S that is equivalent to H, respects the real-time ordering of transactions in H, and every t-read is legal in its local serialization with respect to H and S. The third condition reflects the implementation's deferred-update semantics, i.e., the legality of a t-read in a serialization does not depend on transactions that start committing after the response of the t-read. For any du-opaque serialization S, seq(S) denotes the sequence of transactions in S and seq(S)[k] denotes the k-th transaction in this sequence.

On the safety of du-opacity

In this section, we examine the safety properties of du-opacity, i.e., whether it is prefix-closed and limit-closed.

Du-opacity is prefix-closed

Proof. Given H, S and H_i, we construct a t-complete t-sequential history S_i as follows:

- for every transaction T_k that is t-complete in H_i, S_i|k = S|k.
- for every transaction T_k that is complete but not t-complete in H_i, S_i|k consists of the sequence of events in H_i|k, immediately followed by tryC_k() · A_k.
- for every transaction T_k with an incomplete t-operation op_k = read_k ∨ write_k ∨ tryA_k() in H_i, S_i|k is the sequence of events in S|k up to the invocation of op_k, immediately followed by A_k.
- for every transaction T_k ∈ txns(H_i) with an incomplete t-operation op_k = tryC_k(), S_i|k = S|k.

By the above construction, S_i is indeed a t-complete history and every transaction that appears in S_i also appears in S. We order transactions in S_i so that seq(S_i) is a subsequence of seq(S). Note that S_i is derived from events contained in some completion of H that is equivalent to S, together with some additional events needed to derive a completion of S_i. Since S_i contains events from every complete t-operation in H_i and the other events included satisfy Definition 2.1, there is a completion of H_i that is equivalent to S_i.

We now argue that S_i is a serialization of H_i. First we observe that S_i respects the real-time order of H_i. Indeed, if T_j ≺^RT_{H_i} T_k, then T_j ≺^RT_H T_k and T_j <_S T_k. Since seq(S_i) is a subsequence of seq(S), we have T_j <_{S_i} T_k.

[Figure 3.2: T_1 performs W_1(X, 1) · tryC_1; T_2 performs R_2(X) → 1; each of infinitely many transactions T_i, i ≥ 3, performs R_i(X) → 0. Each finite prefix of the history is du-opaque, but the infinite limit of the ever-extending sequence is not du-opaque.]

To show that S_i is legal, suppose, by way of contradiction, that there is some read_k(X) that returns v ≠ A_k in H_i such that v is not the latest written value of X in S_i. If T_k contains a write_k(X, v') preceding read_k(X) such that v' = v and v is not the latest written value for read_k(X) in S_i, it is also not the latest written value for read_k(X) in S, which is a contradiction. Thus, the only case to consider is when read_k(X) should return a value written by another transaction.
Since S is a serialization of H, there is a committed transaction T_m that performs the last write_m(X, v) that precedes read_k(X) of T_k in S. Moreover, since read_k(X) is legal in the local serialization of read_k(X) in H with respect to S, the prefix of H up to the response of read_k(X) must contain an invocation of tryC_m(). Thus, the invocation of tryC_m() precedes the response of read_k(X) in H, and T_m ∈ txns(H_i). By construction of S_i, T_m ∈ txns(S_i) and T_m is committed in S_i. We have assumed, towards a contradiction, that v is not the latest written value for read_k(X) in S_i. Hence, there is a committed transaction T_j that performs write_j(X, v'), v' ≠ v, in S_i such that T_m <_{S_i} T_j <_{S_i} T_k. But this is not possible since seq(S_i) is a subsequence of seq(S). Thus, S_i is a legal t-complete t-sequential history equivalent to some completion of H_i.

Now, by the construction of S_i, for every read_k(X) that does not return A_k in S_i, we have (S_i)^{k,X}_{H_i} = S^{k,X}_H. Indeed, the transactions that appear before T_k in (S_i)^{k,X}_{H_i} are those with a tryC event before the response of read_k(X) in H and are committed in S. Since seq(S_i) is a subsequence of seq(S), we have (S_i)^{k,X}_{H_i} = S^{k,X}_H. Thus, read_k(X) is legal in (S_i)^{k,X}_{H_i}.

Lemma 3.1 implies that every prefix of a du-opaque history has a du-opaque serialization and thus:

Corollary 3.2. Du-opacity is a prefix-closed property.

The limit of du-opaque histories

We observe, however, that du-opacity is, in general, not limit-closed. We present an infinite history that is not du-opaque, but each of its prefixes is.

Proposition 3.1. Du-opacity is not a limit-closed property.

Proof. Let H_j denote a finite prefix of H of length j. Consider an infinite history H that is the limit of the histories H_j defined as follows (see Figure 3.2):

- Transaction T_1 performs a write_1(X, 1) and then invokes tryC_1() that is incomplete in H.
- Transaction T_2 performs a read_2(X) that overlaps with tryC_1() and returns 1.
- There are infinitely many transactions T_i, i ≥ 3, each of which performs a single read_i(X) that returns 0, such that each T_i overlaps with both T_1 and T_2.

We now prove that, for all j ∈ N, H_j is a du-opaque history. Clearly, H_0 and H_1 are du-opaque histories. For all j > 1, we first derive a completion of H_j as follows:

1. tryC_1() (if it is contained in H_j) is completed by inserting C_1 immediately after its invocation,
2. for all i ≥ 2, any incomplete read_i(X) that is contained in H_j is completed by inserting A_i and tryC_i · A_i immediately after its invocation, and
3. for all i ≥ 2 and every complete read_i(X) that is contained in H_j, we include tryC_i · A_i immediately after the response of this read_i(X).

We can now derive a t-complete t-sequential history S_j equivalent to the above derived completion of H_j from the sequence of transactions T_3, . . . , T_i, T_1, T_2 (depending on which of these transactions participate in H_j), where i ≥ 3. It is easy to observe that S_j so derived is indeed a serialization of H_j.

However, there is no serialization of H. Suppose that such a serialization S exists. Since every transaction that participates in H must participate in S, there exists n ∈ N such that seq(S)[n] = T_1. Consider the transaction at index n + 1, say T_i, in seq(S). But for any i ≥ 3, T_i must precede T_1 in any serialization (by legality), which is a contradiction.

Notice that all finite prefixes of the infinite history depicted in Figure 3.2 are also opaque. Thus, if we extend the definition of opacity to cover infinite histories in a non-trivial way, i.e., by explicitly defining opaque serializations for infinite histories, we can reformulate Proposition 3.1 for opacity.

Du-opacity is limit-closed for complete histories

We show now that du-opacity is limit-closed if the only infinite histories we consider are those in which every transaction eventually completes (but not necessarily t-completes).
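The combinatorial core of Proposition 3.1 can be checked by brute force on a finite fragment of the history of Figure 3.2. The sketch below is an illustration, not the thesis's formal machinery: it models transactions by their reads and writes over a single t-object with initial value 0 and enumerates the legal serial orders.

```python
from itertools import permutations

def legal(order, writes, reads):
    # A serial order is legal if each transaction's reads return the
    # value of the latest preceding committed write (initially 0).
    val = 0
    for t in order:
        if any(v != val for v in reads.get(t, [])):
            return False
        for v in writes.get(t, []):
            val = v
    return True

# Finite fragment of Figure 3.2: T1 writes X := 1; T2 reads 1;
# T3..T5 each read 0; all operations are on the single t-object X.
writes = {'T1': [1]}
reads = {'T2': [1], 'T3': [0], 'T4': [0], 'T5': [0]}
txns = ['T1', 'T2', 'T3', 'T4', 'T5']
serializations = [o for o in permutations(txns) if legal(o, writes, reads)]
```

Every member of serializations places T3, T4 and T5 before T1 and T2 after it, so a serialization exists for each finite fragment; with infinitely many readers of 0, however, T1 would need a position exceeding every finite index, which is exactly the contradiction in the proof.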
We first prove an auxiliary lemma on du-opaque serializations. For a transaction T ∈ txns(H), the live set of T in H, denoted Lset_H(T) (T included), is defined as follows: every transaction T' ∈ txns(H) such that neither the last event of T' precedes the first event of T in H nor the last event of T precedes the first event of T' in H is contained in Lset_H(T). We say that transaction T' ∈ txns(H) succeeds the live set of T, and we write T ≺^LS_H T', if in H, for all T'' ∈ Lset_H(T), T'' is complete and the last event of T'' precedes the first event of T'.

Proof. Since H is du-opaque, there is a serialization S̃ of H. Let S be a t-complete t-sequential history such that txns(S) = txns(S̃) and, for all T_i ∈ txns(S), S|i = S̃|i. We now perform the following procedure iteratively to derive seq(S) from seq(S̃). Initially, seq(S) = seq(S̃). For each T_k ∈ txns(H), let T ∈ txns(H) denote the earliest transaction in S̃ such that T_k ≺^LS_H T. If T <_{S̃} T_k (implying that T_k is not t-complete), then move T_k to immediately precede T in seq(S). By construction, S is equivalent to S̃, and for all T_k, T_m ∈ txns(H), T_k ≺^LS_H T_m implies T_k <_S T_m.

We claim that S is a serialization of H. Observe that any two transactions that are complete in H, but not t-complete, are not related by real-time order in H. By construction of S, for any transaction T_k ∈ txns(H), the transactions that precede T_k in S̃ but succeed T_k in S are not related to T_k by real-time order. Since S̃ respects the real-time order in H, this holds also for S.

We now show that S is legal. Consider any read_k(X) performed by some transaction T_k that returns v ∈ V in S, and let T ∈ txns(H) be the earliest transaction in S̃ such that T_k ≺^LS_H T. Suppose, by contradiction, that read_k(X) is not legal in S. Thus, there is a committed transaction T_m that performs write_m(X, v) in S̃ such that T_m = T or T <_{S̃} T_m <_{S̃} T_k. Note that, by our assumption, read_k(X) ≺^RT_H tryC_T().
Since read_k(X) must be legal in its local serialization with respect to H and S̃, the invocation of tryC_m() precedes the response of read_k(X) in H. Thus, T_m ∈ Lset_H(T_k), and therefore T_m ≠ T. Moreover, T_m is complete, and since it commits in S̃, it is also t-complete in H, and the last event of T_m precedes the first event of T in H, i.e., T_m ≺^RT_H T. Hence, T cannot precede T_m in S̃, a contradiction.

Observe also that since T_k is complete in H but not t-complete, H does not contain an invocation of tryC_k(). Thus, the legality of any other transaction is unaffected by moving T_k to precede T in S. Thus, S is a legal t-complete t-sequential history equivalent to some completion of H. The above arguments also prove that every t-read in S is legal in its local serialization with respect to H and S and, thus, S is a serialization of H.

The proof uses König's Path Lemma [87], formulated as follows. Let G be a rooted directed graph and let v_0 be the root of G. We say that v_k, a vertex of G, is reachable from v_0 if there is a sequence of vertices v_0, . . . , v_k such that for each i, there is an edge from v_i to v_{i+1}. G is connected if every vertex in G is reachable from v_0. G is finitely branching if every vertex in G has a finite out-degree. G is infinite if it has infinitely many vertices.

Lemma 3.4 (König's Path Lemma [87]). If G is an infinite connected finitely branching rooted directed graph, then G contains an infinite sequence of distinct vertices v_0, v_1, . . ., such that v_0 is the root, and for every i ≥ 0, there is an edge from v_i to v_{i+1}.

Theorem 3.5. Under the restriction that in any infinite history H, every transaction T_k ∈ txns(H) is complete, du-opacity is a limit-closed property.

Proof. We want to show that the limit H of an infinite sequence of finite ever-extending du-opaque histories is du-opaque. By Corollary 3.2, we can assume the sequence of du-opaque histories to be H_0, H_1, . . . , H_i, H_{i+1}, . . .
such that for all i ∈ N, H_{i+1} is the one-event extension of H_i. We construct a rooted directed graph G_H as follows:

1. The root vertex of G_H is (H_0, S_0), where S_0 and H_0 contain the initial transaction T_0.
2. Each non-root vertex of G_H is a tuple (H_i, S_i), where S_i is a du-opaque serialization of H_i that satisfies the condition specified in Lemma 3.3: for all T_k, T_m ∈ txns(H), T_k ≺^LS_{H_i} T_m implies T_k <_{S_i} T_m. Note that there exist several possible serializations for any H_i. For succinctness, in the rest of this proof, when we refer to a specific S_i, it is understood to be associated with the prefix H_i of H.
3. Let cseq_i(S_j), j ≥ i, denote the subsequence of seq(S_j) restricted to transactions whose last event in H is a response event and is contained in H_i. For every pair of vertices v = (H_i, S_i) and v' = (H_{i+1}, S_{i+1}) in G_H, there is an edge from v to v' if cseq_i(S_i) = cseq_i(S_{i+1}).

The out-degree of a vertex v = (H_i, S_i) in G_H is defined by the number of possible serializations of H_{i+1}, bounded by the number of possible permutations of the set txns(S_{i+1}), implying that G_H is finitely branching.

By Lemma 3.1, given any serialization S_{i+1} of H_{i+1}, there is a serialization S_i of H_i such that seq(S_i) is a subsequence of seq(S_{i+1}). Indeed, the serialization S_i of H_i also respects the restriction specified in Lemma 3.3. Since seq(S_{i+1}) contains every complete transaction that takes its last step in H in H_i, cseq_i(S_i) = cseq_i(S_{i+1}). Therefore, for every vertex (H_{i+1}, S_{i+1}), there is a vertex (H_i, S_i) such that cseq_i(S_i) = cseq_i(S_{i+1}). Thus, we can iteratively construct a path from (H_0, S_0) to every vertex (H_i, S_i) in G_H, implying that G_H is connected.

We now apply König's Path Lemma (Lemma 3.4) to G_H.
Since G_H is an infinite connected finitely branching rooted directed graph, we can derive an infinite sequence of distinct vertices L = (H_0, S_0), (H_1, S_1), . . . , (H_i, S_i), . . . such that cseq_i(S_i) = cseq_i(S_{i+1}). The rest of the proof explains how to use L to construct a serialization of H. We begin with the following claim concerning L.

Claim 3.6. For any j > i, cseq_i(S_i) = cseq_i(S_j).

Proof. Recall that cseq_i(S_i) is a prefix of cseq_i(S_{i+1}), and cseq_{i+1}(S_{i+1}) is a prefix of cseq_{i+1}(S_{i+2}). Also, cseq_i(S_{i+1}) is a subsequence of cseq_{i+1}(S_{i+1}). Hence, cseq_i(S_i) is a subsequence of cseq_{i+1}(S_{i+2}). But cseq_{i+1}(S_{i+2}) is a subsequence of cseq_{i+2}(S_{i+2}). Thus, cseq_i(S_i) is a subsequence of cseq_{i+2}(S_{i+2}). Inductively, for any j > i, cseq_i(S_i) is a subsequence of cseq_j(S_j). But cseq_i(S_j) is the subsequence of cseq_j(S_j) restricted to complete transactions in H whose last step is in H_i. Thus, cseq_i(S_i) is indeed equal to cseq_i(S_j).

Let f : N → txns(H) be defined as follows: f(1) = T_0. For every integer k > 1, let

i_k = min{ℓ ∈ N | ∀ j > ℓ : cseq_ℓ(S_ℓ)[k] = cseq_j(S_j)[k]}

Then, f(k) = cseq_{i_k}(S_{i_k})[k].

Claim 3.7. The function f is total and bijective.

Proof. (Totality and surjectivity) Since each transaction T ∈ txns(H) is complete in some prefix H_i of H, for each k ∈ N, there exists i ∈ N such that cseq_i(S_i)[k] = T. By Claim 3.6, for any j > i, cseq_i(S_i) = cseq_i(S_j). Since a transaction that is complete in H_i w.r.t. H is also complete in H_j w.r.t. H, it follows that for every j > i, cseq_j(S_j)[k'] = T, with k' ≥ k. By construction of G_H and the assumption that each transaction is complete in H, there exists i ∈ N such that each T' ∈ Lset_{H_i}(T) is complete in H and its last step is in H_i, and T precedes in S_i every transaction whose first event succeeds the last event of each T' ∈ Lset_{H_i}(T) in H_i.
Indeed, this implies that for each k ∈ N, there exists i ∈ N such that cseq_i(S_i)[k] = T and ∀ j > i : cseq_j(S_j)[k] = T. This shows that for every T ∈ txns(H), there are i, k ∈ N with cseq_i(S_i)[k] = T such that for every j > i, cseq_j(S_j)[k] = T. Thus, for every T ∈ txns(H), there is k such that f(k) = T.

By Claim 3.7, F = f(1), f(2), . . . , f(i), . . . is an infinite sequence of transactions. Let S be a t-complete t-sequential history such that seq(S) = F and, for each t-complete transaction T_k in H, S|k = H|k; for each transaction T_k that is complete, but not t-complete in H, S|k consists of the sequence of events in H|k, immediately followed by tryA_k() · A_k. Clearly, there is a completion of H that is equivalent to S. Let F_i be the prefix of F of length i, and let Ŝ_i be the prefix of S such that seq(Ŝ_i) = F_i.

Claim 3.8. Let Ĥ^i_j be the subsequence of H_j reduced to transactions T_k ∈ txns(Ŝ_i) such that the last event of T_k in H is a response event and is contained in H_j. Then, for every i, there is j such that Ŝ_i is a serialization of Ĥ^i_j.

Proof. Let H_j be the shortest prefix of H (from L) such that for each T ∈ txns(Ŝ_i), if seq(S_j)[k] = T, then for every j' > j, seq(S_{j'})[k] = T. From the construction of F, such j and k exist. Also, we observe that txns(Ŝ_i) ⊆ txns(S_j) and F_i is a subsequence of seq(S_j). Using arguments similar to the proof of Lemma 3.1, it follows that Ŝ_i is indeed a serialization of Ĥ^i_j.

Since H is complete, there is exactly one completion of H, where each transaction T_k that is not t-complete in H is completed with tryC_k · A_k after its last event. By Claim 3.8, the limit t-sequential t-complete history is equivalent to this completion, is legal, respects the real-time order of H, and ensures that every read is legal in the corresponding local serialization. Thus, S is a serialization of H.

[Figure 3.3: transaction T_1 performs W_1(X, 1) and tryC_1, which returns A_1; T_2 performs R_2(X) → 1; T_3 performs W_3(X, 1) and tryC_3, which returns C_3.]

Du-opacity vs. opacity

We now compare our deferred-update requirement with the conventional TM correctness property of opacity [64].

Theorem 3.10. Du-opacity ⊊ Opacity.

Proof. We first claim that every finite du-opaque history is opaque. Let H be a finite du-opaque history. By definition, there is a final-state serialization S of H. Since du-opacity is a prefix-closed property, every prefix of H is final-state opaque. Thus, H is opaque. Again, since every prefix of a du-opaque history is also du-opaque, by Definition 3.3, every infinite du-opaque history is also opaque.

To show that the inclusion is strict, we present an opaque history that is not du-opaque. Consider the finite history H depicted in Figure 3.3: transaction T_2 performs a read_2(X) that returns the value 1. Observe that read_2(X) → 1 is concurrent to tryC_1, but precedes tryC_3 in real-time order. Although tryC_1 returns A_1 in H, the response of read_2(X) can be justified since T_3 concurrently writes 1 to X and commits. Thus, read_2(X) → 1 reads-from transaction T_3 in any serialization of H, but since read_2(X) ≺^RT_H tryC_3, H is not du-opaque, even though each of its prefixes is final-state opaque.

We now formally prove that H is opaque. We proceed by examining every prefix of H.

1. Each prefix up to the invocation of read_2(X) is trivially final-state opaque.
2. Consider the prefix H_i of H whose i-th event is the response of read_2(X). Let S_i be a t-complete t-sequential history derived from the sequence T_1, T_2 by inserting C_1 immediately after the invocation of tryC_1(). It is easy to see that S_i is a final-state serialization of H_i.
3. Consider the t-complete t-sequential history S derived from the sequence T_1, T_3, T_2 in which each transaction is t-complete in H. Clearly, S is a final-state serialization of H.

Since H and every (proper) prefix of it are final-state opaque, H is opaque.
Clearly, the required final-state serialization S of H is specified by seq(S) = T 1 , T 3 , T 2 , in which T 1 is aborted while T 3 is committed in S (the position of T 1 in the serialization does not affect legality). Consider read 2 (X) in S; since H 2,X , the prefix of H up to the response of read 2 (X), does not contain an invocation of tryC 3 (), the local serialization of read 2 (X) with respect to H and S, S 2,X H , is T 1 · read 2 (X). But read 2 (X) is not legal in S 2,X H , which is a contradiction. Thus, H is not du-opaque.

The unique-write case

We now show that du-opacity is equivalent to opacity under the assumption that no two transactions write identical values to the same t-object (the "unique-write" assumption). Let Opacity uw ⊆ Opacity be the property consisting of the opaque histories in which, for any two distinct write operations write k (X, v) and write m (X, v′), we have v ≠ v′.

Figure 3.4: A sequential history that is du-opaque, but whose only serialization does not respect the read-commit order.

Theorem 3.11. Opacity uw = du-Opacity.

Proof. We show first that every finite history H ∈ Opacity uw is also du-opaque. Let H be any finite opaque history such that for every pair of write operations write k (X, v) and write m (X, v′) performed by transactions T k , T m ∈ txns(H), respectively, v ≠ v′. Since H is opaque, there is a final-state serialization S of H. Suppose by contradiction that H is not du-opaque. Thus, there is a read k (X) that returns a value v ∈ V in S that is not legal in S k,X H , the local serialization of read k (X) with respect to H and S. Let H k,X and S k,X denote the prefixes of H and S, respectively, up to the response of read k (X) in H and S. Recall that S k,X H , the local serialization of read k (X) with respect to H and S, is the subsequence of S k,X that does not contain events of any transaction T i ∈ txns(H) such that the invocation of tryC i () is not in H k,X . Since read k (X) is legal in S, there is a committed transaction T m ∈ txns(H) that performs write m (X, v) that is the latest such write in S preceding T k .
Thus, if read k (X) is not legal in S k,X H , the only possibility is that read k (X) ≺ RT H tryC m (). Under the assumption of unique writes, there does not exist any other transaction T j ∈ txns(H) that performs write j (X, v). Consequently, there do not exist a completion H̄ k,X of H k,X and a t-complete t-sequential history S′ such that S′ is equivalent to H̄ k,X and S′ contains a committed transaction that writes v to X. That is, H k,X is not final-state opaque. However, since H is opaque, every prefix of H must be final-state opaque, which is a contradiction. By Definition 3.3, an infinite history H is opaque if every finite prefix of H is final-state opaque. Theorem 3.5 now implies that Opacity uw ⊆ du-Opacity. Definition 3.3 and Corollary 3.2 imply that du-Opacity ⊆ Opacity uw .

The sequential-history case

The deferred-update semantics was mentioned by Guerraoui et al. [59] and later adopted by Kuznetsov and Ravi [90]. In both papers, opacity is only defined for sequential histories, where every invocation of a t-operation is immediately followed by a matching response. In particular, these definitions require the final-state serialization to respect the read-commit order: in these definitions, a history H is opaque if there is a final-state serialization S of H such that if a t-read of a t-object X by a transaction T k precedes the tryC of a transaction T m that commits on X in H, then T k precedes T m in S. As we observed in Figure 3.4, this definition is not equivalent to opacity, even for sequential histories. The property considered in [59, 90] is strictly stronger than du-opacity: the sequential history H in Figure 3.4 is du-opaque (and consequently opaque by Theorem 3.10); a du-opaque serialization (in fact, the only possible one) for this history is T 1 , T 3 , T 2 . However, under the restriction of opacity defined above, T 2 must precede T 3 in any serialization, since the response of read 2 (X) precedes the invocation of tryC 3 ().
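To make the deferred-update restriction concrete, the following Python sketch (our own illustration; the encoding of the history and all helper names are ours, not from the thesis) brute-forces serializations of the history H of Figure 3.3. It checks T 2's read both against plain final-state legality and against a simplified du rule: the committed writer a read takes its value from must have invoked tryC before the read's response.

```python
from itertools import permutations

# Our own encoding of the history H of Figure 3.3 (names are ours):
# T1 writes X=1 and aborts, T2 reads X -> 1, T3 writes X=1 and commits.
# In H, the response of T2's read precedes the invocation of tryC_3().
INIT = 0
WRITERS = {
    "T1": {"value": 1, "committed": False, "tryC_before_read": True},
    "T3": {"value": 1, "committed": True,  "tryC_before_read": False},
}
READ_VALUE = 1  # value returned by T2's read of X

def read_is_legal(order, deferred_update):
    """Check T2's read of X against the serialization `order` (a tuple of
    transaction names). With deferred_update=True, the committed writer T2
    reads from must also have invoked tryC before the read's response."""
    val, writer = INIT, None
    for t in order:
        if t == "T2":
            if val != READ_VALUE:
                return False
            if not deferred_update:
                return True
            return writer is not None and WRITERS[writer]["tryC_before_read"]
        if WRITERS[t]["committed"]:
            val, writer = WRITERS[t]["value"], t
    return False

orders = list(permutations(["T1", "T2", "T3"]))
has_final_state_serialization = any(read_is_legal(o, False) for o in orders)
has_du_serialization = any(read_is_legal(o, True) for o in orders)
```

Running the sketch finds a final-state serialization (T 1, T 3, T 2) but no serialization passing the du check, matching the argument in the text that H is opaque but not du-opaque.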
Strict serializability with DU semantics

In this section, we discuss the deferred-update restriction of strict serializability from Definition 2.2. First, we remark that, just like final-state opacity, strict serializability is not prefix-closed (cf. Figure 3.1). However, we show that the restriction of deferred-update semantics applied to strict serializability induces a safety property.

Definition 3.6 (Strict serializability with du semantics). A finite history H is strictly serializable if there is a legal t-complete t-sequential history S such that
1. there is a completion H̄ of H such that S is equivalent to cseq(H̄), where cseq(H̄) is the subsequence of H̄ reduced to committed transactions in H̄,
2. for any two transactions T k , T m ∈ txns(H), if T k ≺ RT H T m , then T k precedes T m in S, and
3. each read k (X) in S that does not return A k is legal in S k,X H .

Notice that every du-opaque history is strictly serializable, but not vice versa.

Theorem 3.12. Strict serializability is a safety property.

Proof. Observe that any strictly serializable serialization of a finite history H does not include events of any transaction that has not invoked tryC in H. To show prefix-closure, a proof almost identical to that of Lemma 3.1 implies that, given a strictly serializable history H and a serialization S, for every prefix H′ of H there is a serialization S′ of H′ such that seq(S′) is a prefix of seq(S). Now consider an infinite sequence of finite histories H 0 , . . . , H i , H i+1 , . . ., where H i+1 is a one-event extension of H i ; we prove that the infinite limit H of this ever-extending sequence is strictly serializable. As in Theorem 3.5, we construct an infinite rooted directed graph G H : a vertex is a tuple (H i , S i ) (note that for each i ∈ N, there are several vertices of this form), where S i is a serialization of H i , and there is an edge from (H i , S i ) to (H i+1 , S i+1 ) if seq(S i ) is a prefix of seq(S i+1 ).
The resulting graph is finitely branching, since the out-degree of a vertex is bounded by the number of possible serializations of a history. Observe that for every vertex (H i+1 , S i+1 ), there is a vertex (H i , S i ) such that seq(S i ) is a prefix of seq(S i+1 ). Thus, G H is connected, since we can iteratively construct a path from the root (H 0 , S 0 ) to every vertex (H i , S i ) in G H . Applying König's Path Lemma to G H , we obtain an infinite sequence of distinct vertices (H 0 , S 0 ), (H 1 , S 1 ), . . . , (H i , S i ), . . .. Then, S = lim i→∞ S i gives the desired serialization of H.

Du-opacity vs. other deferred-update criteria

In this section, we first study two relaxations of opacity: Virtual-world consistency [85] and Transactional Memory Specification 1 [43]. We then study Transactional Memory Specification 2, which is a restriction of opacity.

Figure 3.5: A history that is du-VWC, but not du-opaque.

Virtual-world consistency

Virtual World Consistency (VWC) [85] was proposed as a relaxation of opacity (in our case, du-opacity), where each aborted transaction should be consistent with its causal past (but not necessarily with a serialization formed by committed transactions). Intuitively, a transaction T 1 causally precedes T 2 if T 2 reads a value written and committed by T 1 . The original definition [85] required that no two write operations are ever invoked with the same argument (the unique-writes assumption). Therefore, the causal precedence is unambiguously identified for each transactional read. Below we give a more general definition. Given a t-sequential legal history S and transactions T i , T j ∈ txns(S), we say that T i reads X from T j if (1) T i reads v in X and (2) T j is the last committed transaction that writes v to X and precedes T i in S. Now consider a (not necessarily t-sequential) history H.
We say that T i could have read X from T j in H if T j writes a value v to a t-object X, T i reads v in X, and read i (X) ⊀ RT H tryC j (). Given T ⊆ txns(H), let H T denote the subsequence of H restricted to the events of transactions in T .

Definition 3.7 (du-VWC). A finite history H is du-virtual-world consistent if it is strictly serializable (with du-semantics) and, for every aborted or t-incomplete transaction T i ∈ txns(H), there are T ⊆ txns(H) including T i and a t-sequential t-complete legal history S such that:
1. S is equivalent to a completion of H T ,
2. for all T j , T k ∈ txns(S), if T j reads X from T k in S, then T j could have read X from T k in H,
3. S respects the per-process order of H: if T j and T k are executed by the same process and T j ≺ RT H T k , then T j ≺ S T k .

We refer to S as a du-VWC serialization for T i in H. Intuitively, with every t-read on X performed by T i in H, the du-VWC serialization S associates some transaction T j from which T i could have read the value of X. Recursively, with every read performed by T j , S associates some T m from which T j could have read, etc. Altogether, we get a "plausible" causal past of T i that constitutes a serial history. Notice that to ensure deferred-update semantics, we only allow a transaction T j to read from a transaction T k that has invoked tryC k () by the time of the read operation of T j .

We now prove that du-VWC is a strictly weaker property than du-opacity.

Theorem 3.13. Du-opacity ⊊ du-VWC.

Since du-TMS2 ⊊ du-opacity (as we show below), it also follows that du-TMS2 ⊊ du-VWC.

Proof. If a history H is du-opaque, then there is a du-opaque serialization S equivalent to H̄, where H̄ is some completion of H. By construction, S is a total order on the set of all transactions that participate in S. Trivially, by taking T = txns(H), we derive that S is a du-VWC serialization for every aborted or t-incomplete transaction T i ∈ txns(H). Indeed, S respects the real-time order and, thus, the per-process order of H.
Since S respects the deferred-update order in H, every t-read in S "could have happened" in H. To show that the inclusion is strict, Figure 3.5 depicts a history H that is du-VWC, but not du-opaque. Clearly, H is strictly serializable. Here T 2 , T 1 is the required du-VWC serialization for the aborted transaction T 1 . However, H has no du-opaque serialization.

Theorem 3.14. Du-VWC is a safety property.

Proof. By Definition 3.7, a history H is du-VWC if and only if H is strictly serializable and there is a du-VWC serialization for every transaction T i ∈ txns(H) that is aborted or t-incomplete in H. To prove prefix-closure, recall that strict serializability is a prefix-closed property (Theorem 3.12). Moreover, any du-VWC serialization S for a transaction T i in a history H is also a du-VWC serialization for T i in any prefix of H that contains all events of T i . To prove limit-closure, consider an infinite sequence of du-VWC histories H 0 , H 1 , . . ., H i , H i+1 , . . ., where each H i+1 is a one-event extension of H i , and prove that the infinite limit H of this sequence is also a du-VWC history. Theorem 3.12 establishes that there is a strictly serializable serialization for H. Since, for all i ∈ N, H i is du-VWC, for every transaction T i that is t-incomplete or aborted in H i , there is a du-VWC serialization for T i . Consequently, there is a du-VWC serialization for every aborted or t-incomplete transaction T i in H.

Transactional memory specification (TMS)

Transactional Memory Specification (TMS) 1 and 2 were formulated in I/O automata [43]. Following [15], we adapt these definitions to our framework and explicitly introduce the deferred-update requirement.

TMS1.
Given a history H, TMS1 requires us to justify the behavior of all committed transactions in H by a legal t-complete t-sequential history that preserves the real-time order in H (strict serializability), and to justify the response of each complete t-operation performed in H by a legal t-complete t-sequential history S. The t-sequential history S used to justify a complete t-operation op k,i (the i th t-operation performed by transaction T k ) includes T k and a subset of transactions from H whose operations justify op k,i . (Our description follows [15].) Let H k,i denote the prefix of a history H up to (and including) the response of the i th t-operation op k,i of transaction T k . We say that a history H′ is a possible past of H k,i if H′ is a subsequence of H k,i that consists of all events of transaction T k and all events of some subset of the committed transactions and the transactions that have invoked tryC in H k,i , such that if a transaction T ∈ txns(H′), then for every transaction T′ ≺ RT H k,i T, T′ ∈ txns(H′) if and only if T′ is committed in H k,i . Let cTMSpast(H, op k,i ) denote the set of possible pasts of H k,i . For any history H′ ∈ cTMSpast(H, op k,i ), let ccomp(H′) denote the history generated from H′ by the following procedure: for all m ≠ k, replace every event A m by C m and complete every incomplete tryC m by including C m at the end of H′; include tryC k () · A k at the end of H′.

Definition 3.8 (du-TMS1). A history H satisfies du-TMS1 if
1. H is strictly serializable (with du-semantics), and
2. for each complete t-read op k,i that returns a non-A k response in H, there exist a legal t-complete t-sequential history S and a history H″ such that:
- H″ = ccomp(H′), where H′ ∈ cTMSpast(H, op k,i ),
- H″ is equivalent to S, and
- for any two transactions T k and T m in H″, if T k ≺ RT H″ T m , then T k < S T m .

We refer to S as the du-TMS1 serialization for op k,i .

Theorem 3.15. Du-TMS1 is a safety property.
Figure 3.6: A history which is du-VWC, but not du-TMS1.

Proof. To see that du-TMS1 is prefix-closed, recall that strict serializability is a prefix-closed property. Let H be any du-TMS1 history and H i any prefix of H. We now need to prove that, for every complete t-read op k,i that returns a non-A k response in H i , there is a du-TMS1 serialization for op k,i . But this is immediate, since the du-TMS1 serialization for op k,i in H is also the required du-TMS1 serialization for op k,i in H i . To see that du-TMS1 is limit-closed, consider an infinite sequence H 0 , H 1 , . . ., H i , H i+1 , . . . of finite du-TMS1 histories, such that H i+1 is a one-event extension of H i , and let H be the corresponding infinite limit history. We want to show that H is also du-TMS1. Since strict serializability is a limit-closed property (Theorem 3.12), H is strictly serializable. By assumption, for all i ∈ N, H i is du-TMS1. Thus, for every transaction T k that participates in H i , there is a du-TMS1 serialization S i,k for each t-operation op k,i . But S i,k is also the required du-TMS1 serialization for op k,i in H. The claim follows.

It has been shown [98] that Opacity is a strictly stronger property than du-TMS1, that is, Opacity ⊊ du-TMS1. Since Du-Opacity ⊊ Opacity (Theorem 3.10), it follows that Du-Opacity ⊊ du-TMS1. On the other hand, du-TMS1 is incomparable to du-VWC, as demonstrated by the following examples.

Figure 3.7: A history that is du-TMS1, but not du-VWC.

Proposition 3.2. There is a history that is du-TMS1, but not du-VWC.

Proof. Figure 3.7 depicts a history H that is du-TMS1, but not du-VWC. Observe that H is strictly serializable. To prove that H is du-TMS1, we need to prove that there is a TMS1 serialization for each t-read that returns a non-abort response in H.
Clearly, the serialization in which only T 3 participates is the required TMS1 serialization for read 3 (X) → 0. Now consider the aborted transaction T 4 . The TMS1 serialization for read 4 (X) → 2 is T 2 , T 4 , while the TMS1 serialization that justifies the response of read 4 (Y ) → 0 includes just T 4 itself. The only nontrivial t-read whose response needs to be justified is read 4 (Z) → 3. Indeed, tryC 3 overlaps with read 4 (Z) and, thus, the response of read 4 (Z) can be justified by choosing the transactions in cTMSpast(H, read 4 (Z)) to be {T 3 , T 2 , T 4 } and then deriving a TMS1 serialization S = T 3 , T 2 , T 4 for read 4 (Z) → 3, in which tryC 3 may be completed by including the commit response. However, H is not du-VWC. Consider transaction T 3 , which returns A 3 in H: T 3 must be aborted in any serialization equivalent to some direct causal past of T 4 . But read 4 (Z) returns the value 3 that is written by T 3 . Thus, read 4 (Z) cannot be legal in any du-VWC serialization for T 4 .

Figure 3.8: A history that is du-opaque, but not TMS2 [43].

Proposition 3.3. There is a history that is du-VWC, but not du-TMS1.

Proof. Figure 3.6 depicts a history H that is du-VWC, but not du-TMS1. Clearly, H is strictly serializable. Observe that T 3 could have read only from T 1 in H (T 1 writes the value 0 to X that is returned by read 3 (X)). Therefore, T 1 , T 3 is the required du-VWC serialization for the aborted transaction T 3 . However, H is not du-TMS1: since both transactions T 1 and T 2 are committed and precede T 3 in real-time order, they must be included in any du-TMS1 serialization for read 3 (X) → 0. But there is no such du-TMS1 serialization that would ensure the legality of read 3 (X).

TMS2. We now study the TMS2 definition, which imposes an extra restriction on the opaque serialization.

Definition 3.9 (du-TMS2).
A history H is du-TMS2 if there is a legal t-complete t-sequential history S equivalent to some completion H̄ of H such that
1. for any two transactions T k , T m ∈ txns(H), such that T m is a committed updating transaction, if C k ≺ RT H tryC m or A k ≺ RT H tryC m , then T k ≺ S T m ,
2. for any two transactions T k , T m ∈ txns(H), if T k ≺ RT H T m , then T k < S T m , and
3. each read k (X) in S that does not return A k is legal in S k,X H .

We refer to S as the du-TMS2 serialization of H. It has been shown [98] that TMS2 is a strictly stronger property than Opacity, i.e., TMS2 ⊊ Opacity. We now show that du-TMS2 is strictly stronger than du-opacity. Indeed, from Definition 3.9, we observe that every history that is du-TMS2 is also du-opaque. The following proposition completes the proof.

Proposition 3.4. There is a history that is du-opaque, but not du-TMS2.

Proof. Figure 3.8 depicts a history H that is du-opaque, but not du-TMS2. Indeed, there is a du-opaque serialization S of H such that seq(S) = T 2 , T 1 . On the other hand, since T 1 commits before T 2 , T 1 must precede T 2 in any du-TMS2 serialization; hence there does not exist any such serialization that ensures that every t-read is legal. Thus, H is not du-TMS2.

Theorem 3.16. Du-TMS2 is prefix-closed.

Proof. Let H be any du-TMS2 history. Then, H is also du-opaque. By Corollary 3.2, for every i ∈ N, there is a du-opaque serialization S i for H i . We now need to prove that, for any two transactions T k , T m ∈ txns(H i ), such that T m is a committed updating transaction, if C k ≺ RT H i tryC m or A k ≺ RT H i tryC m , there is a du-opaque serialization S i with the restriction that T k ≺ S i T m . Suppose by contradiction that there exist transactions T k , T m ∈ txns(H i ), such that T m is a committed updating transaction and C k ≺ RT H i tryC m or A k ≺ RT H i tryC m , but T m must precede T k in any du-opaque serialization S i .
Since T m ⊀ RT H i T k , the only possibility is that T m performs write m (X, v) and there is read k (X) → v. However, by our assumption, read k (X) ≺ RT H i tryC m ; thus, read k (X) is not legal in its local serialization with respect to H i and S i , contradicting the assumption that S i is a du-opaque serialization of H i . Thus, there is a du-TMS2 serialization for H i , proving that du-TMS2 is a prefix-closed property.

Proposition 3.5. Du-TMS2 is not limit-closed.

Proof. The counter-example used to establish that du-opacity is not limit-closed (Figure 3.2) also shows that du-TMS2 is not limit-closed: all histories discussed in the counter-example are in du-TMS2.

Table 3.1: Containment relations between du-opacity, du-VWC, du-TMS1 and du-TMS2.

Related work and Discussion

The properties discussed in this chapter explicitly preclude reading from a transaction that has not yet invoked tryCommit, which makes them prefix-closed and facilitates their verification. We believe that this constructive definition is useful to TM practitioners, since it streamlines possible implementations of t-read and tryCommit operations. We showed that du-opacity is limit-closed under the restriction that every operation eventually terminates, while du-VWC and du-TMS1 are (unconditionally) limit-closed, which makes them safety properties [100]. Table 3.1 summarizes the containment relations between the properties discussed in this chapter: opacity, du-opacity, du-VWC, du-TMS1 and du-TMS2. For example, "du-opacity ⊊ opacity" means that the set of du-opaque histories is a proper subset of the set of opaque histories, i.e., du-opacity is a strictly stronger property than opacity. Incomparable properties (those not related by containment), such as du-TMS1 and du-VWC, are marked with ×. Linearizability [27, 83], when applied to objects with sequential specifications of finite nondeterminism (i.e., an operation applied to a given state may produce only finitely many outcomes), is a safety property [66, 100].
Recently, it has been shown that linearizability is not limit-closed if the implemented object may expose infinite non-determinism [66], that is, if an operation applied to a given state may produce infinitely many different outcomes. The limit-closure proof (cf. Theorem 3.5), using König's lemma, cannot be applied under infinite non-determinism, because the out-degree of the graph G H , constructed for the limit infinite history H, is not finite. In contrast, the TM abstraction is deterministic, since reads and writes behave deterministically in serial executions; yet du-opacity is not limit-closed. It turns out that the graph G H for the counter-example history H in Figure 3.2 is not connected. For example, one of the finite prefixes of H can be serialized as T 3 , T 1 , T 2 , but no prefix has a serialization T 3 , T 1 and, thus, the root is not connected to the corresponding vertex of G H . Thus, the precondition of König's lemma does not hold for G H : the graph is in fact an infinite set of isolated vertices. This is because du-opacity requires even incomplete reading transactions, such as T 2 , to appear in the serialization, which is not the case for linearizability, where incomplete operations may be removed from the linearization.

4 Complexity bounds for blocking TMs

"I can't believe that!" said Alice.
"Can't you?" the Queen said in a pitying tone. "Try again: draw a long breath, and shut your eyes."
Alice laughed. "There's no use trying," she said: "one can't believe impossible things."
"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."
Lewis Carroll, Through the Looking-Glass

Overview

In this chapter, we present complexity bounds for TM implementations that provide no non-blocking progress guarantees for transactions and typically allow a transaction to block (delay) or abort in concurrent executions.
We refer to Section 2.7 in Chapter 2 for an overview of the complexity metrics considered in the thesis. Sequential TMs. We start by presenting complexity bounds for single-lock TMs that satisfy sequential TM-progress. We show that a read-only transaction in an opaque TM featured with weak DAP, weak invisible reads, ICF TM-liveness and sequential TM-progress must incrementally validate every next read operation. This results in a quadratic (in the size of the transaction's read set) step-complexity lower bound. Secondly, we prove that if the TM-correctness property is weakened to strict serializability, there exist executions in which the tryCommit of some transaction must access a linear (in the size of the transaction's read set) number of distinct base objects. We then show that expensive synchronization in TMs cannot be eliminated: even single-lock TMs must perform a RAW (read-after-write) or AWAR (atomic-write-after-read) pattern [17]. Progressive TMs. We turn our focus to progressive TM implementations which allow a transaction to be aborted only due to read-write conflicts with concurrent transactions. We introduce a new metric called protected data size that, intuitively, captures the amount of data that a transaction must exclusively control at some point of its execution. All progressive TM implementations we are aware of (see, e.g., an overview in [62]) use locks or timing assumptions to give an updating transaction exclusive access to all objects in its write set at some point of its execution. For example, lock-based progressive implementations like TL [40] and TL2 [39] require that a transaction grabs all locks on its write set before updating the corresponding base objects. 
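As a concrete illustration of this locking discipline, the following Python sketch (our own, hypothetical code; it is not taken from TL or TL2, and all names are ours) shows a commit that acquires a per-object lock for every t-object in the transaction's write set, in a fixed global order, before applying any update. At the moment all locks are held, the transaction protects its entire write set.

```python
import threading

# Illustrative sketch of a lock-based progressive commit (our own toy code):
# every t-object in the write set is locked, in a fixed global order to
# avoid deadlock, before any base object is updated.
class ToyLockBasedTM:
    def __init__(self):
        self.values = {}   # t-object name -> current value
        self.locks = {}    # t-object name -> lock

    def _lock_for(self, x):
        return self.locks.setdefault(x, threading.Lock())

    def commit(self, write_set):
        """write_set: dict mapping t-object name -> new value.
        Returns the number of t-objects protected at the peak."""
        order = sorted(write_set)          # fixed global locking order
        acquired = []
        for x in order:
            self._lock_for(x).acquire()
            acquired.append(x)
        protected = len(acquired)          # entire write set is protected here
        try:
            for x in order:                # apply the deferred updates
                self.values[x] = write_set[x]
        finally:
            for x in reversed(acquired):
                self._lock_for(x).release()
        return protected

tm = ToyLockBasedTM()
protected = tm.commit({"X": 1, "Y": 2, "Z": 3})
```

The point of the sketch is only the lower-bound intuition from the text: at the instant all locks are held, the number of protected t-objects equals the size of the write set.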
Our result shows that this is an inherent price to pay for providing progressive concurrency: every committed transaction in a progressive and strict DAP TM implementation providing starvation-free TM-liveness must, at some point of its execution, protect every t-object in its write set. We also present a very cheap progressive opaque strict DAP TM implementation from read-write base objects with constant expensive synchronization and constant memory stall complexities.

Strongly progressive TMs. We then prove that in any strongly progressive strictly serializable TM implementation that accesses the shared memory with read, write and conditional primitives, such as compare-and-swap and load-linked/store-conditional, the total number of remote memory references (RMRs) that take place in an execution in which n concurrent processes perform transactions on a single t-object might reach Ω(n log n). The result is obtained via a reduction to an analogous lower bound for mutual exclusion [22]. In the reduction, we show that any TM with the above properties can be used to implement deadlock-free mutual exclusion, employing transactional operations on only one t-object and incurring a constant RMR overhead. The lower bound applies to RMRs in both the cache-coherent (CC) and distributed shared memory (DSM) models, and it appears to be the first RMR complexity lower bound for transactional memory. We also present a constant expensive synchronization strongly progressive TM implementation from read-write base objects. Our implementation provides starvation-free TM-liveness, thus showing one means of circumventing the lower bound of Guerraoui et al. [64], who proved the impossibility of implementing strongly progressive strictly serializable TMs providing wait-free TM-liveness from read-write base objects.

Permissive TMs.
We conclude our study of blocking TMs by establishing a linear (in the transaction's data set size) separation between the worst-case transaction expensive synchronization complexity of strongly progressive TMs and that of permissive TMs, which allow a transaction to abort only if committing it would violate opacity. Specifically, we show that an execution of a transaction in a permissive opaque TM implementation that provides starvation-free TM-liveness may require at least one RAW/AWAR pattern per t-read.

Roadmap of Chapter 4. Section 4.2 studies "single-lock" TMs that provide minimal progressiveness, or sequential TM-progress; Section 4.3 is devoted to progressive TMs, while Section 4.4 is on strongly progressive TMs. In Section 4.5, we study the cost of permissive TMs that allow a transaction to abort only if committing it would violate opacity. Finally, we present related work and open questions in Section 4.6.

Sequential TMs

We begin with "single-lock", i.e., sequential TMs. Our first result proves that a read-only transaction in a sequential TM featured with weak DAP and weak invisible reads must incur the cost of validating its read set. This results in a quadratic (and resp., linear) (in the size of the transaction's read set) step-complexity lower bound if we assume opacity (and resp., strict serializability). Secondly, we show that expensive synchronization cannot be avoided even in such sequential TMs, i.e., a serializable TM must perform a RAW/AWAR even when transactions are guaranteed to commit only in the absence of any concurrency. We first prove the following auxiliary lemma that will be of use in subsequent proofs.

[Figure: the two executions constructed in the proof of Lemma 4.1. (a) R φ (X i ) must return nv by strict serializability; (b) T i does not observe any conflict with T φ .]

Lemma 4.1.
Let M be any strictly serializable, weak DAP TM implementation that provides sequential TM-progress and sequential TM-liveness. Then, for all i ∈ N, M has an execution of the form π i−1 · ρ i · α i , where
• π i−1 is the complete step contention-free execution of a read-only transaction T φ that performs (i − 1) t-reads: read φ (X 1 ) · · · read φ (X i−1 ),
• ρ i is the t-complete step contention-free execution of a transaction T i that writes nv ≠ v to X i and commits (v is the initial value of X i ),
• α i is the complete step contention-free execution fragment of T φ that performs its i th t-read: read φ (X i ) → nv.

Proof. By sequential TM-progress and sequential TM-liveness, M has an execution of the form ρ i · π i−1 . Since Dset(T φ ) ∩ Dset(T i ) = ∅ in ρ i · π i−1 , by Lemma 2.10, transactions T φ and T i do not contend on any base object in the execution ρ i · π i−1 . Thus, π i−1 · ρ i is also an execution of M . By the assumption of strict serializability, ρ i · π i−1 · α i is an execution of M in which the t-read of X i performed by T φ must return nv. But ρ i · π i−1 · α i is indistinguishable to T φ from π i−1 · ρ i · α i . Thus, M has an execution of the form π i−1 · ρ i · α i .

A quadratic lower bound on step complexity

In this section, we present our step complexity lower bound for sequential TMs.

Theorem 4.2. Let M be any strictly serializable TM implementation that provides weak DAP, weak invisible reads, ICF TM-liveness and sequential TM-progress.
(1) If M is opaque, then for every m ∈ N, there exists an execution E of M in which some transaction T k ∈ txns(E) performs Ω(m 2 ) steps, where m = |Rset(T k )|.
(2) If M is strictly serializable, then for every m ∈ N, there exists an execution E of M in which some transaction T k ∈ txns(E) accesses at least m − 1 distinct base objects during the executions of the m th t-read operation and tryC k (), where m = |Rset(T k )|.

Proof. For all i ∈ {1, . . . , m}, let v be the initial value of t-object X i .

(1) Suppose that M is opaque.
Let π m denote the complete step contention-free execution of a transaction T φ that performs m t-reads: read φ (X 1 ) · · · read φ (X m ) such that for all i ∈ {1, . . . , m}, read φ (X i ) → v. By Lemma 4.1, for all i ∈ {2, . . . , m}, M has an execution of the form E i = π i−1 · ρ i · α i . For each i ∈ {2, . . . , m}, j ∈ {1, 2} and ℓ ≤ (i − 1), we now define an execution of the form E i j = π i−1 · β ℓ · ρ i · α i j as follows:
• β ℓ is the t-complete step contention-free execution fragment of a transaction T ℓ that writes nv ≠ v to X ℓ and commits,
• α i 1 (and resp. α i 2 ) is the complete step contention-free execution fragment of read φ (X i ) → v (and resp. read φ (X i ) → A φ ).

Claim 4.3. For all i ∈ {2, . . . , m}, j ∈ {1, 2} and ℓ ≤ (i − 1), E i j is an execution of M .

Proof. For all i ∈ {2, . . . , m}, π i−1 is an execution of M . By the assumptions of weak invisible reads and sequential TM-progress, T ℓ must be committed in π i−1 · β ℓ , and M has an execution of the form π i−1 · β ℓ . By the same reasoning, since T i and T ℓ have disjoint data sets, M has an execution of the form π i−1 · β ℓ · ρ i . Since the configuration after π i−1 · β ℓ · ρ i is quiescent, by ICF TM-liveness, π i−1 · β ℓ · ρ i extended with read φ (X i ) must return a matching response. Suppose by contradiction that there exists an execution of M in which π i−1 · β ℓ · ρ i is extended with the complete execution of read φ (X i ) → r with r ∉ {A φ , v}. The only plausible case to analyse is r = nv. Since read φ (X i ) returns the value of X i updated by T i , the only possible serialization for the transactions is T ℓ , T i , T φ ; but read φ (X ℓ ) performed by T φ , which returns the initial value v, is not legal in this serialization, a contradiction. Thus, if read φ (X i ) → v, then clearly E i 1 is an execution of M , with T φ , T ℓ , T i being a valid serialization of the transactions; and if read φ (X i ) → A φ , then E i 2 is an execution of M .

We now prove that, for all i ∈ {2, . . . , m}, j ∈ {1, 2} and ℓ ≤ (i − 1), transaction T φ must access (i − 1) different base objects during the execution of read φ (X i ) in the execution π i−1 · β ℓ · ρ i · α i j .
By the assumption of weak invisible reads, the execution π i−1 · β ℓ · ρ i · α i j is indistinguishable to transactions T ℓ and T i from the execution π̂ i−1 · β ℓ · ρ i · α i j , where Rset(T φ ) = ∅ in π̂ i−1 . But transactions T ℓ and T i are disjoint-access in π̂ i−1 · β ℓ · ρ i and, by Lemma 2.10, they cannot contend on the same base object in this execution. Consider the (i − 1) different executions: π i−1 · β 1 · ρ i , . . ., π i−1 · β i−1 · ρ i . For all ℓ, ℓ′ ≤ (i − 1); ℓ ≠ ℓ′, M has an execution of the form π i−1 · β ℓ · ρ i · β ℓ′ in which transactions T ℓ and T ℓ′ access mutually disjoint data sets. By weak invisible reads and Lemma 2.10, the pairs of transactions T ℓ , T i and T ℓ , T ℓ′ do not contend on any base object in this execution. This implies that π i−1 · β ℓ · β ℓ′ · ρ i is an execution of M in which transactions T ℓ and T ℓ′ each apply nontrivial primitives to mutually disjoint sets of base objects in the execution fragments β ℓ and β ℓ′ respectively (by Lemma 2.10). This implies that, for any j ∈ {1, 2} and ℓ ≤ (i − 1), the configuration C i after E i differs from the configurations after E ℓ i j only in the states of the base objects that are accessed in the fragment β ℓ . Consequently, transaction T φ must access at least i − 1 different base objects in the execution fragment α i j to distinguish configuration C i from the configurations that result after the (i − 1) different executions π i−1 · β 1 · ρ i , . . ., π i−1 · β i−1 · ρ i respectively. Thus, for all i ∈ {2, . . . , m}, transaction T φ must perform at least i − 1 steps while executing the i-th t-read in α i j , and T φ itself must perform Σ i=1..m−1 i = m(m−1)/2 steps.
(2) Suppose that M is strictly serializable, but not opaque. Since M is strictly serializable, by Lemma 4.1, it has an execution of the form E = π m−1 · ρ m · α m . For each ℓ ≤ (m − 1), we prove that M has an execution of the form E ℓ = π m−1 · β ℓ · ρ m · ᾱ m where ᾱ m is the complete step contention-free execution fragment of read φ (X m ) followed by the complete execution of tryC φ .
Indeed, by weak invisible reads, π m−1 does not contain any nontrivial events and the execution π m−1 · β ℓ · ρ m is indistinguishable to transactions T ℓ and T m from the executions π̂ m−1 · β ℓ and π̂ m−1 · β ℓ · ρ m respectively, where Rset(T φ ) = ∅ in π̂ m−1 . Thus, applying Lemma 2.10, transactions T ℓ and T m do not contend on any base object in the execution π m−1 · β ℓ · ρ m . By ICF TM-liveness, read φ (X m ) and tryC φ must return matching responses in the execution fragment ᾱ m that extends π m−1 · β ℓ · ρ m . Consequently, for each ℓ ≤ (m − 1), M has an execution of the form E ℓ = π m−1 · β ℓ · ρ m · ᾱ m such that transactions T ℓ and T m do not contend on any base object. Strict serializability of M means that if read φ (X m ) → nv in the execution fragment ᾱ m , then tryC φ must return A φ . Otherwise, if read φ (X m ) → v (i.e., the initial value of X m ), then tryC φ may return A φ or C φ . Thus, as with (1), in the worst case, T φ must access at least m − 1 distinct base objects during the executions of read φ (X m ) and tryC φ to distinguish the configuration after E from the configurations after the m − 1 different executions π m−1 · β 1 · ρ m , . . ., π m−1 · β m−1 · ρ m respectively.
Expensive synchronization in transactional memory cannot be eliminated
In this section, we show that serializable TMs must perform a RAW/AWAR even if they are guaranteed to commit only when they run in the absence of any concurrency.
Theorem 4.4. Let M be a serializable TM implementation providing sequential TM-progress and sequential TM-liveness. Then, every execution of M in which a transaction running t-sequentially performs at least one t-read and at least one t-write contains a RAW/AWAR pattern.
Proof. Consider an execution π of M in which a transaction T 1 running t-sequentially performs (among other events) read 1 (X), write 1 (Y, v) and tryC 1 (). Since M satisfies sequential TM-progress and sequential TM-liveness, T 1 must commit in π. Clearly, π must contain a write to a base object.
Otherwise a subsequent transaction reading Y would return the initial value of Y instead of the value written by T 1 . Let π w be the first write to a base object in π. Thus, π can be represented as π s · π w · π f . Now suppose by contradiction that π contains neither RAW nor AWAR patterns. Since π s contains no writes, the states of base objects in the initial configuration and in the configuration after π s is performed are the same. Consider an execution π s · ρ where in ρ, a transaction T 2 performs read 2 (Y ), write 2 (X, 1), tryC 2 () and commits. Such an execution exists, since ρ is indistinguishable to T 2 from an execution in which T 2 runs t-sequentially and thus T 2 cannot be aborted in π s · ρ. Since π w contains no AWAR, π s · ρ · π w is an execution of M . Since π w · π f contains no RAWs, every read performed in π w · π f is applied to base objects which were previously written in π w ·π f . Thus, there exists an execution π s ·ρ·π w ·π f , such that T 1 cannot distinguish π s · π w · π f and π s · ρ · π w · π f . Hence, T 1 commits in π s · ρ · π w · π f . But T 1 reads the initial value of X and T 2 reads the initial value of Y in π s · ρ · π w · π f , and thus T 1 and T 2 cannot be both committed (at least one of the committed transactions must read the value written by the other)-a contradiction. Progressive TMs We move on to the stronger (than sequential TMs) class of progressive TMs. We introduce a new metric called protected data size that, intuitively, captures the number of t-objects that a transaction must exclusively control at some prefix of its execution. We first prove that any strict DAP progressive opaque TM must protect its entire write set at some point in its execution. Secondly, we describe a constant stall, constant RAW/AWAR strict DAP opaque progressive TM that provides invisible reads and is implemented from read-write base objects. 
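The indistinguishability argument in the proof of Theorem 4.4 can be replayed concretely. The following toy simulation (Python; all names are ours, and base objects are modeled as a plain dictionary) schedules T 2 between T 1 's read phase and its RAW/AWAR-free write phase, and checks that both transactions observe the initial values, which no serial order permits:

```python
# A minimal sketch (hypothetical names) of the adversarial schedule in the
# proof of Theorem 4.4: without a read-after-write or an atomic
# write-after-read, T1's write phase cannot detect T2, so both
# transactions read initial values while both also commit a write.

base = {"X": 0, "Y": 0}   # base objects with their initial values
log = []

def T1_read_phase():
    # pi_s: T1 reads X; this fragment contains no base-object writes
    log.append(("T1 reads X", base["X"]))

def T2_full():
    # rho: scheduled while T1 is paused -- T2 reads Y, writes X, commits
    log.append(("T2 reads Y", base["Y"]))
    base["X"] = 1

def T1_write_phase():
    # pi_w . pi_f: T1 writes Y; with no read after this write it never
    # re-examines X, so it cannot notice T2's update and commits anyway
    base["Y"] = 1

T1_read_phase()
T2_full()
T1_write_phase()

# Both transactions observed value 0, yet each committed a write the
# other missed: neither T1,T2 nor T2,T1 is a legal serialization.
assert log == [("T1 reads X", 0), ("T2 reads Y", 0)]
assert base == {"X": 1, "Y": 1}
```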
A linear lower bound on the amount of protected data Let M be a progressive TM implementation providing starvation-free TM-liveness. Intuitively, a t-object X j is protected at the end of some finite execution π of M if some transaction T 0 is about to atomically change the value of X j in its next step (e.g., by performing a compare-and-swap) or does not allow any concurrent transaction to read X j (e.g., by holding a "lock" on X j ). Formally, let α·π be an execution of M such that π is a t-sequential t-complete execution of a transaction T 0 , where Wset(T 0 ) = {X 1 , . . . , X m }. Let u j (j = 1, . . . , m) denote the value written by T 0 to t-object X j in π. In this section, let π t denote the t-th shortest prefix of π. Let π 0 denote the empty prefix. For any X j ∈ Wset(T 0 ), let T j denote a transaction that tries to read X j and commit. Let E t j = α · π t · ρ t j denote the extension of α · π t in which T j runs solo until it completes. Note that, since we only require the implementation to be starvation-free, ρ t j can be infinite. We say that α · π t is (1, j)-valent if the read operation performed by T j in α · π t · ρ t j returns u j (the value written by T 0 to X j ). We say that α · π t is (0, j)-valent if the read operation performed by T j in α · π t · ρ t j does not abort and returns an "old" value u = u j . Otherwise, if the read operation of T j aborts or never returns in α · π t · ρ t j , we say that α · π t is (⊥, j)-valent. Definition 4.1. We say that T 0 protects an object X j in α · π t , where π t is the t-th shortest prefix of π (t > 0) if one of the following conditions holds: (1) α · π t is (0, j)-valent and α · π t+1 is (1, j)-valent, or (2) α · π t or α · π t+1 is (⊥, j)-valent. For strict disjoint-access parallel progressive TM, we show that every transaction running t-sequentially must protect every t-object in its write set at some point of its execution. 
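To make Definition 4.1 concrete, consider a toy lock-based committer (a hypothetical Python model, ours and not any implementation discussed here): while T 0 holds the locks on its write set, every prefix is (⊥, j)-valent for every X j it writes, so the entire write set is protected at once.

```python
# Toy model: T0 commits m new values under per-object locks. The valence
# of the t-th prefix for object j is: 0 -> a reader gets the old value,
# 1 -> the new value, None -> the reader aborts ("bottom-valent").

m = 3
old, new = 0, 1

# T0's commit-time steps: lock all objects, write all, unlock all.
steps = ([("lock", j) for j in range(m)] +
         [("write", j) for j in range(m)] +
         [("unlock", j) for j in range(m)])

def valence(t, j):
    """Valence of the t-th prefix of T0's commit for object X_j."""
    locked, value = set(), {i: old for i in range(m)}
    for op, i in steps[:t]:
        if op == "lock":
            locked.add(i)
        elif op == "unlock":
            locked.discard(i)
        else:
            value[i] = new
    if j in locked:
        return None          # a reader T_j started here aborts
    return 0 if value[j] == old else 1

assert valence(0, 0) == 0            # before commit: old value
assert valence(len(steps), 0) == 1   # after commit: new value

# While T0 holds all m locks, every object is bottom-valent at once:
fully_locked_prefix = m              # right after the m "lock" steps
protected = [j for j in range(m) if valence(fully_locked_prefix, j) is None]
assert len(protected) == m
```

This mirrors the lower bound's conclusion: at some prefix, all |Wset(T 0 )| t-objects are protected simultaneously.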
We observe that no prefix of π can be both 0-valent and 1-valent at the same time.
Lemma 4.5. There does not exist π t , a prefix of π, and i, j ∈ {1, . . . , m} such that α · π t is both (0, i)-valent and (1, j)-valent.
Proof. By contradiction, suppose that there exist i, j and α · π t such that α · π t is both (0, i)-valent and (1, j)-valent. Since the implementation is strict DAP, there exists an execution of M , E t ij = α · π t · ρ t j · ρ t i , that is indistinguishable to T i from α · π t · ρ t i . In E t ij , the only possible serialization is T 0 , T j , T i . But T i returns the "old" value of X i and, thus, the serialization is not legal, a contradiction.
If α · π t is (0, i)-valent (resp., (1, i)-valent) for some i, we say that it is 0-valent (resp., 1-valent). By Lemma 4.5, the notions of 0-valence and 1-valence are well-defined.
Theorem 4.6. Let M be a progressive, opaque and strict disjoint-access-parallel TM implementation that provides starvation-free TM-liveness. Let α · π be an execution of M , where π is a t-sequential t-complete execution of a transaction T 0 . Then, there exists π t , a prefix of π, such that T 0 protects |Wset(T 0 )| t-objects in α · π t .
Proof. Let Wset(T 0 ) = {X 1 , . . . , X m }. Consider two cases:
(1) Suppose that π has a prefix π t such that α · π t is 0-valent and α · π t+1 is 1-valent. By Lemma 4.5, there does not exist i such that α · π t is (1, i)-valent and α · π t+1 is (0, i)-valent. Thus, one of the following is true:
• for every i ∈ {1, . . . , m}, α · π t is (0, i)-valent and α · π t+1 is (1, i)-valent, or
• at least one of α · π t and α · π t+1 is (⊥, i)-valent, i.e., the t-operation of T i aborts or never returns.
In either case, T 0 protects m t-objects in α · π t .
(2) Now suppose that such a π t does not exist, i.e., there is no i ∈ {1, . . . , m} and t ∈ {0, . . . , |π| − 1} such that E t i exists and returns an old value, and E t+1 i exists and returns a new value. Suppose there exist s, t, 0 < s + 1 < t, S ⊆ {1, . . .
, m}, such that: • α · π s is 0-valent, • α · π t is 1-valent, • for all r, s < r < t, and for all i ∈ S, α · π r is (⊥, i)-valent. We say that s + 1, . . . , t − 1 is a protecting fragment for t-objects {X j |j ∈ S}. Since M is opaque and progressive, α·π 0 = α is 0-valent and α·π is 1-valent. Thus, the assumption of Case (2) implies that for each X i , there exists a protecting fragment for {X i }. In particular, there exists a protecting fragment for {X 1 }. Now we proceed by induction. Let π s+1 , . . . , π t−1 be a protecting fragment for {X 1 , . . . , X u−1 } such that u ≤ m. Now we claim that there must be a subfragment of s + 1, . . . , t − 1 that protects {X 1 , . . . , X u }. Suppose not. Thus, there exists r, s < r < t, such that α · π r is (0, u)-valent or (1, u)-valent. Suppose first that α · π r is (1, u)-valent. Since α · π s is (0, i)-valent for some i = u, by Lemma 4.5 and the assumption of Case (2), there must exist s , t , s < s + 1 < t ≤ r such that • α · π s is 0-valent, • α · π t is 1-valent, • for all r , s < r < t , α · π r is (⊥, u)-valent. As a result, s + 1, . . . , t − 1 is a protecting fragment for {X 1 , . . . , X u }. The case when α · π r is (0, u)-valent is symmetric, except that now we should consider fragment r, . . . , t instead of s, . . . , r. Thus, there exists a subfragment of s + 1, . . . , t − 1 that protects {X 1 , . . . , X u }. By induction, we obtain a protecting fragment s + 1, . . . , t − 1 for {X 1 , . . . , X m }. Thus, any prefix α · π r , where s < r < t protects exactly m t-objects. In both cases, there is a prefix of α · π that protects exactly m t-objects. The lower bound of Theorem 4.6 is tight: it is matched by all progressive implementations we are aware of, including Algorithm 4.1 described in the next section. A constant stall and constant expensive synchronization strict DAP opaque TM In this section, we describe a cheap progressive, opaque TM implementation LP (Algorithm 4.1). 
In our TM LP, every transaction performs at most a single RAW, every t-read operation incurs O(1) memory stalls, and exactly one version of every t-object is maintained at any prefix of an execution. Moreover, the implementation is strict DAP and uses only read-write base objects.
Base objects. For every t-object X j , LP maintains a base object v j that stores the value of X j . Additionally, for each X j , we maintain a bit L j which, if set, indicates the presence of an updating transaction writing to X j . Also, for every process p i and t-object X j , LP maintains a single-writer bit r ij to which only p i is allowed to write. Each of these base objects may be accessed only via read and write primitives.
Read operations. The implementation first reads the value of t-object X j from base object v j and then reads the bit L j to detect contention with an updating transaction. If L j is set, the transaction is aborted; if not, read validation is performed on the entire read set. If the validation fails, the transaction is aborted. Otherwise, the implementation returns the value of X j . For a read-only transaction T k , tryC k simply returns the commit response.
Algorithm 4.1:
 1: Shared objects:
 2:   v j , for each t-object X j ; allows reads and writes
 3:   r ij , for each process p i and t-object X j ;
 4:     single-writer bit;
 5:     allows reads and writes
 6:   L j , for each t-object X j ;
 7:     allows reads and writes
 8: Local variables:
 9:   Rset k , Wset k for every transaction T k ;
10:     dictionaries storing {X m , v m }
11: read k (X j ):
12:   if X j ∉ Rset(T k ) then
13:     [ov j , k j ] := read(v j )
14:     Rset(T k ) := Rset(T k ) ∪ {X j , [ov j , k j ]}
15:     if read(L j ) ≠ 0 then
16:       Return A k
17:     if ¬validate() then
18:       Return A k
19:     Return ov j
20:   else
21:     [ov j , ⊥] := Rset(T k ).locate(X j )
22:     Return ov j
23: write k (X j , v):
24:   nv j := v
25:   Wset(T k ) := Wset(T k ) ∪ {X j }
26:   Return ok
27: tryC k ():
28:   if Wset(T k ) = ∅ then
29:     Return C k
30:   locked := acquire(Wset(T k ))
31:   if ¬ locked then
32:     Return A k
33:   if isAbortable() then
34:     release(Wset(T k ))
35:     Return A k
36:   for all X j ∈ Wset(T k ) do      // exclusive write access to each v j
37:     write(v j , [nv j , k])
38:   release(Wset(T k ))
39:   Return C k
40: Function: release(Q):
41:   for all X j ∈ Q do
42:     write(L j , 0)
43:   for all X j ∈ Q do
44:     write(r ij , 0)
45:   Return ok
46: Function: acquire(Q):
47:   for all X j ∈ Q do
48:     write(r ij , 1)
49:   if ∃X j ∈ Q; t ≠ i : read(r tj ) = 1 then
50:     for all X j ∈ Q do
51:       write(r ij , 0)
52:     Return false
53:   for all X j ∈ Q do      // exclusive write access to each L j
54:     write(L j , 1)
55:   Return true
56: Function: isAbortable():
57:   if ∃X j ∈ Rset(T k ) : X j ∉ Wset(T k ) ∧ read(L j ) ≠ 0 then
58:     Return true
59:   if validate() then
60:     Return false
61:   Return true
Updating transactions. The write k (X, v) implementation by process p i simply stores the value v locally, deferring the actual updates to tryC k . During tryC k , process p i attempts to obtain exclusive write access to every X j ∈ Wset(T k ). This is realized through the single-writer bits, which ensure that no other transaction may write to base objects v j and L j until T k relinquishes its exclusive write access to Wset(T k ). Specifically, process p i writes 1 to each r ij , then checks that no other process p t has written 1 to any r tj by executing a series of reads (incurring a single RAW). If there exists such a process that concurrently contends on the write set of T k , then, for each X j ∈ Wset(T k ), p i writes 0 to r ij and aborts T k . If successful in obtaining exclusive write access to Wset(T k ), p i sets the bit L j for each X j in its write set. The implementation of tryC k now checks whether any t-object in its read set is concurrently contended by another transaction and then validates its read set. If there is contention on the read set or validation fails (indicating the presence of a conflicting transaction), the transaction is aborted.
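The exclusive-access protocol just described, announce by writes and then perform the conflict-checking reads (the single RAW), can be sketched as follows. All Python names are ours, and the model is sequential rather than truly concurrent:

```python
# Sketch (Python) of the lock-acquisition idea: per-process single-writer
# bits r[i][j]; a writer announces itself with writes, then performs the
# reads of the conflict check -- together, one read-after-write.

N_PROCS, N_OBJS = 4, 8

r = [[0] * N_OBJS for _ in range(N_PROCS)]   # r[i][j]: single-writer bits
L = [0] * N_OBJS                             # L[j]: "object j is locked"

def acquire(i, wset):
    for j in wset:                 # announce: write 1 to each r[i][j]
        r[i][j] = 1
    # conflict check: the reads that follow the writes (cf. Line 49)
    if any(r[t][j] == 1 for j in wset
           for t in range(N_PROCS) if t != i):
        for j in wset:             # back off on contention
            r[i][j] = 0
        return False
    for j in wset:                 # mark objects as locked
        L[j] = 1
    return True

def release(i, wset):
    for j in wset:
        L[j] = 0
    for j in wset:
        r[i][j] = 0

# Two processes with overlapping write sets: at most one wins.
assert acquire(0, {1, 2}) is True
assert acquire(1, {2, 3}) is False    # sees r[0][2] == 1, backs off
release(0, {1, 2})
assert acquire(1, {2, 3}) is True
```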
If not, p i writes the values of the t-objects to shared memory and relinquishes exclusive write access to each X j ∈ Wset(T k ) by writing 0 to each of the base objects L j and r ij . Complexity. Read-only transactions do not apply any nontrivial primitives. Any updating transaction performs at most a single RAW in the course of acquiring exclusive write access to the transaction's write set. Thus, every transaction performs O(1) non-overlapping RAWs in any execution. Recall that a transaction may write to base objects v j and L j only after obtaining exclusive write access to t-object X j , which in turn is realized via single-writer base objects. Thus, no transaction performs a write to any base object b immediately after a write to b by another transaction, i.e., every transaction incurs only O(1) memory stalls on account of any event it performs. The read k (X j ) implementation reads base objects v j and L j , followed by the validation phase in which it reads v k for each X k in its current read set. Note that if the first read in the validation phase incurs a stall, then read k (X j ) aborts. It follows that each t-read incurs O(1) stalls in every execution. Proof of opacity. We now prove that LP implements an opaque TM. We introduce the following technical definition: process p i holds a lock on X j after an execution π of Algorithm 4.1 if π contains the invocation of acquire(Q), X j ∈ Q by p i that returned true, but does not contain a subsequent invocation of release(Q ), X j ∈ Q , by p i in π. Lemma 4.7. For any object X j , and any execution π of Algorithm 4.1, there exists at most one process that holds a lock on X j after π. Proof. Assume, by contradiction, that there exists an execution π after which processes p i and p k hold a lock on the same object, say X j . In order to hold the lock on X j , process p i writes 1 to register r ij and then checks if any other process p k has written 1 to r kj . 
Since the corresponding operation acquire(Q), X j ∈ Q, invoked by p i returns true, p i reads 0 in r kj in Line 49. But then p k also writes 1 to r kj and later reads that r ij is 1: p k can write 1 to r kj only after the read of r kj returned 0 to p i , which is preceded by p i 's write of 1 to r ij . Hence, there exists an object X j such that r ij = 1, i ≠ k, when p k performs the check in Line 49; the conditional then evaluates to true for process p k and acquire must return false, a contradiction.
Observation 4.8. Let π be any execution of Algorithm 4.1. Then, any updating transaction T k ∈ txns(π) executed by process p i writes to base object v j (in Line 37) for some X j ∈ Wset(T k ) immediately after π iff p i holds the lock on X j after π.
Lemma 4.9. LP is opaque.
Proof. Let E be any finite execution of Algorithm 4.1. Let < E denote a total order on the events in E. Let H denote a subsequence of E constructed by selecting linearization points of the t-operations performed in E. The linearization point of a t-operation op, denoted ℓ op , is associated with a base object event or an event performed between the invocation and response of op, using the following procedure.
Completions. First, we obtain a completion of E by removing some pending invocations and adding responses to the remaining pending invocations involving a transaction T k as follows: every incomplete read k , write k operation is removed from E; an incomplete tryC k is removed from E if T k has not performed any write to a base object during the release function in Line 38; otherwise, it is completed by including C k after E.
Linearization points.
Now a linearization H of E is obtained by associating linearization points with the t-operations in the obtained completion of E as follows:
• For every t-read op k that returns a non-A k value, ℓ op k is chosen as the event in Line 13 of Algorithm 4.1; else, ℓ op k is chosen as the invocation event of op k .
• For every op k = write k that returns, ℓ op k is chosen as the invocation event of op k .
• For every op k = tryC k that returns C k such that Wset(T k ) ≠ ∅, ℓ op k is associated with the response of acquire in Line 30; else, if op k returns A k , ℓ op k is associated with the invocation event of op k .
• For every op k = tryC k that returns C k such that Wset(T k ) = ∅, ℓ op k is associated with Line 29.
< H denotes the total order on t-operations in the complete sequential history H.
Serialization points. The serialization point of a transaction T j , denoted δ T j , is associated with the linearization point of a t-operation performed within the execution of T j . We obtain a t-complete history H̄ from H as follows: for every transaction T k in H that is complete, but not t-complete, we insert tryC k · A k after H. A t-complete t-sequential history S is obtained by associating serialization points with the transactions in H̄ as follows:
• If T k is an updating transaction that commits, then δ T k is ℓ tryC k .
• If T k is a read-only or aborted transaction in H̄, then δ T k is assigned to the linearization point of the last t-read in T k that returned a non-A k value.
< S denotes the total order on transactions in the t-sequential history S.
Claim 4.10. If T i ≺ H T j , then T i < S T j .
Proof. This follows from the fact that, for a given transaction, its serialization point is chosen between the first and last events of the transaction; hence, if T i ≺ H T j , then δ T i < E δ T j , which implies T i < S T j .
Claim 4.11. Let T k be any updating transaction that returns false from the invocation of isAbortable in Line 33. Then, T k returns C k within a finite number of its own steps in any extension of E.
Proof.
Observe that T k performs the write to base objects v j for every X j ∈ Wset(T k ) and then invokes release, in Lines 37 and 38 respectively. Since neither of these involves aborting the transaction or contains unbounded loops or waiting statements, it follows that T k returns C k within a finite number of its steps.
Claim 4.12. S is legal.
Proof. Observe that, for every read j (X m ) → v, there exists some transaction T i that performs write i (X m , v) and completes the event in Line 37 such that write i (X m , v) ≺ RT H read j (X m ). More specifically, read j (X m ) returns, as a non-abort response, the value of the base object v m , and v m can be updated only by a transaction T i such that X m ∈ Wset(T i ). Since read j (X m ) returns the response v, the event in Line 13 succeeds the event in Line 37 performed by tryC i . Consequently, by Claim 4.11 and the assignment of linearization points, ℓ tryC i < E ℓ read j (X m ) . Since, for any updating committed transaction T i , δ T i = ℓ tryC i , by the assignment of serialization points it follows that δ T i < E δ T j . Thus, to prove that S is legal, it suffices to show that there does not exist a transaction T k that returns C k in S and performs write k (X m , v′), v′ ≠ v, such that T i < S T k < S T j . Suppose, by contradiction, that there exists such a committed transaction T k , X m ∈ Wset(T k ), with T i < S T k < S T j . T i and T k are both updating transactions that commit. Thus, (T i < S T k ) ⇐⇒ (δ T i < E δ T k ) and (δ T i < E δ T k ) ⇐⇒ (ℓ tryC i < E ℓ tryC k ). Since T j reads the value of X m written by T i , one of the following is true: ℓ tryC i < E ℓ tryC k < E ℓ read j (X m ) or ℓ tryC i < E ℓ read j (X m ) < E ℓ tryC k . Let T i and T k be executed by processes p i and p k respectively. Consider the case that ℓ tryC i < E ℓ tryC k < E ℓ read j (X m ) . By the assignment of linearization points, T k returns a response from the event in Line 30 before the read of v m by T j in Line 13.
Since T i and T k are both committed in E, p k returns true from the event in Line 30 only after T i writes 0 to r im in Line 44 (Lemma 4.7). Recall that read j (X m ) checks whether X m is locked by a concurrent transaction (i.e., whether L m ≠ 0) and then performs read validation (Line 15) before returning a matching response. Consider the following possible sequence of events: T k returns true from the acquire function invocation, sets L j to 1 for every X j ∈ Wset(T k ) (Line 54) and updates the value of X m in shared memory (Line 37). The implementation of read j (X m ) then reads the base object v m associated with X m , after which T k releases X m by writing 0 to r km , and finally T j performs the check in Line 15. However, read j (X m ) is forced to return A j because X m ∈ Rset(T j ) (Line 14) has been invalidated since T j last read its value. Otherwise, suppose that T k acquires exclusive access to X m by writing 1 to r km and returns true from the invocation of acquire, updates v m (Line 37), T j reads v m , T j performs the check in Line 15, and finally T k releases X m by writing 0 to r km . Again, read j (X m ) returns A j , since T j reads that L m is 1, a contradiction. Thus, ℓ tryC i < E ℓ read j (X m ) < E ℓ tryC k . We now need to prove that δ T j indeed precedes ℓ tryC k in E. Consider the two possible cases:
• Suppose that T j is a read-only or aborted transaction in H̄. Then, δ T j is assigned to the last t-read performed by T j that returned a non-A j value. If read j (X m ) is not the last t-read performed by T j that returned a non-A j value, then there exists a read j (X z ) performed by T j such that ℓ read j (X m ) < E ℓ tryC k < E ℓ read j (X z ) . Now assume that ℓ tryC k must precede ℓ read j (X z ) to obtain a legal S. Since T k and T j are concurrent in E, we are restricted to the case that T k performs a write k (X z , v) and read j (X z ) returns v. However, we claim that this t-read of X z must abort by performing the checks in Line 15.
Observe that T k writes 1 to each of L m and L z (Line 54) and then writes the new values to base objects v m and v z (Line 37). Since read j (X z ) returns a non-A j response, T k writes 0 to L z before the read of L z by read j (X z ) in Line 15. Thus, the t-read of X z would return A j (in Line 17, after validation of the read set, since X m has been updated), a contradiction to the assumption that it is the last t-read by T j to return a non-A j response.
• Suppose that T j is an updating transaction that commits; then δ T j = ℓ tryC j , which implies that ℓ read j (X m ) < E ℓ tryC k < E ℓ tryC j . Then, T j must necessarily perform the checks in Line 33 and read that L m is 1. Thus, T j must return A j , a contradiction to the assumption that T j is a committed transaction.
The conjunction of Claims 4.10 and 4.12 establishes that Algorithm 4.1 is opaque. We can now prove the following theorem:
Theorem 4.13. Algorithm 4.1 describes a progressive, opaque and strict DAP TM implementation LP that provides wait-free TM-liveness, uses invisible reads, uses only read-write base objects, and, for every execution E and transaction T k ∈ txns(E):
• T k performs at most a single RAW,
• every t-read operation invoked by T k incurs O(1) memory stalls in E, and
• every complete t-read operation invoked by T k performs O(|Rset(T k )|) steps in E.
Proof. (TM-liveness and TM-progress) Since none of the implementations of the t-operations in Algorithm 4.1 contains unbounded loops or waiting statements, every t-operation op k returns a matching response after taking a finite number of steps in every execution. Thus, Algorithm 4.1 provides wait-free TM-liveness. To prove progressiveness, we proceed by enumerating the cases under which a transaction T k may be aborted.
• Suppose that there exists a read k (X j ) performed by T k that returns A k from Line 15.
Thus, there exists a process p t executing a transaction that has written 1 to r tj in Line 48 but has not yet written 0 to r tj in Line 44, or some t-object in Rset(T k ) has been updated since its t-read by T k . In both cases, there exists a concurrent transaction performing a t-write to some t-object in Rset(T k ).
• Suppose that tryC k performed by T k returns A k from Line 31. Thus, there exists a process p t executing a transaction that has written 1 to r tj in Line 48 but has not yet written 0 to r tj in Line 44. Thus, T k encounters step contention with another transaction that concurrently attempts to update a t-object in Wset(T k ).
• Suppose that tryC k performed by T k returns A k from Line 33. Since T k returns A k from Line 33 for the same reasons it returns A k after Line 15, the proof follows.
(Strict disjoint-access parallelism) Consider any execution E of Algorithm 4.1 and let T i and T j be any two transactions that participate in E and access the same base object b in E.
• Suppose that T i and T j contend on base object v j or L j . Since, for every t-object X j , there exist distinct base objects v j and L j , T i and T j contend on v j or L j only if X j ∈ Dset(T i ) ∩ Dset(T j ).
• Suppose that T i and T j contend on base object r ij . Without loss of generality, let p i be the process executing transaction T i , X j ∈ Wset(T i ), that writes 1 to r ij in Line 48. Indeed, no other process executing a transaction that writes to X j can write to r ij . Transaction T j reads r ij only if X j ∈ Dset(T j ), as is evident from the accesses performed in Lines 48, 49, 44 and 27.
Thus, T i and T j access the same base object only if they access a common t-object.
(Opacity) Follows from Lemma 4.9.
(Invisible reads) Observe that read-only transactions do not perform any nontrivial events.
Secondly, in any execution E of Algorithm 4.1, for any transaction T k ∈ txns(E), if X j ∈ Rset(T k ), then T k does not write to any of the base objects associated with X j , nor does it write any information that reveals its read set to other transactions.
(Complexity) Consider any execution E of Algorithm 4.1.
• For any T k ∈ txns(E), each read k only applies trivial primitives in E, while tryC k simply returns C k if Wset(T k ) = ∅. Thus, Algorithm 4.1 uses invisible reads.
• Any read-only transaction T k ∈ txns(E) does not perform any RAW or AWAR. An updating transaction T k executed by process p i performs a sequence of writes to base objects {r ij } : X j ∈ Wset(T k ) (Line 48), followed by a sequence of reads of base objects {r tj } : t ∈ {1, . . . , n}, X j ∈ Wset(T k ) (Line 49), thus incurring a single multi-RAW.
• Let e be a write event performed by some transaction T k executed by process p i in E on base objects v j and L j (Lines 37 and 54). Any transaction T k performs a write to v j or L j only after p i writes 1 to r ij for every X j ∈ Wset(T k ) and obtains exclusive write access. Thus, by Lemmata 4.7 and 4.9, it follows that events that involve an access to either of these base objects incur O(1) stalls. Let e be a write event on base object r ij (Line 48) performed while writing to t-object X j . By Algorithm 4.1, no other process can write to r ij . It follows that any transaction T k ∈ txns(E) incurs O(1) memory stalls on account of any event it performs in E. Observe that any t-read read k (X j ) only accesses base objects v j , L j and the value base objects of the t-objects in Rset(T k ). But, as already established above, accesses to these incur O(1) stalls. Hence, every t-read operation incurs O(1) stalls in E.
The following corollary follows from Theorems 4.13 and 4.2.
 1: read k (X j ):
 2:   if X j ∉ Rset(T k ) then
 3:     [ov j , k j ] := read(v j )
 4:     Rset(T k ) := Rset(T k ) ∪ {X j , [ov j , k j ]}
 5:     if read(L j ) ≠ 0 then
 6:       Return A k
 7:     Return ov j
Strongly progressive TMs
In this section, we prove that every strongly progressive, strictly serializable TM that uses only read, write and conditional primitives has an execution in which n concurrent processes performing transactions on a single data item incur Ω(n log n) remote memory references [14]. We then describe a constant RAW/AWAR strongly progressive TM providing starvation-free TM-liveness from read-write base objects.
A Ω(n log n) lower bound on remote memory references
Our lower bound on the RMR complexity of strongly progressive TMs is derived by reduction to mutual exclusion.
Mutual exclusion. The mutex object supports two operations: Entry and Exit, both of which return the response ok. We say that a process p i is in the critical section after an execution π if π contains an invocation of Entry by p i that returns ok, but does not contain a subsequent invocation of Exit by p i in π. A mutual exclusion implementation satisfies the following properties:
• (Mutual exclusion) After any execution π, there exists at most one process that is in the critical section.
• (Deadlock-freedom) Let π be any execution that contains an invocation of Entry by process p i . Then, in every extension of π in which every process takes infinitely many steps, some process is in the critical section.
• (Finite-exit) Every process completes the Exit operation within a finite number of steps.
We describe an implementation of a mutex object L(M ) from a strictly serializable, strongly progressive TM implementation M providing wait-free TM-liveness (Algorithm 4.3). The algorithm is based on the mutex implementation in [84].
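To preview the reduction, here is a toy rendition in Python. It is ours, not Algorithm 4.3 itself: the strongly progressive transaction on the single t-object X is simulated by a lock-protected swap, each process spins on a flag set by its predecessor's Exit, and the two alternating identities (i, face) play the role of the [p i , face i ] pairs.

```python
import threading

# Toy queue-style mutex built from an "atomic swap transaction" on the
# single t-object X (here simulated with a lock, which trivially gives
# the strong progressiveness the reduction relies on).

tm_lock = threading.Lock()
tail = [None]                     # the single t-object X, initially None

def fetch_and_store(new):
    # stands in for the transaction: old := read(X); write(X, new); commit
    with tm_lock:
        old, tail[0] = tail[0], new
        return old

N, ROUNDS = 4, 50
done = {(i, f): threading.Event() for i in range(N) for f in (0, 1)}
counter = [0]
busy = [False]
violations = []

def process(i):
    face = 0
    for _ in range(ROUNDS):
        done[(i, face)].clear()            # fresh flag for this round
        pred = fetch_and_store((i, face))  # Entry: enqueue behind pred
        if pred is not None:
            done[pred].wait()              # wait until predecessor exits
        if busy[0]:                        # critical section begins
            violations.append(i)           # mutual exclusion broken
        busy[0] = True
        counter[0] += 1
        busy[0] = False                    # critical section ends
        done[(i, face)].set()              # Exit: release the successor
        face ^= 1                          # alternate identity

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter[0] == N * ROUNDS and not violations
```

The alternating faces matter: a flag may only be reused once no process can still be waiting on its previous incarnation, which the queue order guarantees.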
Given a sequential implementation, we use a TM to execute the sequential code in a concurrent environment by encapsulating each sequential operation within an atomic transaction that replaces each read and write of a t-object with the corresponding transactional read and write implementations. If the transaction commits, then the result of the operation is returned; otherwise, if one of the transactional operations aborts, the operation is retried. For instance, in Algorithm 4.3, we wish to atomically read a t-object X, write a new value to it and return the old value of X prior to this write. To achieve this, we employ a strictly serializable TM implementation M . Since we assume that M is strongly progressive, in every execution, at least one transaction successfully commits and the value of X is returned. Shared objects. We associate each process p i with two alternating identities [p i , face i ]; face i ∈ {0, 1}. The strongly progressive TM implementation M is used to enqueue processes that attempt to enter the critical section within a single t-object X (initially ⊥). Since M is strongly progressive, in every execution E that contains an invocation of Enter by process p i , some process, say p i , returns true from the invocation of func() in Line 23 and has set itself to be p j 's successor by writing p i to register Succ[p j , face j ] in Line 28. Consider the two possible cases: the predecessor of [p j , face j ] is some process p k , k ≠ i, or the predecessor of [p j , face j ] is the process p i itself. (Case 1) Since, by assumption, process p j takes infinitely many steps in E, the only reason that p j is stuck without entering the critical section is that [p k , face k ] is also stuck in the while loop in Line 30. Note that we can iteratively extend this execution with processes, other than p i and p j , that are likewise stuck in the while loop in Line 30.
But then the last such process must eventually read the corresponding Lock to be unlocked and enter the critical section. Thus, in every extension of E in which every process takes infinitely many steps, some process will enter the critical section. (Case 2) Suppose that the predecessor of [p j , face j ] is the process p i itself. However, observe that process p i , which takes infinitely many steps by our assumption, must eventually read that Lock [p i , p j ] is unlocked and enter the critical section, thus establishing deadlock-freedom. We say that a TM implementation M accesses a single t-object if in every execution E of M and every transaction T ∈ txns(E), |Dset(T )| ≤ 1. We can now prove the following theorem: Theorem 4.21. Any strictly serializable, strongly progressive TM implementation with wait-free TM-liveness from read, write and conditional primitives that accesses a single t-object has an execution whose RMR complexity is Ω(n log n).

A constant expensive synchronization opaque TM

In this section, we describe a strongly progressive opaque TM implementation providing starvation-free TM-liveness from read-write base objects with constant RAW/AWAR cost. For our implementation, we define and implement a starvation-free multi-trylock object. Starvation-free multi-trylock. A multi-trylock provides exclusive write-access to a set Q of t-objects. Specifically, a multi-trylock exports the following operations:
• acquire(Q), which returns true or false
• release(Q), which releases the lock and returns ok
• isContended(X j ), X j ∈ Q, which returns true or false.
We assume that processes are well-formed: they never invoke a new operation on the multi-trylock before receiving a response from the previous invocation. We say that a process p i holds a lock on X j after an execution π if π contains an invocation of acquire(Q), X j ∈ Q, by p i that returned true, but does not contain a subsequent invocation of release(Q′), X j ∈ Q′, by p i in π.
We say that X j is locked after π by process p i if p i holds a lock on X j after π. We say that X j is contended by p i after an execution π if π contains an invocation of acquire(Q), X j ∈ Q, by p i but does not contain a subsequent response false to that invocation, or a return of release(Q′), X j ∈ Q′, by p i in π. Let an execution π contain the invocation i op of an operation op followed by a corresponding response r op (we say that π contains op). We say that X j is uncontended (resp., locked) during the execution of op in π if X j is uncontended (resp., locked) after every prefix of π that contains i op but does not contain r op . We implement a multi-trylock object whose operations are starvation-free. The algorithm is inspired by the Black-White Bakery Algorithm [121] and uses a finite number of bounded registers. A starvation-free multi-trylock implementation satisfies the following properties:
• Mutual-exclusion: For any object X j and any execution π, there exists at most one process that holds a lock on X j after π.
• Progress: Let π be any execution that contains acquire(Q) by process p i . If no other process p k , k ≠ i, contends infinitely long on some X j ∈ Q, then acquire(Q) returns true in π.
• Let π be any execution that contains isContended(X j ) invoked by p i .
-If X j is locked by p ℓ , ℓ ≠ i, during the complete execution of isContended(X j ) in π, then isContended(X j ) returns true.
-If, for every ℓ ≠ i, X j is never contended by p ℓ during the execution of isContended(X j ) in π, then isContended(X j ) returns false.
Our starvation-free multi-trylock in Algorithm 4.4 uses the following shared variables: registers r ij for each process p i and object X j , a shared bit color ∈ {B, W }, and, for each p i , registers LA i ∈ {0, . . . , N } that denote a Label and M C i ∈ {B, W }. We say (LA i , i) < (LA k , k) iff LA i < LA k , or LA i = LA k and i < k. We now prove the following invariant about the multi-trylock implementation. Lemma 4.22.
In every execution π of Algorithm 4.4, if p i holds a lock on some object X j after π, then for every process p k , k ≠ i, that contends on X j after π with LA k ≠ 0, one of the following conditions must hold: M C k = M C i and (LA i , i) < (LA k , k), or M C k ≠ M C i and M C i = color.

Shared variables: r ij , for each process p i and each t-object X j , initially 0
6: acquire(Q):
7: for all X j ∈ Q do
8:   write(r ij , 1)
9: c i := color
10: write(M C i , c i )
11: write(LA i , 1 + max({LA k | M C k = M C i }))
12: while ∃j : ∃k ≠ i : isContended(X j ) && (LA k ≠ 0) && ((M C k = M C i ) && ((LA k , k) < (LA i , i)) || . . . )
18: for all X j ∈ Q do
19:   write(r ij , 0)

Proof. In order to hold the lock on X j , some process p i writes 1 to r ij , writes a value, say W , to M C i , and reads the Labels of other processes that have obtained the same color as itself, generating a Label greater by one than the maximum Label read (Line 11). Observe that, until the value of the color bit is changed, all processes read the same value W . The first process p i to hold the lock on X j changes the color bit to B when releasing the lock, and hence the value read by all subsequent processes will be B until it is changed again. Now consider two cases: (1) Assume that there exists a process p k , k ≠ i, with LA k ≠ 0 and M C k = M C i such that (LA k , k) < (LA i , i), but p i holds a lock on X j after π. Then isContended(X j ) returns true to p i , because p k writes to r kj (Line 8) before writing to LA k (Line 11). By assumption, (LA k , k) < (LA i , i), LA k > 0 and M C i = M C k , but the conditional in Line 13 returned true to p i without waiting for p k to stop contending on X j , a contradiction. (2) Assume that there exists a process p k , k ≠ i, with LA k ≠ 0 and M C k ≠ M C i such that M C i ≠ color, but p i holds a lock on X j after π. Again, since LA k > 0, isContended(X j ) returns true to p i , M C k ≠ M C i and M C i ≠ color, but the conditional in Line 13 returned true to p i without waiting for p k to stop contending on X j , a contradiction.
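The doorway discipline above, where a process publishes its color and then takes a Label one greater than the maximum Label among same-colored processes, with ties broken by process id, can be sketched as follows. This is a sequential illustration of the (Label, id) priority order only, not the concurrent Algorithm 4.4; the class and method names are illustrative.

```python
class BakeryDoorway:
    """Sketch of the Black-White-Bakery-style labeling used by the
    multi-trylock: a process reads the shared color bit, publishes it,
    then takes a Label one greater than the maximum Label among
    processes that published the same color.  Priority between two
    contending same-colored processes is the lexicographic order on
    (Label, id).  Sequential toy, names illustrative."""

    def __init__(self, n):
        self.color = 'W'           # shared color bit
        self.LA = [0] * n          # Labels (0 = not contending)
        self.MC = [None] * n       # color published by each process

    def doorway(self, i):
        # analogue of Lines 9-11: read color, publish it, take a Label
        self.MC[i] = self.color
        same = [self.LA[k] for k in range(len(self.LA))
                if self.MC[k] == self.MC[i]]
        self.LA[i] = 1 + max(same)
        return self.LA[i]

    def has_priority(self, i, k):
        # process i goes before a same-colored contender k iff
        # (LA_i, i) < (LA_k, k) lexicographically
        return (self.LA[i], i) < (self.LA[k], k)
```

Two processes that execute the doorway concurrently may obtain equal Labels; the id component then makes the priority order total, which is what the proof of mutual exclusion relies on.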
We can thus prove the following theorem: Assume, by contradiction, that L does not provide mutual-exclusion: there exists an execution π after which processes p i and p k , k ≠ i, hold a lock on the same object, say X j . Since both p i and p k have performed the write to LA i and LA k , resp., in Line 11, we have LA i , LA k > 0. Consider two cases. (1) If M C k = M C i , then applying Lemma 4.22 to p i yields (LA i , i) < (LA k , k), while applying it to p k yields (LA k , k) < (LA i , i), a contradiction. (2) If M C k ≠ M C i , then Lemma 4.22 implies that both M C i = color and M C k = color, contradicting M C k ≠ M C i . L also ensures progress. If process p i wants to hold the lock on an object X j , i.e., invokes acquire(Q), X j ∈ Q, it checks whether any other process p k holds the lock on X j . If such a process p k exists and M C k = M C i , then clearly isContended(X j ) returns true for p i and (LA k , k) < (LA i , i). Thus, p i fails the conditional in Line 13 and waits until p k releases the lock on X j to return true. However, if p k contends infinitely long on X j , p i is also forced to wait indefinitely to be returned true from the invocation of acquire(Q). The same argument works when M C k ≠ M C i : when p k stops contending on X j , isContended(X j ) eventually returns false for p i , provided that p k does not contend infinitely long on X j . All operations performed by L are starvation-free. Each process p i that successfully holds the lock on an object X j in an execution π invokes acquire(Q), X j ∈ Q, obtains a color and chooses a value for LA i , since there is no way to be blocked while writing to LA i . The response of the operation acquire(Q) by p i is only delayed if there exists a concurrent invocation of acquire(Q′), X j ∈ Q′, by p k in π. In that case, process p i waits until p k invokes release(Q) and writes 0 to r kj , and eventually holds the lock on X j . The implementations of release and isContended are wait-free (and hence starvation-free), since they contain no unbounded loops or waiting statements. The implementation of isContended(X j ) only reads base objects. The implementation of release(Q) writes to a series of base objects (Line 18) and then reads a base object (Line 20), incurring a single RAW.
The implementation of acquire(Q) writes to base objects (Line 8), reads the shared bit color (Line 9)-one RAW, writes to a base object (Line 10), reads the Labels (Line 11)-one RAW, writes to its own Label and finally performs a sequence of reads when evaluating the conditional in Line 13-one RAW. Thus, Algorithm 4.4 incurs at most four RAWs. Strongly progressive TM from starvation-free multi-trylock. We now use the starvation-free multi-trylock to implement a starvation-free strongly progressive opaque TM implementation with constant expensive synchronization (Algorithm 4.5). The implementation is almost identical to the progressive TM implementation LP in Algorithm 4.1, except that the function calls to acquire and release the transaction's write set are replaced with analogous calls to a multi-trylock object. Theorem 4.24. Algorithm 4.5 implements a strongly progressive opaque TM implementation with starvation-free t-operations that uses invisible reads and employs at most four RAWs per transaction. Proof. (Opacity) Since Algorithm 4.5 is similar to the opaque progressive TM implementation in Algorithm 4.1, it is easy to adapt the proof of Lemma 4.9 to prove opacity for this implementation. 
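The commit path just described, which acquires the write set through the multi-trylock, re-validates the read set, installs the new values tagged with the writer's version, and releases, can be sketched sequentially as follows. The multi-trylock is replaced here by a trivial `locked` set, so this illustrates only the control flow of the construction, not the concurrent algorithm; all names are illustrative.

```python
class ToyTM:
    """Sequential sketch of the commit path of a trylock-based TM:
    acquire the write set, validate the read set, install new values,
    release.  The starvation-free multi-trylock is replaced by a plain
    'locked' set; illustration only, names are not from the thesis."""

    def __init__(self):
        self.mem = {}        # t-object -> (value, version)
        self.locked = set()  # stand-in for the multi-trylock

    def read(self, rset, x):
        # record the (value, version) snapshot in the read set
        rset[x] = self.mem.get(x, (None, 0))
        return rset[x][0]

    def try_commit(self, rset, wset, k):
        if not wset:
            return 'commit'            # read-only: no locking needed
        if not self._acquire(wset):
            return 'abort'
        # validation: abort if any read t-object changed since it was read
        if any(self.mem.get(x, (None, 0)) != snap for x, snap in rset.items()):
            self._release(wset)
            return 'abort'
        for x, v in wset.items():
            self.mem[x] = (v, k)       # install value with version k
        self._release(wset)
        return 'commit'

    def _acquire(self, wset):
        if any(x in self.locked for x in wset):
            return False
        self.locked.update(wset)
        return True

    def _release(self, wset):
        self.locked.difference_update(wset)
```

A transaction whose read set was overwritten by a concurrent committer fails validation and aborts, which is exactly the read-write-conflict abort case discussed in the progress argument.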
(TM-progress and TM-liveness) Every transaction T k in a TM M whose t-operations are defined by Algorithm 4.5 can be aborted in the following scenarios:
• read-validation failed in read k or tryC k , or
• read k or tryC k returned A k because some X j ∈ Rset(T k ) is locked (belongs to the write set of a concurrent transaction).

Shared objects: v j , for each t-object X j
3: L, a starvation-free multi-trylock object
4: Local variables:
5: Rset k , Wset k for every transaction T k ;
6:   dictionaries storing {X m , v m }
7: read k (X j ):
8: if X j ∉ Rset(T k ) then
9:   [ov j , k j ] := read(v j )
10:   Rset(T k ) := Rset(T k ) ∪ {X j , [ov j , k j ]}
11:   if isAbortable() then
12:     Return A k
13: Return ov j
tryC k ():
22: if Wset(T k ) = ∅ then
23:   Return C k
24: locked := L.acquire(Wset(T k ))
25: if isAbortable() then
26:   L.release(Wset(T k ))
27:   Return A k
28: for all X j ∈ Wset(T k ) do
29:   write(v j , (nv j , k))
30: L.release(Wset(T k ))
31: return C k
32: Function: isAbortable():
33: if ∃X j ∈ Rset(T k ) : X j ∉ Wset(T k ) ∧ L.isContended(X j ) then
34:   Return true
35: if validate() then
36:   Return true
37: Return false

Since in each of these cases a transaction is aborted only because of a read-write conflict with a concurrent transaction, it is easy to see that M is progressive. To show that Algorithm 4.5 also implements a strongly progressive TM, we need to show that, for every set of transactions that concurrently contend on a single t-object, at least one of the transactions is not aborted. Consider transactions T i and T k that concurrently attempt to execute tryC i and tryC k such that X j ∈ Wset i ∩ Wset k . Consequently, they both invoke the acquire operation of the multi-trylock (Line 24) and thus, by Theorem 4.23, both T i and T k must commit eventually. Also, if the validation of a t-read in T k fails, it means that the t-object is overwritten by some transaction T i such that T i precedes T k , implying that at least one of the transactions commits.
Otherwise, some t-object X j ∈ Rset(T k ) is locked, and read k or tryC k returns abort since the t-object is in the write set of a concurrent transaction T i . While it may still be possible that T i returns A i after acquiring the lock on Wset i , strong progressiveness only guarantees progress for sets of transactions that conflict on at most one t-object. Thus, in either case, for every set of transactions that conflict on at most one t-object, at least one transaction is not forcefully aborted. Starvation-free TM-liveness follows from the fact that the multi-trylock we use in the implementation of M provides starvation-free acquire and release operations.

On the cost of permissive opaque TMs

We have shown that (strongly) progressive TMs that allow a transaction to be aborted only on read-write conflicts have constant RAW/AWAR complexity. However, aborting on conflicts is not always necessary for TM-correctness. Ideally, we would like to derive TM implementations that are permissive, in the sense that a transaction is aborted only if committing it would violate TM-correctness. Definition 4.2 (Permissiveness). A TM implementation M is permissive with respect to TM-correctness C if, for every history H of M such that H ends with a response r k and replacing r k with some r k ′ ≠ A k gives a history that satisfies C, we have r k ≠ A k . Therefore, permissiveness does not allow a transaction to abort unless committing it would violate the execution's correctness. We first show that a transaction in a permissive opaque implementation can only be forcefully aborted if it tries to commit. Consider an opaque history H 0 of M that ends with an incomplete t-operation op i of a transaction T i . If op i is a write, then H 0 · ok i is also opaque: no write operation of the incomplete transaction T i appears in a serialization H 0 ′ of H 0 and, thus, H 0 ′ is also a serialization of H 0 · ok i .
If op i is a read(X) for some t-object X, then we can construct a serialization of H 0 · v, where v is the value of X written by the last committed transaction in H 0 preceding T i , or the initial value of X if there is no such transaction. It is easy to see that H 0 ′′ , obtained from H 0 ′ by adding read(X) · v at the end of T i , is a serialization of H 0 · read(X). In both cases, there exists a non-A i response r i to op i that preserves opacity of H 0 · r i and, thus, the only t-operation that can be forcefully aborted in an execution of M is tryC. We now show that an execution of a transaction in a permissive opaque TM implementation (providing starvation-free TM-liveness) may be required to perform at least one RAW/AWAR pattern per t-read. Proof. Consider an execution E of M consisting of transactions T 1 , T 2 , T 3 as shown in Figure 4.2: T 3 performs a t-read of X 1 , then T 2 performs a t-write on X 1 and commits, and finally T 1 performs a series of reads from objects X 1 , . . . , X m . Since the implementation is permissive, no transaction can be forcefully aborted in E, and the only valid serialization of this execution is T 3 , T 2 , T 1 . Note also that the execution generates a sequential history: each invocation of a t-operation is immediately followed by a matching response. Thus, since we assume starvation-freedom as a liveness property, such an execution exists. We consider read 1 (X k ), 2 ≤ k ≤ m, in the execution E. Imagine that we modify the execution E as follows. Immediately after read 1 (X k ) executed by T 1 , we add write 3 (X k , v) and tryC 3 executed by T 3 (let T C 3 (X k ) denote the complete execution of W 3 (X k , v) followed by tryC 3 ). Obviously, T C 3 (X k ) must return abort: neither can T 3 be serialized before T 1 , nor can T 1 be serialized before T 3 . On the other hand, if T C 3 (X k ) takes place just before read 1 (X k ), then T C 3 (X k ) must return commit, but read 1 (X k ) must return the value written by T 3 .
In other words, read 1 (X k ) and T C 3 (X k ) are strongly non-commutative [17]: both of them see the difference when ordered differently. As a result, intuitively, read 1 (X k ) needs to perform a RAW or AWAR to make sure that the order of these two "conflicting" operations is properly maintained. We formalize this argument below. Consider a modification E ′ of E, in which T 3 performs write 3 (X k ) immediately after read 1 (X k ) and then tries to commit. In any serialization of E ′ , T 3 must precede T 2 (read 3 (X 1 ) returns the initial value of X 1 ) and T 2 must precede T 1 to respect the real-time order of transactions. The execution of read 1 (X k ) does not modify base objects; hence, T 3 does not observe read 1 (X k ) in E ′ . Since M is permissive, T 3 must commit in E ′ . But since T 1 performs read 1 (X k ) before T 3 commits and T 3 updates X k , we also have that T 1 must precede T 3 in any serialization of E ′ ; the resulting cycle T 1 → T 3 → T 2 → T 1 shows that no such serialization exists. Consequently, each read 1 (X k ) must perform a write to a base object.

[Figure 4.2: transaction T 3 performs R 3 (X 1 ) → v; transaction T 2 performs W 2 (X 1 , nv) and commits (C 2 ); transaction T 1 performs R 1 (X 1 ) → nv, . . . , R 1 (X m ).]

Let π be the execution fragment that represents the complete execution of read 1 (X k ), and E k the prefix of E up to (but excluding) the invocation of read 1 (X k ). Clearly, π contains a write to a base object. Let π w be the first write to a base object in π. Thus, π can be represented as π s · π w · π f . Suppose that π does not contain a RAW or AWAR. Consider the execution fragment E k · π s · ρ, where ρ is the complete execution of T C 3 (X k ) by T 3 . Such an execution of M exists since π s does not perform any base object write; hence, E k · π s · ρ is indistinguishable to T 3 from E k · ρ. Since, by our assumption, π w · π f contains no RAW, any read performed in π w · π f can only be applied to base objects previously written in π w · π f . Since π w is not an AWAR, E k · π s · ρ · π w · π f is an execution of M , since it is indistinguishable to T 1 from E k · π.
In E k · π s · ρ · π w · π f , T 3 commits (as in ρ) but T 1 ignores the value written by T 3 to X k . But there exists no serialization that justifies this execution, a contradiction to the assumption that M is opaque. Thus, each read 1 (X k ), 2 ≤ k ≤ m, must contain a RAW/AWAR. Note that since all t-reads of T 1 are executed sequentially, all these RAW/AWAR patterns are pairwise non-overlapping, which completes the proof.

Related work and Discussion

In this section, we summarize the complexity bounds for blocking TMs presented in this chapter and identify some open questions. Sequential TMs. Theorem 4.2 improves the read-validation step-complexity lower bound [62,64] derived for strict data partitioning (a very strong version of DAP) and invisible reads. In a strict data-partitioned TM, the set of base objects used by the TM is split into disjoint sets, each storing information only about a single data item. Indeed, every TM implementation that is strict data-partitioned satisfies weak DAP, but not vice versa (cf. Section 2.6). The definition of invisible reads assumed in [62,64] requires that a t-read operation does not apply nontrivial events in any execution. Theorem 4.2, however, assumes weak invisible reads, stipulating that the t-read operations of a transaction T do not apply nontrivial events only when T is not concurrent with any other transaction. We believe that the TM-progress and TM-liveness restrictions, as well as the definitions of DAP and invisible reads we consider for this result, are the weakest possible assumptions that may be made. To the best of our knowledge, these assumptions cover every TM implementation that is subject to the validation step-complexity [36,39,79]. Progressive TMs. We summarize the known complexity bounds for progressive (resp., strongly progressive) TMs in Table 4.1 (resp., Table 4.2).
Guerraoui and Kapalka [64] proved that it is impossible to implement strictly serializable strongly progressive TMs that provide wait-free TM-liveness (every t-operation returns a matching response within a finite number of its own steps) using only read and write primitives. Permissive TMs. Crain et al. [34] proved that a permissive opaque TM implementation cannot maintain invisible reads, which inspired the derivation of our lower bound on RAW/AWAR complexity in Section 4.5. Furthermore, [34] described a permissive VWC TM implementation that ensures that t-read operations do not perform nontrivial primitives, but the tryCommit invoked by a read-only transaction performs a linear (in the size of the transaction's data set) number of RAW/AWARs. Thus, an open question is whether there exists a linear lower bound on RAW/AWAR complexity for TM-correctness properties weaker than opacity, such as VWC and TMS1.

5 Complexity bounds for non-blocking TMs

Overview

In the previous chapter, we presented complexity bounds for lock-based blocking TMs. Early TM implementations such as the popular DSTM [79], however, avoid using locks and provide non-blocking TM-progress. In this chapter, we present several complexity bounds for non-blocking TMs exemplified by obstruction-freedom, possibly the weakest non-blocking progress condition [78,82]. We first establish that it is impossible to implement a strictly serializable obstruction-free TM that provides both weak DAP and read invisibility. Indeed, popular obstruction-free TMs like DSTM [79] and FSTM [52] are weak DAP, but use visible reads for aborting pending writing transactions. Secondly, we show that a t-read operation in an n-process strictly serializable obstruction-free TM implementation may incur Ω(n) stalls. Specifically, we prove that every such TM implementation has an (n − 1)-stall execution for an invoked t-read operation. Thirdly, we prove that any RW DAP opaque obstruction-free TM implementation has an execution in which a read-only transaction incurs Ω(n) non-overlapping RAWs or AWARs.
Finally, we show that there exists a considerable complexity gap between blocking (i.e., progressive) and non-blocking (i.e., obstruction-free) TM implementations. We use the progressive opaque TM implementation LP described in Algorithm 4.1 (Chapter 4) to establish a linear separation in memory stall and RAW/AWAR complexity between blocking and non-blocking TMs. Formally, let OF denote the class of TMs that provide OF TM-progress and OF TM-liveness. Roadmap of Chapter 5. In Section 5.2, we show that no strictly serializable TM in OF can be weak DAP and have invisible reads. In Section 5.3, we determine stall complexity bounds for strictly serializable TMs in OF, and in Section 5.4, we present a linear (in n) lower bound on the RAW/AWAR complexity for RW DAP opaque TMs in OF. In Section 5.5, we describe two obstruction-free algorithms: a RW DAP opaque TM and a weak DAP (but not RW DAP) opaque TM. In Section 5.6, we present complexity gaps between blocking and non-blocking TM implementations. We conclude this chapter with a discussion on related work and open questions concerning obstruction-free TMs.

Impossibility of weak DAP and invisible reads

In this section, we prove that it is impossible to combine weak DAP and invisible reads for strictly serializable TMs in OF.

[Figure 5.1: executions in which T 0 performs R 0 (Z) → v and W 0 (X, nv) and invokes tryC 0 , with a nontrivial event e of T 0 ; T 2 reads the initial value v of X; T 1 reads the new value nv of X; and, in (b) and (c), T 3 writes the new value nv to Z. (a) T 1 returns the new value of X since T 2 is invisible; (b) T 1 and T 3 do not contend on any base object; (c) T 3 does not access the base object from the nontrivial event e.]

Here is a proof sketch: suppose, by contradiction, that such a TM implementation M exists.
Consider an execution E of M in which a transaction T 0 performs a t-read of t-object Z (returning the initial value v), writes nv (a new value) to t-object X, and commits. Let E ′ denote the longest prefix of E that cannot be extended with the t-complete step contention-free execution of any transaction that reads nv in X and commits.

[Figure 5.1(d): T 3 writes the new value nv to Z and precedes T 2 in real-time order.]

Thus, if T 0 takes one more step, then the resulting execution E ′ · e can be extended with the t-complete step contention-free execution of a transaction T 1 that reads nv in X and commits. Since M uses invisible reads, the following execution exists: E ′ can be extended with the t-complete step contention-free execution of a transaction T 2 that reads the initial value v in X and commits, followed by the step e of T 0 , after which transaction T 1 , running step contention-free, reads nv in X and commits. Moreover, this execution is indistinguishable to T 1 and T 2 from an execution in which the read set of T 0 is empty. Thus, we can modify this execution by inserting the step contention-free execution of a committed transaction T 3 that writes a new value to Z after E ′ , but preceding T 2 in real-time order. Intuitively, by weak DAP, transactions T 1 and T 2 cannot distinguish this execution from the original one in which T 3 does not participate. Thus, we can show that the following execution exists: E ′ is extended with the t-complete step contention-free execution of T 3 that writes nv to Z and commits, followed by the t-complete step contention-free execution of T 2 that reads the initial value v in X and commits, followed by the step e of T 0 , after which T 1 reads nv in X and commits. This execution is, however, not strictly serializable: T 0 must appear in any serialization (T 1 reads a value written by T 0 ). Transaction T 2 must precede T 0 , since the t-read of X by T 2 returns the initial value of X.
To respect the real-time order, T 3 must precede T 2 . Finally, T 0 must precede T 3 , since the t-read of Z returns the initial value of Z. The cycle T 0 → T 3 → T 2 → T 0 implies a contradiction. The formal proof follows. Theorem 5.1. There does not exist a weak DAP strictly serializable TM implementation in OF that uses invisible reads. Proof. By contradiction, assume that such an implementation M ∈ OF exists. Let v be the initial value of t-objects X and Z. Consider an execution E of M in which a transaction T 0 performs read 0 (Z) → v (returning v), writes nv ≠ v to X, and commits. Let E ′ denote the longest prefix of E that cannot be extended with the t-complete step contention-free execution of any transaction performing a t-read of X that returns nv and commits. Let e be the enabled event of transaction T 0 in the configuration after E ′ . Without loss of generality, assume that E ′ · e can be extended with the t-complete step contention-free execution of a committed transaction T 1 that reads X and returns nv. Let E ′ · e · E 1 be such an execution, where E 1 is the t-complete step contention-free execution fragment of transaction T 1 that performs read 1 (X) → nv and commits. We now prove that M has an execution of the form E ′ · E 2 · e · E 1 , where E 2 is the t-complete step contention-free execution fragment of transaction T 2 that performs read 2 (X) → v and commits. We observe that E ′ · E 2 is an execution of M . Indeed, by OF TM-progress and OF TM-liveness, T 2 must return a matching response that is not A 2 in E ′ · E 2 , and by the definition of E ′ , this response must be the initial value v of X. By the assumption of invisible reads, E 2 does not contain any nontrivial events. Consequently, E ′ · E 2 · e · E 1 is indistinguishable to transaction T 1 from the execution E ′ · e · E 1 . Thus, E ′ · E 2 · e · E 1 is also an execution of M (Figure 5.1a). Claim 5.2.
M has an execution of the form E ′ · E 2 · E 3 · e · E 1 , where E 3 is the t-complete step contention-free execution fragment of transaction T 3 that writes nv ≠ v to Z and commits. Proof. The proof is through a sequence of indistinguishability arguments to construct the execution. We first claim that M has an execution of the form E ′ · E 2 · e · E 1 · E 3 . Indeed, by OF TM-progress and OF TM-liveness, T 3 must be committed in E ′ · E 2 · e · E 1 · E 3 . Since M uses invisible reads, the execution E ′ · E 2 · e · E 1 · E 3 is indistinguishable to transactions T 1 and T 3 from the execution Ê · E 1 · E 3 , where Ê is the t-incomplete step contention-free execution of transaction T 0 with Wset Ê (T 0 ) = {X} and Rset Ê (T 0 ) = ∅ that writes nv to X. Observe that the execution E ′ · E 2 · e · E 1 · E 3 is indistinguishable to transactions T 1 and T 3 from the execution Ê · E 1 · E 3 , in which transactions T 3 and T 1 are disjoint-access. Consequently, by Lemma 2.10, T 1 and T 3 do not contend on any base object in Ê · E 1 · E 3 . Thus, M has an execution of the form E ′ · E 2 · e · E 3 · E 1 (Figure 5.1b). By the definition of E ′ , T 0 applies a nontrivial primitive to some base object, say b, in event e that T 1 must access in E 1 . Thus, the execution fragment E 3 does not contain any nontrivial event on b in the execution E ′ · E 2 · e · E 1 · E 3 . In fact, since T 3 is disjoint-access with T 0 in the execution Ê · E 3 · E 1 , by Lemma 2.10, it cannot access the base object b to which T 0 applies a nontrivial primitive in the event e. Thus, transaction T 3 must perform the same sequence of events E 3 immediately after E ′ , implying that M has an execution of the form E ′ · E 2 · E 3 · e · E 1 (Figure 5.1c). Finally, we observe that the execution E ′ · E 2 · E 3 · e · E 1 established in Claim 5.2 is indistinguishable to transactions T 2 and T 3 from an execution Ẽ · E 2 · E 3 · e · E 1 , where Wset(T 0 ) = {X} and Rset(T 0 ) = ∅ in Ẽ.
But transactions T 3 and T 2 are disjoint-access in Ẽ · E 2 · E 3 · e · E 1 and, by Lemma 2.10, T 2 and T 3 do not contend on any base object in this execution. Thus, M has an execution of the form E ′ · E 3 · E 2 · e · E 1 (Figure 5.1d) in which T 3 precedes T 2 in real-time order. However, the execution E ′ · E 3 · E 2 · e · E 1 is not strictly serializable: T 0 must be committed in any serialization, and transaction T 2 must precede T 0 since read 2 (X) returns the initial value of X. To respect the real-time order, T 3 must precede T 2 , while T 0 must precede T 1 since read 1 (X) returns nv, the value of X updated by T 0 . Finally, T 0 must precede T 3 since read 0 (Z) returns the initial value of Z. But there exists no such serialization, a contradiction.

A linear lower bound on memory stall complexity

We prove a linear (in n) lower bound for strictly serializable TM implementations in OF on the total number of memory stalls incurred by a single t-read operation. Inductively, for each k ≤ n − 1, we construct a specific k-stall execution [46] in which some t-read operation by a process p incurs k stalls. In the k-stall execution, k processes are partitioned into disjoint subsets S 1 , . . . , S i . The execution can be represented as α · σ 1 · · · σ i , where α is p-free and, in each σ j , j = 1, . . . , i, p first runs by itself, then each process in S j applies a nontrivial event on a base object b j , and then p applies an event on b j . Moreover, p does not detect step contention in this execution and, thus, must return a non-abort value in its t-read and commit in the solo extension of it. Additionally, it is guaranteed that, in any extension of α by the processes other than {p} ∪ S 1 ∪ S 2 ∪ . . . ∪ S i , no nontrivial primitive is applied on a base object accessed in σ 1 · · · σ i .
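The stall-counting convention underlying this bound can be illustrated with a toy model: an access to a base object incurs one memory stall for every other process with a pending nontrivial event on that object. This is a purely illustrative sketch of the counting rule, not the k-stall construction itself; all names are hypothetical.

```python
def count_stalls(schedule):
    """Toy illustration of the memory-stall convention used above: when
    a process accesses a base object on which k other processes have
    pending nontrivial (writing) events, the access incurs k stalls.
    A schedule is a list of (process, base_object, kind) triples with
    kind in {'pending-write', 'access'}; purely illustrative."""
    pending = {}   # base object -> set of processes with pending writes
    stalls = 0
    for proc, obj, kind in schedule:
        if kind == 'pending-write':
            pending.setdefault(obj, set()).add(proc)
        else:  # 'access': incur one stall per pending writer, then drain
            writers = pending.get(obj, set()) - {proc}
            stalls += len(writers)
            pending[obj] = set()
    return stalls
```

In the k-stall execution above, each fragment σ j contributes |S j | pending writers on b j before p's access, so p's t-read accumulates |S 1 | + . . . + |S i | = k stalls in total.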
Assuming that k ≤ n − 2, we introduce a not previously used process executing an updating transaction immediately after α, so that the subsequent t-read operation executed by p is "perturbed" (must return another value). This will help us to construct a (k + k ′ )-stall execution α · α ′ · σ 1 · · · σ i · σ i+1 , where k ′ > 0. The formal proof follows: Theorem 5.3. Every strictly serializable TM implementation M ∈ OF has an (n − 1)-stall execution E for a t-read operation performed in E. Proof. We proceed by induction. Observe that the empty execution is a 0-stall execution, since it vacuously satisfies the invariants of Definition 2.18. Let v be the initial value of t-objects X and Z. Let α = α 1 · · · α n−2 be a step contention-free execution of a strictly serializable TM implementation M ∈ OF, where, for all j ∈ {1, . . . , n − 2}, α j is the longest prefix of the execution fragment ᾱ j that denotes the t-complete step contention-free execution of committed transaction T j (invoked by process p j ) that performs read j (Z) → v and writes the value nv ≠ v to X in the execution α 1 · · · α j−1 · ᾱ j , such that
• tryC j () is incomplete in α j , and
• α 1 · · · α j cannot be extended with the t-complete step contention-free execution fragment of any transaction T n−1 or T n that performs exactly one t-read of X that returns nv and commits.
Assume, inductively, that α · σ 1 · · · σ i is a k-stall execution for read n (X) executed by process p n , where 0 ≤ k ≤ n − 2. By Definition 2.18, there are distinct base objects b 1 , . . . , b i accessed by disjoint sets of processes S 1 , . . . , S i in the execution fragment σ 1 · · · σ i , where |S 1 ∪ . . . ∪ S i | = k and σ 1 · · · σ i contains no events of processes not in S 1 ∪ . . . ∪ S i ∪ {p n }. We will prove that there exists a (k + k ′ )-stall execution for read n (X), for some k ′ ≥ 1. By Lemma 2.12, α · σ 1 · · · σ i is indistinguishable to T n from a step contention-free execution.
Let σ be the finite step contention-free execution fragment that extends α · σ 1 · · · σ i in which T n runs by itself: it completes read n (X) and returns a response. By OF TM-progress and OF TM-liveness, read n (X) and the subsequent tryC n must each return non-A n responses in α · σ 1 · · · σ i · σ. By construction of α and strict serializability of M , read n (X) must return the response v or nv in this execution. We prove that there exists an execution fragment γ performed by some process p n−1 ∉ ({p n } ∪ S 1 ∪ · · · ∪ S i ) extending α that contains a nontrivial event on some base object that must be accessed by read n (X) in σ 1 · · · σ i · σ. Consider the case that read n (X) returns the response nv in α · σ 1 · · · σ i · σ. We define a step contention-free fragment γ extending α that is the t-complete step contention-free execution of transaction T n−1 executed by some process p n−1 ∉ ({p n } ∪ S 1 ∪ · · · ∪ S i ) that performs read n−1 (X) → v, writes nv ≠ v to Z and commits. By definition of α, OF TM-progress and OF TM-liveness, M has an execution of the form α · γ. We claim that the execution fragment γ must contain a nontrivial event on some base object that must be accessed by read n (X) in σ 1 · · · σ i · σ. Suppose otherwise. Then, read n (X) must return the response nv in α · γ · σ 1 · · · σ i · σ. But the execution α · γ · σ 1 · · · σ i · σ is not strictly serializable. Since read n (X) → nv, there exists a transaction T q ∈ txns(α) that must be committed and must precede T n in any serialization. Transaction T n−1 must precede T n in any serialization to respect the real-time order, and T n−1 must precede T q in any serialization. Also, T q must precede T n−1 in any serialization. But there exists no such serialization. Consider the case that read n (X) returns the response v in α · σ 1 · · · σ i · σ.
In this case, we define the step contention-free fragment γ extending α as the t-complete step contention-free execution of transaction T n−1 executed by some process p n−1 ∉ ({p n } ∪ S 1 ∪ · · · ∪ S i ) that writes nv ≠ v to X and commits. By definition of α, OF TM-progress and OF TM-liveness, M has an execution of the form α · γ. By strict serializability of M , the execution fragment γ must contain a nontrivial event on some base object that must be accessed by read n (X) in σ 1 · · · σ i · σ. Suppose otherwise. Then, α · γ · σ 1 · · · σ i · σ is an execution of M in which read n (X) → v. But this execution is not strictly serializable: every transaction T q ∈ txns(α) must be aborted or must be preceded by T n in any serialization, but committed transaction T n−1 must precede T n in any serialization to respect the real-time ordering of transactions. But then read n (X) must return the new value nv of X that is updated by T n−1 , a contradiction. Since, by Definition 2.18, the execution fragment γ executed by some process p n−1 ∉ ({p n } ∪ S 1 ∪ · · · ∪ S i ) contains no nontrivial events on any base object accessed in σ 1 · · · σ i , it must contain a nontrivial event on some base object b i+1 ∉ {b 1 , . . . , b i } that is accessed by T n in the execution fragment σ. Let A denote the set of all finite ({p n } ∪ S 1 ∪ . . . ∪ S i )-free execution fragments that extend α. Let b i+1 ∉ {b 1 , . . . , b i } be the first base object accessed by T n in the execution fragment σ to which some transaction applies a nontrivial event in an execution fragment α′ ∈ A. Clearly, some such execution α · α′ exists that contains a nontrivial event in α′ on some distinct base object b i+1 not accessed in the execution fragment σ 1 · · · σ i . We choose the execution α · α′ , α′ ∈ A, that maximizes the number of transactions that are poised to apply nontrivial events on b i+1 in the configuration after α · α′ .
Let S i+1 denote the set of processes executing these transactions and k′ = |S i+1 | (k′ > 0 as already proved). We now construct a (k + k′)-stall execution α · α′ · σ 1 · · · σ i · σ i+1 for read n (X), where in σ i+1 , p n applies events by itself, then each of the processes in S i+1 applies a nontrivial event on b i+1 , and finally, p n accesses b i+1 . By construction, α · α′ is p n -free. Let σ i+1 be the prefix of σ not including T n 's first access to b i+1 , concatenated with the nontrivial events on b i+1 by each of the k′ transactions executed by processes in S i+1 , followed by the access of b i+1 by T n . Observe that T n performs exactly one t-operation read n (X) in the execution fragment σ 1 · · · σ i+1 and σ 1 · · · σ i+1 contains no events of processes not in ({p n } ∪ S 1 ∪ · · · ∪ S i ∪ S i+1 ). To complete the induction, we need to show that in every ({p n } ∪ S 1 ∪ · · · ∪ S i ∪ S i+1 )-free extension of α · α′ , no transaction applies a nontrivial event to any base object accessed in the execution fragment σ 1 · · · σ i · σ i+1 . Let β be any such execution fragment that extends α · α′ . By our construction, σ i+1 is the execution fragment that consists of events by p n on base objects accessed in σ 1 · · · σ i , nontrivial events on b i+1 by transactions in S i+1 and finally, an access to b i+1 by p n . Since α · σ 1 · · · σ i is a k-stall execution by our induction hypothesis, α′ · β is ({p n } ∪ S 1 ∪ . . . ∪ S i )-free and thus, α′ · β does not contain nontrivial events on any base object accessed in σ 1 · · · σ i . We now claim that β does not contain nontrivial events on b i+1 . Suppose otherwise. Thus, there exists some transaction T that has an enabled nontrivial event on b i+1 in the configuration after α · α′ · β′ , where β′ is some prefix of β. But this contradicts the choice of α · α′ as the extension of α that maximizes k′ . Thus, α · α′ · σ 1 · · · σ i · σ i+1 is indeed a (k + k′)-stall execution for T n , where k′ ≥ 1 and (k + k′) ≤ (n − 1).
Since there are at most n processes that are concurrent at any prefix of an execution, the lower bound of Theorem 5.3 is tight.

A linear lower bound on expensive synchronization for RW DAP

We prove that opaque, RW DAP TM implementations in OF have executions in which some read-only transaction performs a linear (in n) number of non-overlapping RAWs or AWARs. Prior to presenting the formal proof, we present an overview (the executions used in the proof are depicted in Figure 5.2). We first construct an execution of the form ρ̄ 1 · · · ρ̄ m , where for all j ∈ {1, . . . , m}; m = n − 3, ρ̄ j denotes the t-complete step contention-free execution of transaction T j that reads the initial value v in a distinct t-object Z j , writes a new value nv to a distinct t-object X j and commits. Observe that since any two transactions that participate in this execution are mutually read-write disjoint-access, they cannot contend on the same base object and, thus, the execution appears solo to each of them. Let each of two new transactions T n−1 and T n perform m t-reads on objects X 1 , . . . , X m . For j ∈ {1, . . . , m}, we now define ρ j to be the longest prefix of ρ̄ j such that ρ 1 · · · ρ j cannot be extended with the complete step contention-free execution fragment of T n−1 or T n in which the t-read of X j returns nv (Figure 5.2a). Let e j be the event by T j enabled after ρ 1 · · · ρ j . Let us count the number of indices j ∈ {1, . . . , m} such that T n−1 (resp., T n ) reads the new value nv in X j when it runs after ρ 1 · · · ρ j · e j . Without loss of generality, assume that T n−1 has more such indices j than T n . We are going to show that, in the worst case, T n must perform m/2 non-overlapping RAW/AWARs in the course of performing m t-reads of X 1 , . . . , X m immediately after ρ 1 · · · ρ m . Consider any j ∈ {1, . . . , m} such that T n−1 , when it runs step contention-free after ρ 1 · · · ρ j · e j , reads nv in X j .
We claim that, in ρ 1 · · · ρ m extended with the step contention-free execution of T n performing j t-reads read n (X 1 ) · · · read n (X j ), the t-read of X j must contain a RAW or an AWAR. Suppose not. Then we are going to schedule a specific execution of T j and T n−1 concurrently with read n (X j ) so that T n cannot detect the concurrency. By the definition of ρ j and the fact that the TM is RW DAP, T n , when it runs step contention-free after ρ 1 · · · ρ m , must read v (the initial value) in X j (Figure 5.2b). Then the following execution exists: ρ 1 · · · ρ m is extended with the t-complete step contention-free execution of T n−2 writing nv to Z j and committing, after which T n runs step contention-free and reads v in X j (Figure 5.2c). Since, by the assumption, read n (X j ) contains no RAWs or AWARs, we show that we can run T n−1 performing j t-reads concurrently with the execution of read n (X j ) so that T n and T n−1 are unaware of step contention and read n−1 (X j ) still reads the value nv in X j . To understand why this is possible, consider the following: we take the execution depicted in Figure 5.2c, but without the execution of read n (X j ), i.e., ρ 1 · · · ρ m is extended with the step contention-free execution of committed transaction T n−2 writing nv to Z j , after which T n runs step contention-free performing j − 1 t-reads. This execution can be extended with the step e j by T j , followed by the step contention-free execution of transaction T n−1 in which it reads nv in X j . Indeed, by RW DAP and the definition of ρ j · e j , there exists such an execution (Figure 5.2d). Since read n (X j ) contains no RAWs or AWARs, we can reschedule the execution fragment e j followed by the execution of T n−1 so that it is concurrent with the execution of read n (X j ) and neither T n nor T n−1 sees a difference (Figure 5.2e). Therefore, in this execution, read n (X j ) still returns v, while read n−1 (X j ) returns nv.
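The argument above rests on what a t-read that performs no RAW (read-after-write) and no AWAR (atomic-write-after-read, e.g., a cas) can observe. A minimal sketch of the two patterns (the `BaseObject` class and the flag names are ours, for illustration only):

```python
import threading

class BaseObject:
    """Toy base object supporting read/write and cas (the AWAR primitive)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def write(self, v):
        self._value = v

    def cas(self, expected, new):
        # Atomic-write-after-read: read and write in one indivisible step.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# RAW pattern: write one base object, then read a *different* one. A "visible"
# reader announces itself (the write) and then checks for concurrent writers
# (the read); without such a RAW, a concurrent writer can be missed entirely,
# which is exactly what the rescheduling argument exploits.
reader_flag, writer_flag = BaseObject(0), BaseObject(0)
reader_flag.write(1)              # W: announce the read
saw_writer = writer_flag.read()   # then R of a different base object

# AWAR pattern: a single cas, e.g., forcefully aborting a live transaction.
status = BaseObject("live")
assert status.cas("live", "aborted")       # status was live: abort succeeds
assert not status.cas("live", "aborted")   # already aborted: cas fails
```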
However, the resulting execution (Figure 5.2e) is not opaque. In any serialization the following must hold. Since T n−1 reads the value written by T j in X j , T j must be committed. Since read n (X j ) returns the initial value v, T n must precede T j . The committed transaction T n−2 , which writes a new value to Z j , must precede T n to respect the real-time order on transactions. However, T j must precede T n−2 since read j (Z j ) returns the initial value and the implementation is opaque. The cycle T j → T n−2 → T n → T j implies a contradiction. Thus, we can show that transaction T n must perform Ω(n) RAW/AWARs during the execution of m t-reads immediately after ρ 1 · · · ρ m .

Figure 5.2: Executions in the proof of Theorem 5.4. (a) Transactions in {T 1 , . . . , T m }; m = n − 3 are mutually read-write disjoint-access and concurrent; each is poised to apply a nontrivial primitive. (b) T n performs m t-reads; each read n (X j ) returns the initial value v. (c) T n−2 commits; T n is read-write disjoint-access with T n−2 . (d) T n−1 is read-write disjoint-access with T n−2 ; read n−1 (X j ) returns the value nv. (e) Suppose read n (X j ) does not perform a RAW/AWAR; then T n and T n−1 are unaware of step contention and T n misses the event of T j , but read n−1 (X j ) returns the value of X j that is updated by T j .

Theorem 5.4. Every opaque, RW DAP TM implementation M ∈ OF has an execution in which some read-only transaction performs Ω(n) non-overlapping RAWs or AWARs.

Proof. For all j ∈ {1, . . . , m}; m = n − 3, let v be the initial value of t-objects X j and Z j .
Throughout this proof, we assume that, for all i ∈ {1, . . . , n}, transaction T i is invoked by process p i . By OF TM-progress and OF TM-liveness, any opaque and RW DAP TM implementation M ∈ OF has an execution of the form ρ̄ 1 · · · ρ̄ m , where for all j ∈ {1, . . . , m}, ρ̄ j denotes the t-complete step contention-free execution of transaction T j that performs read j (Z j ) → v, writes value nv ≠ v to X j and commits. By construction, any two transactions that participate in ρ̄ 1 · · · ρ̄ m are mutually read-write disjoint-access and cannot contend on the same base object. It follows that for all 1 ≤ j ≤ m, ρ̄ 1 · · · ρ̄ j is an execution of M . For all j ∈ {1, . . . , m}, we iteratively define an execution ρ j of M as follows: it is the longest prefix of ρ̄ j such that ρ 1 · · · ρ j cannot be extended with the complete step contention-free execution fragment of transaction T n that performs j t-reads: read n (X 1 ) · · · read n (X j ) in which read n (X j ) → nv, nor with the complete step contention-free execution fragment of transaction T n−1 that performs j t-reads: read n−1 (X 1 ) · · · read n−1 (X j ) in which read n−1 (X j ) → nv (Figure 5.2a). For any j ∈ {1, . . . , m}, let e j be the event transaction T j is poised to apply in the configuration after ρ 1 · · · ρ j . Thus, the execution ρ 1 · · · ρ j · e j can be extended with the complete step contention-free execution of at least one of transaction T n or T n−1 that performs j t-reads of X 1 , . . . , X j in which the t-read of X j returns the new value nv. Let T n−1 be the transaction that must return the new value for the maximum number of X j 's when ρ 1 · · · ρ j · e j is extended with the t-reads of X 1 , . . . , X j . We show that, in the worst case, transaction T n must perform m/2 non-overlapping RAW/AWARs in the course of performing m t-reads of X 1 , . . . , X m immediately after ρ 1 · · · ρ m .
Symmetric arguments apply for the case when T n must return the new value for the maximum number of X j 's when ρ 1 · · · ρ j · e j is extended with the t-reads of X 1 , . . . , X j . For succinctness, let α = ρ 1 · · · ρ m · γ · δ j−1 . We now prove that if π j does not contain a RAW or an AWAR, we can define π j 1 · π j 2 = π j to construct an execution of the form α · π j 1 · e j · β · π j 2 (Figure 5.2e) such that
• no event in π j 1 is the application of a nontrivial primitive,
• α · π j 1 · e j · β · π j 2 is indistinguishable to T n from the step contention-free execution α · π j 1 · π j 2 ,
• α · π j 1 · e j · β · π j 2 is indistinguishable to T n−1 from the step contention-free execution α · e j · β.
The following claim defines π j 1 and π j 2 to construct this execution.

Claim 5.7. For all j ∈ {1, . . . , m}, M has an execution of the form α · π j 1 · e j · β · π j 2 .

Proof. Let t be the first event containing a write to a base object in the execution fragment π j . We represent π j as the execution fragment π j 1 · t · π j f . Since π j 1 does not contain nontrivial events that write to a base object, α · π j 1 · e j · β is indistinguishable to transaction T n−1 from the step contention-free execution α · e j · β (as already proved in Claim 5.6). Consequently, α · π j 1 · e j · β is an execution of M . Since t is not an atomic-write-after-read, M has an execution of the form α · π j 1 · e j · β · t. Secondly, since π j does not contain a read-after-write, any read of a base object performed in π j f may only be of base objects previously written in t · π j f . Thus, α · π j 1 · e j · β · t · π j f is indistinguishable to T n from the step contention-free execution α · π j 1 · t · π j f . But, as already proved, α · π j is an execution of M . Choosing π j 2 = t · π j f , it follows that M has an execution of the form α · π j 1 · e j · β · π j 2 . We have now proved that, for all j ∈ {1, . . .
, m}, M has an execution of the form ρ 1 · · · ρ m · γ · δ j−1 · π j 1 · e j · β · π j 2 (Figure 5.2e). The execution in Figure 5.2e is not opaque. Indeed, in any serialization the following must hold. Since T n−1 reads the value written by T j in X j , T j must be committed. Since read n (X j ) returns the initial value v, T n must precede T j . The committed transaction T n−2 , which writes a new value to Z j , must precede T n to respect the real-time order on transactions. However, T j must precede T n−2 since read j (Z j ) returns the initial value of Z j . The cycle T j → T n−2 → T n → T j implies that there exists no such serialization. Thus, for each j ∈ J, transaction T n must perform a RAW or an AWAR during the t-read of X j in the course of performing m t-reads of X 1 , . . . , X m immediately after ρ 1 · · · ρ m . Since |J| ≥ (n − 3)/2, in the worst case, T n must perform Ω(n) RAW/AWARs during the execution of m t-reads immediately after ρ 1 · · · ρ m .

Algorithms for obstruction-free TMs

In this section, we present two opaque obstruction-free TM implementations: the first one satisfies RW DAP, but not strict DAP, while the second one satisfies weak DAP, but not RW DAP.

An opaque RW DAP TM implementation

In this section, we describe a RW DAP TM implementation in OF (based on DSTM [79]). Every t-object X m maintains a base object tvar [m] and every transaction T k maintains a status[k] base object. Both base objects support the read, write and compare-and-swap (cas) primitives. The object tvar [m] stores a triple: the owner of X m is an updating transaction that performs the latest write to X m ; the old value and new value of X m represent the two latest versions of X m . The base object status[k] denotes whether T k is live (i.e., t-incomplete), committed or aborted. Intuitively, if status[k] is committed, then other transactions can safely read the value of the t-objects updated by T k .
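This state, together with the operation logic described next, can be modeled with a short sequential sketch (illustrative Python; the class and method names are ours, and the real Algorithm 5.1 installs the triple and updates statuses with cas and additionally validates the read set at commit time):

```python
LIVE, COMMITTED, ABORTED = "live", "committed", "aborted"

class TM:
    """Sequential sketch of the DSTM-style state of Algorithm 5.1."""
    def __init__(self, nobjs, ntxns):
        self.tvar = [(None, 0, 0)] * nobjs           # (owner, old value, new value)
        self.status = [LIVE] * ntxns
        self.rset = [dict() for _ in range(ntxns)]   # k -> {m: value read}

    def _current(self, m):
        owner, old, new = self.tvar[m]
        if owner is not None and self.status[owner] == LIVE:
            self.status[owner] = ABORTED             # forcefully abort a live owner
        if owner is not None and self.status[owner] == COMMITTED:
            return new
        return old

    def read(self, k, m):
        v = self._current(m)
        # abort if any t-object previously read has been updated since
        if any(self._current(j) != u for j, u in self.rset[k].items()):
            self.status[k] = ABORTED
            return ABORTED
        self.rset[k][m] = v
        return v

    def write(self, k, m, v):
        old = self._current(m)                       # aborts a live owner; a committed
        self.tvar[m] = (k, old, v)                   # owner's new value becomes old

    def tryC(self, k):
        if self.status[k] == LIVE:                   # real algorithm: cas live -> committed
            self.status[k] = COMMITTED
        return self.status[k]

tm = TM(nobjs=2, ntxns=3)
tm.write(0, 0, 42)                  # T0 writes 42 to X0
assert tm.tryC(0) == COMMITTED
assert tm.read(1, 0) == 42          # T1 reads T0's committed value
tm.write(2, 1, 7)                   # T2 writes X1 but never commits
assert tm.read(1, 1) == 0           # T1 forcefully aborts T2, reads the old value
assert tm.status[2] == ABORTED
```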
The implementation of read k (X m ) first reads tvar [m] and checks if the owner of X m is live; if so, it forcefully aborts the owning transaction and returns the old value of X m . Otherwise, if the owner is committed, it returns the new value of X m . In both cases, it only returns a non-abort value if no t-object previously read has been updated since. The write k (X m , v) operation works similarly to the read k (X m ) implementation; but additionally, if the owner of X m is live, it forcefully aborts the owning transaction, assumes ownership of X m , sets v as the new value of X m and leaves the old value of X m unchanged. Otherwise, if the owner of X m is a committed transaction, it updates the old value of X m to be the value of X m updated by its previous owner. The tryC k implementation sets status[k] to committed if it has not been set to aborted by a concurrent transaction; otherwise T k is deemed aborted. Since any t-read operation performs at most two AWARs and the tryC performs only a single AWAR, any read-only transaction T performs at most O(|Rset(T )|) AWARs. The pseudocode is described in Algorithm 5.1.

Proof. Since opacity is a safety property, we only consider finite executions [18]. Let E be any finite execution of Algorithm 5.1. Let < E denote a total order on events in E. Let H denote a subsequence of E constructed by selecting linearization points of t-operations performed in E. The linearization point of a t-operation op is associated with a base object event or an event performed during the execution of op using the following procedure.

Completions. First, we obtain a completion of E by removing some pending invocations and adding responses to the remaining pending invocations involving a transaction T k as follows: every incomplete read k , write k and tryC k operation is removed from E.

Linearization points.
We now associate linearization points to t-operations in the obtained completion of E as follows:
• For every t-read op k that returns a non-A k value, the linearization point of op k is chosen as the event in Line 12 of Algorithm 5.1; otherwise, it is chosen as the invocation event of op k .
• For every t-write op k that returns a non-A k value, the linearization point of op k is chosen as the event in Line 37 of Algorithm 5.1; otherwise, it is chosen as the invocation event of op k .
• For every op k = tryC k that returns C k , the linearization point of op k is associated with Line 65.
< H denotes a total order on t-operations in the complete sequential history H.

Serialization points. The serialization point of a transaction T j , denoted δ Tj , is associated with the linearization point of a t-operation performed during the execution of the transaction. We obtain a t-complete history H̄ from H as follows: for every transaction T k in H that is complete, but not t-complete, we insert tryC k · A k after the last event of T k . H̄ is thus a t-complete sequential history. A t-complete t-sequential history S equivalent to H̄ is obtained by associating serialization points to transactions in H̄ as follows:
• If T k is an update transaction that commits, then δ T k is tryC k .
• If T k is an aborted or read-only transaction in H̄, then δ T k is assigned to the linearization point of the last t-read that returned a non-A k value in T k .
< S denotes a total order on transactions in the t-sequential history S.

Claim 5.9. If T i ≺ RT H T j , then T i < S T j .

Proof. This follows from the fact that, for a given transaction, its serialization point is chosen between the first and last event of the transaction; hence, if T i ≺ H T j , then δ Ti < E δ Tj and thus T i < S T j .

Claim 5.10. If transaction T i returns C i in E, then status[i]=committed in E.

Proof. Transaction T i must perform the event in Line 66 before returning C i , i.e., the cas on its own status to change the value to committed.
The proof now follows from the fact that any other transaction may change the status of T i only if it is live (Lines 45 and 21).

Claim 5.11. S is legal.

Proof. Observe that for every read j (X) → v, there exists some transaction T i that performs write i (X, v) and completes the event in Line 22 to write v as the new value of X, such that read j (X) ⊀ RT H write i (X, v). For any updating committing transaction T i , δ Ti = tryC i . Since read j (X) returns the response v, the event in Line 12 must succeed the event in Line 66 when T i changes status[i] to committed. Suppose otherwise; then read j (X) subsequently forces T i to abort by writing aborted to status[i] and must return the old value of X that is updated by the previous owner of X, which must be committed in E (Line 40). Since δ Ti = tryC i precedes the event in Line 66, it follows that δ Ti < E read j (X) . We now need to prove that δ Ti < E δ Tj . Consider the following cases:
• If T j is an updating committed transaction, then δ Tj is assigned to tryC j . But since read j (X) < E tryC j , it follows that δ Ti < E δ Tj .
• If T j is a read-only or aborted transaction, then δ Tj is assigned to the last t-read that did not abort. Again, it follows that δ Ti < E δ Tj .
To prove that S is legal, we need to show that there does not exist any transaction T k that returns C k in S and performs write k (X, v′); v′ ≠ v, such that T i < S T k < S T j . Now, suppose by contradiction that there exists a committed transaction T k , X ∈ Wset(T k ), that writes v′ ≠ v to X such that T i < S T k < S T j . Since T i and T k are both updating transactions that commit,
(T i < S T k ) ⇐⇒ (δ Ti < E δ T k )
(δ Ti < E δ T k ) ⇐⇒ ( tryC i < E tryC k )
Since T j reads the value of X written by T i , one of the following is true: tryC i < E tryC k < E read j (X) or tryC i < E read j (X) < E tryC k .
If tryC i < E tryC k < E read j (X) , then the event in Line 66 performed by T k when it changes the status field to committed precedes the event in Line 12 performed by T j . Since tryC i < E tryC k and both T i and T k are committed in E, T k must perform the event in Line 37 after T i changes status[i] to committed, since otherwise T k would perform the event in Line 45 and change status[i] to aborted, thereby forcing T i to return A i . However, read j (X) observes that the owner of X is T k and, since the status of T k is committed at this point in the execution, read j (X) must return v′ and not v, a contradiction. Thus, tryC i < E read j (X) < E tryC k . We now need to prove that δ Tj indeed precedes δ T k = tryC k in E. Now consider two cases:
• Suppose that T j is a read-only transaction. Then, δ Tj is assigned to the last t-read performed by T j that returns a non-A j value. If read j (X) is not the last t-read that returned a non-A j value, then there exists a read j (X′) such that read j (X) < E tryC k < E read j (X′) . But then this t-read of X′ must abort, since the value of X has been updated by T k after T j first read X, a contradiction.
• Suppose that T j is an updating transaction that commits; then δ Tj = tryC j , which implies that read j (X) < E tryC k < E tryC j . Then, T j must necessarily perform the validation of its read set in Line 65 and return A j , a contradiction.
Claims 5.9 and 5.11 establish that Algorithm 5.1 is opaque. To prove OF TM-progress, we proceed by enumerating the cases under which a transaction T k may be aborted in any execution.
• Suppose that there exists a read k (X m ) performed by T k that returns A k . If read k (X m ) returns A k in Line 28, then there exists a concurrent transaction that updated a t-object in Rset(T k ) or changed status[k] to aborted. In both cases, T k returns A k only because there is step contention.
• Suppose that there exists a write k (X m , v) performed by T k that returns A k in Line 54.
Thus, either a concurrent transaction has changed status[k] to aborted or the value in tvar [m] has been updated since the event in Line 37. In both cases, T k returns A k only because of step contention with another transaction.
• Suppose that a read k (X m ) or write k (X m , v) returns A k in Line 21 or 45, respectively. Thus, a concurrent transaction has taken steps by updating the status of owner m since the read by T k in Line 12 or 37, respectively.
• Suppose that tryC k () returns A k in Line 62. This is because there exists a t-object in Rset(T k ) that has been updated by a concurrent transaction since it was read, i.e., tryC k () returns A k only on encountering step contention.
It follows that in any step contention-free execution of a transaction T k from a T k -free execution, T k must return C k after taking a finite number of steps. The enumeration above also proves that M implements a progressive TM.

(Read-write disjoint-access parallelism) Consider any execution E of Algorithm 5.1 and let T i and T j be any two transactions that contend on a base object b in E. We need to prove that there is a path between a t-object in Dset(T i ) and a t-object in Dset(T j ) in G̃(T i , T j , E), or there exists X ∈ Dset(T i ) ∩ Dset(T j ). Recall that there exists an edge between t-objects X and Y in G̃(T i , T j , E) only if there exists a transaction T ∈ txns(E) such that {X, Y } ⊆ Wset(T ).
• Suppose that T i and T j contend on base object tvar[m] belonging to t-object X m in E. By Algorithm 5.1, a transaction T accesses X m only if X m ∈ Dset(T ). Thus, both T i and T j must access X m .
• Suppose that T i and T j contend on base object status[i] in E (the case when T i and T j contend on status[j] is symmetric). T j accesses status[i] while performing a t-read of some t-object X in Lines 15 and 21 only if T i is the owner of X. Also, T j accesses status[i] while performing a t-write to X in Lines 39 and 45 only if T i is the owner of X.
But if T i is the owner of X, then X ∈ Wset(T i ).
• Suppose that T i and T j contend on base object status[m] belonging to some transaction T m in E. Firstly, observe that T i or T j access status[m] only if there exist t-objects X and Y in Dset(T i ) and Dset(T j ) respectively such that {X, Y } ⊆ Wset(T m ). This is because T i and T j would both read status[m] in Lines 15 (during t-read) and 39 (during t-write) only if T m was the previous owner of X and Y . Secondly, one of T i or T j applies a nontrivial primitive to status[m] only if T i and T j read status[m]=live in Lines 15 (during t-read) and 37 (during t-write). Thus, at least one of T i or T j is concurrent to T m in E. It follows that there exists a path between X and Y in G̃(T i , T j , E).

(Complexity) Every t-read operation performs at most one AWAR in an execution E (Line 21) of Algorithm 5.1. It follows that any read-only transaction T k ∈ txns(E) performs at most |Rset(T k )| AWARs in E. The linear step-complexity is immediate from the fact that during the t-read operations, the transaction validates its entire read set (Line 25). All other t-operations incur O(1) step-complexity since they involve no iteration statements like for and while loops. Since at most n − 1 transactions may be t-incomplete at any point in an execution E, it follows that E is at most an (n − 1)-stall execution for any t-read op, and every T ∈ txns(E) incurs O(n) stalls on account of any event performed in E. More specifically, consider the following execution E: for all i ∈ {1, . . . , n − 1}, each transaction T i performs write i (X m , v) in a step contention-free execution until it is poised to apply a nontrivial event on tvar [m] (Line 22). By OF TM-progress, we construct E such that each of the T i is poised to apply a nontrivial event on tvar [m] after E. Consider the execution fragment of read n (X m ) that is poised to perform an event e that reads tvar [m] (Line 12) immediately after E.
In the constructed execution, T n incurs O(n) stalls on account of e and thus produces the desired (n − 1)-stall execution for read n (X).

An opaque weak DAP TM implementation

In this section, we describe a weak DAP TM implementation in OF with constant step-complexity t-read operations. Algorithm 5.2 describes a weak DAP implementation in OF that does not satisfy read-write DAP. The code for the t-write operations is identical to Algorithm 5.1. During the t-read of t-object X m by transaction T k , T k becomes the owner of X m , thus eliminating the per-read validation step-complexity inherent to Algorithm 5.1. Similarly, tryC k does not involve validation of T k 's read set; the implementation simply sets status[k] = committed and returns C k .

Proof. The proofs of opacity, TM-liveness and TM-progress are almost identical to the analogous proofs for Algorithm 5.1.

(Weak disjoint-access parallelism) Consider any execution E of Algorithm 5.2 and let T i and T j be any two transactions that contend on a base object b in E. We need to prove that there is a path between a t-object in Dset(T i ) and a t-object in Dset(T j ) in G̃(T i , T j , E), or there exists X ∈ Dset(T i ) ∩ Dset(T j ). Recall that there exists an edge between t-objects X and Y in G̃(T i , T j , E) only if there exists a transaction T ∈ txns(E) such that {X, Y } ⊆ Dset(T ).
• Suppose that T i and T j contend on base object tvar[m] belonging to t-object X m in E. By Algorithm 5.2, a transaction T accesses X m only if X m ∈ Dset(T ). Thus, both T i and T j must access X m .
• Suppose that T i and T j contend on base object status[i] in E (the case when T i and T j contend on status[j] is symmetric). T j accesses status[i] while performing a t-read of some t-object X in Lines 4 and 10 only if T i is the owner of X.
Also, T j accesses status[i] while performing a t-write to X in Lines 39 and 45 only if T i is the owner of X. But if T i is the owner of X, then X ∈ Dset(T i ).
• Suppose that T i and T j contend on base object status[m] belonging to some transaction T m in E. Firstly, observe that T i or T j access status[m] only if there exist t-objects X and Y in Dset(T i ) and Dset(T j ) respectively such that {X, Y } ⊆ Dset(T m ). This is because T i and T j would both read status[m] in Lines 4 (during t-read) and 39 (during t-write) only if T m was the previous owner of X and Y . Secondly, one of T i or T j applies a nontrivial primitive to status[m] only if T i and T j read status[m]=live in Lines 4 (during t-read) and 37 (during t-write). Thus, at least one of T i or T j is concurrent to T m in E. It follows that there exists a path between X and Y in G̃(T i , T j , E).

(Complexity) Since no t-operation's implementation contains iteration statements (like for and while loops), the proof follows.

Why Transactional memory should not be obstruction-free

As a synchronization abstraction, TM came as an alternative to conventional lock-based synchronization, and it therefore appears natural that early TM implementations [52,79,101,120] avoided using locks. Instead, early TM designs relied on non-blocking synchronization, where a prematurely halted transaction cannot prevent all other transactions from committing. Possibly the weakest progress condition elucidating non-blocking TM-progress is obstruction-freedom. However, in 2005, Ennals [48] argued that obstruction-free TMs inherently yield poor performance, because they require transactions to forcefully abort each other. Ennals further described a lock-based TM implementation [47] satisfying progressiveness that he claimed to outperform DSTM [79], the most referenced obstruction-free TM implementation at the time.
Inspired by [48], more recent lock-based progressive TMs, such as TL [40], TL2 [39] and NOrec [36], demonstrate better performance than obstruction-free TMs on most workloads. There is a considerable amount of empirical evidence on the performance gap between non-blocking (obstruction-free) and blocking (progressive) TM implementations, but no analytical result explains it. We present complexity lower and upper bounds that provide such an explanation. To exhibit a complexity gap between blocking and non-blocking TMs, we go back to the progressive opaque TM implementation LP (Algorithm 4.1) that beats the impossibility result and the lower bounds we established for obstruction-free TMs. Recall that our implementation LP (1) uses only read-write base objects and provides wait-free TM-liveness, (2) In fact, (iv) and (v) exhibit a linear separation between blocking and non-blocking TMs w.r.t. expensive synchronization and memory stall complexity, respectively. Altogether, our results exhibit a considerable complexity gap between progressive and obstruction-free TMs, as summarized in Figure 5.3, that seems to justify the shift in TM practice (circa 2005) from non-blocking to blocking TMs. Overcoming our lower bounds for obstruction-free TMs individually is comparatively easy. Say, TL [40] combines strict DAP with invisible reads, but it is not read-write, and it does not provide constant RAW/AWAR and stall complexities. Coming up with a single algorithm that beats all these lower bounds is quite nontrivial. Our algorithm LP incurs the cost of incremental validation, i.e., checking that the current read set has not changed per every new read operation. This is, however, unavoidable for invisible read algorithms (cf. Theorem 4.2), and is, in fact, believed to yield better performance in practice than "visible" reads [36,40,47], and we show that it enables constant stall and RAW/AWAR complexity.
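The incremental-validation cost mentioned above can be sketched as follows. This is a toy model with hypothetical names, not the actual LP implementation: each t-object carries a version counter, readers write no base objects (invisible reads), and every new t-read re-validates the whole read set, with the reader aborting itself on conflict rather than aborting the writer.

```python
# Toy progressive TM with invisible reads and incremental validation.
class AbortException(Exception):
    pass

class ProgressiveTM:
    def __init__(self):
        self.mem = {}   # t-object -> (value, version)

    def write(self, wset, x, v):
        wset[x] = v     # writes are buffered until commit

    def read(self, rset, x):
        val, ver = self.mem.get(x, (0, 0))
        rset[x] = ver
        # incremental validation: the whole read set must be unchanged
        for y, seen in rset.items():
            if self.mem.get(y, (0, 0))[1] != seen:
                raise AbortException
        return val

    def commit(self, rset, wset):
        for y, seen in rset.items():
            if self.mem.get(y, (0, 0))[1] != seen:
                raise AbortException
        for x, v in wset.items():
            _, ver = self.mem.get(x, (0, 0))
            self.mem[x] = (v, ver + 1)

tm = ProgressiveTM()
r1 = {}                        # T1's (invisible) read set
tm.read(r1, "X")               # T1 reads X
w2 = {}
tm.write(w2, "X", 7)
tm.commit({}, w2)              # a concurrent T2 commits a write to X
try:
    tm.read(r1, "Y")           # T1's next read re-validates and detects it
    outcome = "committed"
except AbortException:
    outcome = "aborted"        # T1 aborts itself: progressive, not OF
assert outcome == "aborted"
```

The validation loop is the linear per-read cost the text refers to; in exchange, reads never write base objects, enabling constant stall and RAW/AWAR complexity.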
Related work and Discussion
In this section, we summarize the results presented in this chapter and identify some unresolved questions. Lower bounds for non-blocking TMs. Complexity of obstruction-free TMs was first studied by Guerraoui and Kapalka [60,64], who proved that they cannot provide strict DAP. However, as we show in Section 5.5, it is possible to realize weaker (than strict) DAP variants of obstruction-free opaque TMs. Bushkov et al. [31] improved on the impossibility result in [60] and showed that a variant of strict DAP cannot be combined with obstruction-free TM-progress, even if a weaker (than strict serializability) TM-correctness property is assumed. In the thesis, we do not consider relaxations of strict serializability. Guerraoui and Kapalka [60,64] also proved that a strictly serializable TM that provides OF TM-progress and wait-free TM-liveness cannot be implemented using only read and write primitives. An interesting open question is whether we can implement strictly serializable TMs in OF using only read and write primitives. Observe that, since there are at most n concurrent transactions, we cannot do better than (n − 1) stalls (cf. Definition 2.18). Thus, the lower bound of Theorem 5.3 is tight. Moreover, we conjecture that the linear (in n) lower bound of Theorem 5.4 for RW DAP opaque obstruction-free TMs can be strengthened to be linear in the size of the transaction's read set. Then, Algorithm 5.1, which proves a linear upper bound in the size of the transaction's read set, would allow us to establish a tight linear bound (in the size of the transaction's read set) for RW DAP opaque obstruction-free TMs. Blocking versus non-blocking TMs. As highlighted in [40,48], obstruction-free TMs typically must forcefully abort pending conflicting transactions. This observation inspires the impossibility of invisible reads (Theorem 5.1).
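The weak DAP condition appealed to in these results can be made concrete with a small reachability check. The sketch below is illustrative only, assuming the edge rule quoted in the proof above: two t-objects are adjacent iff they appear together in some transaction's data set, and two transactions may contend on a base object only if their data sets are connected in this graph.

```python
# Illustrative check of the weak DAP connectivity condition.
from itertools import combinations

def connected(dsets, a, b):
    """Is some t-object of data set `a` connected to one of data set `b`,
    given the data sets of all transactions in the execution?"""
    adj = {}
    for ds in dsets:
        for x, y in combinations(ds, 2):
            adj.setdefault(x, set()).add(y)
            adj.setdefault(y, set()).add(x)
    stack, seen = list(a), set(a)
    while stack:
        x = stack.pop()
        if x in b:
            return True
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

# A transaction with data set {X, Y} connects T_i's {X} with T_j's {Y},
# so under weak DAP T_i and T_j may legally contend on a base object:
assert connected([{"X"}, {"Y"}, {"X", "Y"}], {"X"}, {"Y"})
# Without such a mediating transaction, contention is disallowed:
assert not connected([{"X"}, {"Y"}], {"X"}, {"Y"})
```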
Typically, to detect the presence of a conflicting transaction and abort it, the reading transaction must employ a RAW or a read-modify-write primitive like compare-and-swap, motivating the linear lower bound on expensive synchronization (Theorem 5.4). Also, in obstruction-free TMs, a transaction may not wait for a concurrent inactive transaction to complete and, as a result, we may have an execution in which a transaction incurs a distinct stall due to a transaction run by each other process, hence the linear stall complexity (Theorem 5.3). Intuitively, since transactions in progressive TMs may abort themselves in case of conflicts, they can employ invisible reads and maintain constant stall and RAW/AWAR complexities. The lower bound and the proof technique in Theorem 5.3 are inspired by an analogous lower bound on linearizable solo-terminating implementations [16,46] of a wide class of "perturbable" objects that include counters, compare-and-swap and single-writer snapshots [16,46]. Informally, the definition of solo-termination (adapted to the TM context) says that for every finite execution E, and every transaction T that is t-incomplete in E, there is a finite step contention-free extension in which T eventually commits. Observe that, under this definition, T is guaranteed to commit even in some executions that are not step contention-free for T . However, the definition of OF TM-progress used in the thesis ensures that T is guaranteed to commit only if all its events are issued in the absence of step contention. Moreover, [16] described a single-lock (only the process holding the lock can invoke an operation) implementation of these objects that incurs O(log n) stalls, thus establishing a separation between the worst-case operation stall complexity of non-blocking and blocking (i.e., lock-based) implementations of these objects.
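The RAW pattern counted by Theorem 5.4 can be illustrated with the classic flag idiom. The sketch below uses hypothetical names; on real hardware the write and the subsequent read would be separated by a memory fence.

```python
# Read-after-write (RAW): write your own flag, then read the other's flag,
# so in any interleaving at least one of two conflicting transactions
# observes the other.
def transaction(my_flag, other_flag, shared):
    shared[my_flag] = True                  # W: announce presence
    # (a memory fence would go here on real hardware)
    return shared.get(other_flag, False)    # R: did the other announce?

shared = {}
saw_b = transaction("A", "B", shared)   # A runs first, sees nobody
saw_a = transaction("B", "A", shared)   # B runs second, must see A
# Whichever of the two ran its write first is seen by the other:
assert saw_a or saw_b
```

This is exactly the "expensive" part: the write-then-read sequence must not be reordered, which on modern processors costs a fence or a read-modify-write instruction.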
In this chapter, we presented a linear separation in memory stall complexity between obstruction-free TMs and lock-based TMs characterized by progressiveness, which is a strictly stronger (than single-lock) progress guarantee, thus establishing the inherent cost of non-blocking progress in the TM context. Some benefits of obstruction-free TMs, namely their ability to make progress even if some transactions prematurely fail, are not provided by progressive TMs. However, several papers [39,40,48] argued that lock-based TMs tend to outperform obstruction-free ones by allowing for simpler algorithms with lower overhead, and their inherent progress issues may be resolved using timeouts and contention managers [115]. This chapter explains the empirically observed performance gap between blocking and non-blocking TMs via a series of lower bounds on obstruction-free TMs and a progressive TM algorithm that beats all of them.
6 Lower bounds for partially non-blocking TMs
Overview
It is easy to see that dynamic TMs where the patterns in which transactions access t-objects are not known in advance do not allow for wait-free TMs [64], i.e., every transaction must commit in a finite number of steps of the process executing it, regardless of the behavior of concurrent processes. Suppose that a transaction T 1 reads t-object X, then a concurrent transaction T 2 reads t-object Y , writes to X and commits, and finally T 1 writes to Y . Since T 1 has read the "old" value in X and T 2 has read the "old" value in Y , there is no way to commit T 1 and order the two transactions in a sequential execution. As this scenario can be repeated arbitrarily often, even the weaker guarantee of local progress, which only requires that each transaction eventually commits if repeated sufficiently often, cannot be ensured by any strictly serializable TM implementation, regardless of the base objects it uses [32].
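The scenario above can be replayed mechanically. The toy TM below is an illustrative version-validation scheme (not from the text): an adversarial scheduler interleaves T 2 between T 1 's read of X and T 1 's commit, so T 1 aborts in every round and even local progress fails.

```python
# Replay of the local-progress counterexample with a toy versioned memory.
class Abort(Exception):
    pass

mem = {"X": ("old", 0), "Y": ("old", 0)}   # t-object -> (value, version)

def run_T1():
    _, vx = mem["X"]                       # T1: read X
    yield                                  # adversary schedules T2 here
    if mem["X"][1] != vx:                  # commit-time validation of X
        raise Abort
    mem["Y"] = ("t1", mem["Y"][1] + 1)     # T1: write Y and commit

def run_T2():
    # T2: read Y, write X, commit (the read of Y is elided here)
    mem["X"] = ("t2", mem["X"][1] + 1)

aborts = 0
for _ in range(3):            # repeating the pattern: T1 never commits
    t1 = run_T1()
    next(t1)                  # T1 performs its read of X
    run_T2()                  # T2 commits in between
    try:
        next(t1)              # T1 tries to validate and commit
    except Abort:
        aborts += 1
    except StopIteration:
        pass
assert aborts == 3 and mem["Y"][0] == "old"
```

Any strictly serializable TM must abort T 1 here, whatever its base objects; the generator merely makes the adversarial interleaving explicit.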
But can we ensure that at least some transactions commit wait-free and what are the inherent costs? It is often argued that many realistic workloads are read-dominated: the proportion of read-only transactions is higher than that of updating ones, or read-only transactions have much larger data sets than updating ones [25,65]. Therefore, it seems natural to require that read-only transactions commit wait-free. Since we are interested in complexity lower bounds, we require that updating transactions provide only sequential TM-progress. First, we focus on strictly serializable TMs with the above TM-progress conditions that use invisible reads. We show that this requirement results in maintaining unbounded sets of versions for every data item, i.e., such implementations may not be practical due to their space complexity. Secondly, we prove that strictly serializable TMs with these progress conditions cannot ensure strict DAP. Thus, two transactions that access mutually disjoint data sets may prevent each other from committing. Thirdly, for weak DAP TMs, we show that a read-only transaction (with an arbitrarily large read set) must sometimes perform at least one expensive synchronization pattern [17] per t-read operation, i.e., the expensive synchronization complexity of a read-only transaction is linear in the size of its data set. Formally, we denote by RWF the class of partially non-blocking TMs. Definition 6.1. (The class RWF) A TM implementation M ∈ RWF iff in its every execution: • (sequential TM-progress and sequential TM-liveness for updating transactions), i.e., every transaction running step contention-free from a t-quiescent configuration commits in a finite number of its steps • (wait-free TM-progress and wait-free TM-liveness for read-only transactions), i.e., every read-only transaction commits in a finite number of its steps.
Figure 6.1: Executions in the proof of Theorem 6.1: (a) for all i ∈ {1, . . . , c − 1}, T 2i−1 writes v i to each X ∈ X ; read 2i (X 1 ) must return v i1 ; (b) each read-only transaction T 2i is extended with the t-reads of every X ∈ X \ {X 1 }.
Roadmap of Chapter 6. Section 6.2 presents a lower bound on the inherent space complexity of TMs in RWF. Section 6.3 proves the impossibility of strict DAP TMs in RWF while in Section 6.4, assuming weak DAP, we prove a linear, in the size of the transaction's read set, lower bound on expensive synchronization complexity. We conclude this chapter with a discussion of the related work and open questions concerning TMs in RWF.
The space complexity of invisible reads
We prove that every strictly serializable TM implementation M ∈ RWF that uses invisible reads must keep unbounded sets of values for every t-object. To do so, for every c ∈ N, we construct an execution of M that maintains at least c distinct values for every t-object. We require the following technical definition: • for all i ∈ {1, . . . , c}, the response of the i th t-read of X in E is v i . Theorem 6.1. Let M be any strictly serializable TM implementation in RWF that uses invisible reads, and X , any set of t-objects. Then, for every c ∈ N, there exists an execution E of M such that E maintains at least c distinct values of each t-object X ∈ X . Proof. Let v 0 be the initial value of t-object X ∈ X . For every c ∈ N, we iteratively construct an execution E of M of the form depicted in Figure 6.1a. The construction of E proceeds in phases: there are at most c − 1 phases. For all i ∈ {0, . . .
c − 1}, we denote the execution after phase i as E i , which is defined as follows: • E 0 is the complete step contention-free execution fragment α 0 of read-only transaction T 0 that performs read 0 (X 1 ) → v 01 • for all i ∈ {1, . . . , c − 1}, E i is defined to be an execution of the form α 0 · ρ 1 · α 1 · · · ρ i · α i such that for all j ∈ {1, . . . , i}, ρ j is the t-complete step contention-free execution fragment of an updating transaction T 2j−1 that, for all X ∈ X , writes the value v j and commits, and α j is the complete step contention-free execution fragment of a read-only transaction T 2j that performs read 2j (X 1 ) → v j1 . Since read-only transactions are invisible, for all i ∈ {0, . . . , c − 1}, the execution fragment α i does not contain any nontrivial events. Consequently, for all i < j ≤ c − 1, the configuration after E i is indistinguishable to transaction T 2j−1 from a t-quiescent configuration and it must be committed in ρ j (by sequential progress for updating transactions). Observe that, for all 1 ≤ j < i, T 2j−1 ≺ RT E T 2i−1 . Strict serializability of M now stipulates that, for all i ∈ {1, . . . , c − 1}, the t-read of X 1 performed by transaction T 2i in the execution fragment α i must return the value v i1 of X 1 as written by transaction T 2i−1 in the execution fragment ρ i (in any serialization, T 2i−1 is the latest committed transaction writing to X 1 that precedes T 2i ). Thus, M indeed has an execution E of the form depicted in Figure 6.1a. Consider the execution fragment E′ that extends E in which, for all i ∈ {0, . . . , c − 1}, read-only transaction T 2i is extended with the complete execution of the t-reads of every t-object X ∈ X \ {X 1 } (depicted in Figure 6.1b). We claim that, for all i ∈ {0, . . . , c − 1}, and for all X ∈ X \ {X 1 }, read 2i (X ) performed by transaction T 2i must return the value v i of X written by transaction T 2i−1 in the execution fragment ρ i .
Indeed, by wait-free progress, read 2i (X ) must return a non-abort response in such an extension of E. Suppose by contradiction that read 2i (X ) returns a response that is not v i . There are two cases: • read 2i (X ) returns the value v j written by transaction T 2j−1 ; j < i. However, since for all j < i, T 2j ≺ RT E T 2i , the execution is not strictly serializable, a contradiction. • read 2i (X ) returns the value v j written by transaction T 2j−1 ; j > i. Since read 2i (X 1 ) returns the value v i1 and T 2i ≺ RT E T 2j , there exists no such serialization, a contradiction. Thus, E maintains at least c distinct values of every t-object X ∈ X .
Impossibility of strict DAP
In this section, we prove that it is impossible to derive strictly serializable TM implementations in RWF which ensure that any two transactions accessing pairwise disjoint data sets can execute without contending on the same base object. Theorem 6.2. There exists no strictly serializable strict DAP TM implementation in RWF.
Figure 6.2: Executions in the proof of Theorem 6.2: (a) by strict DAP, T 0 and T 3 do not contend on any base object; (b) read 0 (X 2 ) must return nv; (c) by strict DAP, T 0 cannot distinguish this execution from the execution in 6.2b.
Proof. Let v be the initial value of t-objects X 1 , X 2 and X 3 . Let π be the t-complete step contention-free execution of transaction T 1 that writes the value nv ≠ v to t-objects X 1 and X 3 . By sequential progress for updating transactions, T 1 must be committed in π. Note that any read-only transaction that runs step contention-free after some prefix of π must return a non-abort value.
Since any such transaction reading X 1 or X 3 must return v after the empty prefix of π and nv when it starts from π, there exists π′, the longest prefix of π that cannot be extended with the t-complete step contention-free execution of any transaction that performs a t-read of X 1 and returns nv, nor with the t-complete step contention-free execution of any transaction that performs a t-read of X 3 and returns nv. Consider the execution fragment π′ · α 1 , where α 1 is the complete step contention-free execution of transaction T 0 that performs read 0 (X 1 ) → v. Indeed, by definition of π′ and wait-free progress (assumed for read-only transactions), M has an execution of the form π′ · α 1 . Let e be the enabled event of transaction T 1 in the configuration after π′. Without loss of generality, assume that π′ · e can be extended with the t-complete step contention-free execution of a transaction that reads X 3 and returns nv. We now prove that M has an execution of the form π′ · α 1 · e · β · γ, where • β is the t-complete step contention-free execution fragment of transaction T 3 that performs read 3 (X 3 ) → nv and commits • γ is the t-complete step contention-free execution fragment of transaction T 2 that writes nv to X 2 and commits. Observe that, by definition of π′, M has an execution of the form π′ · e · β. By construction, transaction T 1 applies a nontrivial primitive to a base object, say b, in the event e that is accessed by transaction T 3 in the execution fragment β. Since transactions T 0 and T 3 access mutually disjoint data sets in π′ · α 1 · e · β, T 3 does not access any base object in β to which transaction T 0 applies a nontrivial primitive in the execution fragment α 1 (assumption of strict DAP). Thus, α 1 does not contain a nontrivial primitive to b and π′ · α 1 · e · β is indistinguishable to T 3 from the execution π′ · e · β. This proves that M has an execution of the form π′ · α 1 · e · β (depicted in Figure 6.2a).
Since transaction T 2 writes to t-object X 2 with Dset(T 2 ) = {X 2 } and X 2 ∉ Dset(T 1 ) ∪ Dset(T 0 ) ∪ Dset(T 3 ), by strict DAP, the configuration after π′ · α 1 · e · β is indistinguishable to T 2 from a t-quiescent configuration. Indeed, transaction T 2 does not contend with any of the transactions T 1 , T 0 and T 3 on any base object in π′ · α 1 · e · β · γ. Sequential progress of M requires that T 2 must be committed in π′ · α 1 · e · β · γ. Thus, M has an execution of the form π′ · α 1 · e · β · γ. By the above arguments, the execution π′ · α 1 · e · β · γ is indistinguishable to each of the transactions T 1 , T 0 , T 2 and T 3 from γ · π′ · α 1 · e · β in which transaction T 2 precedes T 1 in real-time ordering. Thus, γ · π′ · α 1 · e · β is also an execution of M . Consider the extension of the execution γ · π′ · α 1 · e · β in which transaction T 0 performs read 0 (X 2 ) and commits (depicted in Figure 6.2b). Strict serializability of M stipulates that read 0 (X 2 ) must return nv since T 2 (which writes nv to X 2 in γ) precedes T 0 in this execution. Similarly, we now extend the execution π′ · α 1 · e · β · γ with the complete step contention-free execution fragment of the t-read of X 2 by transaction T 0 . Since T 0 is a read-only transaction, it must be committed in this extension. However, as proved above, this execution is indistinguishable to T 0 from the execution depicted in Figure 6.2b in which read 0 (X 2 ) must return nv. Thus, M has an execution of the form π′ · α 1 · e · β · γ · α 2 , where T 0 performs read 0 (X 2 ) → nv in α 2 and commits. However, the execution π′ · α 1 · e · β · γ · α 2 (depicted in Figure 6.2c) is not strictly serializable. Transaction T 1 must be committed in any serialization and must precede transaction T 3 since read 3 (X 3 ) returns the value of X 3 written by T 1 . However, transaction T 0 must precede T 1 since read 0 (X 1 ) returns the initial value of X 1 .
Also, transaction T 2 must precede T 0 since read 0 (X 2 ) returns the value of X 2 written by T 2 . But transaction T 3 must precede T 2 to respect real-time ordering of transactions. Thus, T 1 must precede T 0 in any serialization. But there exists no such serialization: a contradiction to the assumption that M is strictly serializable.
A linear lower bound on expensive synchronization for weak DAP
In this section, we prove a linear lower bound (in the size of the transaction's read set) on the number of RAWs or AWARs for weak DAP TM implementations in RWF. To do so, we construct an execution in which each t-read operation of an arbitrarily long read-only transaction contains a RAW or an AWAR. Proof. Let v be the initial value of each of the t-objects X 1 , . . . , X m . Consider the t-complete step contention-free execution of transaction T 0 that performs m t-reads read 0 (X 1 ), read 0 (X 2 ), . . . , read 0 (X m ) and commits. We prove that each of the first m − 1 t-reads must perform a RAW or an AWAR. For all j ∈ {1, . . . , m − 1}, M has an execution of the form α 1 · α 2 · · · α j , where for all i ∈ {1, . . . , j}, α i is the complete step contention-free execution fragment of read 0 (X i ) → v. Assume inductively that each of the first j − 1 t-reads performs a RAW or an AWAR in this execution. We prove that read 0 (X j ) must perform a RAW or an AWAR in the execution fragment α j . Suppose by contradiction that α j does not contain a RAW or an AWAR. The following claim shows that we can schedule a committed transaction T j that writes a new value to X j concurrent to read 0 (X j ) such that the execution is indistinguishable to both T 0 and T j from a step contention-free execution (depicted in Figure 6.3a).
Figure 6.3: Executions in the proof of Theorem 6.3: (a) read 0 (X j ) → v performs no RAW/AWAR; T 0 and T j are unaware of step contention; (b) read 0 (X m ) must return nv by strict serializability; (c) by weak DAP, T 0 cannot distinguish this execution from 6.3b. The execution in 6.3c is not strictly serializable.
Claim 6.4. For all j ∈ {1, . . . , m − 1}, M has an execution of the form α 1 · · · α j−1 · α 1 j · δ j · α 2 j where: • δ j is the t-complete step contention-free execution fragment of transaction T j that writes nv ≠ v and commits • α 1 j · α 2 j = α j is the complete execution fragment of the j th t-read read 0 (X j ) → v such that α 1 j does not contain any nontrivial events • α 1 · · · α j−1 · α 1 j · δ j · α 2 j is indistinguishable to T 0 from the step contention-free execution fragment α 1 · · · α j−1 · α 1 j · α 2 j . Moreover, T j does not access any base object to which T 0 applies a nontrivial event in α 1 · · · α j−1 · α 1 j · δ j . Proof. By wait-free progress (for read-only transactions) and strict serializability, M has an execution of the form α 1 · · · α j−1 in which each of the t-reads performed by T 0 must return the initial value of the t-objects. Since T j is an updating transaction, by sequential progress, there exists an execution of M of the form δ j · α 1 · · · α j−1 . Since T 0 and T j are disjoint-access in δ j · α 1 · · · α j−1 , by Lemma 2.10, T 0 and T j do not contend on any base object in δ j · α 1 · · · α j−1 .
Thus, α 1 · · · α j−1 · δ j is indistinguishable to T j from the execution δ j , and α 1 · · · α j−1 · δ j is also an execution of M . Let e be the first event that contains a write to a base object in α j . If there exists no such write event to a base object in α j , then α 1 j = α j and α 2 j is empty. Otherwise, we represent the execution fragment α j as α s j · e · α f j (so that α 1 j = α s j ). Since α s j does not contain any nontrivial events that write to a base object, α 1 · · · α j−1 · α s j · δ j is indistinguishable to transaction T j from the execution α 1 · · · α j−1 · δ j . Thus, α 1 · · · α j−1 · α s j · δ j is an execution of M . Since e is not an atomic-write-after-read, α 1 · · · α j−1 · α s j · δ j · e is an execution of M . Since α j does not contain a RAW, any read performed in α f j may only be performed to base objects previously written in e · α f j . Thus, α 1 · · · α j−1 · α s j · δ j · e · α f j is indistinguishable to transaction T 0 from the step contention-free execution α 1 · · · α j−1 · α s j · e · α f j in which read 0 (X j ) → v. Choosing α 2 j = e · α f j , it follows that M has an execution of the form α 1 · · · α j−1 · α 1 j · δ j · α 2 j that is indistinguishable to T j and T 0 from a step contention-free execution. The proof follows. We now prove that, for all j ∈ {1, . . . , m − 1}, M has an execution of the form δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j such that • δ m is the t-complete step contention-free execution of transaction T that writes nv ≠ v to X m and commits • T and T 0 do not contend on any base object in δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j • T and T j do not contend on any base object in δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j . By sequential progress for updating transactions, T , which writes the value nv to X m , must be committed in δ m since it is running in the absence of step contention from the initial configuration. Observe that T and T 0 are disjoint-access in δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j .
By definition of α 1 j and α 2 j , δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j is indistinguishable to T 0 from δ m · α 1 · · · α j−1 · α 1 j · α 2 j . By Lemma 2.10, T and T 0 do not contend on any base object in δ m · α 1 · · · α j−1 · α 1 j · α 2 j . By Claim 6.4, δ m · α 1 · · · α j−1 · α 1 j · δ j is indistinguishable to T j from δ m · δ j . But transactions T and T j are disjoint-access in δ m · δ j , and by Lemma 2.10, T j and T do not contend on any base object in δ m · δ j . Since strict serializability of M stipulates that each of the j t-reads performed by T 0 return the initial values of the respective t-objects, M has an execution of the form δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j . Consider the extension of δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j in which T 0 performs (m − j) t-reads of X j+1 , · · · , X m step contention-free and commits (depicted in Figure 6.3b). By wait-free progress of M and since T 0 is a read-only transaction, there exists such an execution. Notice that the m th t-read, read 0 (X m ), must return the value nv by strict serializability since T precedes T 0 in real-time order in this execution. Recall that neither the pair of transactions T and T j nor the pair T and T 0 contend on any base object in the execution δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j . It follows that for all j ∈ {1, . . . , m − 1}, M has an execution of the form α 1 · · · α j−1 · α 1 j · δ j · α 2 j · δ m in which T j precedes T in real-time order. Let α′ be the execution fragment that extends α 1 · · · α j−1 · α 1 j · δ j · α 2 j · δ m in which T 0 performs (m − j) t-reads of X j+1 , · · · , X m step contention-free and commits (depicted in Figure 6.3c). Since α 1 · · · α j−1 · α 1 j · δ j · α 2 j · δ m is indistinguishable to T 0 from the execution δ m · α 1 · · · α j−1 · α 1 j · δ j · α 2 j , read 0 (X m ) must return the response value nv in α′. The execution α 1 · · · α j−1 · α 1 j · δ j · α 2 j · δ m · α′ is not strictly serializable.
In any serialization, T j must precede T to respect the real-time ordering of transactions, while T must precede T 0 since read 0 (X m ) returns the value of X m updated by T . Also, transaction T 0 must precede T j since read 0 (X j ) returns the initial value of X j . But there exists no such serialization: a contradiction to the assumption that M is strictly serializable. Thus, for all j ∈ {1, . . . , m − 1}, transaction T 0 must perform a RAW or an AWAR during the execution of read 0 (X j ), completing the proof. Since Theorem 6.3 implies that read-only transactions must perform nontrivial events, we have the following corollary that was proved directly in [24]. Corollary 6.5 ([24]). There does not exist any strictly serializable weak DAP TM implementation M ∈ RWF that uses invisible reads.
Related work and Discussion
Attiya et al. [24] showed that it is impossible to implement weak DAP strictly serializable TMs in RWF if read-only transactions may only apply trivial primitives to base objects. Attiya et al. [24] also considered a stronger "disjoint-access" property, called simply DAP, referring to the original definition proposed by Israeli and Rappoport [86]. In DAP, two transactions are allowed to concurrently access (even for reading) the same base object only if they are not disjoint-access. For an n-process DAP TM implementation, it is shown in [24] that a read-only transaction must perform at least n − 3 writes. Our lower bound in Theorem 6.3 is strictly stronger than the one in [24], as it assumes only weak DAP, considers a more precise RAW/AWAR metric, and does not depend on the number of processes in the system. (Technically, the last point follows from the fact that the execution constructed in the proof of Theorem 6.3 uses only 3 concurrent processes.) Thus, the theorem subsumes the two lower bounds of [24] within a single proof. Perelman et al.
[110] considered the closely related (to RWF) class of mv-permissive TMs: a transaction can only be aborted if it is an updating transaction that conflicts with another updating transaction. RWF is incomparable with the class of mv-permissive TMs. On the one hand, mv-permissiveness guarantees that read-only transactions never abort, but does not imply that they commit in a wait-free manner. On the other hand, RWF allows an updating transaction to abort in the presence of a concurrent read-only transaction, which is disallowed by mv-permissive TMs. Observe that, technically, mv-permissiveness is a blocking TM-progress condition, although when used in conjunction with wait-free TM-liveness, it is a partially non-blocking TM-progress condition that is strictly stronger than RWF. Assuming starvation-free TM-liveness, [110] showed that implementing a weak DAP strictly serializable mv-permissive TM is impossible. In the thesis, we showed that strictly serializable TMs in RWF cannot provide strict DAP, but proving the impossibility result assuming weak DAP remains an interesting open question. [110] also proved that mv-permissive TMs cannot be online space optimal, i.e., no mv-permissive TM can keep the minimum number of old object versions for any TM history. Our result on the space complexity of implementations in RWF that use invisible reads (Theorem 6.1) is different since it proves that the implementation must maintain an unbounded number of versions of every t-object. Our proof technique can, however, be used to show that the mv-permissive TMs considered in [110] must also maintain an unbounded number of versions.
Overview
Hybrid transactional memory. The TM abstraction, in its original manifestation from the proposal by Herlihy and Moss [80], augmented the processor's cache-coherence protocol and extended the CPU's instruction set with instructions to indicate which memory accesses must be transactional [80].
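A toy rendering of such a cache-coherence-based HTM may help fix intuitions (illustrative names and semantics only; the formal cached-access model is given later in this overview): transactional accesses operate on locally cached copies, a conflicting write by another transaction invalidates the cache and dooms the transaction, and cache-commit publishes all buffered writes atomically.

```python
# Toy model of a best-effort hardware transaction over cached copies.
class HWAbort(Exception):
    pass

memory = {"X": 0, "Y": 0}
tracked = {}    # location -> set of hardware transactions caching it

class HTxn:
    def __init__(self):
        self.cache = {}
        self.aborted = False

    def access(self, loc, value=None):
        if self.aborted:
            raise HWAbort
        if value is not None:
            # a write invalidates every other transaction caching this location
            for t in tracked.get(loc, set()):
                if t is not self:
                    t.aborted = True
        tracked.setdefault(loc, set()).add(self)
        if value is None:
            return self.cache.get(loc, memory[loc])
        self.cache[loc] = value

    def cache_commit(self):
        if self.aborted:
            raise HWAbort
        memory.update(self.cache)      # all writes take effect atomically
        for s in tracked.values():
            s.discard(self)

t1, t2 = HTxn(), HTxn()
t1.access("X")                # T1 reads X into its cache
t2.access("X", 42)            # T2's conflicting write invalidates T1
t2.cache_commit()
try:
    t1.access("Y")            # T1's next access observes the doomed cache
    committed = True
except HWAbort:
    committed = False
assert memory["X"] == 42 and not committed
```

The "automatic" conflict detection here is what distinguishes hardware transactions from STMs, which must take explicit steps to detect contention.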
Most popular TM designs, subsequent to the original proposal in [80], have implemented all the functionality in software [36,52,79,101,117] (cf. software TM model in Chapter 2). More recently, CPUs have included hardware extensions to support short, small hardware transactions [1,107,111]. Early experience with programming hardware transactional memory (HTM), e.g. [7,38,44], paints an interesting picture: if used carefully, HTM can be an extremely useful construct, and can significantly speed up and simplify concurrent implementations. At the same time, this powerful tool is not without its limitations: since HTMs are usually implemented on top of the cache coherence mechanism, hardware transactions have inherent capacity constraints on the number of distinct memory locations that can be accessed inside a single transaction. Moreover, all current proposals are best-effort, as they may abort under imprecisely specified conditions (cache capacity overflow, interrupts, etc.). In brief, the programmer should not solely rely on HTMs. Several Hybrid Transactional Memory (HyTM) schemes [35,37,88,99] have been proposed to complement the fast, but best-effort nature of HTM with a slow, reliable software transactional memory (STM) backup. These proposals have explored a wide range of trade-offs between the overhead on hardware transactions, concurrent execution of hardware and software, and the provided progress guarantees. Early proposals for HyTM implementations [37,88] shared some interesting features. First, transactions that do not conflict are expected to run concurrently, regardless of their types (software or hardware). This property is referred to as progressiveness [63] and is believed to allow for increased parallelism. Second, in addition to changing the values of transactional objects, hardware transactions usually employ code instrumentation techniques.
Intuitively, instrumentation is used by hardware transactions to detect concurrency scenarios and abort in the case of contention. The number of instrumentation steps performed by these implementations within a hardware transaction is usually proportional to the size of the transaction's data set. Recent work by Riegel et al. [113] surveyed the various HyTM algorithms to date, focusing on techniques to reduce instrumentation overheads in the frequently executed hardware fast-path. However, it is not clear whether there are fundamental limitations when building a HyTM with non-trivial concurrency between hardware and software transactions. In particular, what are the inherent instrumentation costs of building a HyTM, and what are the trade-offs between these costs and the provided concurrency, i.e., the ability of the HyTM system to run software and hardware transactions in parallel? Modelling HyTM. To address these questions, the thesis proposes the first model for hybrid TM systems which formally captures the notion of cached accesses provided by hardware transactions, and precisely defines instrumentation costs in a quantifiable way. We model a hardware transaction as a series of memory accesses that operate on locally cached copies of the variables, followed by a cache-commit operation. In case a concurrent transaction performs a (read-write or write-write) conflicting access to a cached object, the cached copy is invalidated and the hardware transaction aborts. Our model for instrumentation is motivated by recent experimental evidence which suggests that the overhead on hardware transactions imposed by code which detects concurrent software transactions is a significant performance bottleneck [102]. In particular, we say that a HyTM implementation imposes a logical partitioning of shared memory into data and metadata locations. 
Intuitively, metadata is used by transactions to exchange information about contention and conflicts, while data locations only store the values of data items read and updated within transactions. We quantify instrumentation cost by measuring the number of accesses to metadata objects which transactions perform. Our framework captures all known HyTM proposals which combine HTMs with an STM fallback [35,37,88,99,112]. The cost of instrumentation. Once this general model is in place, we derive two lower bounds on the cost of implementing a HyTM. First, we show that some instrumentation is necessary in a HyTM implementation even if we only intend to provide sequential progress, where a transaction is only guaranteed to commit if it runs in the absence of concurrency. Second, we prove that any progressive HyTM implementation providing obstruction-free liveness (every operation running solo returns some response) has executions in which an arbitrarily long read-only hardware transaction running in the absence of concurrency must access a number of distinct metadata objects proportional to the size of its data set. Our proof technique is interesting in its own right. Inductively, we start with a sequential execution in which a "large" set S m of read-only hardware transactions, each accessing m distinct data items and m distinct metadata memory locations, run after an execution E m . We then construct execution E m+1 , an extension of E m , which forces at least half of the transactions in S m to access a new metadata base object when reading a new (m + 1) th data item, running after E m+1 . The technical challenge, and the key departure from prior work on STM lower bounds, e.g. [24,60,64], is that hardware transactions practically possess "automatic" conflict detection, aborting on contention. This is in contrast to STMs, which must take steps to detect contention on memory locations.
We match this lower bound with a HyTM algorithm that, additionally, allows for uninstrumented writes and invisible reads and is provably opaque [64]. To the best of our knowledge, this is the first formal proof of correctness of a HyTM algorithm. Low-instrumentation HyTM. The high instrumentation costs of early HyTM designs, which we show to be inherent, stimulated more recent HyTM schemes [35,99,102,113] to sacrifice progressiveness for constant instrumentation cost (i.e., not depending on the size of the transaction). In the past two years, Dalessandro et al. [35] and Riegel et al. [113] have proposed HyTMs based on the efficient NOrec STM [36]. These HyTM schemes do not guarantee any parallelism among transactions; only sequential progress is ensured. Despite this, they are among the best-performing HyTMs to date due to the limited instrumentation in hardware transactions. Starting from this observation, we provide a more precise upper bound for low-instrumentation HyTMs by presenting a HyTM algorithm with invisible reads and uninstrumented hardware writes which guarantees that a hardware transaction accesses at most one metadata object in the course of its execution. Software transactions in this implementation remain progressive, while hardware transactions are guaranteed to commit only if they do not run concurrently with an updating software transaction (or exceed capacity). Therefore, the cost of avoiding the linear lower bound for progressive implementations is that hardware transactions may be aborted by non-conflicting software ones. Roadmap of Chapter 7. In Section 7.2, we introduce the model of HyTMs. Section 7.3 studies the inherent cost of concurrency in progressive HyTMs by presenting a linear lower bound on the cost of instrumentation, while Section 7.4 presents a matching upper bound. Section 7.5 discusses providing partial concurrency at reduced instrumentation cost, and in Section 7.6, we elaborate on prior work related to HyTMs.
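The single-metadata-access idea behind such NOrec-style schemes can be illustrated with a small sequential simulation. This is a hypothetical sketch, not the thesis's algorithm: the names HyTM, fast_path, slow_path and the seqlock field are assumptions, and a real hardware transaction detects conflicts through the cache-coherence protocol rather than an explicit check.

```python
# Illustrative, sequential simulation of a NOrec-style HyTM fast path.
# All names here are assumptions introduced for this sketch.

class HyTM:
    def __init__(self):
        self.seqlock = 0   # the single metadata object: odd while a
                           # software updater is committing
        self.data = {}     # data base objects, one per t-object

    def fast_path(self, writes):
        """Hardware transaction: one metadata access (reading seqlock);
        writes are uninstrumented and applied at cache-commit."""
        if self.seqlock % 2 == 1:   # concurrent software updater: abort
            return "abort"
        self.data.update(writes)    # cache-commit applies buffered writes
        return "commit"

    def slow_path(self, writes):
        """Software transaction: makes the lock odd for its commit phase,
        forcing concurrent hardware transactions to abort."""
        self.seqlock += 1
        self.data.update(writes)
        self.seqlock += 1
        return "commit"
```

Here the fast path reads the sequence lock exactly once (its only metadata access) and aborts if a software updater is in progress, which mirrors why a hardware transaction may be aborted by a non-conflicting software one.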
Modelling HyTM

In this chapter, we introduce the model of HyTMs, extending the TM model from Chapter 2, that intuitively captures the cache-coherence protocols employed in shared memory systems.

Direct and cached accesses

We now describe the operation of a Hybrid Transactional Memory (HyTM) implementation. In our model, every base object can be accessed with two kinds of primitives, direct and cached. In a direct access, the rmw primitive operates on the memory state: the direct-access event atomically reads the value of the object in the shared memory and, if necessary, modifies it. In a cached access performed by a process i, the rmw primitive operates on the cached state recorded in process i's tracking set τ i . One can think of τ i as the L1 cache of process i. A hardware transaction is a series of cached rmw primitives performed on τ i followed by a cache-commit primitive. More precisely, τ i is a set of triples (b, v, m) where b is a base object identifier, v is a value, and m ∈ {shared, exclusive} is an access mode. The triple (b, v, m) is added to the tracking set when i performs a cached rmw access of b, where m is set to exclusive if the access is nontrivial, and to shared otherwise. We assume that there exists some constant TS (representing the size of the L1 cache) such that the condition |τ i | ≤ TS must always hold; this condition will be enforced by our model. A base object b is present in τ i with mode m if ∃v, (b, v, m) ∈ τ i . A trivial (resp. nontrivial) cached primitive g, h applied to b by process i first checks the condition |τ i | = TS and, if so, it sets τ i = ∅ and immediately returns ⊥ (we call this event a capacity abort). We assume that TS is large enough so that no transaction with a data set of size 1 can incur a capacity abort. If the transaction does not incur a capacity abort, the process checks whether b is present in exclusive (resp. any) mode in τ j for any j ≠ i. If so, τ i is set to ∅ and the primitive returns ⊥.
Otherwise, the triple (b, v, shared) (resp. (b, g(v), exclusive)) is added to τ i , where v is the most recent cached value of b. A tracking set can be invalidated by a concurrent process: if, in a configuration C where (b, v, exclusive) ∈ τ i (resp. (b, v, shared) ∈ τ i ), a process j ≠ i applies any primitive (resp. any nontrivial primitive) to b, then τ i becomes invalid and any subsequent cached primitive invoked by i sets τ i to ∅ and returns ⊥. We refer to this event as a tracking set abort. Finally, the cache-commit primitive issued by process i with a valid τ i does the following: for each base object b such that (b, v, exclusive) ∈ τ i , the value of b in C is updated to v. Finally, τ i is set to ∅ and the primitive returns commit. Note that HTM may also abort spuriously, or because of unsupported operations [111]. The first cause can be modelled probabilistically in the above framework, which would not however significantly affect our claims and proofs, except for a more cumbersome presentation. Also, our lower bounds are based exclusively on executions containing t-reads and t-writes. Therefore, in the following, we only consider contention and capacity aborts.

Slow-path and fast-path transactions

In the following, we partition HyTM transactions into fast-path transactions and slow-path transactions. Practically, two separate algorithms (a fast-path one and a slow-path one) are provided for each t-operation. A slow-path transaction models a regular software transaction. An event of a slow-path transaction is either an invocation or response of a t-operation, or a rmw primitive on a base object. A fast-path transaction essentially encapsulates a hardware transaction. An event of a fast-path transaction is either an invocation or response of a t-operation, a cached primitive on a base object, or a cache-commit: t-read and t-write are only allowed to contain cached primitives, and tryC consists of invoking cache-commit.
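The cached-access semantics above can be mirrored in a small single-process-at-a-time simulation. This is an illustrative sketch under assumptions (the names Process, cached_access, direct_access, cache_commit and the capacity constant TS = 4 are inventions of this example); the formal definition above remains the model.

```python
# Illustrative simulation of tracking sets: cached accesses fill a
# per-process tracking set, conflicting accesses by other processes
# cause tracking-set aborts, and cache-commit writes back exclusive
# entries. All names are assumptions for this sketch.

TS = 4  # assumed tracking-set (L1 cache) capacity

class Process:
    def __init__(self):
        self.tset = {}      # base object -> (value, mode)
        self.valid = True   # False once the tracking set is invalidated

def cached_access(procs, i, b, memory, value=None):
    """Cached rmw by process i on base object b; value is not None
    models a nontrivial (writing) primitive."""
    p = procs[i]
    if not p.valid or len(p.tset) == TS:
        p.tset, p.valid = {}, True
        return "abort"                       # tracking-set or capacity abort
    for j, q in enumerate(procs):
        if j != i and b in q.tset:
            if value is not None or q.tset[b][1] == "exclusive":
                p.tset, p.valid = {}, True   # conflicting cached copy: abort
                return "abort"
    mode = "exclusive" if value is not None else "shared"
    p.tset[b] = (value if value is not None else memory.get(b), mode)
    return "ok"

def direct_access(procs, i, b, memory, value=None):
    """Direct rmw (e.g., by a slow-path process): invalidates any
    conflicting cached copies held by other processes."""
    for j, q in enumerate(procs):
        if j != i and b in q.tset and (
                value is not None or q.tset[b][1] == "exclusive"):
            q.valid = False          # holder suffers a tracking set abort
    if value is not None:
        memory[b] = value
    return memory.get(b)

def cache_commit(procs, i, memory):
    p = procs[i]
    if not p.valid:
        p.tset, p.valid = {}, True
        return "abort"
    for b, (v, m) in p.tset.items():
        if m == "exclusive":
            memory[b] = v            # write back exclusive entries
    p.tset = {}
    return "commit"
```

For instance, a shared cached read followed by a concurrent direct write to the same base object leaves the reader's tracking set invalid, so its next cached primitive aborts, matching the tracking-set-abort rule above.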
Furthermore, we assume that a fast-path transaction T k returns A k as soon as an underlying cached primitive or cache-commit returns ⊥. Figure 7.1 depicts such a scenario illustrating a tracking set abort: fast-path transaction T 2 executed by process p 2 accesses a base object b in shared (resp. exclusive) mode and it is added to its tracking set τ 2 . Immediately after the access of b by T 2 , a concurrent transaction T 1 applies a nontrivial primitive to b (resp. accesses b). Thus, the tracking set of p 2 is invalidated and T 2 must be aborted in any extension of this execution. Intuitively, these observations say that fast-path transactions which are not yet committed are invisible to slow-path transactions, and can communicate with other fast-path transactions only by incurring their tracking-set aborts. Figure 7.2 illustrates Observation 7.1: a fast-path transaction T 2 is concurrent to a slow-path transaction T 1 in an execution E. Since T 2 is t-incomplete or aborted in this execution, E is indistinguishable to T 1 from an execution E′ derived by removing all events of T 2 from E. Analogously, to illustrate Observation 7.2, if T 1 is a fast-path transaction that does not incur a tracking set abort in E, then E is indistinguishable to T 1 from E′.

Instrumentation

Now we define the notion of code instrumentation in fast-path transactions. Intuitively, instrumentation characterizes the number of extra "metadata" accesses performed by a fast-path transaction. We start with the following technical definition. An execution E of a HyTM M appears t-sequential to a transaction T k ∈ txns(E) if there exists an execution E′ of M such that: • txns(E′) ⊆ txns(E) \ {T k } and the configuration after E′ is t-quiescent, • every transaction T m ∈ txns(E) that precedes T k in real-time order is included in E′ such that E|m = E′|m, • for every transaction T m ∈ txns(E′), Rset E′ (T m ) ⊆ Rset E (T m ) and Wset E′ (T m ) ⊆ Wset E (T m ), and • E′ · E|k is an execution of M.
Definition 7.1 (Data and metadata base objects). Let X be the set of t-objects operated by a HyTM implementation M. Now we partition the set of base objects used by M into a set D of data objects and a set M of metadata objects (D ∩ M = ∅). We further partition D into sets D X associated with each t-object X ∈ X : D = ⋃ X∈X D X , where for all X ≠ Y in X , D X ∩ D Y = ∅, such that: 1. In every execution E, each fast-path transaction T k ∈ txns(E) only accesses base objects in ⋃ X∈Dset(T k ) D X or M. 2. Let E · ρ and E · E′ · ρ′ be two t-complete executions, such that E and E · E′ are t-complete, ρ and ρ′ are complete executions of a transaction T k ∉ txns(E · E′), H ρ = H ρ′ , and ∀T m ∈ txns(E′), Dset(T m ) ∩ Dset(T k ) = ∅. Then the states of the base objects ⋃ X∈Dset(T k ) D X in the configurations after E · ρ and E · E′ · ρ′ are the same. (Figure 7.3, panel (b): since T z is uninstrumented, by Observation 7.3 and sequential TM-progress, T z must commit.) Observe also that if an execution E appears t-sequential to a transaction T k and the enabled event e of T k after E is a primitive on a base object b ∈ D, then, unless e returns ⊥, E · e also appears t-sequential to T k . Intuitively, the first condition says that a transaction is only allowed to access data objects based on its data set. The second condition says that transactions with disjoint data sets can communicate only via metadata objects. Finally, the last condition means that base objects in D may only contain the "values" of t-objects, and cannot be used to detect concurrent transactions. Note that our results will lower bound the number of metadata objects that must be accessed under particular assumptions; thus, from a cost perspective, D should be made as large as possible. All HyTM proposals we are aware of, such as HybridNOrec [35,112], PhTM [99] and others [37,88], conform to our definition of instrumentation in fast-path transactions.
For instance, HybridNOrec [35,112] employs a distinct base object in D for each t-object and a global sequence lock as the metadata that is accessed by fast-path transactions to detect concurrency with slow-path transactions. Similarly, the HyTM implementation by Damron et al. [37] also associates a distinct base object in D with each t-object and, additionally, a transaction header and ownership record as metadata base objects. Observation 7.3. Consider any execution E of a HyTM implementation M which provides uninstrumented reads (resp. writes). For any fast-path read-only (resp. write-only) transaction T k that runs step contention-free after E, the execution E appears t-sequential to T k .

Impossibility of uninstrumented HyTMs

In this section, we show that any strictly serializable HyTM must be instrumented, even under a very weak progress assumption by which a transaction is guaranteed to commit only when run t-sequentially: Definition 7.3 (Sequential TM-progress for HyTMs). A HyTM implementation M provides sequential TM-progress for fast-path (resp. slow-path) transactions if in every execution E of M, a fast-path (resp. slow-path) transaction T k returns A k in E only if T k incurs a capacity abort or T k is concurrent to another transaction. We say that M provides sequential TM-progress if it provides sequential TM-progress for fast-path and slow-path transactions. Theorem 7.4. There does not exist a strictly serializable uninstrumented HyTM implementation that ensures sequential TM-progress and TM-liveness. Proof. Suppose by contradiction that such a HyTM M exists. For simplicity, assume that v is the initial value of t-objects X, Y and Z. Let E be the t-complete step contention-free execution of a slow-path transaction T 0 that performs read 0 (Z) → v, write 0 (X, nv), write 0 (Y, nv) (nv ≠ v), and commits. Such an execution exists since M ensures sequential TM-progress.
By Observation 7.3, any transaction that runs step contention-free starting from a prefix of E must return a non-abort value. Any such transaction reading X or Y must return v when it starts from the empty prefix of E and nv when it starts from E. Thus, there exists E′, the longest prefix of E that cannot be extended with the t-complete step contention-free execution of a fast-path transaction reading X or Y and returning nv. Let e be the enabled event of T 0 in the configuration after E′. Without loss of generality, suppose that there exists an execution E′ · e · E y where E y is the t-complete step contention-free execution fragment of some fast-path transaction T y that reads Y and returns nv (Figure 7.3a). Claim 7.5. M has an execution E′ · E z · E x , where • E z is the t-complete step contention-free execution fragment of a fast-path transaction T z that writes nv ≠ v to Z and commits, • E x is the t-complete step contention-free execution fragment of a fast-path transaction T x that performs a single t-read read x (X) → v and commits. Proof. By Observation 7.3, the extension of E′ in which T z writes to Z and tries to commit appears t-sequential to T z . By sequential TM-progress, T z completes the write and commits. Let E′ · E z (Figure 7.3b) be the resulting execution of M. Similarly, the extension of E′ in which T x reads X and tries to commit appears t-sequential to T x . By sequential TM-progress, T x commits; let E′ · E x be the resulting execution of M. By the definition of E′, read x (X) must return v in E′ · E x . Since M is uninstrumented and the data sets of T x and T z are disjoint, the sets of base objects accessed in the execution fragments E x and E z are also disjoint. Thus, E′ · E z · E x is indistinguishable to T x from the execution E′ · E x , which implies that E′ · E z · E x is an execution of M (Figure 7.3c). Finally, we prove that the sequence of events E′ · E z · E x · e · E y is an execution of M.
Since the transactions T x , T y , T z have pairwise disjoint data sets in E′ · E z · E x · e · E y , no base object accessed in E y can be accessed in E x and E z . The read operation on Y performed by T y in E′ · e · E y returns nv and, by the definition of E′ and e, T y must have accessed the base object b modified in the event e by T 0 . Thus, b is not accessed in E x and E z , and E′ · E z · E x · e is an execution of M. Summing up, E′ · E z · E x · e · E y is indistinguishable to T y from E′ · e · E y , which implies that E′ · E z · E x · e · E y is an execution of M (Figure 7.3d). But the resulting execution is not strictly serializable. Indeed, suppose that a serialization exists. As the value written by T 0 is returned by a committed transaction T y , T 0 must be committed and precede T y in the serialization. Since T x returns the initial value of X, T x must precede T 0 . Since T 0 reads the initial value of Z, T 0 must precede T z . Finally, T z must precede T x to respect the real-time order. The cycle in the serialization establishes a contradiction.

A linear lower bound on instrumentation for progressive HyTMs

In this section, we show that giving a HyTM the ability to run and commit transactions in parallel brings considerable instrumentation costs. We focus on a natural progress condition called progressiveness [61,62,63] that allows a transaction to abort only if it experiences a read-write or write-write conflict with a concurrent transaction: Definition 7.4 (Progressiveness for HyTMs). We say that transactions T i and T j conflict in an execution E on a t-object X if X ∈ Dset(T i ) ∩ Dset(T j ) and X ∈ Wset(T i ) ∪ Wset(T j ). A HyTM implementation M is fast-path (resp. slow-path) progressive if in every execution E of M and for every fast-path (resp. slow-path) transaction T i that aborts in E, either A i is a capacity abort or T i conflicts with some transaction T j that is concurrent to T i in E.
We say M is progressive if it is both fast-path and slow-path progressive. We show that for every opaque fast-path progressive HyTM that provides obstruction-free TM-liveness, an arbitrarily long read-only transaction might access a number of distinct metadata base objects that is linear in the size of its read set or experience a capacity abort. The following auxiliary results will be crucial in proving our lower bound. We observe first that a fast-path transaction in a progressive HyTM can contend on a base object only with a conflicting transaction. Lemma 7.6. Let M be any fast-path progressive HyTM implementation. Let E · E 1 · E 2 be an execution of M where E 1 (resp. E 2 ) is the step contention-free execution fragment of transaction T 1 ∉ txns(E) (resp. T 2 ∉ txns(E)), T 1 (resp. T 2 ) does not conflict with any transaction in E · E 1 · E 2 , and at least one of T 1 or T 2 is a fast-path transaction. Then, T 1 and T 2 do not contend on any base object in E · E 1 · E 2 . Proof. Suppose, by contradiction, that T 1 and T 2 contend on the same base object in E · E 1 · E 2 . If in E 1 , T 1 performs a nontrivial event on a base object on which they contend, let e 1 be the last event in E 1 in which T 1 performs such an event to some base object b and e 2 the first event in E 2 that accesses b. Otherwise, T 1 only performs trivial events in E 1 to base objects on which it contends with T 2 in E · E 1 · E 2 : let e 2 be the first event in E 2 in which T 2 performs a nontrivial event to some base object b on which they contend and e 1 the last event of T 1 in E 1 that accesses b. Let E′ 1 (resp. E′ 2 ) be the longest prefix of E 1 (resp. E 2 ) that does not include e 1 (resp. e 2 ). Since before accessing b the execution is step contention-free for T 1 , E · E′ 1 · E′ 2 is an execution of M. By construction, T 1 and T 2 do not conflict in E · E′ 1 · E′ 2 . Moreover, E · E 1 · E′ 2 is indistinguishable to T 2 from E · E′ 1 · E′ 2 .
Hence, T 1 and T 2 are poised to apply the contending events e 1 and e 2 on b in the execution Ẽ = E · E′ 1 · E′ 2 . Recall that at least one of e 1 and e 2 must be nontrivial. Consider the execution Ẽ · e 1 · e′ 2 where e′ 2 is the event of p 2 in which it applies the primitive of e 2 to the configuration after Ẽ · e 1 . After Ẽ · e 1 , b is contained in the tracking set of process p 1 . If b is contained in τ 1 in the shared mode, then e′ 2 is a nontrivial primitive on b, which invalidates τ 1 in Ẽ · e 1 · e′ 2 . If b is contained in τ 1 in the exclusive mode, then any subsequent access of b invalidates τ 1 in Ẽ · e 1 · e′ 2 . In both cases, τ 1 is invalidated and T 1 incurs a tracking set abort. Thus, transaction T 1 must return A 1 in any extension of Ẽ · e 1 · e′ 2 , a contradiction to the assumption that M is progressive. Iterative application of Lemma 7.6 implies the following: Corollary 7.7. Let M be any fast-path progressive HyTM implementation. Let E · E 1 · · · E i · E i+1 · · · E m be any execution of M where for all i ∈ {1, . . . , m}, E i is the step contention-free execution fragment of transaction T i ∉ txns(E) and any two transactions in E 1 · · · E m do not conflict. For all i, j = 1, . . . , m, i ≠ j, if T i is fast-path, then T i and T j do not contend on a base object in E · E 1 · · · E i · · · E m . Proof. Let T i be a fast-path transaction. By Lemma 7.6, in E · E 1 · · · E i · · · E m , T i does not contend with T i−1 (if i > 1) or T i+1 (if i < m) on any base object and, thus, E i commutes with E i−1 and E i+1 . Thus, E · E 1 · · · E i−2 · E i · E i−1 · E i+1 · · · E m (if i > 1) and E · E 1 · · · E i−1 · E i+1 · E i · E i+2 · · · E m (if i < m) are executions of M. By iteratively applying Lemma 7.6, we derive that T i does not contend with any T j , j ≠ i. We say that execution fragments E and E′ are similar if they export equivalent histories, i.e., no process can see the difference between them by looking at the invocations and responses of t-operations.
We now use Corollary 7.7 to show that t-operations only accessing data base objects cannot detect contention with non-conflicting transactions. Lemma 7.8. Let E be any t-complete execution of a progressive HyTM implementation M that provides OF TM-liveness. For any m ∈ N, consider a set of m executions of M of the form E · E i · γ i · ρ i where E i is the t-complete step contention-free execution fragment of a transaction T m+i , γ i is a complete step contention-free execution fragment of a fast-path transaction T i such that Dset(T i ) ∩ Dset(T m+i ) = ∅ in E · E i · γ i , and ρ i is the execution fragment of a t-operation by T i that does not contain accesses to any metadata base object. If, for all i, j ∈ {1, . . . , m}, i ≠ j, Dset(T i ) ∩ Dset(T m+j ) = ∅, Dset(T i ) ∩ Dset(T j ) = ∅ and Dset(T m+i ) ∩ Dset(T m+j ) = ∅, then there exists a t-complete step contention-free execution fragment E′ that is similar to E 1 · · · E m such that, for all i ∈ {1, . . . , m}, E · E′ · γ i · ρ i is an execution of M. Proof. Observe that any two transactions in the execution fragment E 1 · · · E m access mutually disjoint data sets. Since M is progressive and provides OF TM-liveness, there exists a t-sequential execution fragment E′ = E′ 1 · · · E′ m such that, for all i ∈ {1, . . . , m}, the execution fragments E i and E′ i are similar and E · E′ is an execution of M. Corollary 7.7 implies that, for all i ∈ {1, . . . , m}, M has an execution of the form E · E′ 1 · · · E′ i · · · E′ m · γ i . More specifically, M has an execution of the form E · γ i · E′ 1 · · · E′ i · · · E′ m . Recall that the execution fragment ρ i of fast-path transaction T i that extends γ i contains accesses only to base objects in ⋃ X∈Dset(T i ) D X , and the states of each of the base objects in ⋃ X∈Dset(T i ) D X accessed by T i in the configurations after E · γ i · E′ 1 · · · E′ i and after E · γ i · E i are the same. It follows that M has an execution of the form E · γ i · E′ 1 · · · E′ i · ρ i · E′ i+1 · · · E′ m .
But E · γ i · E i · ρ i is an execution of M. Thus, for all i ∈ {1, . . . , m}, M has an execution of the form E · E′ · γ i · ρ i . Finally, we are now ready to derive our lower bound. Theorem 7.9. Let M be any progressive, opaque HyTM implementation that provides OF TM-liveness. For every m ∈ N, there exists an execution E in which some fast-path read-only transaction T k ∈ txns(E) satisfies either (1) |Dset(T k )| ≤ m and T k incurs a capacity abort in E or (2) |Dset(T k )| = m and T k accesses Ω(m) distinct metadata base objects in E. Here is a high-level overview of the proof technique. Let κ be the smallest integer such that some fast-path transaction running step contention-free after a t-quiescent configuration performs κ t-reads and incurs a capacity abort. We prove that, for all m ≤ κ − 1, there exists a t-complete execution E m and a set S m with |S m | = 2^{κ−m} of read-only fast-path transactions that access mutually disjoint data sets such that each transaction in S m that runs step contention-free from E m and performs t-reads of m distinct t-objects accesses at least one distinct metadata base object within the execution of each t-read operation. We proceed by induction. Assume that the induction statement holds for some m < κ − 1. We prove that a set S m+1 , |S m+1 | = 2^{κ−(m+1)}, of fast-path transactions, each of which runs step contention-free after the same t-complete execution E m+1 , performs m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each t-read operation. In our construction, we pick any two new transactions from the set S m and show that one of them running step contention-free from a t-complete execution that extends E m performs m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each t-read operation.
In this way, the set of transactions is reduced by half in each step of the induction until one transaction remains, which must have accessed a distinct metadata base object in every one of its m + 1 t-reads. Intuitively, since all the transactions that we use in our construction access mutually disjoint data sets, we can apply Lemma 7.6 to construct a t-complete execution E m+1 such that each of the fast-path transactions in S m+1 , when running step contention-free after E m+1 , performs m + 1 t-reads so that at least one distinct metadata base object is accessed within the execution of each t-read operation. We now present the formal proof: Proof. In the constructions which follow, every fast-path transaction executes at most m + 1 t-reads. Let κ be the smallest integer such that some fast-path transaction running step contention-free after a t-quiescent configuration performs κ t-reads and incurs a capacity abort. We proceed by induction. Induction statement. We prove that, for all m ≤ κ − 1, there exists a t-complete execution E m and a set S m with |S m | = 2^{κ−m} of read-only fast-path transactions that access mutually disjoint data sets such that each transaction T fi ∈ S m that runs step contention-free from E m and performs t-reads of m distinct t-objects accesses at least one distinct metadata base object within the execution of each t-read operation. Let E fi be the step contention-free execution of T fi after E m and let Dset(T fi ) = {X i,1 , . . . , X i,m }. The induction. Assume that the induction statement holds for some m < κ − 1; the statement is trivially true for the base case m = 0 for every κ ∈ N. We will prove that a set S m+1 , |S m+1 | = 2^{κ−(m+1)}, of fast-path transactions, each of which runs step contention-free from the same t-complete execution E m+1 , performs m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each t-read operation.
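The counting behind this halving argument can be sketched numerically. The helper below is an illustrative assumption, not part of the proof:

```python
def surviving_transactions(kappa, m):
    """Size of the set S_m in the induction: starting from |S_0| = 2**kappa
    candidate read-only fast-path transactions, each induction step keeps
    one transaction out of every pair, so |S_m| = 2**(kappa - m)."""
    assert 0 <= m <= kappa
    size = 2 ** kappa            # |S_0|
    for _ in range(m):           # one halving per induction step
        size //= 2               # |S_{m+1}| = |S_m| / 2
    return size
```

After m = κ − 1 steps, a pair of transactions remains; the final step leaves a single transaction, which must have accessed a distinct metadata base object in each of its t-reads.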
The construction proceeds in phases: there are exactly |S m |/2 phases. In each phase, we pick any two new transactions from the set S m and show that one of them, running step contention-free after a t-complete execution that extends E m , performs m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each t-read operation. Throughout this proof, we will assume that any two transactions (resp. execution fragments) with distinct subscripts represent distinct identifiers. For all i ∈ {0, . . . , |S m |/2 − 1}, let X 2i+1 , X 2i+2 ∉ ⋃ i=0..|S m |−1 {X i,1 , . . . , X i,m } be distinct t-objects and let v be the value of X 2i+1 and X 2i+2 after E m . Let T si denote a slow-path transaction which writes nv ≠ v to X 2i+1 and X 2i+2 . Let E si be the t-complete step contention-free execution fragment of T si running immediately after E m . Let E′ si be the longest prefix of the execution E si such that E m · E′ si can be extended neither with the complete step contention-free execution fragment of transaction T f2i+1 that performs its m t-reads of X 2i+1,1 , . . . , X 2i+1,m and then performs read f2i+1 (X 2i+1 ) and returns nv, nor with the complete step contention-free execution fragment of some transaction T f2i+2 that performs t-reads of X 2i+2,1 , . . . , X 2i+2,m and then performs read f2i+2 (X 2i+2 ) and returns nv. Progressiveness and OF TM-liveness of M stipulate that such an execution exists. Let e i be the enabled event of T si in the configuration after E m · E′ si . By construction, the execution E m · E′ si · e i can be extended with at least one of the complete step contention-free executions of transaction T f2i+1 performing (m + 1) t-reads of X 2i+1,1 , . . . , X 2i+1,m , X 2i+1 such that read f2i+1 (X 2i+1 ) → nv or transaction T f2i+2 performing t-reads of X 2i+2,1 , . . . , X 2i+2,m , X 2i+2 such that read f2i+2 (X 2i+2 ) → nv.
Without loss of generality, suppose that T f2i+1 reads the value of X 2i+1 to be nv after E m · E′ si · e i . For any i ∈ {0, . . . , |S m |/2 − 1}, we will denote by α i the execution fragment which we will construct in phase i. For any i ∈ {0, . . . , |S m |/2 − 1}, we prove that M has an execution of the form E m · α i in which T f2i+1 (or T f2i+2 ) running step contention-free after a t-complete execution that extends E m performs m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each of the first m t-read operations, and T f2i+1 (or T f2i+2 ) is poised to apply an event after E m · α i that accesses a distinct metadata base object during the (m + 1) th t-read. Furthermore, we will show that E m · α i appears t-sequential to T f2i+1 (or T f2i+2 ). (Construction of phase i) Let E f2i+1 (resp. E f2i+2 ) be the complete step contention-free execution of the t-reads of X 2i+1,1 , . . . , X 2i+1,m (resp. X 2i+2,1 , . . . , X 2i+2,m ) running after E m by T f2i+1 (resp. T f2i+2 ). By the inductive hypothesis, transaction T f2i+1 (resp. T f2i+2 ) accesses m distinct metadata objects in the execution E m · E f2i+1 (resp. E m · E f2i+2 ). Recall that transaction T f2i+1 does not conflict with transaction T si . Thus, by Corollary 7.7, M has an execution of the form E m · E′ si · e i · E f2i+1 (resp. E m · E′ si · e i · E f2i+2 ). Let E rf2i+1 be the complete step contention-free execution fragment of read f2i+1 (X 2i+1 ) that extends E 2i+1 = E m · E′ si · e i · E f2i+1 . By OF TM-liveness, read f2i+1 (X 2i+1 ) must return a matching response in E 2i+1 · E rf2i+1 . We now consider two cases. Case I: Suppose E rf2i+1 accesses at least one metadata base object b not previously accessed by T f2i+1 . Let E′ rf2i+1 be the longest prefix of E rf2i+1 which does not apply any primitives to any metadata base object not previously accessed by T f2i+1 .
The execution E m · E ′ si · e i · E f2i+1 · E ′ rf2i+1 appears t-sequential to T f2i+1 because E f2i+1 does not contend with T si on any base object, and any common base object accessed in the execution fragments E rf2i+1 and E si by T f2i+1 and T si respectively must be a data base object contained in D. Thus, we have that |Dset(T f2i+1 )| = m + 1 and that T f2i+1 accesses m distinct metadata base objects, at least one within each of its first m t-read operations, and is poised to access a distinct metadata base object during the execution of the (m + 1) th t-read. In this case, let α i = E ′ si · e i · E f2i+1 · E ′ rf2i+1 .

Case II: Suppose that E rf2i+1 does not access any metadata base object not previously accessed by T f2i+1 . In this case, we first prove the following:

Claim 7.10. M has an execution of the form E 2i+2 = E m · E ′ si · e i · Ē f2i+1 · E f2i+2 , where Ē f2i+1 is the t-complete step contention-free execution of T f2i+1 in which read f2i+1 (X 2i+1 ) → nv, after which T f2i+1 invokes tryC f2i+1 and returns a matching response.

Proof. Since E rf2i+1 does not contain accesses to any distinct metadata base objects, the execution E m · E ′ si · e i · E f2i+1 · E rf2i+1 appears t-sequential to T f2i+1 . By definition of the event e i , read f2i+1 (X 2i+1 ) must access the base object to which e i applies a nontrivial primitive and return the response nv in E m · E ′ si · e i · E f2i+1 · E rf2i+1 . By OF TM-liveness, it follows that E m · E ′ si · e i · Ē f2i+1 is an execution of M. Now recall that E m · E ′ si · e i · E f2i+2 is an execution of M because transactions T f2i+2 and T si do not conflict in this execution and thus cannot contend on any base object. Finally, because T f2i+1 and T f2i+2 access disjoint data sets in E m · E ′ si · e i · Ē f2i+1 · E f2i+2 , by Lemma 7.6 again, we have that E m · E ′ si · e i · Ē f2i+1 · E f2i+2 is an execution of M.

Let E rf2i+2 be the complete step contention-free execution fragment of read f2i+2 (X 2i+2 ) after E m · E ′ si · e i · Ē f2i+1 · E f2i+2 .
By the induction hypothesis and Claim 7.10, transaction T f2i+2 must access m distinct metadata base objects in the execution E m · E ′ si · e i · Ē f2i+1 · E f2i+2 . If E rf2i+2 accesses some metadata base object not previously accessed by T f2i+2 , then, by the argument given in Case I applied to transaction T f2i+2 , we get that T f2i+2 accesses m distinct metadata base objects within the first m t-read operations and is poised to access a distinct metadata base object during the execution of the (m + 1) th t-read. Thus, suppose that E rf2i+2 accesses no metadata base object that was not previously accessed by T f2i+2 . We claim that this is impossible and proceed to derive a contradiction. In particular, E rf2i+2 does not contend with T si on any metadata base object. Consequently, the execution E m · E ′ si · e i · Ē f2i+1 · E f2i+2 appears t-sequential to T f2i+2 , since E rf2i+2 only contends with T si on base objects in D. It follows that E 2i+2 · E rf2i+2 must also appear t-sequential to T f2i+2 , and so read f2i+2 (X 2i+2 ) cannot abort. Recall that the base object, say b, to which T si applies a nontrivial primitive in the event e i is accessed by T f2i+1 in E m · E ′ si · e i · Ē f2i+1 · E f2i+2 ; thus, b ∈ D X2i+1 . Since X 2i+1 ∉ Dset(T f2i+2 ), b cannot be accessed by T f2i+2 . Thus, the execution E m · E ′ si · e i · Ē f2i+1 · E f2i+2 · E rf2i+2 is indistinguishable to T f2i+2 from the execution E m · E ′ si · E f2i+2 · E rf2i+2 , in which read f2i+2 (X 2i+2 ) must return the response v (by construction of E ′ si ). But we observe now that the execution E m · E ′ si · e i · Ē f2i+1 · E f2i+2 · E rf2i+2 is not opaque. In any serialization corresponding to this execution, T si must be committed and must precede T f2i+1 because T f2i+1 read nv from X 2i+1 . Also, transaction T f2i+2 must precede T si because T f2i+2 read v from X 2i+2 . However, T f2i+1 must precede T f2i+2 to respect the real-time ordering of transactions. Clearly, there exists no such serialization; a contradiction.
Letting E ′ rf2i+2 be the longest prefix of E rf2i+2 which does not access a base object b ∈ M not previously accessed by T f2i+2 , we can let α i = E ′ si · e i · Ē f2i+1 · E f2i+2 · E ′ rf2i+2 in this case. Combining Cases I and II, the following claim holds.

Claim 7.11. For each i ∈ {0, . . . , |S m |/2 − 1}, M has an execution of the form E m · α i in which (1) some fast-path transaction T i ∈ txns(α i ) performs t-reads of m + 1 distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each of the first m t-reads, T i is poised to access a distinct metadata base object after E m · α i during the execution of the (m + 1) th t-read, and the execution appears t-sequential to T i , and (2) the two fast-path transactions in the execution fragment α i do not contend on the same base object.

(Collecting the phases) We now describe how to construct the set S m+1 of fast-path transactions from these |S m |/2 phases and force each of them to access m + 1 distinct metadata base objects when running step contention-free after the same t-complete execution. For each i ∈ {0, . . . , |S m |/2 − 1}, let β i be the subsequence of the execution α i consisting of all the events of the fast-path transaction that is poised to access an (m + 1) th distinct metadata base object. Henceforth, we denote by T i the fast-path transaction that participates in β i . Then, from Claim 7.11, it follows that, for each i ∈ {0, . . . , |S m |/2 − 1}, M has an execution of the form E m · E ′ si · e i · β i in which the fast-path transaction T i performs t-reads of m + 1 distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each of the first m t-reads, T i is poised to access a distinct metadata base object after E m · E ′ si · e i · β i during the execution of the (m + 1) th t-read, and the execution appears t-sequential to T i .
The following corollary to the above claim is obtained by applying the definition of "appears t-sequential". Recall that E ′ si · e i is the t-incomplete execution of slow-path transaction T si that accesses t-objects X 2i+1 and X 2i+2 . For all i ∈ {0, . . . , |S m |/2 − 1}, M has an execution of the form E m · E i · β i such that the configuration after E m · E i is t-quiescent, txns(E i ) ⊆ {T si } and Dset(T si ) ⊆ {X 2i+1 , X 2i+2 } in E i . We can represent the execution β i = γ i · ρ i , where the fast-path transaction T i performs complete t-reads of m distinct t-objects in γ i and then performs an incomplete t-read of the (m + 1) th t-object in ρ i , in which T i only accesses base objects in ⋃_{X∈Dset(T i )} D X . Recall that T i and T si do not contend on the same base object in the execution E m · E i · γ i . Thus, for all i ∈ {0, . . . , |S m |/2 − 1}, M has an execution of the form E m · γ i · E i · ρ i .

Observe that the fast-path transaction T i participating in γ i does not access any t-object that is accessed by any slow-path transaction in the execution fragment E 0 · · · E |S m |/2−1 . By Lemma 7.8, there exists a t-complete step contention-free execution fragment E ′ that is similar to E 0 · · · E |S m |/2−1 such that, for all i ∈ {0, . . . , |S m |/2 − 1}, M has an execution of the form E m · E ′ · γ i · ρ i . By our construction, the enabled event of each fast-path transaction T i participating in β i in this execution is an access to a distinct metadata base object. Let S m+1 denote the set of all fast-path transactions that participate in the execution fragment β 0 · · · β |S m |/2−1 and let E m+1 = E m · E ′ . Thus, |S m+1 | fast-path transactions, each of which runs step contention-free from the same t-quiescent configuration, perform m + 1 t-reads of distinct t-objects so that at least one distinct metadata base object is accessed within the execution of each t-read operation. This completes the proof.
Instrumentation-optimal progressive HyTM

We prove that the lower bound in Theorem 7.9 is tight by describing an "instrumentation-optimal" HyTM implementation (Algorithm 7.1) that is opaque and progressive, provides wait-free TM-liveness, and uses invisible reads.

Base objects. For every t-object X j , our implementation maintains a base object v j ∈ D that stores the value of X j and a metadata base object r j , which is a lock bit that stores 0 or 1.

Fast-path transactions. For a fast-path transaction T k , the read k (X j ) implementation first reads r j to check if X j is locked by a concurrent updating transaction. If so, it returns A k ; otherwise, it returns the value of X j . Updating fast-path transactions use uninstrumented writes: write k (X j , v) simply stores the cached state of X j along with its value v and, if the cache has not been invalidated, updates the shared memory during tryC k by invoking the commit-cache primitive.

Slow-path read-only transactions. Any read k (X j ) invoked by a slow-path transaction first reads the value of the object from v j , checks if r j is set, and then performs value-based validation on its entire read set to check if any of the read objects have been modified. If either of these conditions is true, the transaction returns A k . Otherwise, it returns the value of X j . A read-only transaction simply returns C k during the tryCommit.

Slow-path updating transactions. The write k (X j , v) implementation of a slow-path transaction stores v and the current value of X j locally, deferring the actual update in shared memory to tryCommit. During tryC k , an updating slow-path transaction T k attempts to obtain exclusive write access to its entire write set as follows: for every t-object X j ∈ Wset(T k ), it writes 1 to each base object r j by performing a compare-and-set (cas) primitive that checks if the value of r j is not 1 and, if so, replaces it with 1.
If the cas fails, then T k releases the locks on all objects X j it had previously acquired by writing 0 to the corresponding r j and then returns A k . Intuitively, if the cas fails, some concurrent transaction is performing a t-write to a t-object in Wset(T k ). If all the locks on the write set were acquired successfully, T k checks if any t-object in Rset(T k ) is concurrently being updated by another transaction and then performs value-based validation of the read set. If a conflict is detected from these checks, the transaction is aborted. Finally, tryC k attempts to write the values of the t-objects via cas operations. If any cas on the individual base objects fails, there must be a concurrent fast-path writer, and so T k rolls back the state of the base objects that were updated, releases the locks on its write set and returns A k . The rollbacks are performed with cas operations, skipping any that fail, so as to allow for concurrent fast-path writes to locked locations. Note that if a concurrent read operation of a fast-path transaction T finds an "invalid" value in v j that was written by such a transaction T k but has not been rolled back yet, then T either incurs a tracking set abort later because T k has updated v j , or finds r j to be 1. In both cases, the read operation of T aborts. The implementation uses invisible reads (no nontrivial primitives are applied by reading transactions). Every t-operation returns a matching response within a finite number of its steps.

Algorithm 7.1 Progressive opaque HyTM implementation that provides uninstrumented writes and invisible reads; code for process p i executing transaction T k
1: Shared objects:
2:   v j ∈ D, for each t-object X j
3:     allows reads, writes and cas
4:   r j ∈ M, for each t-object X j
5:     allows reads, writes and cas
6: Local objects:

Completions.
First, we obtain a completion of E by removing some pending invocations or adding responses to the remaining pending invocations as follows:

• Every incomplete read k or write k operation performed by a slow-path transaction T k is removed from E; an incomplete tryC k is removed from E if T k has not performed any write to a base object v j , X j ∈ Wset(T k ), in Line 36; otherwise, it is completed by including C k after E.
• Every incomplete read k , tryA k , write k and tryC k performed by a fast-path transaction T k is removed from E.

Linearization points. Now a linearization H of E is obtained by associating linearization points to the t-operations in the obtained completion of E. For the t-operations performed by a slow-path transaction T k , linearization points are assigned as follows:

• For every t-read op k that returns a non-A k value, the linearization point ℓ op k is chosen as the event in Line 11 of Algorithm 7.1; otherwise, ℓ op k is chosen as the invocation event of op k .
• For every op k = write k that returns, ℓ op k is chosen as the invocation event of op k .
• For every op k = tryC k that returns C k such that Wset(T k ) ≠ ∅, ℓ op k is associated with the first write to a base object performed by release when invoked in Line 40; if op k returns A k , ℓ op k is associated with the invocation event of op k .
• For every op k = tryC k that returns C k such that Wset(T k ) = ∅, ℓ op k is associated with Line 28.

For the t-operations performed by a fast-path transaction T k , linearization points are assigned as follows:

• For every t-read op k that returns a non-A k value, ℓ op k is chosen as the event in Line 66 of Algorithm 7.1; otherwise, ℓ op k is chosen as the invocation event of op k .
• For every op k that is a tryC k , ℓ op k is the commit-cache k primitive invoked by T k .
• For every op k that is a write k , ℓ op k is the event in Line 71.

< H denotes the total order on t-operations in the complete sequential history H.

Serialization points.
The serialization point of a transaction T j , denoted δ Tj , is associated with the linearization point of a t-operation performed by the transaction. We obtain a t-complete history H̄ from H as follows: for every transaction T k in H that is complete, but not t-complete, we insert tryC k · A k immediately after the last event of T k in H. A serialization S is obtained by associating serialization points to transactions in H̄ as follows:

• If T k is an updating transaction that commits, then δ T k is ℓ tryC k .
• If T k is a read-only or aborted transaction, then δ T k is assigned to the linearization point of the last t-read that returned a non-A k value in T k .

< S denotes the total order on transactions in the t-sequential history S.

Claim 7.14. If T i ≺ H T j , then T i < S T j .

Proof. This follows from the fact that, for a given transaction, its serialization point is chosen between the first and last event of the transaction; hence, if T i ≺ H T j , then δ Ti < E δ Tj , which implies T i < S T j .

Claim 7.15. S is legal.

Proof. We claim that, for every read j (X m ) → v, there exists a slow-path (resp., fast-path) transaction T i that performs write i (X m , v) and completes the event in Line 36 (resp., Line 71) such that ¬(read j (X m ) ≺ RT H write i (X m , v)). Suppose that T i is a slow-path transaction: since read j (X m ) returns the response v, the event in Line 11 succeeds the event in Line 36 performed by tryC i . Since read j (X m ) can return a non-abort response only after T i writes 0 to r m in Line 52, T i must be committed in S. Consequently, ℓ tryC i < E ℓ read j (X m ) . Since, for any updating committing transaction T i , δ Ti = ℓ tryC i , it follows that δ Ti < E δ Tj . If, otherwise, T i is a fast-path transaction, then clearly T i is a committed transaction in S. Recall that read j (X m ) can read v during the event in Line 11 only after T i applies the commit-cache primitive.
By the assignment of linearization points, ℓ tryC i < E ℓ read j (X m ) , and thus δ Ti < E ℓ read j (X m ) . Thus, to prove that S is legal, it suffices to show that there does not exist a transaction T k that returns C k in S and performs write k (X m , v ′ ) with v ′ ≠ v such that T i < S T k < S T j . T i and T k are both updating transactions that commit. Thus,

(T i < S T k ) ⇐⇒ (δ Ti < E δ T k )
(δ Ti < E δ T k ) ⇐⇒ (ℓ tryC i < E ℓ tryC k )

Since T j reads the value of X m written by T i , one of the following is true: ℓ tryC i < E ℓ tryC k < E ℓ read j (X m ) or ℓ tryC i < E ℓ read j (X m ) < E ℓ tryC k . Suppose that ℓ tryC i < E ℓ tryC k < E ℓ read j (X m ) .

(Case I:) T i and T k are slow-path transactions. Thus, T k returns a response from the event in Line 29 before the read of the base object associated with X m by T j in Line 11. Since T i and T k are both committed in E, T k returns true from the event in Line 29 only after T i writes 0 to r m in Line 52. If T j is a slow-path transaction, recall that read j (X m ) checks if X m is locked by a concurrent transaction and then performs read validation (Line 13) before returning a matching response. We claim that read j (X m ) must return A j in any such execution. Consider the following possible sequence of events: T k returns true from the acquire function invocation, updates the value of X m in shared memory (Line 36), T j reads the base object v m associated with X m , T k releases X m by writing 0 to r m , and finally T j performs the check in Line 13. But in this case, read j (X m ) is forced to return the value v ′ written by T k , contradicting the assumption that read j (X m ) returns v. Otherwise, suppose that T k acquires exclusive access to X m by writing 1 to r m and returns true from the invocation of acquire, updates v m (Line 36), T j reads v m , T j performs the check in Line 13, and finally T k releases X m by writing 0 to r m . Again, read j (X m ) must return A j since T j reads that r m is 1; a contradiction.
A similar argument applies to the case where T j is a fast-path transaction. Indeed, since every data base object read by T j is contained in its tracking set, if any concurrent transaction updates any t-object in its read set, T j is aborted immediately in our model (cf. Section 7.2.2). Thus, ℓ tryC i < E ℓ read j (X m ) < E ℓ tryC k .

(Case II:) T i is a slow-path transaction and T k is a fast-path transaction. Thus, T k returns C k before the read of the base object associated with X m by T j in Line 11, but after the response of acquire by T i in Line 29. Since read j (X m ) reads the value of X m to be v and not v ′ , T i performs the cas on v m in Line 36 after T k performs the commit-cache primitive (since otherwise, T k would be aborted in E). But then the cas on v m performed by T i would return false and T i would return A i ; a contradiction.

(Case III:) T k is a slow-path transaction and T i is a fast-path transaction. This case is analogous to the previous one.

(Case IV:) T i and T k are fast-path transactions. Thus, T k returns C k before the read of the base object associated with X m by T j in Line 11, but after T i returns C i (this follows from Observations 7.1 and 7.2). Consequently, read j (X m ) must read the value of X m to be v ′ and return v ′ ; a contradiction.

We now need to prove that δ Tj indeed precedes ℓ tryC k in E. Consider the two possible cases:

• Suppose that T j is a read-only transaction. Then, δ Tj is assigned to the last t-read performed by T j that returns a non-A j value. If read j (X m ) is not the last t-read that returned a non-A j value, then there exists a read j (X ′ ) such that ℓ read j (X m ) < E ℓ tryC k < E ℓ read j (X ′ ) . But then this t-read of X ′ must abort by performing the checks in Line 13 or incur a tracking set abort; a contradiction.
• Suppose that T j is an updating transaction that commits; then δ Tj = ℓ tryC j , which implies that ℓ read j (X m ) < E ℓ tryC k < E ℓ tryC j .
Then T j must necessarily perform the checks in Line 32 and return A j , or incur a tracking set abort; a contradiction with the assumption that T j is a committed transaction. The proof follows.

The conjunction of Claims 7.14 and 7.15 establishes that Algorithm 7.1 is opaque.

Theorem 7.16. There exists an opaque HyTM implementation that provides uninstrumented writes, invisible reads, progressiveness and wait-free TM-liveness such that, in its every execution E, every read-only fast-path transaction T ∈ txns(E) accesses O(|Rset(T )|) distinct metadata base objects.

Proof. (Opacity) Follows from Lemma 7.13.

(TM-liveness and TM-progress) Since none of the implementations of the t-operations in Algorithm 7.1 contain unbounded loops or waiting statements, Algorithm 7.1 provides wait-free TM-liveness, i.e., every t-operation returns a matching response after taking a finite number of steps. Consider the cases under which a slow-path transaction T k may be aborted in an execution.

• Suppose that there exists a read k (X j ) performed by T k that returns A k from Line 13. Thus, there exists a transaction that has written 1 to r j in Line 44 but has not yet written 0 to r j in Line 52, or some t-object in Rset(T k ) has been updated since its t-read by T k . In both cases, there exists a concurrent transaction performing a t-write to some t-object in Rset(T k ), i.e., a read-write conflict.
• Suppose that tryC k performed by T k returns A k from Line 30. Thus, there exists a transaction that has written 1 to r j in Line 44 but has not yet written 0 to r j in Line 52. Thus, T k encounters a write-write conflict with another transaction that concurrently attempts to update a t-object in Wset(T k ).
• Suppose that tryC k performed by T k returns A k from Line 32. Since T k returns A k from Line 32 for the same reason it returns A k after Line 13, the proof follows.

Consider the cases under which a fast-path transaction T k may be aborted in an execution E.
• Suppose that a read k (X m ) performed by T k returns A k from Line 67. Thus, there exists a concurrent slow-path transaction that is pending in its tryCommit and has written 1 to r m but not yet released the lock on X m , i.e., T k conflicts with another transaction in E.
• Suppose that T k returns A k while performing a cached access of some base object b via a trivial (resp., nontrivial) primitive. Indeed, this is possible only if some concurrent transaction writes (resp., reads or writes) to b. However, two transactions T k and T m may contend on b in E only if there exists X ∈ Dset(T k ) ∩ Dset(T m ) such that X ∈ Wset(T k ) ∪ Wset(T m ). The same argument applies to the case when T k returns A k while performing commit-cache k in E.

(Complexity) The implementation uses uninstrumented writes since each write k (X m ) simply writes to v m ∈ D Xm and does not access any metadata base object. The complexity of each read k (X m ) is a single access to the metadata base object r m in Line 67, which is not accessed by any other transaction T i unless X m ∈ Dset(T i ), while tryC k just invokes commit-cache k , which returns C k . Thus, each read-only fast-path transaction T k accesses O(|Rset(T k )|) distinct metadata base objects in any execution.
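To make the slow-path protocol concrete, the following is a minimal Python sketch of its logic: invisible reads with value-based validation, and lock bits acquired with cas at commit time. This is not the thesis's Algorithm 7.1 itself (in particular, the fast path, which relies on hardware cache tracking, is omitted), and the names `HyTMObject`, `SlowPathTx` and the use of a Python lock to simulate an atomic cas are our own assumptions.

```python
import threading

class HyTMObject:
    """Per-t-object state, mirroring the v_j / r_j split: a data base
    object holding the value and a metadata lock bit."""
    def __init__(self, value=0):
        self.value = value                  # v_j in D
        self.lock_bit = 0                   # r_j in M: 1 iff locked by a committing writer
        self._guard = threading.Lock()      # simulates atomicity of the cas primitive

    def cas_lock(self, expected, new):
        # Hypothetical stand-in for a hardware cas on the lock bit r_j.
        with self._guard:
            if self.lock_bit == expected:
                self.lock_bit = new
                return True
            return False

class Aborted(Exception):
    pass

class SlowPathTx:
    """Sketch of a slow-path transaction: invisible reads with value-based
    validation, deferred writes, cas-acquired lock bits at commit."""
    def __init__(self):
        self.rset = {}   # object -> value observed at first read
        self.wset = {}   # object -> deferred new value

    def read(self, obj):
        if obj in self.wset:
            return self.wset[obj]
        v = obj.value
        if obj.lock_bit == 1:                # locked by a concurrent committing writer
            raise Aborted()
        for o, seen in self.rset.items():    # value-based validation of the read set
            if o.value != seen:
                raise Aborted()
        self.rset.setdefault(obj, v)
        return v

    def write(self, obj, value):
        self.wset[obj] = value               # actual update deferred to try_commit

    def try_commit(self):
        acquired = []
        for obj in self.wset:                # lock the entire write set
            if not obj.cas_lock(0, 1):
                for o in acquired:
                    o.lock_bit = 0           # release and abort on conflict
                raise Aborted()
            acquired.append(obj)
        for o, seen in self.rset.items():    # validate the read set once more
            if (o not in self.wset and o.lock_bit == 1) or o.value != seen:
                for obj in acquired:
                    obj.lock_bit = 0
                raise Aborted()
        for obj, value in self.wset.items(): # apply the deferred writes
            obj.value = value
        for obj in acquired:                 # release the locks
            obj.lock_bit = 0
        return "commit"
```

A read-only transaction never applies a nontrivial primitive here, matching the invisible-reads property discussed above; all nontrivial synchronization is confined to `try_commit`.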
7.5 Providing partial concurrency at low cost

Algorithm 7.2 Opaque HyTM implementation with progressive slow-path and sequential fast-path TM-progress; code for T k by process p i
1: Shared objects:
2:   v j ∈ D, for each t-object X j
3:     allows reads, writes and cas
4:   r j ∈ M, for each t-object X j
5:     allows reads, writes and cas
6:   fa, fetch-and-add object

Code for slow-path transactions
    if Wset(T k ) = ∅ then
9:    Return C k
10:  locked := acquire(Wset(T k ))
11:  if ¬locked then
12:    Return A k
13:  fa.add(1)
14:  if isAbortable() then
15:    release(Lset(T k ))
16:    Return A k
17:  for all X j ∈ Wset(T k ) do
18:    if v j .cas((ov j , k j ), (nv j , k)) then
       Return undo(Oset(T k ))
22:  release(Wset(T k ))
23:  Return C k
24: Function: release(Q):
25:  for all X j ∈ Q do
26:    r j .write(0)
27:  fa.add(−1)
28:  Return ok

Code for fast-path transactions
    commit-cache i // returns C k or A k

We showed that allowing fast-path transactions to run concurrently in HyTM results in an instrumentation cost that is proportional to the read-set size of a fast-path transaction. But can we run at least some transactions concurrently with constant instrumentation cost, while still keeping invisible reads? Algorithm 7.2 implements a slow-path progressive opaque HyTM with invisible reads and wait-free TM-liveness. To fast-path transactions, it only provides sequential TM-progress (they are only guaranteed to commit in the absence of concurrency), but in return the algorithm uses only a single metadata base object fa that is read once by a fast-path transaction and accessed twice with a fetch-and-add primitive by an updating slow-path transaction. Thus, the instrumentation cost of the algorithm is constant. Intuitively, fa allows fast-path transactions to detect the existence of concurrent updating slow-path transactions.
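This detection mechanism can be sketched in a few lines of Python. The sketch below is illustrative only: `FetchAndAdd` is a plain-counter stand-in we introduce for the atomic fetch-and-add object fa, and the function names are ours, not the algorithm's pseudocode.

```python
class FetchAndAdd:
    """Stand-in for the single metadata base object fa; in the real
    algorithm this is an atomic hardware fetch-and-add."""
    def __init__(self):
        self.value = 0
    def add(self, delta):
        self.value += delta      # assumed atomic
    def read(self):
        return self.value

fa = FetchAndAdd()

def fast_path_first_read():
    # A fast-path transaction reads fa once during its first t-read;
    # a nonzero value reveals a pending slow-path updater, so it aborts.
    return fa.read() == 0        # True: proceed; False: abort

def slow_path_try_commit(apply_updates):
    # An updating slow-path transaction brackets its commit-time writes
    # with fa.add(1) / fa.add(-1).
    fa.add(1)
    try:
        apply_updates()          # lock write set, validate, write back
    finally:
        fa.add(-1)
```

Because fa sits in the tracking set of every live fast-path reader, the slow-path increment itself also aborts any hardware transaction that already passed the initial check.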
Each time an updating slow-path transaction tries to commit, it increments fa, and once all writes to data base objects are completed (this part of the algorithm is identical to Algorithm 7.1) or the transaction is aborted, it decrements fa. Therefore, fa ≠ 0 means that at least one slow-path updating transaction is incomplete. A fast-path transaction simply checks if fa = 0 in the beginning and aborts if not; otherwise, its code is identical to that in Algorithm 7.1. Note that, this way, any update of fa automatically causes a tracking set abort of any incomplete fast-path transaction.

Theorem 7.17. There exists an opaque HyTM implementation that provides uninstrumented writes, invisible reads, progressiveness for slow-path transactions, sequential TM-progress for fast-path transactions and wait-free TM-liveness such that, in its every execution E, every fast-path transaction accesses at most one metadata base object.

Proof. The proof of opacity is almost identical to the analogous proof for Algorithm 7.1 in Lemma 7.13. As with Algorithm 7.1, enumerating the cases under which a slow-path transaction T k returns A k proves that Algorithm 7.2 satisfies progressiveness for slow-path transactions. Any fast-path transaction T k with Rset(T k ) ≠ ∅ reads the metadata base object fa and adds it to the process's tracking set (Line 31). If the value of fa is not 0, indicating that there exists a concurrent slow-path transaction pending in its tryCommit, T k returns A k . Thus, the implementation provides sequential TM-progress for fast-path transactions. Also, in every execution E of M, no fast-path write-only transaction accesses any metadata base object, and a fast-path reading transaction accesses the metadata base object fa exactly once, during the first t-read.

Related work and Discussion

HyTM model. Our HyTM model is a natural extension of the model we specified for Software Transactional Memory (cf. Chapter 2), and has the advantage of being relatively simple.
The term instrumentation was originally used in the context of HyTMs [35,99,112] to indicate the overhead a hardware transaction induces in order to detect pending software transactions. The impossibility of designing HyTMs without any code instrumentation was intuitively suggested in [35]; we presented a formal proof in this chapter. In [23], Attiya and Hillel considered the instrumentation cost of privatization, i.e., allowing transactions to isolate data items by making them private to a process so that no other process is allowed to modify the privatized item. Just as we capture a tradeoff between the cost of hardware instrumentation and the amount of concurrency allowed between hardware and software transactions, [23] captures a tradeoff between the cost of privatization and the number of transactions guaranteed to make progress concurrently in progressive STMs. The model we consider is fundamentally different from that of [23], in that we model hardware transactions at the level of cache coherence and do not consider non-transactional accesses, i.e., neither data nor metadata base objects are private in our HyTM model. The proof techniques we employ are also different. Uninstrumented HTMs may be viewed as being disjoint-access parallel (DAP) [24,86]. As such, some of the techniques used in the proof of Theorem 7.4 resemble those used in [24,60,64]. We have proved that it is impossible to completely forgo instrumentation in a HyTM even if only sequential TM-progress is required, and that any opaque HyTM implementation providing non-trivial progress either has to pay a linear number of metadata accesses, or has to allow slow-path transactions to abort fast-path operations. The main motivation for our definition of metadata base objects (Definition 7.1) is given by experiments suggesting that the cost of concurrency detection is a significant bottleneck for many HyTM implementations [102].
To precisely characterize the costs incurred by hardware transactions, we made a distinction between the set of memory locations that store the data values of the t-objects and the locations that store the metadata information. To the best of our knowledge, all known HyTM proposals, such as HybridNOrec [35,112], PhTM [99] and others [37,88], avoid co-locating the data and metadata within a single base object.

HyTM algorithms. Circa 2005, several papers introduced HyTM implementations [12,37,88] that integrated HTMs with variants of DSTM [79]. These implementations provide nontrivial concurrency between hardware and software transactions (progressiveness) by imposing instrumentation on hardware transactions: every t-read operation incurs at least one extra access to a metadata base object. Our Theorem 7.9 shows that this overhead is unavoidable. Of note, write operations of these HyTMs are also instrumented, but our Algorithm 7.1 shows that it is not necessary. Implementations like PhTM [99] and HybridNOrec [35] overcome the per-access instrumentation cost of [37,88] by realizing that if one is prepared to sacrifice progress, hardware transactions need instrumentation only at the boundaries of transactions to detect pending software transactions. Inspired by this observation, our HyTM implementation described in Algorithm 7.2 overcomes the linear per-read instrumentation cost by allowing hardware readers to abort due to a concurrent software writer, but maintains progressiveness for software transactions, unlike [35,99,102].

8 Optimism for boosting concurrency

The wickedness and the foolishness of no man can avail against the fond optimism of mankind.
James Branch Cabell, The Silver Stallion

Overview

In previous chapters, we were concerned with the inherent complexities of implementing TM.
In this chapter, we are concerned with using TM to derive concurrent implementations, and we raise a fundamental question about the ability of the TM abstraction to transform a sequential implementation into a concurrent one. Specifically, does the optimistic nature of TM give it an inherent advantage in exploiting concurrency that is lacking in pessimistic synchronization techniques like locking? To exploit concurrency, conventional lock-based synchronization pessimistically protects accesses to the shared memory before executing them. Speculative synchronization, achieved using TMs, optimistically executes memory operations with a risk of aborting them in the future. A programmer typically uses these synchronization techniques as "wrappers" to allow every process (or thread) to locally run its sequential code while ensuring that the resulting concurrent execution is globally correct. Unfortunately, it is difficult for programmers to tell in advance which of the synchronization techniques will establish more concurrency in their resulting programs. In this chapter, we analyze the "amount of concurrency" one can obtain by turning a sequential program into a concurrent one. In particular, we compare the use of optimistic and pessimistic synchronization techniques, whose prime examples are TMs and locks, respectively. To fairly compare the concurrency provided by implementations based on various techniques, one has (1) to define what it means for a concurrent program to be correct regardless of the type of synchronization it uses and (2) to define a metric of concurrency. Correctness. First, we require that the implementation be locally serializable: the local execution of every operation must be consistent with some sequential execution of the corresponding sequential program. This condition is weaker than serializability since it does not require that there exists a single sequential execution that is consistent with all local executions. It is however sufficient to guarantee that optimistic executions do not observe an inconsistent transient state that could lead, for example, to a fatal error like division-by-zero.
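The division-by-zero hazard can be made concrete with a small contrived example of our own: a writer maintains the invariant y = x + 1, and an unsynchronized optimistic reader that observes x after an update but y before it divides by zero. The names and the interleaving flag below are purely illustrative assumptions.

```python
# Two shared variables maintain the invariant y == x + 1 (so y - x == 1).
x, y = 0, 1

def writer():
    # Intended as an atomic update: x, y = x + 1, y + 1.
    global x, y
    x += 1
    y += 1

def unsafe_reader(snapshot_between_writes):
    # An unsynchronized optimistic reader may observe x already
    # incremented but y not yet: an inconsistent transient state
    # that no sequential execution could produce.
    if snapshot_between_writes:
        sx, sy = 1, 1          # x updated, y not yet
    else:
        sx, sy = x, y          # consistent snapshot
    return 1 // (sy - sx)      # divides by zero on the transient state
```

Local serializability rules out exactly such snapshots: each operation's reads must be explainable by some sequential execution, in which y - x is always 1.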
Furthermore, the implementation should "make sense" globally, given the sequential type of the data structure we implement: the high-level history of every execution of a concurrent implementation must be linearizable [27,83] with respect to this sequential type. The combination of local serializability and linearizability gives a correctness criterion that we call LS-linearizability, where LS stands for "locally serializable". We show that LS-linearizability is, like the original linearizability, compositional [81,83]: a composition of LS-linearizable implementations is itself LS-linearizable. We apply the criterion of LS-linearizability to two broad classes of pessimistic and optimistic synchronization techniques. Pessimistic implementations capture what can be achieved using classic locks; in contrast, optimistic implementations proceed speculatively and may fail to return a response to the process in the case of conflicts, e.g., relying on transactional memory.

Measuring concurrency. We characterize the amount of concurrency provided by an LS-linearizable implementation as the set of schedules it accepts. To this end, we define a concurrency metric inspired by the analysis of parallelism in database concurrency control [74,126]. More specifically, we assume an external scheduler that defines which processes execute which steps of the corresponding sequential program in a dynamic and unpredictable fashion. This allows us to define the concurrency provided by an implementation as the set of schedules (interleavings of steps of concurrent sequential operations) it accepts (is able to effectively process). The more schedules an implementation accepts, the more concurrent it is. We provide a framework to compare the concurrency one can get by choosing a particular synchronization technique for a specific data type.
For the first time, we analytically capture the relative power of optimism-based and pessimism-based implementations in exploiting concurrency. We illustrate this using a popular sequential list-based set implementation [81], concurrent implementations of which are our running examples. More precisely, we show that there exist TM-based implementations that, for some workloads, allow for more concurrency than any pessimistic implementation, but we also show that there exist pessimistic implementations that, for other workloads, allow for more concurrency than any TM-based implementation. Intuitively, an implementation based on transactions may abort an operation based on the way concurrent steps are scheduled, while a pessimistic implementation has to proceed eagerly without knowing about how future steps will be scheduled, sometimes over-conservatively rejecting a potentially acceptable schedule. By contrast, pessimistic implementations designed to exploit the semantics of the data type can supersede the "semantics-oblivious" TM-based implementations. More surprisingly, we demonstrate that combining the benefit of pessimistic implementations, namely their semantics awareness, and the benefit of TMs, namely their optimism, enables implementations that are strictly better-suited for exploiting concurrency than any of them individually. We describe a generic optimistic implementation of the list-based set that is optimal with respect to our concurrency metric: we show that, essentially, it accepts all correct concurrent schedules. Our results suggest that "relaxed" TM models that are designed with the semantics of the high-level object in mind might be central to exploiting concurrency. Roadmap of Chapter 8. In Section 8.2, we introduce the class of optimistic and pessimistic concurrent implementations we consider in this chapter.
Section 8.3 introduces the definition of locally serializable linearizability and Section 8.4 is devoted to the concurrency analysis of optimistic and pessimistic synchronization techniques in the context of the list-based set. We wrap up with concluding remarks in Section 8.5.

Concurrent implementations

Objects and implementations. As with Chapter 2, we assume an asynchronous shared-memory system in which a set of n > 1 processes p1, . . . , pn communicate by applying operations on shared objects. An object is an instance of an abstract data type which specifies a set of operations that provide the only means to manipulate the object. Recall that an abstract data type τ is a tuple (Φ, Γ, Q, q0, δ) where Φ is a set of operations, Γ is a set of responses, Q is a set of states, q0 ∈ Q is an initial state and δ ⊆ Q × Φ × Q × Γ is a transition relation that determines, for each state and each operation, the set of possible resulting states and produced responses. In this chapter, we consider only types that are total, i.e., for every q ∈ Q, π ∈ Φ, there exist q′ ∈ Q and r ∈ Γ such that (q, π, q′, r) ∈ δ. We assume that every type τ = (Φ, Γ, Q, q0, δ) is computable, i.e., there exists a Turing machine that, for each input (q, π), q ∈ Q, π ∈ Φ, computes a pair (q′, r) such that (q, π, q′, r) ∈ δ. For any type τ, each high-level object Oτ of this type has a sequential implementation IS. For each operation π ∈ Φ, IS specifies a deterministic procedure that performs reads and writes on a collection of objects X1, . . . , Xm that encode a state of Oτ, and returns a response r ∈ Γ. Sequential list-based set. As a running example, we consider the sorted linked-list based implementation of the type set, commonly referred to as the list-based set [81]. Recall that the set type exports operations insert(v), remove(v) and contains(v), with v ∈ Z.
Formally, the set type is defined by the tuple (Φ, Γ, Q, q0, δ) where:
- Φ = {insert(v), remove(v), contains(v) : v ∈ Z};
- Γ = {true, false};
- Q is the set of all finite subsets of Z; q0 = ∅;
- δ is defined as follows:
  (1) (q, contains(v), q, (v ∈ q));
  (2) (q, insert(v), q ∪ {v}, (v ∉ q));
  (3) (q, remove(v), q \ {v}, (v ∈ q)).

We consider a sequential implementation LL (Algorithm 8.1) of the set type using a sorted linked list where each element (or object) stores an integer value, val, and a pointer to its successor, next, so that elements are sorted in the ascending order of their value. Every operation invoked with a parameter v traverses the list starting from the head up to the element storing value v′ ≥ v. If v′ = v, then contains(v) returns true, remove(v) unlinks the corresponding element and returns true, and insert(v) returns false. Otherwise, contains(v) and remove(v) return false while insert(v) adds a new element with value v to the list and returns true. The list-based set is denoted by (LL, set). Concurrent implementations. We tackle the problem of turning the sequential implementation IS of type τ into a concurrent one, shared by n processes. The implementation provides the processes with algorithms for the reads and writes on objects. We refer to the resulting implementation as a concurrent implementation of (IS, τ). As in Chapter 2, we assume an asynchronous shared-memory system in which the processes communicate by applying primitives on shared base objects [75]. We place no upper bounds on the number of versions an object may maintain or on the size of this object. Throughout this chapter, the term operation refers to some high-level operation of the type, while read-write operations on objects are referred to simply as reads and writes. An implemented read or write may abort by returning a special response ⊥. In this case, we say that the corresponding high-level operation is aborted.
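The transition relation δ of the set type is deterministic and simple enough to model directly. A minimal Python sketch (the function name and state encoding are ours, not part of the chapter) represents a state q as a frozenset and returns the (new state, response) pair for each operation:

```python
def step(q, op, v):
    """One transition of the set type: state q is a frozenset of ints.

    Returns (new_state, response) following delta:
      contains(v): state unchanged,  response (v in q)
      insert(v):   state q | {v},    response (v not in q)
      remove(v):   state q - {v},    response (v in q)
    """
    if op == "contains":
        return q, v in q
    if op == "insert":
        return q | {v}, v not in q
    if op == "remove":
        return q - {v}, v in q
    raise ValueError(f"unknown operation: {op}")

q = frozenset()                 # q0 = empty set
q, r = step(q, "insert", 2)     # r = True: 2 was absent
q, r = step(q, "insert", 2)     # r = False: 2 already present
q, r = step(q, "remove", 2)     # r = True: 2 was present
```

Note that the type is total: every (state, operation) pair has a resulting state and response.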
The ⊥ event is treated both as the response event of the read or write operation and as the response of the corresponding high-level operation.

Algorithm 8.1 The sequential implementation LL of the list-based set:

2: Initially: head, tail
3:   head.val = −∞, tail.val = +∞
4:   head.next = tail

5: insert(v):
6:   prev ← head // copy the address
7:   curr ← read(prev.next) // fetch the next element
8:   while (tval ← read(curr.val)) < v do
9:     prev ← curr
10:    curr ← read(curr.next) // fetch from memory
11:  end while
12:  if tval ≠ v then // tval is stored locally
13:    X ← new-node(v, prev.next) // v and address of curr
14:    write(prev.next, X) // next points to the new element
15:  Return (tval ≠ v)

16: remove(v):
17:   prev ← head // copy the address
18:   curr ← read(prev.next) // fetch next field
19:   while (tval ← read(curr.val)) < v do // val local copy
20:     prev ← curr
21:     curr ← read(curr.next)
22:   end while
23:   if tval = v then
24:     tnext ← read(curr.next) // fetch the node after curr
25:     write(prev.next, tnext) // delete the node
26:   Return (tval = v)

27: contains(v):
28:   prev ← head
29:   curr ← read(prev.next)
30:   while (tval ← read(curr.val)) < v do
31:     curr ← read(curr.next)
32:   end while
33:   Return (tval = v)

Executions and histories. An execution of a concurrent implementation (of (IS, τ)) is a sequence of invocations and responses of high-level operations of type τ, invocations and responses of read and write operations, and primitives applied on base objects. We assume that executions are well-formed: no process invokes a new read or write, or high-level operation, before the previous read or write, or high-level operation, respectively, returns, or takes steps outside its read or write operation's interval. Let α|pi denote the subsequence of an execution α restricted to the events of process pi. Executions α and α′ are equivalent if for every process pi, α|pi = α′|pi. An operation π precedes another operation π′ in an execution α, denoted π →α π′, if the response of π occurs before the invocation of π′.
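Algorithm 8.1 can be cross-checked against a purely sequential Python transcription of LL. This is a sketch: the Node class and the _locate helper are ours, and the −∞/+∞ sentinels are modeled with math.inf:

```python
import math

class Node:
    def __init__(self, val, nxt=None):
        self.val = val      # integer value of the element
        self.next = nxt     # pointer to the successor

class LL:
    """Sequential sorted linked-list implementation of the set type."""

    def __init__(self):
        self.tail = Node(math.inf)              # sentinel storing +inf
        self.head = Node(-math.inf, self.tail)  # sentinel storing -inf

    def _locate(self, v):
        # Traverse from head to the first element with value >= v.
        prev = self.head
        curr = prev.next
        while curr.val < v:
            prev, curr = curr, curr.next
        return prev, curr

    def insert(self, v):
        prev, curr = self._locate(v)
        if curr.val != v:                       # absent: link a new node
            prev.next = Node(v, curr)
        return curr.val != v

    def remove(self, v):
        prev, curr = self._locate(v)
        if curr.val == v:                       # present: unlink it
            prev.next = curr.next
        return curr.val == v

    def contains(self, v):
        _, curr = self._locate(v)
        return curr.val == v
```

The sentinels guarantee that the traversal always terminates on an element with value ≥ v, exactly as in the pseudocode.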
Two operations are concurrent if neither precedes the other. An execution is sequential if it has no concurrent operations. A sequential execution α is legal if for every object X, every read of X in α returns the latest written value of X. An operation is complete in α if the invocation event is followed by a matching (non-⊥) response or aborted; otherwise, it is incomplete in α. Execution α is complete if every operation is complete in α. The history exported by an execution α is the subsequence of α reduced to the invocations and responses of operations, reads and writes, except for the reads and writes that return ⊥. High-level histories and linearizability. A high-level history H̄ of an execution α is the subsequence of α consisting of all invocations and responses of (high-level) operations. Definition 8.1 (Linearizability). A complete high-level history H̄ is linearizable with respect to an object type τ if there exists a sequential high-level history S equivalent to H̄ such that (1) →H̄ ⊆ →S and (2) S is consistent with the sequential specification of type τ. Now a high-level history H̄ is linearizable if it can be completed (by adding matching responses to a subset of incomplete operations in H̄ and removing the rest) to a linearizable high-level history [27,83].

Locally serializable linearizability

Obedient implementations. We only consider implementations that satisfy the following condition: Let α be any complete sequential execution of a concurrent implementation I. Then in every execution of I of the form α · ρ1 · · · ρk, where each ρi (i = 1, . . . , k) is the complete execution of a read, every read returns the value written by the last write that does not belong to an aborted operation. Intuitively, this assumption restricts our scope to "obedient" implementations of reads and writes, where no read value may depend on some future write.
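As an aside, Definition 8.1 can be checked mechanically on tiny examples. The following Python sketch (the representation and names are ours) brute-forces all orderings of a complete high-level history of the set type, keeping only those that respect real-time precedence and are legal for the sequential specification:

```python
from itertools import permutations

def legal(order):
    """Replay a candidate sequential order against the set type."""
    q = frozenset()
    for (_, _, op, v, resp) in order:
        if op == "insert":
            q, r = q | {v}, v not in q
        elif op == "remove":
            q, r = q - {v}, v in q
        else:  # contains
            r = v in q
        if r != resp:
            return False
    return True

def respects_realtime(order):
    """If a responded before b was invoked, a must come first."""
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if b[1] < a[0]:  # b's response precedes a's invocation
                return False
    return True

def linearizable(history):
    """Brute-force check of linearizability for the set type; history
    entries are (inv_time, resp_time, op, arg, response) tuples of
    complete operations. Exponential, so only usable on tiny examples."""
    return any(respects_realtime(o) and legal(o)
               for o in permutations(history))
```

For example, a contains(1) that overlaps an insert(1) may return true, but a contains(1) that completes strictly before insert(1) starts may not.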
In particular, we filter out implementations in which the complete execution of a high-level operation is performed within the first read or write of its sequential implementation. Pessimistic implementations. Informally, a concurrent implementation is pessimistic if the exported history contains every read-write event that appears in the execution. More precisely, no execution of a pessimistic implementation includes operations that returned ⊥. For example, a class of pessimistic implementations are those based on locks. A lock provides shared or exclusive access to an object X through synchronization primitives lockS(X) (shared mode), lock(X) (exclusive mode), and unlock(X). When lockS(X) (resp., lock(X)) invoked by a process pi returns, we say that pi holds a lock on X in shared (resp., exclusive) mode. A process releases the object it holds by invoking unlock(X). If no process holds a shared or exclusive lock on X, then lock(X) eventually returns; if no process holds an exclusive lock on X, then lockS(X) eventually returns; and if no process holds a lock on X forever, then every lock(X) or lockS(X) eventually returns. Given a sequential implementation of a data type, a corresponding lock-based concurrent one is derived by inserting the synchronization primitives to provide read-write access to an object. Optimistic implementations. In contrast with pessimistic ones, optimistic implementations may, under certain conditions, abort an operation: some read or write may return ⊥, in which case the corresponding operation also returns ⊥. Popular classes of optimistic implementations are those based on "lazy synchronization" [71,81] (with the ability to return ⊥ and re-invoke an operation) or transactional memory.

Locally serializable linearizability

We are now ready to define the correctness criterion that we impose on our concurrent implementations. Let H be a history and let π be a high-level operation in H.
Then H|π denotes the subsequence of H consisting of the events of π, except for the last aborted read or write, if any. Let IS be a sequential implementation of an object of type τ and ΣIS, the set of histories of IS. Definition 8.2 (LS-linearizability). A history H is locally serializable with respect to IS if for every high-level operation π in H, there exists S ∈ ΣIS such that H|π = S|π. A history H is LS-linearizable with respect to (IS, τ) (we also write H is (IS, τ)-LSL) if: (1) H is locally serializable with respect to IS and (2) the corresponding high-level history H̄ is linearizable with respect to τ. Observe that local serializability stipulates that the execution is witnessed sequential by every operation. Two different operations (even when invoked by the same process) are not required to witness mutually consistent sequential executions. A concurrent implementation I is LS-linearizable with respect to (IS, τ) (we also write I is (IS, τ)-LSL) if every history exported by I is (IS, τ)-LSL. Throughout this chapter, when we refer to a concurrent implementation of (IS, τ), we assume that it is LS-linearizable with respect to (IS, τ). LS-linearizability is compositional. Just as linearizability, LS-linearizability is compositional [81,83]: a composition of LSL implementations is also LSL. We define the composition of two distinct object types τ1 and τ2 as a type τ1 × τ2 = (Φ, Γ, Q, q0, δ) as follows: Φ = Φ1 ∪ Φ2, Γ = Γ1 ∪ Γ2, Q = Q1 × Q2, q0 = (q01, q02), and δ ⊆ Q × Φ × Q × Γ is such that ((q1, q2), π, (q1′, q2′), r) ∈ δ if and only if for i ∈ {1, 2}, if π ∈ Φi then (qi, π, qi′, r) ∈ δi ∧ q3−i′ = q3−i.

[Figure 8.1: insert(2) and insert(5) can proceed concurrently with contains(5); the history is LS-linearizable but not serializable. Only the important read-write events are depicted.]
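The composed transition relation can be exercised directly. In this Python sketch (names are ours), each operation is tagged with the component it addresses; it acts on that component and leaves the other component's state unchanged:

```python
def set_step(q, op, v):
    """delta of the set type: q is a frozenset of ints."""
    if op == "insert":
        return q | {v}, v not in q
    if op == "remove":
        return q - {v}, v in q
    return q, v in q  # contains

def product_step(step1, step2):
    """delta of the composed type tau1 x tau2: an operation tagged
    with the component it belongs to acts on that component while the
    other component's state is left unchanged."""
    def step(q, tagged_op, v):
        (q1, q2), (comp, op) = q, tagged_op
        if comp == 1:
            q1, r = step1(q1, op, v)
        else:
            q2, r = step2(q2, op, v)
        return (q1, q2), r
    return step

step = product_step(set_step, set_step)   # set x set
q = (frozenset(), frozenset())            # q0 = (q01, q02)
q, r = step(q, (1, "insert"), 7)          # acts on the first component only
```

Tagging the operations plays the role of the disjointness of Φ1 and Φ2 assumed for distinct object types.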
Every sequential implementation IS of an object O1 × O2 of a composed type τ1 × τ2 naturally induces two sequential implementations IS1 and IS2 of O1 and O2, respectively. Suppose now that H is a history such that H|O1 and H|O2 are LS-linearizable with respect to (IS1, τ1) and (IS2, τ2), respectively; we argue that H is then LS-linearizable with respect to (IS, τ1 × τ2). Let H̄ be a completion of the high-level history corresponding to H such that H̄|O1 and H̄|O2 are linearizable with respect to τ1 and τ2, respectively. Since linearizability is compositional [81,83], H̄ is linearizable with respect to τ1 × τ2. Now let, for each operation π, S1π and S2π be any two sequential histories of IS1 and IS2 such that H|π|Oj = Sjπ|π, for j ∈ {1, 2} (since H|O1 and H|O2 are LS-linearizable, such histories exist). We construct a sequential history Sπ by interleaving events of S1π and S2π so that Sπ|Oj = Sjπ, j ∈ {1, 2}. Since each Sjπ acts on a distinct component Oj of O1 × O2, every such Sπ is a sequential history of IS. We pick one Sπ that respects the local history H|π, which is possible, since H|π is consistent with both S1π|π and S2π|π. Thus, for each π, we obtain a history of IS that agrees with H|π. Moreover, the high-level history of H is linearizable. Thus, H is LS-linearizable with respect to IS. LS-linearizability versus other criteria. LS-linearizability is a two-level consistency criterion which makes it suitable to compare concurrent implementations of a sequential data structure, regardless of the synchronization techniques they use. It is quite distinct from related criteria designed for database and software transactions, such as serializability [109,125] and multilevel serializability [124,125]. For example, serializability [109] prevents sequences of reads and writes from conflicting in a cyclic way, establishing a global order of transactions. Reasoning only at the level of reads and writes may be overly conservative: higher-level operations may commute even if their reads and writes conflict [123]. Consider an execution of a concurrent list-based set depicted in Figure 8.1. We assume here that the set's initial state is {1, 3, 4}.
Operation contains(5) is concurrent, first with operation insert(2) and then with operation insert(5). The history is not serializable: insert(5) sees the effect of insert(2) because R(X1) by insert(5) returns the value of X1 that is updated by insert(2) and thus should be serialized after it. Meanwhile, contains(5) misses element 2 in the linked list, yet must see the effect of insert(5) to perform the read of X5, i.e., the element created by insert(5). However, this history is LSL since each of the three local histories is consistent with some sequential history of LL. Multilevel serializability [124,125] was proposed to reason in terms of multiple semantic levels in the same execution. LS-linearizability, being defined for two levels only, does not require a global serialization of low-level operations as 2-level serializability does. LS-linearizability simply requires each process to observe a local serialization, which can be different from one process to another. Also, to make it more suitable for concurrency analysis of a concrete data structure, instead of semantic-based commutativity [123], we use the sequential specification of the high-level behavior of the object [83]. Linearizability [27,83] only accounts for the high-level behavior of a data structure, so it does not imply LS-linearizability. For example, Herlihy's universal construction [75] provides a linearizable implementation for any given object type, but does not guarantee that each execution locally appears sequential with respect to any sequential implementation of the type. Local serializability, by itself, does not require any synchronization between processes and can be trivially implemented without communication among the processes. Therefore, the two parts of LS-linearizability indeed complement each other.

Pessimistic vs. optimistic synchronization

In this section, we compare the relative abilities of optimistic and pessimistic synchronization techniques to exploit concurrency in the context of the list-based set. To characterize the ability of a concurrent implementation to process arbitrary interleavings of sequential code, we introduce the notion of a schedule. Intuitively, a schedule describes the order in which complete high-level operations, and sequential reads and writes are invoked by the user. More precisely, a schedule is an equivalence class of complete histories that agree on the order of invocation and response events of reads, writes and high-level operations, but not necessarily on read values or high-level responses. Thus, a schedule can be treated as a history, where responses of reads and operations are not specified. We say that an implementation I accepts a schedule σ if it exports a history H such that complete(H) exhibits the order of σ, where complete(H) is the subsequence of H that consists of the events of the complete operations that returned a matching response. We then say that the execution (or history) exports σ. A schedule σ is (IS, τ)-LSL if there exists an (IS, τ)-LSL history that exports σ. A synchronization technique is a set of concurrent implementations. We define a specific optimistic synchronization technique and then a specific pessimistic one. The class SM. Formally, SM denotes the set of optimistic, safe-strict serializable LSL implementations. Let α denote the execution of a concurrent implementation and ops(α), the set of operations each of which performs at least one event in α. Let αk denote the prefix of α up to the last event of operation πk. Let Cseq(α) denote the set of subsequences of α that consist of all the events of operations that are complete in α. We say that α is strictly serializable if there exists a legal sequential execution α′ equivalent to a sequence σ ∈ Cseq(α) such that →σ ⊆ →α′.
We focus on optimistic implementations that are strictly serializable and, in addition, guarantee that every operation (even aborted or incomplete) observes correct (serial) behavior. More precisely, an execution α is safe-strict serializable if (1) α is strictly serializable, and (2) for each operation πk, there exists a legal sequential execution α′ = π0 · · · πi · πk and σ ∈ Cseq(αk) such that {π0, · · · , πi} ⊆ ops(σ) and ∀πm ∈ ops(α′) : α′|m = αk|m. Similar to other relaxations of opacity [64] like TMS1 [43] and VWC [85], safe-strict serializable implementations (SM) require that every transaction (even aborted and incomplete) observes "correct" serial behavior. Safe-strict serializability captures nicely both local serializability and linearizability. If we transform a sequential implementation IS of a type τ into a safe-strict serializable concurrent one, we obtain an LSL implementation of (IS, τ). Thus, the following lemma is immediate. Lemma 8.2. Let I be a safe-strict serializable implementation of (IS, τ). Then, I is LS-linearizable with respect to (IS, τ). Indeed, by running each operation of IS within a transaction of a safe-strict serializable TM, we make sure that completed operations witness the same execution of IS, and every operation that returned ⊥ is consistent with some execution of IS based on previously completed operations. The class P. This denotes the set of deadlock-free pessimistic LSL implementations: assuming that every process takes enough steps, at least one of the concurrent operations returns a matching response [82]. Note that P includes implementations that are not necessarily safe-strict serializable.

Concurrency analysis

We now provide a concurrency analysis of synchronization techniques SM and P in the context of the list-based set. A pessimistic implementation IH ∈ P of (LL, set).
We describe a pessimistic implementation of (LL, set), IH ∈ P, that accepts non-serializable schedules: each read operation performed by contains acquires the shared lock on the object and reads the next field of the element before releasing the shared lock on the predecessor element, in a hand-over-hand manner [29]. Update operations (insert and remove) acquire the exclusive lock on the head during read(head) and release it at the end. Every other read operation performed by update operations simply reads the element's next field to traverse the list. The write operation performed by an insert or a remove acquires the exclusive lock, writes the value to the element and releases the lock. There is no real concurrency between any two update operations since the process holds the exclusive lock on the head throughout the operation execution. Thus: Lemma 8.3. IH is a deadlock-free LSL implementation of (LL, set). On the one hand, the schedule of (LL, set) depicted in Figure 8.1, which we denote by σ0, is not serializable and must be rejected by any implementation in SM. However, there exists an execution of IH that exports σ0 since there is no read-write conflict on any two consecutive elements accessed. On the other hand, consider the schedule σ of (LL, set) in Figure 8.2(a). Clearly, σ is serializable and is accepted by implementations based on most progressive TMs since there is no read-write conflict. For example, let ILP denote an implementation of (IS, τ) based on the progressive opaque TM implementation LP in Algorithm 4.1 (Chapter 4). Then, ILP ∈ SM and the schedule σ is accepted by ILP. However, we prove that σ is not accepted by any implementation in P. Our proof technique is interesting in its own right: we show that if there exists any implementation in P that accepts σ, it must also accept the schedule σ′ depicted in Figure 8.2(b). In σ′, insert(2) overwrites the write on head performed by insert(1), resulting in a lost update.
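The locking discipline of IH described above can be sketched in Python. The standard library has no shared/exclusive lock, so this sketch (class and method names are ours) uses per-node reentrant mutexes, losing the reader-reader concurrency of the original, but it keeps the structure: contains couples locks hand-over-hand, while updates hold the head lock for their entire duration and take the written node's lock for the write:

```python
import math
import threading

class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt
        self.lock = threading.RLock()  # stands in for a shared/exclusive lock

class HandOverHandSet:
    """Sketch of the locking discipline of I_H with plain mutexes.

    contains() holds at most two adjacent node locks at a time and
    couples them hand-over-hand; insert()/remove() hold the head lock
    for their whole duration (so updates never run concurrently) and
    additionally lock the node they write."""

    def __init__(self):
        self.tail = Node(math.inf)
        self.head = Node(-math.inf, self.tail)

    def contains(self, v):
        prev = self.head
        prev.lock.acquire()
        curr = prev.next
        curr.lock.acquire()
        while curr.val < v:
            prev.lock.release()        # hand-over-hand: drop the lock behind
            prev, curr = curr, curr.next
            curr.lock.acquire()
        found = curr.val == v
        prev.lock.release()
        curr.lock.release()
        return found

    def _update(self, v, inserting):
        with self.head.lock:           # held for the entire update
            prev, curr = self.head, self.head.next
            while curr.val < v:
                prev, curr = curr, curr.next
            present = curr.val == v
            if inserting and not present:
                with prev.lock:        # exclusive lock for the write
                    prev.next = Node(v, curr)
            elif not inserting and present:
                with prev.lock:
                    prev.next = curr.next
            return present if not inserting else not present

    def insert(self, v):
        return self._update(v, True)

    def remove(self, v):
        return self._update(v, False)
```

Reentrant locks are used because an update may write to the head node whose lock it already holds.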
By deadlock-freedom, there exists an extension of σ′ in which a contains(1) returns false; but this is not a linearizable schedule. Theorem 8.4. There exists a schedule σ0 of (LL, set) that is accepted by an implementation in P, but not accepted by any implementation I ∈ SM. Proof. Let σ0 be the schedule of (LL, set) depicted in Figure 8.1. Suppose by contradiction that σ0 ∈ S(I), where I is an implementation of (LL, set) based on any safe-strict serializable TM. Thus, there exists an execution α of I that exports σ0. Now consider two cases: (1) Suppose that the read of X4 by contains(5) returns the value of X4 that is updated by insert(5). Since insert(2) →α insert(5), insert(2) must precede insert(5) in any sequential execution α′ equivalent to α. Also, since contains(5) reads X1 prior to its update by insert(2), contains(5) must precede insert(2) in α′. But then the read of X4 is not legal in α′, a contradiction since α must be serializable. (2) Suppose that contains(5) reads the initial value of X4, i.e., its value prior to the write to X4 by insert(5), where X4.next points to the tail of the list (according to our sequential implementation LL). But then, according to LL, contains(5) cannot access X5 in σ0, a contradiction. Consider the pessimistic implementation IH ∈ P: since the contains operation traverses the list using shared hand-over-hand locking, the process pi executing contains(5) can release the lock on element X1 prior to the acquisition of the exclusive lock on X1 by insert(2). Similarly, pi can acquire the shared lock on X4 immediately after the release of the exclusive lock on X4 by the process executing insert(5) while still holding the shared lock on element X3. Thus, there exists an execution of IH that exports σ0. Theorem 8.5. There exists a schedule σ of (LL, set) that is accepted by an implementation in SM, but not accepted by any implementation in P. Proof.
We show first that the schedule σ of (LL, set) depicted in Figure 8.2(a) is not accepted by any implementation in P. Suppose the contrary and let σ be exported by an execution α. Here α starts with three sequential insert operations with parameters 1, 2, and 3. The resulting "state" of the set is {1, 2, 3}, where value i ∈ {1, 2, 3} is stored in object Xi. Suppose, by contradiction, that some I ∈ P accepts σ. We show that I then accepts the schedule σ′ depicted in Figure 8.2(b), which starts with a sequential execution of insert(3) storing value 3 in object X1. Let α′ be any history of I that exports σ′. Recall that we only consider obedient implementations: in α′, the read of head by insert(2) in σ′ refers to X1 (the next element to be read by insert(2)). In α, element X1 stores value 1, i.e., insert(1) can safely return false, while in σ′, X1 stores value 3, i.e., the next step of insert(1) must be a write to head. Thus, no process can distinguish α and α′ before the read operations on X1 return. Let α′′ be the prefix of α′ ending with R(X1) executed by insert(2). Since I is deadlock-free, we have an extension of α′′ in which both insert(1) and insert(2) terminate; we show that this extension violates linearizability. Since I is locally serializable, to respect our sequential implementation of (LL, set), both operations should complete the write to head before returning. Let π1 = insert(1) be the first operation to write to head in this extended execution. Let π2 = insert(2) be the other insert operation. It is clear that π1 returns true even though π2 overwrites the update of π1 on head and also returns true. Recall that implementations in P are deadlock-free. Thus, we can further extend the execution with a complete contains(1) that will return false (the element inserted to the list by π1 is lost), a contradiction since I is linearizable with respect to set. Thus, σ ∉ S(I) for any I ∈ P.
On the other hand, the schedule σ is accepted by ILP ∈ SM, since there is no conflict between the two concurrent update operations.

Concurrency optimality

We now combine the benefits of the semantics awareness of implementations in P and the optimism of SM to derive a generic optimistic implementation of the list-based set that supersedes every implementation in classes P and SM in terms of concurrency. Our implementation, denoted IRM, provides processes with algorithms for implementing read and write operations on the elements of the list for each operation of the list-based set (Algorithm 8.2). Every object (or element) X is specified by the following shared variables: t-var[X] stores the value v ∈ V of X, r[X] stores a boolean indicating if X is marked for deletion, and L[X] stores a tuple of the version number of X and a locked flag; the latter indicates whether a concurrent process is performing a write to X.

⋮
15:  if (ver1 ≠ ver2) ∨ r then
16:    Return ⊥
17:  rbufk.add(⟨X, ver1⟩) // override penultimate entry
18:  Return val

19: writek(X, v) executed by remove:
20:  let oldver be such that ⟨X, oldver⟩ ∈ rbufk
21:  ver ← oldver
⋮
     Return ok

32: writek(X, v) executed by insert:
33:  let oldver be such that ⟨X, oldver⟩ ∈ rbufk
34:  ver ← oldver
35:  if ¬L[X].cas(⟨ver, false⟩, ⟨ver, true⟩) then
36:    Return ⊥ // grab lock or abort
⋮
     Return ok

Any operation with input parameter v traverses the list starting from the head element up to the element storing value v′ ≥ v without writing to shared memory. If a read operation on an element conflicts with a write operation to the same element or if the element is marked for deletion, the operation terminates by returning ⊥. While traversing the list, the process maintains the last two read elements and their version numbers in the local rotating buffer rbuf. If none of the read operations performed by contains(v) return ⊥ and if v′ = v, then contains(v) returns true; otherwise it returns false.
Thus, a contains operation does not write to shared memory. To perform a write operation on an element as part of an update operation (insert and remove), the process first retrieves the version of the object that belongs to its rotating buffer. It returns ⊥ if the version has been changed since the previous read of the element or if a concurrent process is executing a write to the same element. Note that, technically, ⊥ is returned only if prev.next no longer points to curr. If prev.next still points to curr, then we attempt to lock the element with the current version and return ⊥ if there is a concurrent process executing a write to the same element. But we avoid expanding on this step in our algorithm pseudocode. The write operation performed by the remove operation additionally checks if the element to be removed from the list is locked by another process; if not, it sets a flag on the element to mark it for deletion. If none of the read or write operations performed during the insert(v) or remove(v) returned ⊥, appropriate matching responses are returned as prescribed by the sequential implementation LL. Any update operation of IRM uses at most two expensive synchronization patterns [17]. Proof of LS-linearizability. Let α be an execution of IRM and <α denote the total order on events in α. For simplicity, we assume that α starts with an artificial sequential execution of an insert operation π0 that inserts tail and sets head.next = tail. Let H be the history exported by α, where all reads and writes are sequential. We construct H by associating a linearization point with each non-aborted read or write operation op performed in α as follows: • if op is a read performed by process pk, its linearization point is the base-object read in line 12; • if op is a write within an insert operation, its linearization point is the base-object cas in line 22; • if op is a write within a remove operation, its linearization point is the base-object cas in line 35.
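The version-and-lock discipline of these read and write operations can be illustrated in isolation. In this Python sketch (ours; the fields loosely mirror r[ ] and L[ ]), a validated read returns None (playing the role of ⊥) if the ⟨version, locked⟩ pair changed or the element is marked, and a writer's compare-and-swap fails on a stale version:

```python
import threading
from dataclasses import dataclass, field

@dataclass
class VNode:
    """One list element of the optimistic sketch: the deletion mark and
    the (version, locked) pair loosely mirror r[ ] and L[ ]."""
    val: int
    marked: bool = False    # marked for deletion
    version: int = 0        # version number of the element ...
    locked: bool = False    # ... and the locked flag
    _guard: threading.Lock = field(default_factory=threading.Lock)

    def read(self):
        """Validated read: (value, version), or None (= abort) on conflict."""
        before = (self.version, self.locked)
        val = self.val
        after = (self.version, self.locked)
        if before != after or self.marked or before[1]:
            return None     # concurrent write, or deleted element
        return val, before[0]

    def try_lock(self, expected_version):
        """cas(<ver, false> -> <ver, true>): fails if the version moved
        since our read, or if another writer holds the element."""
        with self._guard:   # makes the compare-and-swap atomic
            if self.locked or self.version != expected_version:
                return False
            self.locked = True
            return True

    def unlock_bump(self):
        """Release the write lock, advancing the version number."""
        self.version += 1
        self.locked = False
```

A writer that read version v can thus succeed only if no other write completed in between, which is exactly what makes an aborted operation harmless: it has written nothing.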
We say that a read of an element X within an operation π is valid in H (we also say that X is valid) if there does not exist any remove operation π1 that deallocates X (removes X from the list) such that π1.write(X) <α π.read(X). Lemma 8.6. Let π be any operation performing read(X) followed by read(Y) in H. Then (1) there exists an insert operation that sets X.next = Y prior to π.read(X), and (2) π.read(X) and π.read(Y) are valid in H. Proof. Let π be any operation in IRM that performs read(X) followed by read(Y). If X and Y are head and tail, respectively, then head.next = tail (by assumption). Since no remove operation deallocates the head or tail, the reads of X and Y are valid in H. Now, let X be the head element and suppose that π performs read(X) followed by read(Y); Y ≠ tail in H. Clearly, if π performs a read(Y), there exists an operation π′ = insert that has previously set head.next = Y. More specifically, π.read(X) performs the action in line 12 after the write to shared memory by π′ in line 37. By the assignment of linearization points to tx-operations, π′ <α π.read(X). Thus, there exists an insert operation that sets X.next = Y prior to π.read(X) in H. For the second claim, we need to prove that the read(Y) by π is valid in H. Suppose by contradiction that Y has been deallocated by some π′′ = remove operation prior to read(Y) by π. By the rules for linearization of read and write operations, the action in line 28 precedes the action in line 12. However, π proceeds to perform the check in line 15 and returns ⊥ since the flag corresponding to the element Y is previously set by π′′. Thus, H does not contain π.read(Y), a contradiction. Inductively, by the above arguments, every non-head read by π is performed on an element previously created by an insert operation and is valid in H. Lemma 8.7. H is locally serializable with respect to LL. Proof.
By Lemma 8.6, every element X read within an operation π is previously created by an insert operation and is valid in H. Moreover, if the read operation on X returns v′, then X.next stores a pointer to another valid element that stores an integer value v > v′. Note that the series of reads performed by π terminates as soon as an element storing value v or higher is found. Thus, π performs O(|v − v 0 |) reads, where v 0 is the value of the second element read by π. Now we construct S π as the sequence of insert operations that insert the values read by π, one by one, followed by π. By construction, S π ∈ Σ LL.

It is sufficient for us to prove that every finite high-level history H of I RM is linearizable. First, we obtain a completion H̄ of H as follows. The invocation of an incomplete contains operation is discarded. The invocation of an incomplete π ∈ {insert, remove} operation that has not returned successfully from its write operation is discarded; otherwise, it is completed with response true. We obtain a sequential high-level history S̄ equivalent to H̄ by associating a linearization point π with each operation π as follows. For each π ∈ {insert, remove} that returns true in H̄, π is associated with the first write performed by π in H; otherwise, π is associated with the last read performed by π in H. For π = contains that returns true, π is associated with the last read performed in I RM ; otherwise, π is associated with the read of head. Since linearization points are chosen within the intervals of operations of I RM , for any two operations π i and π j in H̄, if π i → H̄ π j , then π i → S̄ π j .

Lemma 8.8. S̄ is consistent with the sequential specification of type set.

Proof. Let S̄ k be the prefix of S̄ consisting of the first k complete operations. We associate each S̄ k with a set q k of objects that were successfully inserted and not subsequently successfully removed in S̄ k .
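The state transitions (q, π, q′, r) checked in the induction below follow the standard sequential specification of the set type, which can be written down directly. A minimal sketch; the function name is invented here:

```python
def set_transition(q, op, v):
    """Sequential set specification as a transition function: given
    state q (a frozen snapshot of the set) and operation op(v),
    return the new state and the response.  insert succeeds iff v
    was absent, remove iff v was present, and contains never
    changes the state."""
    if op == "insert":
        return q | {v}, v not in q
    if op == "remove":
        return q - {v}, v in q
    if op == "contains":
        return q, v in q
    raise ValueError("unknown operation: " + op)
```

Checking each (q k , π 1 , q k+1 , r π1 ) tuple of the induction against this function is exactly what Lemma 8.8 establishes for S̄.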
We show by induction on k that the sequence of state transitions in S̄ k is consistent with the operations' responses in S̄ k with respect to the set type. The base case k = 1 is trivial: the tail element containing +∞ is successfully inserted. Suppose that S̄ k is consistent with the set type and let π 1 with argument v ∈ Z and response r π1 be the last operation of S̄ k+1 . We want to show that (q k , π 1 , q k+1 , r π1 ) is consistent with the set type.

(1) If π 1 = insert(v) returns true in S̄ k+1 , there does not exist any other π 2 = insert(v) that returns true in S̄ k+1 such that there does not exist any remove(v) that returns true; π 2 → S̄ k+1 remove(v) → S̄ k+1 π 1 . Suppose by contradiction that such a π 1 and π 2 exist. Every successful insert(v) operation performs its penultimate read on an element X that stores a value v′ < v, and its last read on an element that stores a value v′′ > v. Clearly, π 1 also performs a write on X. By construction of S̄, π 1 is linearized at the release of the cas lock on element X. Observe that π 2 must also perform a write to the element X (otherwise one of π 1 or π 2 would return false). By assumption, the write to X in shared memory by π 2 (line 37) precedes the corresponding write to X in shared memory by π 1 . If π2 < α π1.read(X), then π 1 cannot return true, a contradiction. Otherwise, if π1.read(X) < α π2, then π 1 reaches line 22 and returns ⊥: either π 1 attempts to acquire the cas lock on X while it is still held by π 2 , or the value of X contained in the rbuf of the process executing π 1 has changed, a contradiction.

If π 1 = insert(v) returns false in S̄ k+1 , there exists a π 2 = insert(v) that returns true in S̄ k+1 such that there does not exist any π 3 = remove(v) that returns true; π 2 → S̄ k+1 π 3 → S̄ k+1 π 1 . Suppose that such a π 2 does not exist. Then π 1 must perform its last read on an element that stores a value v′ > v, perform the action in line 37 and return true, a contradiction.
It is easy to verify that the conjunction of the above two claims proves that ∀q ∈ Q, ∀v ∈ Z, S̄ k+1 satisfies (q, insert(v), q ∪ {v}, (v ∉ q)).

(2) If π 1 = remove(v), similar arguments as applied to insert(v) prove that ∀q ∈ Q, ∀v ∈ Z, S̄ k+1 satisfies (q, remove(v), q \ {v}, (v ∈ q)).

(3) If π 1 = contains(v) returns true in S̄ k+1 , there exists π 2 = insert(v) that returns true in S̄ k+1 such that there does not exist any remove(v) that returns true in S̄ k+1 with π 2 → S̄ k+1 remove(v) → S̄ k+1 π 1 . The proof of this claim immediately follows from Lemma 8.6. Now, if π 1 = contains(v) returns false in S̄ k+1 , there does not exist a π 2 = insert(v) that returns true such that there does not exist any remove(v) that returns true; π 2 → S̄ k+1 remove(v) → S̄ k+1 contains(v). Suppose by contradiction that such a π 1 and π 2 exist. Then the action in line 37 by the insert(v) operation, which updates some element, say X′, precedes the action in line 12 by contains(v) that is associated with its first read (of head). We claim that contains(v) must read the element X newly created by insert(v) and return true, a contradiction to the initial assumption that it returns false. The only case in which this could happen is if there exists a remove operation that makes X unreachable from head, i.e., concurrently with the write to X′ by insert, there exists a remove that sets X′.next to X.next after the action in line 35 by insert. But this is not possible, since the cas on X′ performed by the remove would return false. Thus, inductively, the sequence of state transitions in S̄ satisfies the sequential specification of the set type.

Lemmas 8.7 and 8.8 imply:

Theorem 8.9. I RM is LS-linearizable with respect to (LL, set).

8.4 Pessimistic vs. optimistic synchronization

Proof of concurrency optimality. Now we show that I RM supersedes, in terms of concurrency, any implementation in classes P or SM.
The proof is based on a more general optimality result, interesting in its own right: any finite schedule rejected by I RM is not observably LS-linearizable (or simply, not observable). We show that any finite schedule rejected by our algorithm is not observably correct. A correct schedule σ is observably correct if, by completing the update operations in σ and extending the resulting schedule, for any v ∈ Z, with a complete sequential execution of contains(v) applied to the resulting contents of the list, we obtain a correct schedule. Here the contents of the list after a given correct schedule are determined by the order of its write operations. For each element, we define the resulting state of its next field based on the last write in the schedule. Since in a correct schedule each new element is first created and then linked to the list, we can reconstruct the state of the list by iteratively traversing it, starting from head. Intuitively, a schedule is observably correct if it incurs no "lost updates". Consider, for example, a schedule (cf. Figure 8.2(b)) in which two operations, insert(1) and insert(2), are applied to the list with state {3}. The resulting schedule is trivially correct (both operations return true, so the schedule can come from a complete linearizable history). However, in the schedule, one of the operations, say insert(1), overwrites the effect of the other one. Thus, if we extend the schedule with a complete execution of contains(2), the only possible response it may give is false, which obviously does not produce a linearizable high-level history.

Theorem 8.10 (Optimality). I RM accepts all schedules that are observable with respect to (LL, set).

Proof. We prove that any schedule rejected by I RM is not observable.
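The lost-update scenario of Figure 8.2(b) can be replayed concretely: both inserts read head.next and then write without revalidating, so the later write overwrites the earlier one, and extending the schedule with a sequential contains exposes the loss. The `Node` class and `contains` helper are inventions of this sketch, not the thesis's pseudocode:

```python
class Node:
    """Minimal sorted-list node (names invented for this sketch)."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def contains(head, v):
    """Sequential LL-style traversal: stop at the first value >= v."""
    curr = head.next
    while curr.value < v:
        curr = curr.next
    return curr.value == v

# List state {3}: head -> 3 -> tail.
tail = Node(float("inf"))
n3 = Node(3, tail)
head = Node(float("-inf"), n3)

# Both inserts read head and observe head.next == n3 ...
seen1 = head.next
seen2 = head.next

# ... and then write without revalidating: insert(2) links its node,
head.next = Node(2, seen2)
# after which insert(1) overwrites that link, so insert(2) is lost.
head.next = Node(1, seen1)

# Extending the schedule with a sequential contains exposes the
# lost update: both inserts returned true, yet 2 is unreachable.
assert contains(head, 1)
assert not contains(head, 2)
```

The version check of I RM would have forced the second write to return ⊥ instead of silently dropping the first insert.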
We go through the cases in which a read or write returns ⊥ (implying that the operation fails to return a matching response) and thus the current schedule is rejected.

In subcase (1a), r[ ] is set to true by a preceding or concurrent write(X) (line 27). The high-level operation performing this write is a remove that marks the corresponding list element as removed. Since no removed element can be read in a sequential execution of LL, the corresponding history is not locally serializable. Alternatively, in subcase (1b), the version of X read previously in line 11 has changed. Thus, an update operation has concurrently performed a write to X. However, there exist executions that export such schedules.

In case (2), the write performed by a remove operation returns ⊥. In subcase (2a), X is currently locked. Thus, a concurrent high-level operation has previously locked X (by successfully performing L[ ].cas() in line 22) and has not yet released the lock (by writing (ver, false) to L[ ] in line 29). In subcase (2b), the current version of X (stored in L[ ]) differs from the version of X witnessed by a preceding read. Thus, a concurrent high-level operation completed a write to X after the current high-level operation π performed a read of X. In both (2a) and (2b), a concurrent high-level updating operation π′ (remove or insert) has written or is about to perform a write to X. In subcase (2c), the cas on the element X (the element that stores the value v) executed by remove(v) returns false (line 25). Recall that, by the sequential implementation LL, remove(v) performs a read of X prior to the write to its predecessor X′, where X′.next refers to X. If the cas on X fails, there exists a process that concurrently performed a write to X, but after the read of X by remove(v). In all cases, we observe that if we did not abort the write to X, then the schedule extended by a complete execution of contains would not be LS-linearizable.

In case (3), the write performed by an insert operation returns ⊥.
Similar arguments to those of case (2) prove that any schedule rejected here is not observably LS-linearizable.

Theorem 8.10 implies that the schedules exported by the histories in Figures 8.1 and 8.2(a), which are not accepted by any I ∈ SM and any I ∈ P, respectively, are indeed accepted by I RM . But it is easy to see that implementations in SM and P can only accept observable schedules. As a result, I RM can be shown to strictly supersede any pessimistic or TM-based implementation of the list-based set.

Corollary 8.11. I RM accepts every schedule accepted by any implementation in P and SM. Moreover, I RM accepts schedules σ and σ′ that are rejected by any implementation in P and SM, respectively.

Related work and Discussion

Measuring concurrency. Sets of accepted schedules are commonly used as a metric of the concurrency provided by a shared-memory implementation. Gramoli et al. [53] defined a concurrency metric, the input acceptance, as the ratio of committed transactions over aborted transactions when a TM executes the given schedule. Unlike our metric, input acceptance does not apply to lock-based programs. For static database transactions, Kung and Papadimitriou [89] use the metric to capture the parallelism of a locking scheme. While acknowledging that the metric is theoretical, they insist that it may have "practical significance as well, if the schedulers in question have relatively small scheduling times as compared with waiting and execution times." Herlihy [74] employed the metric to compare various optimistic and pessimistic synchronization techniques using the commutativity of operations constituting high-level transactions. A synchronization technique is implicitly considered in [74] as highly concurrent, namely "optimal", if no other technique accepts more schedules. By contrast, we focus here on a dynamic model in which the scheduler cannot use prior knowledge of all the shared addresses to be accessed.
Also, unlike [74,89], the results in this chapter require all operations, including aborted ones, to observe (locally) consistent states.

Concurrency optimality. This chapter shows that "semantics-oblivious" optimistic TM and "semantics-aware" pessimistic locking are incomparable with respect to exploiting the concurrency of the list-based set. Yet, we have shown how to use the benefits of optimism to derive a concurrency-optimal implementation that is fine-tuned to the semantics of the list-based set. Intuitively, the ability of an implementation to successfully process interleaving steps of concurrent threads is an appealing property that should be met by performance gains. We believe this to be so. In work that is not part of the thesis [56], we confirm experimentally that the concurrency-optimal optimistic implementation of the list-based set based on I RM outperforms the state-of-the-art implementations of the list-based set, namely, the Lazy linked list [71] and the Harris-Michael linked list [70,105]. Does the claim also hold for other data structures? We suspect so. For example, similar but more general data structures, such as skip lists or tree-based dictionaries, may allow for optimizations similar to those proposed in this chapter. Our results provide some preliminary hints in the quest for the "right" synchronization technique to develop highly concurrent and efficient implementations of data types.

9 Concluding remarks

Everything has to come to an end, sometime.

Lyman Frank Baum, The Marvelous Land of Oz

The inclusion of hardware support for transactions in mainstream CPUs [1,107,111] suggests that TM is an important concurrency abstraction. However, hardware transactions alone are not going to be sufficient to support efficient concurrent programming, since they may be aborted spuriously; the fast but potentially unreliable hardware transactions must be complemented with slower, but more reliable, software transactions.
Thus, understanding the inherent cost of both hardware and software transactions is of both theoretical and practical interest. Below, we briefly recall the outcomes of the thesis and overview future research directions.

Safety for TMs. We formalized the semantics of a safe TM: every transaction, including aborted and incomplete ones, must observe a view that is consistent with some sequential execution. We introduced the notion of deferred-update semantics, which explicitly precludes reading from a transaction that has not yet invoked tryCommit. We believe that our definition is useful to TM practitioners, since it streamlines possible implementations of the t-read and tryCommit operations.

Complexity of TMs. The cost of the TM abstraction is parametrized by several properties: safety for transactions, conditions under which transactions must terminate, conditions under which transactions must commit or abort, a bound on the number of versions that can be maintained, and a multitude of other implementation strategies like disjoint-access parallelism and invisible reads. At a high level, the complexity bounds presented in the thesis suggest that providing high degrees of concurrency in software transactional memory (STM) implementations incurs a considerable synchronization cost. As we show, permissive STMs, while providing the best possible concurrency in theory, require a strong synchronization primitive (AWAR) or a memory fence (RAW) per read operation, which may result in excessively slow execution times. Progressive STMs provide only basic concurrency by adapting to data conflicts, but perform considerably better in this respect: we present progressive implementations that incur constant RAW/AWAR complexity.
Since transactional memory was originally proposed as an alternative to locking, early STM implementations [52,79,101,117,120] adopted optimistic concurrency control and guaranteed that a prematurely halted transaction cannot prevent other transactions from committing. However, popular state-of-the-art STM implementations like TL2 [39] and NOrec [36] are progressive, providing no non-blocking progress guarantees for transactions, yet they perform empirically better than obstruction-free TMs. The complexity lower and upper bounds presented in the thesis explain this performance gap. Do our results mean that maximizing the ability to process multiple transactions in parallel, or providing non-blocking progress, should not be an important factor in STM design? It would seem so. Should we rather focus on speculative "single-lock" solutions à la flat combining [72] or "pessimistic" STMs in which transactions never abort [5]? Difficult to say affirmatively, but probably not, since our results suggest that progressive STMs incur low complexity overheads, as also evidenced by their good empirical performance on most realistic TM workloads [36,39].

Several questions yet remain open on the complexity of STMs. For instance, the bounds in the thesis were derived for the TM-correctness property of strict serializability and its restrictions, but there has been study of relaxations of strict serializability like snapshot isolation [24,31]. Verifying whether the lower bounds presented in the thesis hold under such weak TM-correctness properties, and extending the proofs if they do, presents interesting open questions. The discussion sections of Chapters 4, 5 and 6 additionally list some unresolved questions closely related to the results in the thesis.

One problem of practical need that is not considered in the thesis concerns the interaction of transactional code with non-transactional code, i.e., when the same data item is accessed both transactionally and non-transactionally.
It is expected that code executed within a transaction behave as lock-based code within a single "global lock" [104,116] to avoid memory races. Techniques to ensure the safety of non-transactional accesses have been formulated through the notion of privatization [23,118]. Devising techniques to ensure privatization for TMs and understanding the cost of enforcing it is an important research direction.

In the thesis, we assumed that a rmw event is an access to a single base object. However, there have been proposals to provide implementations with the ability to invoke k-rmw (k ∈ N) primitives [21,42] that allow accessing up to k base objects in a single atomic event. For example, the k-cas instruction allows performing k cas instructions atomically on a vector b 1 , . . . , b k of base objects: it accepts as input vectors old 1 , . . . , old k and new 1 , . . . , new k and atomically updates the values of b 1 , . . . , b k to new 1 , . . . , new k , returning true iff for all i ∈ {1, . . . , k}, old i = new i ; otherwise it returns false. However, the ability to access such k-rmw primitives does not necessarily simplify the design and improve the performance of non-blocking implementations, nor does it overcome the compositionality issue [21,42,81]. Nonetheless, verifying whether the lower bounds presented in the thesis hold in this shared-memory model is an interesting problem.

HyTMs. We have introduced an analytical model for hybrid transactional memory that captures the notion of cached accesses as performed by hardware transactions. We then derived lower and upper bounds in this model to capture the inherent tradeoff between the degree of concurrency allowed among hardware and software transactions and the instrumentation overhead introduced on the hardware.
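Returning to the k-rmw discussion above, the k-cas semantics can be modeled with a single lock standing in for the primitive's atomicity. `Ref` and `k_cas` are names invented for this sketch; the global lock is only a stand-in for hardware atomicity, not a proposed implementation:

```python
import threading

_kcas_lock = threading.Lock()   # models the primitive's atomicity

class Ref:
    """A mutable base object holding a single value."""
    def __init__(self, value):
        self.value = value

def k_cas(objs, olds, news):
    """Atomically compare each base object's value with olds[i] and,
    only if all k comparisons succeed, install news[i] everywhere;
    return True on success and False otherwise, leaving all base
    objects unchanged on failure."""
    with _kcas_lock:
        if any(o.value != old for o, old in zip(objs, olds)):
            return False
        for o, new in zip(objs, news):
            o.value = new
        return True
```

A failed k-cas leaves every base object untouched, which is exactly the all-or-nothing behavior the primitive promises.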
In a nutshell, our results say that it is impossible to completely forgo instrumentation in a sequential HyTM, and that any opaque HyTM implementation providing non-trivial progress either has to pay a linear number of metadata accesses, or has to allow slow-path transactions to abort fast-path operations.

Our model of HTMs assumed that the hardware resources are bounded, in the sense that a hardware transaction may only access a bounded number of data items, exceeding which it incurs a capacity abort. To overcome the inherent limitations of bounded HTMs, there have been proposals for "unbounded HTMs" that allow transactions to commit even if they exceed the hardware resources [12,68]. The HyTM model from Chapter 7 can easily be extended to accommodate unbounded HTM designs by disregarding capacity aborts. Some papers have investigated alternatives to providing HTMs with an STM fallback, such as sandboxing [4,33], or employing hardware-accelerated STMs [114,119], and the use of both direct and cached accesses within the same hardware transaction to reduce instrumentation overhead [88,112,113]. Another approach proposed reduced hardware transactions [102], where a part of the slow path is executed using a short fast-path transaction, which allows instrumentation to be partially eliminated from the hardware fast path. Modelling and deriving complexity bounds for HyTM proposals outside the HyTM model described in the thesis is an interesting future direction.

Relaxed transactional memory. The concurrency lower bounds derived in Chapter 8 illustrated that a strictly serializable TM, when used as a black box to transform a sequential implementation of the list-based set into a concurrent one, is not concurrency-optimal. This is due to the fact that a TM detects conflicts at the level of transactional reads and writes, resulting in false conflicts, in the sense that a read-write conflict may not affect the correctness of the implemented high-level set type.
As we have shown, we can derive a concurrency-optimal optimistic (non-strictly serializable) implementation that can process every correct schedule of the list-based set. Indeed, several papers have studied "relaxed" TMs that are fine-tuned to the semantics of the high-level data type [50,76,77]. Exploring the complexity of such relaxed TM models represents a very important future research direction.
. . . . . . . . . . . . . . . . . . . . 28 2.6 Disjoint-access parallelism (DAP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.7 TM complexity metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3 Safety for transactional memory 33 3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 3.2 Safety properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.3 Opacity and deferred-update(DU) semantics . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.4 On the safety of du-opacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.4.1 Du-opacity is prefix-closed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.4.2 The limit of du-opaque histories . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.4.3 Du-opacity is limit-closed for complete histories . . . . . . . . . . . . . . . . . . . . 38 3.4.4 Du-opacity vs. opacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.5 Strict serializability with DU semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.6 Du-opacity vs. other deferred-update criteria . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.6.1 Virtual-world consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 3.6.2 Transactional memory specification (TMS) . . . . . . . . . . . . . . . . . . . . . . 45 3.7 Related work and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4 Complexity bounds for blocking TMs 49 4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 4.2 Sequential TMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 4.2.1 A quadratic lower bound on step complexity . . . . . . . . . . . . . . . . . . . . . 
51 4.2.2 Expensive synchronization in Transactional memory cannot be eliminated . . . . . 53 4.3 Progressive TMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 4.3.1 A linear lower bound on the amount of protected data . . . . . . . . . . . . . . . . 54 4.3.2 A constant stall and constant expensive synchronization strict DAP opaque TM . 55 Figure 1 . 1 : 11Transforming a sequential implementation of the list-based set to a TM-based concurrent one Strict disjoint-access parallelism. A TM implementation M is strictly disjoint-access parallel (strict DAP) if, for all executions E of M , and for all transactions T i and T j that participate in E, T i and T j contend on a base object in E only if Dset(T i ) ∩ Dset(T j ) = ∅ [64]. Proposition 2.1. Strict DAP Strict data-partitioning. Proposition 2. 3 . 3Weak DAP RW DAP. Definition 3. 2 ( 2Guerraoui and Kapalka [64]). A finite history H is final-state opaque if there is a legal t-complete t-sequential history S, such that 1. for any two transactions T k , T m ∈ txns(H), if T k ≺ RT H T m , then T k < S T m , and 2. S is equivalent to a completion of H. Lemma 3 . 1 . 31Let H be a du-opaque history and let S be a serialization of H. For any i ∈ N, there is a serialization S i of H i (the prefix of H consisting of the first i events), such that seq(S i ) is a subsequence of seq(S). Figure 3 . 2 : 32An infinite history in which tryC 1 is incomplete and any two transactions are concurrent. Lemma 3 . 3 . 33Let H be a finite du-opaque history and assume T k ∈ txns(H) is a complete transaction in H, such that every transaction in Lset H (T k ) is complete in H. Then there is a serialization S of H, such that for all T k , T m ∈ txns(H), if T k ≺ LS H T m , then T k < S T m . If f (k) and f (m) are transactions at indices k, m of the same cseq i (S i ), then clearly f (k) = f (m) implies k = m. 
Suppose f (k) is the transaction at index k in some cseq i (S i ) and f (m) is the transaction at index m in some cseq (S ). For every > i and k < m, if cseq i (S i )[k] = T , then cseq (S )[m] = T since cseq i (S i ) = cseq i (S ). If > i and k > m, it follows from the definition that f (k) = f (m). Similar arguments for the case when < i prove that if f (k) = f (m), then k = m. Figure 3 . 33: A history that is opaque, but not du-opaque.Theorem 3.5 implies the following: Corollary 3.9. Let M be a TM implementation that ensures that in every infinite history H of M , every transaction T ∈ txns(H) is complete in H. Then, M is du-opaque if and only if every finite history of M is du-opaque. Figure 3 . 34: A sequential du-opaque history, which is not opaque by the definition of [59]. 1. an infinite opaque history H ∈ Opacity uw if and only if every transaction T ∈ txns(H) is complete in H, and 2. an opaque history H ∈ Opacity uw if and only if for every pair of write operations write k (X, v) and Theorem 3 . 13 . 313Du-opacity du-VWC. Figure 3 . 37: A history which is du-TMS1 but not du-VWC.Proof.A history H is du-TMS1 if and only if H is strictly serializable and there is a du-TMS1 serialization for every t-operation op k,i that does not return A k in H. Figure 4 . 1 : 41Executions in the proof of Lemma 4.1; By weak DAP, T φ cannot distinguish this from the execution inFigure 4.1a Theorem 4. 2 . 2For every weak DAP TM implementation M that provides ICF TM-liveness, sequential TM-progress and uses weak invisible reads, Claim 4. 3 . 3For all i ∈ {2, . . . , m} and ≤ (i − 1), M has an execution of the form E i 1 or E i 2 . the same serialization justifies an opaque execution. if ∃X j ∈ Rset(T k ):[ov j , k j ] = read(v j ) Lemma 4 . 9 . 49Algorithm 4.1 implements an opaque TM. Corollary 4. 14 . 14Let M be any weak DAP progressive opaque TM implementation providing ICF TMliveness and weak invisible reads. 
Then, for every execution E and each read-only transaction T k ∈ txns(E), T k performs Θ(m 2 ) steps in E, where m = |Rset E (T k )|. Algorithm 4. 2 2Strict DAP progressive strictly serializable TM implementation; code for T k executed by process p i 1: read k (X j ):2: j , ⊥] := Rset(T k ).locate(X j )10: Return ov j Similarly, we can prove an almost matching upper bound for Theorem 4.2 for strictly serializable progressive TMs.Consider Algorithm 4.2 that is a simplification of the opaque progressive TM in Algorithm 4.1: we remove the validation performed in the implementation of a t-read, i.e., Line 17 in Algorithm 4.1; otherwise, the two algorithms are identical. It is easy to see this results in a strictly serializable (but not opaque) TM implementation. Thus, Theorem 4.15. Algorithm 4.2 describes a progressive, strictly serializable and strict DAP TM implementation that provides wait-free TM-liveness, uses invisible reads, uses only read-write base objects, and for every execution E and transaction T k ∈ txns(E): every t-read operation invoked by T k performs O(1) steps and tryC k performs O(|Rset(T k )|) steps in E. Corollary 4. 16 . 16Let M be any weak DAP progressive strictly serializable TM implementation providing ICF TM-liveness and weak invisible reads. Then, for every execution E and each read-only transaction T k ∈ txns(E), each read k performs O(1) steps and tryC k performs Θ(m) steps in E, where m = |Rset E (T k )|. Lemma 4. 18 . 18The implementation L(M ) (Algorithm 4.3) provides deadlock-freedom.Proof. Let E be any execution of L(M ). Observe that a process may be stuck indefinitely only in Lines 23 and 30 as it performs the while loop. Theorem 4. 19 . 19Any strictly serializable, strongly progressive TM implementation M that accesses a single t-object implies a deadlock-free, finite exit mutual exclusion implementation L(M ) such that the RMR complexity of M is within a constant factor of the RMR complexity of L(M ).Proof. 
(Mutual-exclusion) Follows from Lemma 4.17.(Finite-exit) The proof is immediate since the Exit operation contains no unbounded loops or waiting statements.(Deadlock-freedom) Follows from Lemma 4.18. (RMR complexity) First, let us consider the CC model. Observe that every event not on M performed by a process p i as it performs the Entry or Exit operations incurs O(1) RMR cost clearly, possibly barring the while loop executed in Line 30. During the execution of this while loop, process p i spins on the register Lock [p i ][p j ], where p j is the predecessor of p i . Observe that p i 's cached copy of Lock [p i ][p j ] may be invalidated only by process p j as it unlocks the register in Line 37. Since no other process may write to this register and p i terminates the while loop immediately after the write to Lock [p i ][p j ] by p j , p i incurs O(1) RMR's. Thus, the overall RMR cost incurred by M is within a constant factor of the RMR cost of L(M ). Now we consider the DSM model. As with the reasoning for the CC model, every event not on M performed by a process p i as it performs the Entry or Exit operations incurs O(1) RMR cost clearly, possibly barring the while loop executed in Line 30. During the execution of this while loop, process p i spins on the register Lock [p i ][p j ], where p j is the predecessor of p i . Recall that Lock [p i ][p j ] is a register that is local to p i and thus, p i does not incur any RMR cost on account of executing this loop. It follows that p i incurs O(1) RMR cost in the DSM model. Thus, the overall RMR cost of M is within a constant factor of the RMR cost of L(M ) in the DSM model. Theorem 4 . 420. ([22]) Any deadlock-free, finite-exit mutual exclusion implementation from read, write and conditional primitives has an execution whose RMR complexity is Ω(n log n). Theorems 4 . 
Theorems 4.19 and 4.20 imply the Ω(n log n) lower bound on the RMR complexity of strongly progressive, strictly serializable TM implementations from read, write and conditional primitives (Theorem 4.21).

Lemma 4.22. For some k ≠ i with LA_k ≠ 0: (1) if MC_k = MC_i, then (LA_k, k) > (LA_i, i); (2) if MC_k ≠ MC_i, then MC_i = color.

Theorem 4.23. Algorithm 4.4 is an implementation of a multi-trylock object in which every operation is starvation-free and incurs at most four RAWs.

Proof. Denote by L the shared object implemented by Algorithm 4.4. (1) If MC_k = MC_i, then from Condition 1 of Lemma 4.22 we have (LA_k, k) < (LA_i, i) and (LA_k, k) > (LA_i, i), a contradiction. (2) If MC_k ≠ MC_i, then from Condition 2 of Lemma 4.22 we have MC_i = color and MC_k = color, which implies MC_k = MC_i, a contradiction.

Algorithm 4.5 Strongly progressive, opaque TM; the implementation of T_k executed by p_i (only a fragment of the listing survives):
  [ov_j, ⊥] := Rset(T_k).locate(X_j)

(Complexity) Any process executing a transaction T_k holds the lock on Wset(T_k) only once during tryC_k. If Wset(T_k) = ∅, then the transaction simply returns C_k, incurring no RAWs. Thus, by Theorem 4.23, Algorithm 4.5 incurs at most four RAWs per updating transaction, and no RAWs are performed in read-only transactions.

Lemma 4.25. Let a TM implementation M be permissive with respect to opacity. If a transaction T_i is forcefully aborted while executing a t-operation op_i, then op_i is tryC_i.

Proof. Suppose, by contradiction, that there exists a history H of M such that some op_i ∈ {read_i, write_i} executed within a transaction T_i returns A_i. Let H_0 be the shortest prefix of H that ends just before op_i returns. By definition, H_0 is opaque, and any history H_0 · r_i with r_i ≠ A_i is not opaque. Let S_0 be the serialization of H_0.

Theorem 4.26. Let M be a permissive opaque TM implementation providing starvation-free TM-liveness. Then, for any m ∈ ℕ, M has an execution in which some transaction performs m t-reads such that the execution of each t-read contains at least one RAW or AWAR.
Figure 4.2: Execution E of a permissive, opaque TM: T_2 and T_3 force T_1 to perform a RAW/AWAR in each R_1(X_k), 2 ≤ k ≤ m.

T_1 must precede T_3 in any serialization. Thus, T_3 cannot precede T_1 in any serialization, a contradiction.

Figure 5.1: Executions in the proof of Theorem 5.1; the execution in 5.1d is not strictly serializable. (A surviving panel annotation: T_3 and T_2 do not contend on any base object.)

Figure 5.2: Executions in the proof of Theorem 5.4; the execution in 5.2e is not opaque.

Theorem 5.4. Every RW DAP opaque TM implementation M ∈ OF has an execution E in which some read-only transaction T ∈ txns(E) performs Ω(n) non-overlapping RAW/AWARs.

Lemma 5.8. Algorithm 5.1 implements an opaque TM.

Theorem 5.12. Algorithm 5.1 describes a RW DAP, progressive opaque TM implementation M ∈ OF such that, in every execution E of M:
• the total number of stalls incurred by a t-read operation invoked in E is O(n),
• every read-only transaction T ∈ txns(E) performs O(|Rset(T)|) AWARs in E, and
• every complete t-read operation invoked by a transaction T_k ∈ txns(E) performs O(|Rset_E(T_k)|) steps.

Proof. (Opacity) Follows from Lemma 5.8.

(TM-liveness and TM-progress) Since none of the implementations of the t-operations in Algorithm 5.1 contain unbounded loops or waiting statements, every t-operation op_k returns a matching response after taking a finite number of steps. Thus, Algorithm 5.1 provides wait-free TM-liveness.

Theorem 5.13. Algorithm 5.2 describes a weak DAP TM implementation M ∈ OF such that, in any execution E of M, every transaction T ∈ txns(E) performs O(1) steps during the execution of any t-operation in E.

Figure 5.3: Complexity gap between blocking and non-blocking TMs.

Figure 6.1: Executions in the proof of Theorem 6.1; the execution in 6.1a must maintain c distinct values of every t-object. (Panel (b): extend every read-only transaction T_2i in phase i with t-reads of X_2, …; each read_2i(X) must return v_i.)

• (wait-free TM-progress for read-only transactions) every read-only transaction commits in a finite number of its steps, and

Definition 6.2. Let E be any execution of a TM implementation M. We say that E maintains c distinct values {v_1, . . . , v_c} of t-object X if there exists an execution E · E' of M such that
• E' contains the complete executions of c t-reads of X and,

Figure 6.2: Executions in the proof of Theorem 6.2; the execution in 6.2c is not strictly serializable.

Proof. Suppose, by contradiction, that there exists a strict DAP TM implementation M ∈ RWF.

Theorem 6.3. Every strictly serializable weakly DAP TM implementation M ∈ RWF has, for all m ∈ ℕ, an execution in which some read-only transaction T_0 with m = |Rset(T_0)| performs Ω(m) RAWs/AWARs.

"The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error. . . . HAL: I've just picked up a fault in the AE35 unit. It's going to go 100% failure in 72 hours. HAL: It can only be attributable to human error."
Stanley Kubrick, 2001: A Space Odyssey

Figure 7.1: Tracking set aborts in fast-path transactions; we denote a fast-path (resp. slow-path) transaction by F (resp. S). (Surviving panel annotations: (v, shared) ∈ τ_2 after E; τ_2 is invalidated by the (fast-path or slow-path) transaction T_1's write to base object b.)

… the value of b in τ_i (in case b was previously accessed by i within the current hardware transaction) or the value of b in the current memory configuration, and finally h(v) is returned.

We provide two key observations on this model regarding the interactions of non-committed fast-path transactions with other transactions.
Let E be any execution of a HyTM implementation M in which a fast-path transaction T_k is either t-incomplete or aborted. Then the sequence of events E' derived by removing all events of E|k from E is an execution of M. Moreover:

Observation 7.1. To every slow-path transaction T_m ∈ txns(E), E' is indistinguishable from E.

Observation 7.2. If a fast-path transaction T_m ∈ txns(E) \ {T_k} does not incur a tracking set abort in E, then E' is indistinguishable to T_m from E.

Figure 7.2: Execution E in Figure 7.2a is indistinguishable to T_1 from the execution E' in Figure 7.2b.

Figure 7.3: Executions in the proof of Theorem 7.4; the execution in 7.3d is not strictly serializable. (Surviving panel annotations: T_y must return the new value; since T_x does not access any metadata, by Observation 7.3 it cannot abort and must return the initial value of X; T_y does not contend with T_x or T_z on any base object.)

Definition 7.2 (Uninstrumented HyTMs). A HyTM implementation M provides uninstrumented writes (resp. reads) if, in every execution E of M, for every write-only (resp. read-only) fast-path transaction T_k, all primitives in E|k are performed on base objects in D. A HyTM is uninstrumented if both its reads and writes are uninstrumented.

Moreover, for all i, j ∈ {1, . . . , m} with i ≠ j, Dset(T_i) ∩ Dset(T_{m+j}) = ∅ and Dset(T_{m+i}) ∩ Dset(T_{m+j}) = ∅.

Corollary 7.12. For all i ∈ {0, . . . , |S_m|/2 − 1}, M has an execution of the form …

A fragment of the fast-path listing:
34:   (ov_j, k_j) := read(v_j) // cached read
35:   Return ov_j
36: write_k(X_j, v): // fast-path
37:   v_j.write(nv_j, k)

Figure 8.1: A concurrency scenario for a list-based set, initially {1, 3, 4}, where value i is stored at node X_i: insert(2) and insert(5) can proceed concurrently with contains(5); the history is LS-linearizable but not serializable. (We only depict important read-write events here.)

Figure 8.2: (a) a history exporting schedule σ, with initial state {1, 2, 3}, accepted by I_LP ∈ SM; (b) a history exporting a problematic schedule σ', with initial state {3}, which should be accepted by any I ∈ P if it accepts σ.

Algorithm 8.2 Code for process p_k implementing reads and writes in implementation I (fragment):
  RM_k[i] ⊆ X × ℕ, i ∈ {1, 2}: a cyclic buffer of size 2
  read_k(X) executed by insert, remove, contains:
  11: (ver_1, *) ← L[ ].read() // get versioned lock
  12: val ← t-var[ ].read() // get value
  13: r ← r[ ].read()
  14: (ver_2, *) ← L[ ].read() // reget versioned lock
  15: var[ ].write(v) // update memory
  38: L[ ].write((ver + 1, false)) // release locks

The implementation returns ⊥ in the following cases: (1) read(X) returns ⊥ in line 16 when r[ ] = true or when ver_1 ≠ ver_2; (2) write(X) performed by remove(v) either returns ⊥ in line 22, when the cas operation on L[ ] returns false, or returns ⊥ in line 25, when the cas operation on the element that stores v returns false; and (3) write(X) performed by insert returns ⊥ in line 35 when the cas operation on L[ ] returns false.

Definition 2.1 (Completions). Let H be a history. A completion of H, denoted H̄, is a history derived from H as follows:

Definition 2.2 (Strict serializability). A finite history H is serializable if there is a legal t-complete t-sequential history S and a completion H̄ of H such that S is equivalent to cseq(H̄), where cseq(H̄) is the subsequence of H̄ reduced to the committed transactions in H̄.

Lemma 2.1. Let H be any t-complete history. Then H is strictly serializable iff H is linearizable with respect to the TM type.

Figure 3.1: History H is final-state opaque, while its prefix H' is not final-state opaque.

Definition 3.3 (Guerraoui and Kapalka [64]). A history H is opaque if and only if every finite prefix H' of H (including H itself if it is finite) is final-state opaque.

Table 3.1: Relations between TM consistency definitions.

Algorithm 4.1 Strict DAP progressive opaque TM implementation LP; code for T_k executed by process p_i (the listing begins "1: Shared base objects:").

For each [p_i, face_i], L(M) uses a register bit Done[p_i, face_i] that indicates whether this face of the process has left the critical section or is executing the Entry operation.
Additionally, we use a register Succ[p_i, face_i] that stores the process expected to succeed p_i in the critical section. If Succ[p_i, face_i] = p_j, we say that p_j is the successor of p_i (and p_i is the predecessor of p_j). Intuitively, this means that p_j is expected to enter the critical section immediately after p_i. Finally, L(M) uses a 2-dimensional bit array Lock: for each process p_i, there are n − 1 registers associated with the other processes. For all j ∈ {0, . . . , n − 1} \ {i}, the registers Lock[p_i][p_j] are local to p_i and the registers Lock[p_j][p_i] are remote to p_i. Process p_i can only access registers in the Lock array that are local or remote to it.

Entry operation. A process p_i adopts a new identity face_i and writes false to Done[p_i, face_i] to indicate that p_i has started the Entry operation. Process p_i then initializes the successor of [p_i, face_i] by writing ⊥ to Succ[p_i, face_i]. Now, p_i uses a strongly progressive TM implementation M to atomically store its pid and identity (i.e., face_i) to t-object X, and to return the pid and identity of its predecessor, say [p_j, face_j]. Intuitively, this means that [p_i, face_i] is scheduled to enter the critical section immediately after [p_j, face_j] exits the critical section. Note that if p_i reads the initial value of t-object X, then it immediately enters the critical section. Otherwise, it writes locked to the register Lock[p_i][p_j] and sets itself to be the successor of [p_j, face_j] by writing p_i to Succ[p_j, face_j]. Process p_i then checks whether p_j has started the Exit operation by checking whether Done[p_j, face_j] is set. If it is, p_i enters the critical section; otherwise p_i spins on the register Lock[p_i][p_j] until it is unlocked.
Exit operation. Process p_i first indicates that it has exited the critical section by setting Done[p_i, face_i], following which it unlocks the register Lock[Succ[p_i, face_i]][p_i] to allow p_i's successor to enter the critical section.

Algorithm 4.3 Mutual exclusion object L from a strongly progressive, strictly serializable TM M; code for process p_i, 1 ≤ i ≤ n

1: Local variables:
2:   bit face_i, for each process p_i
3: Shared objects:
4:   strongly progressive, strictly
5:   serializable TM M
6:   t-object X, initially ⊥,
7:   storing value v ∈ {[p_i, face_i]} ∪ {⊥}
8:   for each tuple [p_i, face_i]:
9:     Done[p_i, face_i] ∈ {true, false}
10:    Succ[p_i, face_i] ∈ {p_1, . . . , p_n} ∪ {⊥}
11:  for each p_i and j ∈ {1, . . . , n} \ {i}:
12:    Lock[p_i][p_j] ∈ {locked, unlocked}
13: Function func():
14:   atomic using M
15:     value := tx-read(X)
16:     tx-write(X, [p_i, face_i])
17:     on abort Return false
18:   Return value
19: Entry:
20:   face_i := 1 − face_i
21:   Done[p_i, face_i].write(false)
22:   Succ[p_i, face_i].write(⊥)
23:   while (prev ← func()) = false do
24:     no op
25:   end while
26:   if prev ≠ ⊥ then
27:     Lock[p_i][prev.pid].write(locked)
28:     Succ[prev].write(p_i)
29:     if Done[prev] = false then
30:       while Lock[p_i][prev.pid] ≠ unlocked do
31:         no op
32:       end while
33:   Return ok
34:   // Critical section
35: Exit:
36:   Done[p_i, face_i].write(true)
37:   Lock[Succ[p_i, face_i]][p_i].write(unlocked)
38:   Return ok

Lemma 4.17. The implementation L(M) (Algorithm 4.3) satisfies mutual exclusion.

Proof. Let E be any execution of L(M). We say that [p_i, face_i] is the successor of [p_j, face_j] if p_i reads the value of prev in Line 25 to be [p_j, face_j] (and [p_j, face_j] is the predecessor of [p_i, face_i]); otherwise, if p_i reads the value to be ⊥, we say that p_i has no predecessor.

Suppose, by contradiction, that there exist processes p_i and p_j that are both inside the critical section after E. Since p_i is inside the critical section, either (1) p_i read prev = ⊥ in Line 23, or (2) p_i read that Done[prev] is true (Line 29), or p_i read that Done[prev] is false and Lock[p_i][prev.pid] is unlocked (Line 30).

(Case 1) Suppose that p_i read prev = ⊥ and entered the critical section. Since in this case p_i does not have any predecessor, some other process that returns successfully from the while loop in Line 25 must be the successor of p_i in E. Since there exists [p_j, face_j] also inside the critical section after E, p_j reads either [p_i, face_i] or some other process to be its predecessor. Observe that there must exist some such process [p_k, face_k] whose predecessor is [p_i, face_i]. Hence, without loss of generality, we can assume that [p_j, face_j] is the successor of [p_i, face_i]. By our assumption, [p_j, face_j] is also inside the critical section. Thus, p_j locked the register Lock[p_j][p_i] in Line 27 and set itself to be p_i's successor in Line 28. Then p_j read that Done[p_i, face_i] is true, or read that Done[p_i, face_i] is false and waited until Lock[p_j][p_i] was unlocked, and then entered the critical section. But this is possible only if p_i has left the critical section and updated the registers Done[p_i, face_i] and Lock[p_j][p_i] in Lines 36 and 37, respectively, a contradiction to the assumption that [p_i, face_i] is also inside the critical section after E.

(Case 2) Suppose that p_i did not read prev = ⊥ and entered the critical section. Thus, p_i read that Done[prev] is true in Line 29, or that Lock[p_i][prev.pid] is unlocked in Line 30, where prev is the predecessor of [p_i, face_i]. As with Case 1, without loss of generality, we can assume that [p_j, face_j] is either the successor or the predecessor of [p_i, face_i].

Suppose that [p_j, face_j] is the predecessor of [p_i, face_i], i.e., p_i writes the value [p_i, face_i] to the register Succ[p_j, face_j] in Line 28. Since [p_j, face_j] is also inside the critical section after E, process p_i must read that Done[p_j, face_j] is false in Line 29 and that Lock[p_i][p_j] is locked in Line 30. But then p_i could not have entered the critical section after E, a contradiction.

Suppose that [p_j, face_j] is the successor of [p_i, face_i], i.e., p_j writes the value [p_j, face_j] to the register Succ[p_i, face_i]. Since both p_i and p_j are inside the critical section after E, process p_j must read that Done[p_i, face_i] is false in Line 29 and that Lock[p_j][p_i] is locked in Line 30. Thus, p_j must spin on the register Lock[p_j][p_i], waiting for it to be unlocked by p_i before entering the critical section, a contradiction to the assumption that both p_i and p_j are inside the critical section.

Thus, L(M) satisfies mutual exclusion.

Proof of Lemma 4.18, continued. Now consider a process p_i that returns successfully from the while loop in Line 23. Suppose that p_i is stuck indefinitely in the while loop in Line 30. Then no process has unlocked the register Lock[p_i][prev.pid] by writing to it in the Exit section. Recall that, since [p_i, face_i] has reached the while loop in Line 30, [p_i, face_i] necessarily has a predecessor, say [p_j, face_j]. Thus, as [p_i, face_i] is stuck in the while loop waiting for Lock[p_i][p_j] to be unlocked, p_j leaves the critical section, unlocks Lock[p_i][p_j] in Line 37 and, prior to p_i's read of Lock[p_i][p_j], p_j restarts the Entry operation, writes false to Done[p_j, 1 − face_j], sets itself to be the successor of [p_i, face_i], and spins on the register Lock[p_j][p_i].
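The hand-off structure of Algorithm 4.3 (atomically swap yourself into the t-object X, then wait for your direct predecessor's unlock) can be sketched in executable form. The sketch below is ours, not the thesis's construction: the strongly progressive TM that atomically reads and updates X is stood in for by a lock-protected swap, and the Done/Lock registers are collapsed into a per-entry event that the predecessor sets on exit; all names are hypothetical.

```python
import threading

class HandoffLock:
    """Sketch of the queue-style hand-off behind Algorithm 4.3 (simplified).

    In the thesis, a process atomically swaps its identity into t-object X
    via a strongly progressive TM and then spins on a register that only its
    predecessor unlocks. Here the atomic swap is emulated with a small guard
    lock, and the spin register with a threading.Event.
    """

    def __init__(self):
        self._tail = None                   # plays the role of t-object X
        self._guard = threading.Lock()      # stands in for the TM's atomicity

    def entry(self):
        ticket = threading.Event()          # plays the role of Lock[p_i][prev]
        with self._guard:                   # atomic read-then-write of X
            prev, self._tail = self._tail, ticket
        if prev is not None:
            prev.wait()                     # wait for the predecessor's Exit
        return ticket                       # now inside the critical section

    def exit(self, ticket):
        ticket.set()                        # unlock the successor (cf. Line 37)
```

Each entrant waits only on its direct predecessor's event, which mirrors the local-spinning argument used in the RMR analysis of Theorem 4.19.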
Chapter 4
Complexity bounds for blocking TMs

Algorithm 4.4 Starvation-free multi-trylock invoked by process p_i (surviving header lines):
1: Shared variables:
2:   LA_i, for each process p_i, initially 0
3:   MC_i ∈ {B, W}, for each process p_i, initially W
4:   color ∈ {B, W}, initially W

Table 4.1: Complexity bounds for progressive TMs.

Table 4.2: Complexity bounds for strongly progressive TMs.
  TM-correctness         | TM-liveness     | Invisible reads | rmw primitives          | Complexity
  Strict serializability | WF              |                 | read-write              | Impossible
  Strict serializability |                 |                 | read-write, conditional | Ω(n log n) RMRs
  Opacity                | starvation-free | yes             | read-write              | O(1) RAW/AWAR

Some questions remain open. Can the tight bounds … Strongly progressive, strictly serializable TMs cannot ensure that every t-operation completes (in a finite number of steps) using only read and write primitives. Algorithm 4.5 describes one means to circumvent this impossibility result: an opaque, strongly progressive TM implementation from read-write base objects that provides starvation-free TM-liveness. We conjecture that the lower bound of Theorem 4.21 on the RMR complexity is tight. Proving this remains an interesting open question.
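The invisible-reads discipline that recurs in the bounds above (constant-time t-reads that leave no trace in shared memory, with read-set validation deferred to commit) can be sketched as follows. This is our own single-process illustration of the idea behind Algorithm 4.2, not the thesis's listing; the versioned t-objects and all method names are assumptions of the sketch.

```python
class InvisibleReadTM:
    """Sketch of invisible reads with commit-time validation (cf. Algorithm 4.2):
    a t-read performs O(1) steps and writes nothing to shared memory; tryC
    revalidates the read set in O(|Rset|) steps and aborts if any t-object it
    read has since been overwritten by a committed transaction."""

    def __init__(self):
        self._val = {}   # t-object -> current value
        self._ver = {}   # t-object -> version, bumped on each commit

    def begin(self):
        return {"rset": {}, "wset": {}}

    def read(self, tx, x):
        if x in tx["wset"]:
            return tx["wset"][x]                       # read-own-write
        tx["rset"].setdefault(x, self._ver.get(x, 0))  # O(1): record version
        return self._val.get(x)

    def write(self, tx, x, v):
        tx["wset"][x] = v                              # buffered until commit

    def try_commit(self, tx):
        for x, seen in tx["rset"].items():             # O(|Rset|) validation
            if self._ver.get(x, 0) != seen:
                return False                           # abort: read invalidated
        for x, v in tx["wset"].items():                # install buffered writes
            self._val[x] = v
            self._ver[x] = self._ver.get(x, 0) + 1
        return True
```

A transaction whose read set is overwritten by a concurrent committed writer fails validation at commit time, while the reads themselves never touch shared metadata.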
Algorithm 5.1 RW DAP opaque implementation M ∈ OF; code for T_k

1: Shared base objects:
2:   tvar[m], storing [owner_m, oval_m, nval_m]
3:     for each t-object X_m; supports read, write, cas
4:   owner_m, a transaction identifier
5:   oval_m ∈ V
6:   nval_m ∈ V
7:   status[k] ∈ {live, aborted, committed},
8:     for each T_k; supports read, write, cas
9: Local variables:
10:  Rset_k, Wset_k for every transaction T_k;
11:    dictionaries storing {X_m, tvar[m]}
12: read_k(X_m):
13:   [owner_m, oval_m, nval_m] ← tvar[m].read()
14:   if owner_m ≠ k then
15:     s_m ← status[owner_m].read()
16:     if s_m = committed then
17:       curr = nval_m
18:     else if s_m = aborted then
19:       curr = oval_m
20:     else
21:       if status[owner_m].cas(live, aborted) then
22:         curr = oval_m
23:       else
24:         Return A_k
25:     if status[k] = live ∧ ¬validate() then
26:       Rset(T_k).add({X_m, [owner_m, oval_m, nval_m]})
27:       Return curr
28:     Return A_k
29:   else
30:     Return Rset(T_k).locate(X_m)
31: Function validate():
32:   if ∃{X_j, [owner_j, oval_j, nval_j]} ∈ Rset(T_k):
33:     ([owner_j, oval_j, nval_j] ≠ tvar[j].read()) then
34:     Return true
35:   Return false
36: write_k(X_m, v):
37:   [owner_m, oval_m, nval_m] ← tvar[m].read()
38:   if owner_m ≠ k then
39:     s_m ← status[owner_m].read()
40:     if s_m = committed then
41:       curr = nval_m
42:     else if s_m = aborted then
43:       curr = oval_m
44:     else
45:       if status[owner_m].cas(live, aborted) then
46:         curr = oval_m
47:       else
48:         Return A_k
49:     o_m ← tvar[m].cas([owner_m, oval_m, nval_m], [k, curr, v])
50:     if o_m ∧ status[k] = live then
51:       Wset_k.add({X_m, [k, curr, v]})
52:       Return ok
53:     else
54:       Return A_k
55:   else
56:     [owner_m, oval_m, nval_m] = Wset_k.locate(X_m)
57:     s = tvar[m].cas([owner_m, oval_m, nval_m], [k, oval_m, v])
58:     if s then
59:       Wset(T_k).add({X_m, [k, oval_m, v]})
60:       Return ok
61:     else
62:       Return A_k
63: tryC_k():
64:   if validate() then
65:     Return A_k
66:   if status[k].cas(live, committed) then
67:     Return C_k

Algorithm 5.2 Weak DAP opaque implementation M ∈ OF; code for T_k

1: read_k(X_m):
2:   [owner_m, oval_m, nval_m] ← tvar[m].read()
3:   if owner_m ≠ k then
4:     s_m ← status[owner_m].read()
5:     if s_m = committed then
6:       curr = nval_m
7:     else if s_m = aborted then
8:       curr = oval_m
9:     else
10:      if status[owner_m].cas(live, aborted) then
11:        curr = oval_m
12:      else
           Return A_k
13:   o_m ← tvar[m].cas([owner_m, oval_m, nval_m], [k, oval_m, nval_m])
14:   if o_m ∧ status[k] = live then
15:     Rset(T_k).add({X_m, [owner_m, oval_m, nval_m]})
16:     Return curr
17: else
18:   Return Rset(T_k).locate(X_m)
19: tryC_k():
20:   if status[k].cas(live, committed) then
21:     Return C_k

Algorithm 8.1 Sequential implementation LL (sorted linked list) of the set type (the listing begins "1: Shared variables:").

… of objects O_1 and O_2, respectively. Now, a correctness criterion Ψ is compositional if, for every history H on an object composition O_1 × O_2, if Ψ holds for H|O_i with respect to IS_i, for i ∈ {1, 2}, then Ψ holds for H with respect to IS = IS_1 × IS_2. Here, H|O_i denotes the subsequence of H consisting of events on O_i.

Theorem 8.1. LS-linearizability is compositional.

Proof. Let H, a history on O_1 × O_2, be LS-linearizable with respect to IS. Let each H|O_i, i ∈ {1, 2}, be LS-linearizable with respect to IS_i. Without loss of generality, we assume that H is complete (if H is incomplete, we consider any completion of it containing LS-linearizable completions of H|O_1 and H|O_2).

A surviving fragment of the locking write path:
22: if ¬L[ ].cas((ver, false), (ver, true)) then
23:   Return ⊥ // grab lock or abort
24: let X' ≠ X be such that {X', ver'} ∈ rbuf_k
26: if ¬L[ ].cas((ver', false), (ver', true)) then
      Return ⊥ // grab lock or abort
27: r[ ].write(true) // mark element for deletion
28: t-var[ ].write(v) // update memory
29: L[ ].write((ver + 1, false)) // release locks
30: L[ ].write((ver' + 1, false))

Different compilers may use different names for the delimiter; in GCC, it is __transaction_atomic [2]. Note that the counter-example would not work if we imagine that the data sets accessed by a transaction can be known in advance.
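The versioned-lock read pattern that recurs in the listings above (sample the lock's version, read the value, re-sample the version, and return ⊥ on a mismatch or a held lock) can be illustrated with a small sequential sketch. The class below is a seqlock-style stand-in of our own, not the thesis's code; None stands in for ⊥.

```python
class VersionedCell:
    """Seqlock-style cell: the version is odd while a write is in progress
    and is incremented again on release, so an optimistic reader that sees
    the same even version before and after its read knows that no write
    overlapped it (cf. lines 11-16 of Algorithm 8.2)."""

    def __init__(self, value):
        self.ver = 0          # even: unlocked; odd: write in progress
        self.value = value

    def read(self):
        v1 = self.ver         # get versioned lock
        val = self.value      # read the value
        v2 = self.ver         # reget versioned lock
        if v1 != v2 or v1 % 2 == 1:
            return None       # ⊥: a concurrent write was detected
        return val

    def write(self, value):
        self.ver += 1         # "grab lock": version becomes odd
        self.value = value
        self.ver += 1         # release: version even again, incremented
```

Readers pay no synchronization cost on the fast path; only writers mutate the version, which is what makes the reads invisible to other readers.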
However, in the thesis, we consider the conventional dynamic TM programming model. Here we treat each τ_i as a distinct type by adding index i to all elements of Φ_i, Γ_i, and Q_i.

Correctness. We begin by defining a consistency criterion, namely locally-serializable linearizability. We say that a concurrent implementation of a given sequential data type is locally serializable if it ensures that the local execution of each operation is equivalent to some execution of its sequential implementation.

References [69, 112] provide detailed overviews on HyTM designs and implementations. The software component of the HyTM algorithms presented in this paper is inspired by progressive STM implementations [36, 39, 90] and is subject to the lower bounds for progressive STMs established in [23, 62, 64, 90].

List of Figures

Transforming a sequential implementation of the list-based set to a TM-based concurrent one
History H is final-state opaque, while its prefix H' is not final-state opaque
An infinite history in which tryC_1 is incomplete and any two transactions are concurrent; each finite prefix of the history is du-opaque, but the infinite limit of the ever-extending sequence is not du-opaque
A history that is opaque, but not du-opaque
A sequential du-opaque history, which is not opaque by the definition of [59]
A history that is du-VWC, but not du-opaque
A history which is du-VWC but not du-TMS1
A history which is du-TMS1 but not du-VWC
Execution E of a permissive, opaque TM: T_2 and T_3 force T_1 to perform a RAW/AWAR in each R_1(X_k), 2 ≤ k ≤ m
Executions in the proof of Theorem 5.1; execution in 5.1d is not strictly serializable
Complexity gap between blocking and non-blocking TMs
Executions in the proof of Theorem 6.1; execution in 6.1a must maintain c distinct values of every t-object
Executions in the proof of Theorem 6.2; execution in 6.2c is not strictly serializable
Executions in the proof of Theorem 6.3; execution in 6.3c is not strictly serializable
Tracking set aborts in fast-path transactions; we denote a fast-path (resp. slow-path) transaction by F (resp. S)
Executions in the proof of Theorem 7.4; execution in 7.3d is not strictly serializable
A concurrency scenario for a list-based set, initially {1, 3, 4}, where value i is stored at node X_i: insert(2) and insert(5) can proceed concurrently with contains(5); the history is LS-linearizable but not serializable (we only depict important read-write events here)
(a) a history exporting schedule σ, with initial state {1, 2, 3}, accepted by I_LP ∈ SM; (b) a history exporting a problematic schedule σ', with initial state {3}, which should be accepted by any I ∈ P if it accepts σ

List of Tables

Complexity bounds for progressive TMs
Complexity bounds for strongly progressive TMs

Bibliography

Advanced Synchronization Facility Proposed Architectural Specification, March 2009. http://developer.amd.com/wordpress/media/2013/09/45432-ASF_Spec_2.1.pdf.

Transactional Memory in GCC, 2012.

S. V. Adve and K. Gharachorloo. Shared memory consistency models: A tutorial. IEEE Computer, 29(12):66-76, 1996.

Y. Afek, A. Levy, and A. Morrison. Software-improved hardware lock elision. In PODC. ACM, 2014.

Y. Afek, A. Matveev, and N. Shavit. Pessimistic software lock-elision. In Proceedings of the 26th International Conference on Distributed Computing, DISC '12, pages 297-311. Springer-Verlag, 2012.

M. K. Aguilera, S. Frølund, V. Hadzilacos, S. L. Horn, and S. Toueg. Abortable and query-abortable objects and their efficient implementation. In PODC, pages 23-32, 2007.

D. Alistarh, P. Eugster, M. Herlihy, A. Matveev, and N. Shavit. StackTrack: An automated transactional approach to concurrent memory reclamation. In Proceedings of the Ninth European Conference on Computer Systems, EuroSys '14, pages 25:1-25:14. ACM, 2014.

D. Alistarh, J. Kopinsky, P. Kuznetsov, S. Ravi, and N. Shavit. Inherent limitations of hybrid transactional memory. CoRR, abs/1405.5689, 2014. To appear in the 29th International Symposium on Distributed Computing (DISC '15), Japan.

D. Alistarh, J. Kopinsky, P. Kuznetsov, S. Ravi, and N. Shavit. Inherent limitations of hybrid transactional memory. 6th Workshop on the Theory of Transactional Memory, Paris, France, 2014.

B. Alpern and F. B. Schneider. Defining liveness. Inf. Process. Lett., 21(4):181-185, Oct. 1985.

C. S. Ananian, K. Asanovic, B. C. Kuszmaul, C. E. Leiserson, and S. Lie. Unbounded transactional memory. In Proceedings of the 11th International Symposium on High-Performance Computer Architecture, HPCA '05, pages 316-327. IEEE Computer Society, 2005.

J. H. Anderson and M. Moir. Universal constructions for multi-object operations. In Proceedings of the Fourteenth Annual ACM Symposium on Principles of Distributed Computing, PODC '95, pages 184-193. ACM, 1995.

T. E. Anderson. The performance of spin lock alternatives for shared-memory multiprocessors. IEEE Trans. Parallel Distrib. Syst., 1(1):6-16, 1990.

H. Attiya, A. Gotsman, S. Hans, and N. Rinetzky. Safety of live transactions in transactional memory: TMS is necessary and sufficient. In DISC, pages 376-390, 2014.

H. Attiya, R. Guerraoui, D. Hendler, and P. Kuznetsov. The complexity of obstruction-free implementations. J. ACM, 56(4), 2009.

H. Attiya, R. Guerraoui, D. Hendler, P. Kuznetsov, M. Michael, and M. Vechev. Laws of order: Expensive synchronization in concurrent algorithms cannot be eliminated. In POPL, pages 487-498, 2011.

H. Attiya, S. Hans, P. Kuznetsov, and S. Ravi. Safety of deferred update in transactional memory. In 2013 IEEE 33rd International Conference on Distributed Computing Systems, pages 601-610, 2013.

H. Attiya, S. Hans, P. Kuznetsov, and S. Ravi. Safety of deferred update in transactional memory. CoRR, abs/1301.6297, 2013.

H. Attiya, S. Hans, P. Kuznetsov, and S. Ravi. Safety and deferred update in transactional memory. In R. Guerraoui and P. Romano, editors, Transactional Memory. Foundations, Algorithms, Tools, and Applications, volume 8913 of Lecture Notes in Computer Science, pages 50-71. Springer International Publishing, 2015.

H. Attiya and D. Hendler. Time and space lower bounds for implementations using k-CAS. IEEE Transactions on Parallel and Distributed Systems, 21(2):162-173, Feb. 2010.

H. Attiya, D. Hendler, and P. Woelfel. Tight RMR lower bounds for mutual exclusion and other problems. In Proceedings of the Twenty-seventh ACM Symposium on Principles of Distributed Computing, PODC '08, page 447. ACM, 2008.

H. Attiya and E. Hillel. The cost of privatization in software transactional memory. IEEE Trans. Computers, 62(12):2531-2543, 2013.

H. Attiya, E. Hillel, and A. Milani. Inherent limitations on disjoint-access parallel implementations of transactional memory. Theory of Computing Systems, 49(4):698-719, 2011.

H. Attiya and A. Milani. Transactional scheduling for read-dominated workloads. In Proceedings of the 13th International Conference on Principles of Distributed Systems, OPODIS '09, pages 3-17. Springer-Verlag, 2009.

H. Attiya, G. Ramalingam, and N. Rinetzky. Sequential verification of serializability. In Proceedings of the 37th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 31-42, 2010.

H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations, and Advanced Topics. John Wiley & Sons, 2004.

G. Barnes. A method for implementing lock-free shared-data structures. In Proceedings of the Fifth Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA '93.
the Fifth Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA '93New York, NY, USAACMG. Barnes. A method for implementing lock-free shared-data structures. In Proceedings of the Fifth Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA '93, pages 261-270, New York, NY, USA, 1993. ACM. Concurrency of operations on B-trees. R Bayer, M Schkolnick, Readings in database systems. Morgan Kaufmann Publishers IncR. Bayer and M. Schkolnick. Concurrency of operations on B-trees. In Readings in database systems, pages 129-139. Morgan Kaufmann Publishers Inc., 1988. What Is Priority Inversion (And How Do You Control It)?. E Bruno, E. Bruno. What Is Priority Inversion (And How Do You Control It)? 2011. The pcl theorem: Transactions cannot be parallel, consistent and live. V Bushkov, D Dziuma, P Fatourou, R Guerraoui, SPAA. V. Bushkov, D. Dziuma, P. Fatourou, and R. Guerraoui. The pcl theorem: Transactions cannot be parallel, consistent and live. In SPAA, pages 178-187, 2014. On the liveness of transactional memory. V Bushkov, R Guerraoui, M Kapalka, Proceedings of the 2012 ACM Symposium on Principles of Distributed Computing, PODC '12. the 2012 ACM Symposium on Principles of Distributed Computing, PODC '12New York, NY, USAACMV. Bushkov, R. Guerraoui, and M. Kapalka. On the liveness of transactional memory. In Proceed- ings of the 2012 ACM Symposium on Principles of Distributed Computing, PODC '12, pages 9-18, New York, NY, USA, 2012. ACM. Improved single global lock fallback for best-effort hardware transactional memory. I Calciu, T Shpeisman, G Pokam, M Herlihy, Transact 2014 Workshop. ACMI. Calciu, T. Shpeisman, G. Pokam, and M. Herlihy. Improved single global lock fallback for best-effort hardware transactional memory. In Transact 2014 Workshop. ACM, 2014. Read invisibility, virtual world consistency and permissiveness are compatible. 
T Crain, D Imbs, M Raynal, ; Asap -Inria -Irisa -Cnrs , UMR6074 -INRIA -Institut National des Sciences Appliquées de Rennes -Université de Rennes IResearch ReportT. Crain, D. Imbs, and M. Raynal. Read invisibility, virtual world consistency and permissiveness are compatible. Research Report, ASAP -INRIA -IRISA -CNRS : UMR6074 -INRIA -Institut National des Sciences Appliquées de Rennes -Université de Rennes I, 11 2010. Hybrid NOrec: a case study in the effectiveness of best effort hardware transactional memory. L Dalessandro, F Carouge, S White, Y Lev, M Moir, M L Scott, M F Spear, ASPLOS. R. Gupta and T. C. MowryACML. Dalessandro, F. Carouge, S. White, Y. Lev, M. Moir, M. L. Scott, and M. F. Spear. Hybrid NOrec: a case study in the effectiveness of best effort hardware transactional memory. In R. Gupta and T. C. Mowry, editors, ASPLOS, pages 39-52. ACM, 2011. Norec: Streamlining stm by abolishing ownership records. L Dalessandro, M F Spear, M L Scott, SIGPLAN Not. 455L. Dalessandro, M. F. Spear, and M. L. Scott. Norec: Streamlining stm by abolishing ownership records. SIGPLAN Not., 45(5):67-78, Jan. 2010. Hybrid transactional memory. P Damron, A Fedorova, Y Lev, V Luchangco, M Moir, D Nussbaum, SIGPLAN Not. 4111P. Damron, A. Fedorova, Y. Lev, V. Luchangco, M. Moir, and D. Nussbaum. Hybrid transactional memory. SIGPLAN Not., 41(11):336-346, Oct. 2006. Early experience with a commercial hardware transactional memory implementation. D Dice, Y Lev, M Moir, D Nussbaum, Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS XIV. the 14th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS XIVNew York, NY, USAACMD. Dice, Y. Lev, M. Moir, and D. Nussbaum. Early experience with a commercial hardware transactional memory implementation. 
In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS XIV, pages 157-168, New York, NY, USA, 2009. ACM. Transactional locking ii. D Dice, O Shalev, N Shavit, Proceedings of the 20th International Conference on Distributed Computing, DISC'06. the 20th International Conference on Distributed Computing, DISC'06Berlin, HeidelbergSpringer-VerlagD. Dice, O. Shalev, and N. Shavit. Transactional locking ii. In Proceedings of the 20th International Conference on Distributed Computing, DISC'06, pages 194-208, Berlin, Heidelberg, 2006. Springer- Verlag. What really makes transactions fast? In Transact. D Dice, N Shavit, D. Dice and N. Shavit. What really makes transactions fast? In Transact, 2006. Solution of a problem in concurrent programming control. E W Dijkstra, Commun. ACM. 89569E. W. Dijkstra. Solution of a problem in concurrent programming control. Commun. ACM, 8(9):569-, Sept. 1965. Dcas is not a silver bullet for nonblocking algorithm design. S Doherty, D L Detlefs, L Groves, C H Flood, V Luchangco, P A Martin, M Moir, N Shavit, G L Steele, Jr , Proceedings of the Sixteenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '04. the Sixteenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '04New York, NY, USAACMS. Doherty, D. L. Detlefs, L. Groves, C. H. Flood, V. Luchangco, P. A. Martin, M. Moir, N. Shavit, and G. L. Steele, Jr. Dcas is not a silver bullet for nonblocking algorithm design. In Proceedings of the Sixteenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '04, pages 216-224, New York, NY, USA, 2004. ACM. Towards formally specifying and verifying transactional memory. S Doherty, L Groves, V Luchangco, M Moir, Formal Asp. Comput. 255S. Doherty, L. Groves, V. Luchangco, and M. Moir. Towards formally specifying and verifying transactional memory. Formal Asp. Comput., 25(5):769-799, 2013. 
On the power of hardware transactional memory to simplify memory management. A Dragojević, M Herlihy, Y Lev, M Moir, Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC '11. the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC '11New York, NY, USAACMA. Dragojević, M. Herlihy, Y. Lev, and M. Moir. On the power of hardware transactional memory to simplify memory management. In Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC '11, pages 99-108, New York, NY, USA, 2011. ACM. Universal constructions that ensure disjoint-access parallelism and wait-freedom. F Ellen, P Fatourou, E Kosmas, A Milani, C Travers, PODC. F. Ellen, P. Fatourou, E. Kosmas, A. Milani, and C. Travers. Universal constructions that ensure disjoint-access parallelism and wait-freedom. In PODC, pages 115-124, 2012. On the inherent sequentiality of concurrent objects. F Ellen, D Hendler, N Shavit, SIAM J. Comput. 413F. Ellen, D. Hendler, and N. Shavit. On the inherent sequentiality of concurrent objects. SIAM J. Comput., 41(3):519-536, 2012. The lightweight transaction library. R Ennals, R. Ennals. The lightweight transaction library. http://sourceforge.net/projects/libltx/files/. Software transactional memory should not be obstruction-free. R Ennals, R. Ennals. Software transactional memory should not be obstruction-free. 2005. A highly-efficient wait-free universal construction. P Fatourou, N D Kallimanis, Proceedings of the Twenty-third Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '11. the Twenty-third Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '11New York, NY, USAACMP. Fatourou and N. D. Kallimanis. A highly-efficient wait-free universal construction. 
In Proceedings of the Twenty-third Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '11, pages 325-334, New York, NY, USA, 2011. ACM. Elastic transactions. P Felber, V Gramoli, R Guerraoui, DISC. P. Felber, V. Gramoli, and R. Guerraoui. Elastic transactions. In DISC, pages 93-107, 2009. On the inherent weakness of conditional synchronization primitives. F Fich, D Hendler, N Shavit, Proceedings of the Twenty-third Annual ACM Symposium on Principles of Distributed Computing, PODC '04. the Twenty-third Annual ACM Symposium on Principles of Distributed Computing, PODC '04New York, NY, USAACMF. Fich, D. Hendler, and N. Shavit. On the inherent weakness of conditional synchronization prim- itives. In Proceedings of the Twenty-third Annual ACM Symposium on Principles of Distributed Computing, PODC '04, pages 80-87, New York, NY, USA, 2004. ACM. Practical lock-freedom. K Fraser, Cambridge University Computer LaborotoryTechnical reportK. Fraser. Practical lock-freedom. Technical report, Cambridge University Computer Laborotory, 2003. On the input acceptance of transactional memory. V Gramoli, D Harmanci, P Felber, Parallel Processing Letters. 201V. Gramoli, D. Harmanci, and P. Felber. On the input acceptance of transactional memory. Parallel Processing Letters, 20(1):31-50, 2010. From sequential to concurrent: correctness and relative efficiency (ba). V Gramoli, P Kuznetsov, S Ravi, Principles of Distributed Computing (PODC). V. Gramoli, P. Kuznetsov, and S. Ravi. From sequential to concurrent: correctness and relative efficiency (ba). In Principles of Distributed Computing (PODC), pages 241-242, 2012. Optimism for boosting concurrency. V Gramoli, P Kuznetsov, S Ravi, abs/1203.4751CoRRV. Gramoli, P. Kuznetsov, and S. Ravi. Optimism for boosting concurrency. CoRR, abs/1203.4751, 2012. A concurrency-optimal list-based set. V Gramoli, P Kuznetsov, S Ravi, D Shang, abs/1502.01633CoRRV. Gramoli, P. Kuznetsov, S. Ravi, and D. Shang. 
A concurrency-optimal list-based set. CoRR, abs/1502.01633, 2015. A concurrency-optimal list-based set (ba). CoRR. V Gramoli, P Kuznetsov, S Ravi, D Shang, abs/1502.0163329th International Symposium on Distributed Computing (DISC'15). To appear inV. Gramoli, P. Kuznetsov, S. Ravi, and D. Shang. A concurrency-optimal list-based set (ba). CoRR, abs/1502.01633, 2015. To appear in 29th International Symposium on Distributed Computing (DISC'15). Transaction Processing: Concepts and Techniques. J Gray, A Reuter, Morgan Kaufmann Publishers IncSan Francisco, CA, USA1st editionJ. Gray and A. Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1st edition, 1992. Permissiveness in transactional memories. R Guerraoui, T A Henzinger, V Singh, DISC. R. Guerraoui, T. A. Henzinger, and V. Singh. Permissiveness in transactional memories. In DISC, pages 305-319, 2008. On obstruction-free transactions. R Guerraoui, M Kapalka, Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures, SPAA '08. the twentieth annual symposium on Parallelism in algorithms and architectures, SPAA '08New York, NY, USAACMR. Guerraoui and M. Kapalka. On obstruction-free transactions. In Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures, SPAA '08, pages 304-313, New York, NY, USA, 2008. ACM. On the correctness of transactional memory. R Guerraoui, M Kapalka, Proceedings of the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '08. the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '08New York, NY, USAACMR. Guerraoui and M. Kapalka. On the correctness of transactional memory. In Proceedings of the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '08, pages 175-184, New York, NY, USA, 2008. ACM. The semantics of progress in lock-based transactional memory. 
R Guerraoui, M Kapalka, SIGPLAN Not. 441R. Guerraoui and M. Kapalka. The semantics of progress in lock-based transactional memory. SIGPLAN Not., 44(1):404-415, Jan. 2009. Transactional memory: Glimmer of a theory. R Guerraoui, M Kapalka, Proceedings of the 21st International Conference on Computer Aided Verification, CAV '09. the 21st International Conference on Computer Aided Verification, CAV '09Berlin, HeidelbergSpringer-VerlagR. Guerraoui and M. Kapalka. Transactional memory: Glimmer of a theory. In Proceedings of the 21st International Conference on Computer Aided Verification, CAV '09, pages 1-15, Berlin, Heidelberg, 2009. Springer-Verlag. R Guerraoui, M Kapalka, Principles of Transactional Memory, Synthesis Lectures on Distributed Computing Theory. Morgan and ClaypoolR. Guerraoui and M. Kapalka. Principles of Transactional Memory, Synthesis Lectures on Dis- tributed Computing Theory. Morgan and Claypool, 2010. Stmbench7: A benchmark for software transactional memory. R Guerraoui, M Kapalka, J Vitek, SIGOPS Oper. Syst. Rev. 413R. Guerraoui, M. Kapalka, and J. Vitek. Stmbench7: A benchmark for software transactional memory. SIGOPS Oper. Syst. Rev., 41(3):315-324, Mar. 2007. Linearizability is not always a safety property. R Guerraoui, E Ruppert, NETYS. R. Guerraoui and E. Ruppert. Linearizability is not always a safety property. In NETYS, pages 57-69, 2014. What is safe in transactional memory. P K Hagit Attiya, Sandeep Hans, S Ravi, 4th Workshop on the Theory of Transactional Memory. Madeira, PortugalP. K. Hagit Attiya, Sandeep Hans and S. Ravi. What is safe in transactional memory. 4th Workshop on the Theory of Transactional Memory, Madeira, Portugal, 2012. . L Hammond, V Wong, M Chen, B D Carlstrom, J D Davis, B Hertzberg, M K Prabhu, H Wijaya, C Kozyrakis, K Olukotun, Transactional memory coherence and consistency. SIGARCH Comput. Archit. News. 322102L. Hammond, V. Wong, M. Chen, B. D. Carlstrom, J. D. Davis, B. Hertzberg, M. K. Prabhu, H. Wijaya, C. 
Kozyrakis, and K. Olukotun. Transactional memory coherence and consistency. SIGARCH Comput. Archit. News, 32(2):102-, Mar. 2004. T Harris, J R Larus, R Rajwar, Transactional Memory. Morgan & Claypool Publishers2nd editionT. Harris, J. R. Larus, and R. Rajwar. Transactional Memory, 2nd edition. Synthesis Lectures on Computer Architecture. Morgan & Claypool Publishers, 2010. A pragmatic implementation of non-blocking linked-lists. T L Harris, DISC. T. L. Harris. A pragmatic implementation of non-blocking linked-lists. In DISC, pages 300-314, 2001. A lazy concurrent list-based set algorithm. S Heller, M Herlihy, V Luchangco, M Moir, W N Scherer, N Shavit, OPODIS. S. Heller, M. Herlihy, V. Luchangco, M. Moir, W. N. Scherer, and N. Shavit. A lazy concurrent list-based set algorithm. In OPODIS, pages 3-16, 2006. Flat combining and the synchronization-parallelism tradeoff. D Hendler, I Incze, N Shavit, M Tzafrir, SPAA. D. Hendler, I. Incze, N. Shavit, and M. Tzafrir. Flat combining and the synchronization-parallelism tradeoff. In SPAA, pages 355-364, 2010. Computer Architecture: A Quantitative Approach. J L Hennessy, D A Patterson, Morgan Kaufmann Publishers IncSan Francisco, CA, USA3 editionJ. L. Hennessy and D. A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 3 edition, 2003. Apologizing versus asking permission: optimistic concurrency control for abstract data types. M Herlihy, ACM Trans. Database Syst. 151M. Herlihy. Apologizing versus asking permission: optimistic concurrency control for abstract data types. ACM Trans. Database Syst., 15(1):96-124, 1990. Wait-free synchronization. M Herlihy, ACM Trans. Prog. Lang. Syst. 131M. Herlihy. Wait-free synchronization. ACM Trans. Prog. Lang. Syst., 13(1):123-149, 1991. Transactional boosting: A methodology for highly-concurrent transactional objects. M Herlihy, E Koskinen, PPoPP. New York, NY, USAACMM. Herlihy and E. Koskinen. 
Transactional boosting: A methodology for highly-concurrent trans- actional objects. In PPoPP, New York, NY, USA, 2008. ACM. Composable transactional objects: A position paper. M Herlihy, E Koskinen, Lecture Notes in Computer Science. Z. Shao8410SpringerProgramming Languages and SystemsM. Herlihy and E. Koskinen. Composable transactional objects: A position paper. In Z. Shao, editor, Programming Languages and Systems, volume 8410 of Lecture Notes in Computer Science, pages 1-7. Springer Berlin Heidelberg, 2014. Obstruction-free synchronization: Double-ended queues as an example. M Herlihy, V Luchangco, M Moir, ICDCS. M. Herlihy, V. Luchangco, and M. Moir. Obstruction-free synchronization: Double-ended queues as an example. In ICDCS, pages 522-529, 2003. Software transactional memory for dynamic-sized data structures. M Herlihy, V Luchangco, M Moir, W N Scherer, Iii , Proceedings of the Twenty-second Annual Symposium on Principles of Distributed Computing, PODC '03. the Twenty-second Annual Symposium on Principles of Distributed Computing, PODC '03New York, NY, USAACMM. Herlihy, V. Luchangco, M. Moir, and W. N. Scherer, III. Software transactional memory for dynamic-sized data structures. In Proceedings of the Twenty-second Annual Symposium on Principles of Distributed Computing, PODC '03, pages 92-101, New York, NY, USA, 2003. ACM. Transactional memory: architectural support for lock-free data structures. M Herlihy, J E B Moss, ISCA. M. Herlihy and J. E. B. Moss. Transactional memory: architectural support for lock-free data structures. In ISCA, pages 289-300, 1993. The art of multiprocessor programming. M Herlihy, N Shavit, Morgan KaufmannM. Herlihy and N. Shavit. The art of multiprocessor programming. Morgan Kaufmann, 2008. On the nature of progress. M Herlihy, N Shavit, OPODIS. M. Herlihy and N. Shavit. On the nature of progress. In OPODIS, pages 313-328, 2011. Linearizability: A correctness condition for concurrent objects. M Herlihy, J M Wing, ACM Trans. 
Program. Lang. Syst. 123M. Herlihy and J. M. Wing. Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst., 12(3):463-492, 1990. Local-spin mutual exclusion algorithms on the DSM model using fetch-and-store objects. L Hyonho, L. Hyonho. Local-spin mutual exclusion algorithms on the DSM model using fetch-and-store objects. 2003. Virtual world consistency: A condition for STM systems (with a versatile protocol with invisible read operations). D Imbs, M , Theor. Comput. Sci. 444D. Imbs and M. Raynal. Virtual world consistency: A condition for STM systems (with a versatile protocol with invisible read operations). Theor. Comput. Sci., 444, July 2012. Disjoint-access-parallel implementations of strong shared memory primitives. A Israeli, L Rappoport, PODC. A. Israeli and L. Rappoport. Disjoint-access-parallel implementations of strong shared memory primitives. In PODC, pages 151-160, 1994. Theorie der Endlichen und Unendlichen Graphen: Kombinatorische Topologie der Streckenkomplexe. D König, Akad. VerlagD. König. Theorie der Endlichen und Unendlichen Graphen: Kombinatorische Topologie der Streckenkomplexe. Akad. Verlag. 1936. Hybrid transactional memory. S Kumar, M Chu, C J Hughes, P Kundu, A Nguyen, Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '06. the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '06New York, NY, USAACMS. Kumar, M. Chu, C. J. Hughes, P. Kundu, and A. Nguyen. Hybrid transactional memory. In Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '06, pages 209-220, New York, NY, USA, 2006. ACM. An optimality theory of concurrency control for databases. H T Kung, C H Papadimitriou, SIGMOD. H. T. Kung and C. H. Papadimitriou. An optimality theory of concurrency control for databases. In SIGMOD, pages 116-126, 1979. 
On the cost of concurrency in transactional memory. P Kuznetsov, S Ravi, OPODIS. P. Kuznetsov and S. Ravi. On the cost of concurrency in transactional memory. In OPODIS, pages 112-127, 2011. On the cost of concurrency in transactional memory. P Kuznetsov, S Ravi, abs/1103.1302CoRRP. Kuznetsov and S. Ravi. On the cost of concurrency in transactional memory. CoRR, abs/1103.1302, 2011. On partial wait-freedom in transactional memory. CoRR, abs/1407. P Kuznetsov, S Ravi, 6876P. Kuznetsov and S. Ravi. On partial wait-freedom in transactional memory. CoRR, abs/1407.6876, 2014. On partial wait-freedom in transactional memory. P Kuznetsov, S Ravi, Proceedings of the 2015 International Conference on Distributed Computing and Networking. the 2015 International Conference on Distributed Computing and NetworkingGoa, India10P. Kuznetsov and S. Ravi. On partial wait-freedom in transactional memory. In Proceedings of the 2015 International Conference on Distributed Computing and Networking, ICDCN 2015, Goa, India, January 4-7, 2015, page 10, 2015. Progressive transactional memory in time and space. P Kuznetsov, S Ravi, abs/1502.04908CoRRP. Kuznetsov and S. Ravi. Progressive transactional memory in time and space. CoRR, abs/1502.04908, 2015. Progressive transactional memory in time and space. P Kuznetsov, S Ravi, abs/1502.0490813th International Conference on Parallel Computing Technologies. RussiaCoRRTo appear inP. Kuznetsov and S. Ravi. Progressive transactional memory in time and space. CoRR, abs/1502.04908, 2015. To appear in 13th International Conference on Parallel Computing Tech- nologies, Russia. Why transactional memory should not be obstruction-free. P Kuznetsov, S Ravi, abs/1502.02725CoRRP. Kuznetsov and S. Ravi. Why transactional memory should not be obstruction-free. CoRR, abs/1502.02725, 2015. Why transactional memory should not be obstruction-free. CoRR. P Kuznetsov, S Ravi, abs/1502.0272529th International Symposium on Distributed Computing (DISC'15). 
JapanTo appear inP. Kuznetsov and S. Ravi. Why transactional memory should not be obstruction-free. CoRR, abs/1502.02725, 2015. To appear in 29th International Symposium on Distributed Computing (DISC'15), Japan. Putting opacity in its place. M Lesani, V Luchangco, M Moir, WTTM. M. Lesani, V. Luchangco, and M. Moir. Putting opacity in its place. In WTTM, 2012. Phtm: Phased transactional memory. Y Lev, M Moir, D Nussbaum, Workshop on Transactional Computing (Transact). Y. Lev, M. Moir, and D. Nussbaum. Phtm: Phased transactional memory. In In Work- shop on Transactional Computing (Transact), 2007. research.sun.com/scalable/pubs/ TRANS- ACT2007PhTM.pdf. Distributed Algorithms. N A Lynch, Morgan KaufmannN. A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996. Adaptive software transactional memory. V J Marathe, W N S Iii, M L Scott, Proc. of the 19th Intl. Symp. on Distributed Computing. of the 19th Intl. Symp. on Distributed ComputingV. J. Marathe, W. N. S. Iii, and M. L. Scott. Adaptive software transactional memory. In In Proc. of the 19th Intl. Symp. on Distributed Computing, pages 354-368, 2005. Reduced hardware transactions: a new approach to hybrid transactional memory. A Matveev, N Shavit, Proceedings of the 25th ACM symposium on Parallelism in algorithms and architectures. the 25th ACM symposium on Parallelism in algorithms and architecturesACMA. Matveev and N. Shavit. Reduced hardware transactions: a new approach to hybrid transactional memory. In Proceedings of the 25th ACM symposium on Parallelism in algorithms and architectures, pages 11-22. ACM, 2013. Memory barriers: a hardware view for software hackers. P E Mckenney, Linux Technology Center. P. E. McKenney. Memory barriers: a hardware view for software hackers. Linux Technology Center, IBM Beaverton, June 2010. Single global lock semantics in a weakly atomic stm. V Menon, S Balensiefer, T Shpeisman, A.-R Adl-Tabatabai, R L Hudson, B Saha, A Welc, SIGPLAN Not. 435V. Menon, S. Balensiefer, T. 
Shpeisman, A.-R. Adl-Tabatabai, R. L. Hudson, B. Saha, and A. Welc. Single global lock semantics in a weakly atomic stm. SIGPLAN Not., 43(5):15-26, May 2008. High performance dynamic lock-free hash tables and list-based sets. M M Michael, SPAA. M. M. Michael. High performance dynamic lock-free hash tables and list-based sets. In SPAA, pages 73-82, 2002. Simple, fast, and practical non-blocking and blocking concurrent queue algorithms. M M Michael, M L Scott, PODC. M. M. Michael and M. L. Scott. Simple, fast, and practical non-blocking and blocking concurrent queue algorithms. In PODC, pages 267-275, 1996. Memory Speculation of the Blue Gene/Q Compute Chip. M Ohmacht, M. Ohmacht. Memory Speculation of the Blue Gene/Q Compute Chip, 2011. http://wands.cse. lehigh.edu/IBM_BQC_PACT2011.ppt. Proving liveness properties of concurrent programs. S S Owicki, L Lamport, ACM Trans. Program. Lang. Syst. 43S. S. Owicki and L. Lamport. Proving liveness properties of concurrent programs. ACM Trans. Program. Lang. Syst., 4(3):455-495, 1982. The serializability of concurrent database updates. C H Papadimitriou, J. ACM. 26C. H. Papadimitriou. The serializability of concurrent database updates. J. ACM, 26:631-653, 1979. On maintaining multiple versions in STM. D Perelman, R Fan, I Keidar, PODC. D. Perelman, R. Fan, and I. Keidar. On maintaining multiple versions in STM. In PODC, pages 16-25, 2010. Transactional Synchronization in Haswell. J Reinders, J. Reinders. Transactional Synchronization in Haswell, 2012. http://software.intel.com/ en-us/blogs/2012/02/07/transactional-synchronization-in-haswell/. Software Transactional Memory Building Blocks. T , T. Riegel. Software Transactional Memory Building Blocks. 2013. Optimizing hybrid transactional memory: The importance of nonspeculative operations. T Riegel, P Marlier, M Nowack, P Felber, C Fetzer, Proceedings of the 23rd ACM Symposium on Parallelism in Algorithms and Architectures. 
[]
[ "Left-right mixing on leptonic and semileptonic b → u decays" ]
[ "Chuan-Hung Chen \nDepartment of Physics\nNational Cheng-Kung University\nTainan 701, Taiwan\nNational Center for Theoretical Sciences\nHsinchu 300, Taiwan\n", "Soo-Hyeon Nam \nDepartment of Physics\nNational Cheng-Kung University\nTainan 701, Taiwan\nNational Center for Theoretical Sciences\nHsinchu 300, Taiwan\n" ]
[ "Department of Physics\nNational Cheng-Kung University\nTainan 701, Taiwan", "National Center for Theoretical Sciences\nHsinchu 300, Taiwan" ]
[]
It has been known that a disagreement exists between the determination of |V_ub| from inclusive B → X_u ℓν decays and that from exclusive B → πℓν decays. In order to resolve the mismatch, we investigate the left-right (LR) mixing effects, denoted by ξ_u, in leptonic and semileptonic b → u decays. We find that the new interactions (V + A) × (V − A) induced via the LR mixing can explain the mismatch between the values of |V_ub| if Re(ξ_u) = −(0.14 ± 0.12). Furthermore, we also find that the LR mixing effects can enhance the branching fractions for B → τν and B → ρℓν decays by 30% and 17%, respectively, while reducing the branching fraction for B → γℓν decays by 18%.
10.1016/j.physletb.2008.07.095
[ "https://arxiv.org/pdf/0807.0896v1.pdf" ]
118,992,277
0807.0896
9fe19f1747b8d45b218ae0a1877f844a5e72910c
Left-right mixing on leptonic and semileptonic b → u decays
Chuan-Hung Chen and Soo-Hyeon Nam
Department of Physics, National Cheng-Kung University, Tainan 701, Taiwan
National Center for Theoretical Sciences, Hsinchu 300, Taiwan
(Dated: July 6, 2008)

It has been known that a disagreement exists between the determination of |V_ub| from inclusive B → X_u ℓν decays and that from exclusive B → πℓν decays. In order to resolve the mismatch, we investigate the left-right (LR) mixing effects, denoted by ξ_u, in leptonic and semileptonic b → u decays. We find that the new interactions (V + A) × (V − A) induced via the LR mixing can explain the mismatch between the values of |V_ub| if Re(ξ_u) = −(0.14 ± 0.12). Furthermore, we also find that the LR mixing effects can enhance the branching fractions for B → τν and B → ρℓν decays by 30% and 17%, respectively, while reducing the branching fraction for B → γℓν decays by 18%.

The dominant weak interaction in b → u decays in the standard model (SM) is strongly suppressed by the quark mixing matrix element |V_ub| ∼ λ⁴, where λ ≃ 0.22 in the Wolfenstein parametrization [1]. Although the relevant decays occur at tree level, such decays are often sensitive to non-standard physics beyond the SM if the new-physics effects are not directly proportional to the small weak mixing. One of the simplest extensions of the SM corresponding to such a scenario is the general left-right model (LRM) with gauge group SU(2)_L × SU(2)_R × U(1) [3].
Although the new-physics effects in the LRM are accompanied by suppression factors such as the W_L−W_R mixing angle ξ, such suppression could be compensated by the right-handed quark mixing matrix V^R if V^R ≠ V^L (nonmanifest LRM), where V^L is the usual Cabibbo-Kobayashi-Maskawa (CKM) matrix [2]. Especially, if V^R takes one of the following forms, the W_R mass limit can be lowered to approximately 300 GeV [4,5], and V^R_ub can be as large as λ (for M_{W_R} ≥ 800 GeV) [6]:

    (1)

In this case, the right-handed current contributions in b → u decays can be maximal. The right-handed gauge boson mass M_{W_R} and the mixing angle ξ are restricted by a number of low-energy phenomenological constraints under various assumptions [5]. From the global analysis of muon decay measurements [7], the following bound on ξ can be obtained without imposing discrete left-right symmetry [8]:

    ξ ≤ (g_R / g_L) (M²_{W_L} / M²_{W_R}) < 0.034 (g_L / g_R),    (2)

where g_L (g_R) is the left(right)-handed gauge coupling constant. Although this mixing angle ξ is small, the combined parameter ξ V^R_ub / V^L_ub could contribute significantly to the value of |V_ub| extracted from the data on b → u decays. The general four-fermion interaction for b → q ℓ ν̄_ℓ decays with V ± A currents can be written as

    H_eff = 2√2 G_F V^L_qb [ (q̄_L γ_μ b_L) + ξ_q (q̄_R γ_μ b_R) ] (ℓ̄_L γ^μ ν_L),    (3)

where ξ_q ≡ ξ (g_R V^R_qb)/(g_L V^L_qb) and q = u, c. As well as the above terms, one can include other terms with right-handed leptons. However, the interference of such terms with the dominant one is suppressed by the small lepton masses m_ℓ m_ν, and the second dominant term is suppressed by ξ² or 1/M⁴_{W_R}, so we can drop them. From the above expression, it is clear that ξ_u ≠ ξ_c in general.
The bound on ξ_c, ξ_c ≈ 0.14 ± 0.18, was obtained by Voloshin from the difference ΔV_cb = |V_cb|_incl − |V_cb|_excl, where |V_cb|_incl and |V_cb|_excl were extracted from the inclusive rate of the decays B → ℓνX_c and from the exclusive decay B → D*ℓν at zero recoil, respectively [9]. One can see from Ref. [9] that |V_cb|_incl is related to V^L_cb of Eq. (3) as |V_cb|_incl ≈ |V^L_cb| |1 − ξ_c f(x_c)|, where f(x_q) is a kinematic phase-space function proportional to the ratio x_q = m_q/m_b. For b → u decays, neglecting the u-quark mass, one can safely use the approximation |V_ub|_incl ≃ |V^L_ub|, assuming that ξ_u is small. Experimentally, unlike the case of |V_cb|_incl, the determination of |V_ub|_incl is very difficult due to the large background from b → c decays, since |V^L_ub| ≪ |V^L_cb|. One may remove this large background by applying specific kinematic selection criteria such as a lepton-energy requirement, but in that restricted kinematic region the inclusive amplitude is governed by a non-perturbative shape function which is not known theoretically from first principles. In order to overcome this problem, various theoretical techniques have been developed. In this letter, we adopt the following values obtained by the techniques called Dressed Gluon Exponentiation (DGE) [10] and the Analytic Coupling model (AC) [11]:

    |V_ub|_incl × 10³ = 4.48 ± 0.16 +0.25/−0.26   (DGE),
                        3.78 ± 0.13 ± 0.24        (AC),    (4)

where each value is an average of independent measurements [12]. Other than these two, there are several other methods to determine |V_ub|_incl [13]. However, we do not consider them here because they use inputs obtained from other measurements, such as b → cℓν moments, which could also be affected by possible new-physics contributions; the values of |V_ub|_incl obtained from such methods are therefore not suitable for our analysis. For the numerical analysis, we use the weighted average of the two determinations of Eq.
(4):

    |V_ub|_incl = (4.09 ± 0.20) × 10⁻³.    (5)

Due to the large error, this average can only be provisional. The determination of |V_ub|_excl from exclusive semileptonic B → π decays requires a theoretical calculation of the hadronic matrix element, parametrized in terms of form factors. The most recent values of the B → π form factors were calculated with QCD light-cone sum rules (LCSR), and the extracted value of |V_ub|_excl is [14]:

    |V_ub|_excl = (3.5 ± 0.4 ± 0.2 ± 0.1) × 10⁻³.    (6)

This updated result is in very good agreement with the earlier results from other groups, which can also be found in Ref. [14] with a detailed discussion, so we do not repeat them here. The amplitude of semileptonic B → π decays is determined only by the vector current (ūγ_μ b), and gets the overall factor (1 + ξ_u). From Eq. (3), one can then relate |V_ub|_excl to |V^L_ub| as

    |V_ub|_excl = |V^L_ub| |1 + ξ_u| ≃ |V_ub|_incl |1 + ξ_u|.    (7)

From the mismatch between the values of |V_ub| extracted with the two different methods given in Eqs. (5,6), we roughly estimate the mixing parameter ξ_u as

    ξ^r_u = −(0.14 ± 0.12),    (8)

where ξ^r_q ≡ Re(ξ_q), and we assumed |ξ^r_u| ≫ |ξ_u|². Of course, a more accurate analysis of |V_ub| extracted from the experimental data could further improve the bounds on ξ_u. As discussed above, the obtained ξ^r_u is negative while ξ^r_c is positive, which implies that the mixing parameter ξ_q is not universal and, in this case, the manifest (V^R = V^L) LRM is disfavored. This negative value of the left-right mixing contribution commonly reduces the branching fractions for semileptonic B → P decays in b → u transitions, where P indicates a pseudoscalar meson. Using the obtained value of ξ^r_u, we will also estimate the branching fractions for other types of b → u transitions, such as B → τν, B → ρℓν, and B → γℓν decays. Recently, the BELLE [15] and BABAR [16] collaborations have found evidence for the purely leptonic B⁻ → τ⁻ν̄ decays.
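As a quick sanity check (not part of the original letter), the estimate in Eq. (8) follows from inverting Eq. (7) numerically with the central values of Eqs. (5) and (6):

```python
# Back-of-the-envelope check of Eq. (8) from Eqs. (5)-(7).
# For small real xi_u, |1 + xi_u| ~ 1 + Re(xi_u).
V_incl = 4.09e-3   # |V_ub|_incl, Eq. (5)
V_excl = 3.5e-3    # |V_ub|_excl, Eq. (6)

xi_r = V_excl / V_incl - 1   # Re(xi_u) ~ |V_ub|_excl / |V_ub|_incl - 1
print(f"Re(xi_u) ~ {xi_r:.2f}")   # reproduces the central value -0.14
```

which reproduces the central value −0.14 quoted in Eq. (8); the ±0.12 uncertainty comes from propagating the errors of Eqs. (5) and (6).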
Their measurements are

    Br(B⁻ → τ⁻ν̄_τ) = (1.79 +0.56 −0.49 +0.46 −0.51) × 10⁻⁴   (BELLE),
                      (1.2 ± 0.4 ± 0.3 ± 0.2) × 10⁻⁴          (BABAR),    (9)

where the BABAR result is an average of two results, (0.9 ± 0.6 ± 0.1) × 10⁻⁴ and (1.8 +0.9 −0.8 ± 0.4 ± 0.2) × 10⁻⁴, from separate analyses with semileptonic and hadronic tags, respectively, the latter being the newer one. On the theory side, there have been numerous discussions of the mode B → τν in physics beyond the SM, such as the two-Higgs-doublet model (2HDM) [17] and the minimal supersymmetric SM (MSSM) [18,19]. This process occurs via annihilation of the b and ū quarks, and its amplitude is determined only by the axial current (ūγ_μγ_5 b). The branching ratio is therefore given by

    Br(B⁻ → τ⁻ν̄_τ) = (G_F² m_B m_τ² / 8π) (1 − m_τ²/m_B²)² f_B² |V^L_ub|² |1 − ξ_u|² τ_{B⁻},    (10)

where τ_{B⁻} is the lifetime of the B⁻ and f_B is the B meson decay constant. Using f_B = (216 ± 22) MeV obtained from unquenched lattice QCD [20], we arrive at the SM prediction for the τ⁻ν̄_τ branching fraction of (1.38 ± 0.31) × 10⁻⁴. In the presence of right-handed currents, for small ξ, our estimate of the branching fraction according to Eq. (8) is

    Br(B⁻ → τ⁻ν̄_τ) = (1.78 ± 0.53) × 10⁻⁴.    (11)

Interestingly, this value agrees very well with the BELLE result and the new BABAR result, but not with the old BABAR result. The decay mode B → ρℓν has been studied earlier in the SM by many authors [21]. Meanwhile, the left-right mixing effect in B → ρℓν decays was also studied for selected regions of q² in Ref. [22], where ξ_u was assumed to be a positive real parameter. In this letter, we reexamine the mode B → ρℓν over the whole range of q² with the value of ξ_u in Eq. (8).
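The quoted central values can be reproduced with a short script evaluating Eq. (10) directly. This is a rough check, not the authors' code; the Fermi constant, masses, and B⁻ lifetime below are standard values I supply as inputs, while f_B and |V_ub|_incl are the ones quoted in the letter:

```python
import math

# Numeric check of Eq. (10) in natural units (GeV).
G_F   = 1.1664e-5              # Fermi constant, GeV^-2 (assumed standard value)
m_B   = 5.279                  # B- mass, GeV (assumed standard value)
m_tau = 1.777                  # tau mass, GeV (assumed standard value)
f_B   = 0.216                  # B decay constant, GeV (Ref. [20], quoted above)
V_ub  = 4.09e-3                # |V_ub|_incl, Eq. (5)
tau_B = 1.64e-12 / 6.582e-25   # B- lifetime in seconds, converted to GeV^-1 via hbar

def br_tau_nu(xi_r=0.0):
    """Br(B- -> tau- nubar) from Eq. (10), keeping only the real part of xi_u."""
    phase = (1 - m_tau**2 / m_B**2) ** 2
    return (G_F**2 * m_B * m_tau**2 / (8 * math.pi)
            * phase * f_B**2 * V_ub**2 * (1 - xi_r)**2 * tau_B)

print(br_tau_nu())        # ~1.38e-4, the SM central value quoted above
print(br_tau_nu(-0.14))   # ~1.79e-4, close to the LR-mixing value of Eq. (11)
```

The enhancement is just the factor |1 − ξ_u|² ≈ (1.14)² ≈ 1.30, i.e. the ~30% increase quoted in the abstract.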
Since the ρ is a vector particle, all virtual W polarizations are allowed in semileptonic B → ρ decays, and the hadronic matrix elements for B → ρ transitions can be written in terms of the four Lorentz-invariant form factors V and A_{0,1,2} as

    ⟨ρ(p_ρ, ǫ)| ū γ_μ b |B(p_B)⟩ = −[2 V(q²)/(m_B + m_ρ)] ε_{μναβ} ǫ*^ν p_B^α p_ρ^β,

    ⟨ρ(p_ρ, ǫ)| ū γ_μ γ_5 b |B(p_B)⟩ = i [A_0(q²)/q²] 2m_ρ (ǫ*·q) q_μ
        + i A_1(q²) (m_B + m_ρ) [ ǫ*_μ − (ǫ*·q / q²) q_μ ]
        − i [A_2(q²)/(m_B + m_ρ)] [ (p_B + p_ρ)_μ − ((m_B² − m_ρ²)/q²) q_μ ] (ǫ*·q),    (12)

where q is the momentum of the lepton pair and p_M is the momentum of the meson M. In the limit of massless leptons, the terms proportional to q_μ in Eq. (12) vanish, and the three helicity amplitudes H_{±,0} depend effectively on only three form factors, V and A_{1,2}:

    H_± = [1/(m_B + m_ρ)] [ (m_B + m_ρ)² (1 − ξ_u) A_1(q²) ∓ 2 m_B |p_ρ| (1 + ξ_u) V(q²) ],

    H_0 = [m_B (1 − ξ_u) / (2 m_ρ (m_B + m_ρ) √y)] [ (1 − m_ρ²/m_B² − y) (m_B + m_ρ)² A_1(q²) − 4 |p_ρ|² A_2(q²) ],    (13)

where y = q²/m_B² and p_ρ is the ρ meson three-momentum in the B-meson rest frame. In terms of these three helicity amplitudes, the differential decay rate is then given by (see footnote 1)

    d²Γ(B⁰ → ρ⁻ℓ⁺ν_ℓ) / (dy d cos θ_ℓ) = [G_F² m_B² |p_ρ| y / (256 π³)] |V^L_ub|²
        × [ (1 − cos θ_ℓ)² |H_+|² + (1 + cos θ_ℓ)² |H_−|² + 2 sin² θ_ℓ |H_0|² ],    (14)

where θ_ℓ is the azimuthal angle between the directions of the ℓν system and the lepton in the ℓν rest frame. At large q², the axial current, represented by the A_i terms, is dominant, and the corresponding decay rate can be expressed as Γ ∼ |1 − ξ_u|² Γ_SM, while the vector current, represented by the V term, could be important at low q².

Footnote 1: The general form of the differential decay rate for semileptonic B → ρ transitions with non-zero lepton masses in the SM can be found in Ref. [19]. The right-handed current contribution can simply be obtained by replacing the form factors V and A_i in the SM with (1 + ξ_u)V and (1 − ξ_u)A_i, respectively.
A theoretical prediction of the B → ρ decay rate requires a specific choice of form factors. For the numerical analysis, we use the recent LCSR result [23], where the B → ρ form factors V and A_i are parametrized as

    F(q²) = f_1 / (1 − q²/m_1²) + f_2 / (1 − q²/m_2²)^n,    (15)

and the corresponding parameters are collected in Table I. Using these values, we plot the differential branching fraction for B⁰ → ρ⁻ℓ⁺ν_ℓ decays as a function of q² in Fig. 1. We also show in Fig. 2 the dΓ/dq² distribution for each of the terms H_{±,0} in Eq. (14). As one can see from the figures, H_− contributes the largest fraction of the total rate in the SM, but the left-right mixing effect in H_− is small due to the cancellation between the vector and axial currents. However, H_0 is determined only by the axial current and receives a significant contribution from the left-right mixing term. Besides the branching fraction, one can consider the forward-backward asymmetry (A_FB) of the charged lepton, defined by

    A_FB = [ ∫₀¹ d cos θ (d²Γ/dy d cos θ) − ∫₋₁⁰ d cos θ (d²Γ/dy d cos θ) ]
         / [ ∫₀¹ d cos θ (d²Γ/dy d cos θ) + ∫₋₁⁰ d cos θ (d²Γ/dy d cos θ) ].    (16)

The variation of A_FB as a function of q² is shown in Fig. 3. After integration over the whole phase space, the left-right mixing effects on the branching fraction and on A_FB are

    Br(B⁰ → ρ⁻ℓ⁺ν_ℓ) ≃ (1 − 1.21 ξ^r_u) Br_SM(B⁰ → ρ⁻ℓ⁺ν_ℓ),
    ∫dy A_FB(B⁰ → ρ⁻ℓ⁺ν_ℓ) ≃ (1 + 1.21 ξ^r_u) ∫dy A_FB^SM(B⁰ → ρ⁻ℓ⁺ν_ℓ).    (17)

Note that for ξ^r_u = −0.14 the branching fraction can be enhanced by about 17% and the integrated A_FB reduced by about 17%. Of course, using form factors from different theoretical methods would lead to somewhat different results, and a detailed analysis of those results is beyond the scope of this letter.
The radiative leptonic B → γℓν̄_ℓ decays are governed by internal bremsstrahlung (IB) and structure-dependent (SD) contributions [25]: in the former the photon is emitted from the lepton and the amplitude carries the helicity-suppression factor m_ℓ/m_B, while in the latter the photon couples to the quarks inside the B meson and is free of this suppression factor. Therefore, for simplicity, we neglect the IB contributions and consider only the SD contributions. In order to account for the hadronic effects in leptonic B → γ decays, we parametrize the transition matrix elements in terms of the form factors F_V and F_A as [26]:

    ⟨γ(k, ǫ)| ū γ_μ b |B⁻(p_B)⟩ = e [F_V(q²)/m_B] ε_{μνρσ} ǫ*^ν p_B^ρ k^σ,
    ⟨γ(k, ǫ)| ū γ_μ γ_5 b |B⁻(p_B)⟩ = i e [F_A(q²)/m_B] [ (p_B·k) ǫ*_μ − (ǫ*·p_B) k_μ ],    (18)

where ǫ and k are the polarization vector and the momentum of the photon, respectively. Using Eq. (3), the decay amplitude for B → γℓν̄_ℓ can be written as

    A(B⁻ → γℓ⁻ν̄_ℓ) = (e G_F/√2) V^L_ub ǫ*_α(λ) H^{αβ} ℓ̄(p_ℓ) γ_β (1 − γ_5) ν(p_ν)    (19)

with

    H^{αβ} = (F′_A/m_B) [ −(p_B·k) g^{αβ} + p_B^α k^β ] + i ε^{αβρσ} (F′_V/m_B) k_ρ p_{Bσ},    (20)

where F′_V = F_V (1 + ξ_u) and F′_A = F_A (1 − ξ_u). For an unpolarized photon, the double differential decay rate is then given by

    d²Γ(B⁻ → γℓ⁻ν̄_ℓ) / (dy d cos θ) = [α_em G_F² m_B⁵ / (512 π²)] y (1 − y)³ |V^L_ub|² (1 − m̃_ℓ²)² I(q², cos θ)    (21)

with

    I(q², cos θ) = |F′_V + F′_A|² [ 1 + m̃_ℓ² + (1 − m̃_ℓ²) cos θ ] (1 + cos θ)
                 + |F′_A − F′_V|² [ 1 + m̃_ℓ² − (1 − m̃_ℓ²) cos θ ] (1 − cos θ),    (22)

where m̃_ℓ = m_ℓ/√(q²) and θ is the relative angle between the photon and the lepton. Since both the ρ and the γ are vector particles, the numerical analysis of the B → γℓν̄_ℓ transition can be done similarly to the B → ρℓν_ℓ case. In order to see the right-handed current contribution in B → γℓν̄_ℓ decays clearly and compare it with that in B → ρℓν_ℓ decays, we use the LCSR result for the form factors parametrized as in Eq. (15), obtained in Ref.
[24], and we plot the differential branching fraction for B → γℓν̄_ℓ decays for zero lepton masses as a function of q² in Fig. 4. One can see from the figure that the deviation from the SM is very small at low q². This is because F_V ∼ F_A at low q², and in this region the left-right mixing effect is suppressed by ξ_u(F_V − F_A), as is clear from Eq. (22). However, the deviation from the SM becomes larger as q² increases. Besides the branching fraction, we also obtain the angular asymmetry of the lepton, defined as in Eq. (16), for B → γℓν̄_ℓ decays:

    A_FB(B → γℓν̄_ℓ) = 6 Re(F′_V F′*_A) / { [ |F′_A + F′_V|² + |F′_A − F′_V|² ] (2 + m_ℓ²/q²) }.    (23)

The variation of A_FB as a function of q² for zero lepton masses is shown in Fig. 5. After integration over the whole phase space, the left-right mixing effects on the branching fraction and on A_FB are

    Br(B⁻ → γℓ⁻ν̄_ℓ) ≃ (1 + 1.25 ξ^r_u) Br_SM(B⁻ → γℓ⁻ν̄_ℓ),
    ∫dy A_FB(B⁻ → γℓ⁻ν̄_ℓ) ≃ (1 − 1.25 ξ^r_u) ∫dy A_FB^SM(B⁻ → γℓ⁻ν̄_ℓ).    (24)

Note that for ξ^r_u = −0.14 the branching fraction can be reduced by about 18% and the integrated A_FB enhanced by about 18%. This result can be compared with those in semileptonic B → V transitions, as shown in the previous example of B → ρℓν decays, where V indicates a vector meson. In summary, we show that the difference between the values of |V_ub| extracted from the total inclusive semileptonic decay rate of b → u transitions and from the exclusive decay rate of B → πℓν transitions is sensitive to the admixture of a right-handed b → u current characterized by the mixing parameter ξ_u. From the current mismatch between |V_ub|_incl and |V_ub|_excl obtained from independent experiments, we estimate the size of the left-right mixing parameter to be Re(ξ_u) = −(0.14 ± 0.12).
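The percentage shifts quoted for the linearized factors of the form (1 ± c ξ^r_u) follow from trivial arithmetic, which can be checked directly (the coefficients 1.21 and 1.25 are the ones quoted in the letter for B → ρℓν and B → γℓν, respectively):

```python
xi_r = -0.14   # Re(xi_u), the central value of Eq. (8)

# B -> rho l nu: Br scales as (1 - 1.21 xi_r), integrated A_FB as (1 + 1.21 xi_r)
br_rho  = 1 - 1.21 * xi_r
afb_rho = 1 + 1.21 * xi_r

# B -> gamma l nu: Br scales as (1 + 1.25 xi_r), integrated A_FB as (1 - 1.25 xi_r)
br_gamma  = 1 + 1.25 * xi_r
afb_gamma = 1 - 1.25 * xi_r

print(f"B->rho:   Br x {br_rho:.3f}, A_FB x {afb_rho:.3f}")    # ~+17% / ~-17%
print(f"B->gamma: Br x {br_gamma:.3f}, A_FB x {afb_gamma:.3f}")  # ~-18% / ~+18%
```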
For Re(ξ_u) = −0.14, we show that the branching fractions for leptonic B → τν and semileptonic B → ρℓν decays can be enhanced by 30% and 17%, respectively, while the branching fraction for radiative leptonic B → γℓν decays can be reduced by 18%. The left-right mixing contributions to leptonic and semileptonic b → u decays obtained in this letter are not simply negligible. Therefore, our estimate could be a reasonable guide in the search for the existence of the right-handed current, and future experimental progress can further improve the bound on the new-physics parameter.

FIG. 1: dΓ(B⁰ → ρ⁻ℓ⁺ν_ℓ)/dq² distribution for the SM (solid line), ξ^r_u = −0.14 (dashed line), and its error (dotted line).

FIG. 2: dΓ(B⁰ → ρ⁻ℓ⁺ν_ℓ)/dq² distributions for each of the three terms in Eq. (14) for the SM (solid line), ξ^r_u = −0.14 (dashed line), and its error (dotted line).

FIG. 3: A_FB(B⁰ → ρ⁻ℓ⁺ν_ℓ) as a function of q² for the SM (solid line), ξ^r_u = −0.14 (dashed line), and its error (dotted line).

FIG. 4: dΓ(B⁻ → γℓ⁻ν̄_ℓ)/dq² distribution for the SM (solid line), ξ^r_u = −0.14 (dashed line), and its error (dotted line).

FIG. 5: A_FB(B⁻ → γℓ⁻ν̄_ℓ) as a function of q² for the SM (solid line), ξ^r_u = −0.14 (dashed line), and its error (dotted line).

TABLE I: Set of parameters for the B → ρ and B → γ form factors obtained from fits of the LCSR results in Ref. [23] and Ref. [24], respectively.

    F(q²)   f_1     f_2      m_1²    m_2²    n
    V       1.045   −0.721   28.30   38.34   1
    A_1     0       −0.240   —       37.51   1
    A_2     0.009   −0.212   40.82   40.82   2
    F_V     0       0.190    —       31.36   2
    F_A     0       0.150    —       42.25   2

References
- L. Wolfenstein, Phys. Rev. Lett. 51, 1945 (1983).
- For a review, see R. N. Mohapatra, Unification and Supersymmetry (Springer, New York, 1992).
- J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974); R. N. Mohapatra and J. C. Pati, ibid. 11, 566 (1975); 11, 2558 (1975).
- F. I. Olness and M. E. Ebel, Phys. Rev. D 30, 1034 (1984).
- P. Langacker and S. U. Sankar, Phys. Rev. D 40, 1569 (1989).
- T. G. Rizzo, Phys. Rev. D 58, 114014 (1998).
- C. A. Gagliardi, R. E. Tribble, and N. J. Williams, Phys. Rev. D 72, 073002 (2005).
- S.-h. Nam, Phys. Rev. D 66, 055008 (2002).
- M. B. Voloshin, Mod. Phys. Lett. A 12, 1823 (1997).
- J. R. Andersen and E. Gardi, J. High Energy Phys. 0601, 097 (2006).
- U. Aglietti et al., arXiv:0711.0860 [hep-ph].
- P. Gambino et al., J. High Energy Phys. 0710, 058 (2007); also see Ref. [12].
- G. Duplancic et al., J. High Energy Phys. 0804, 014 (2008), and references therein.
- K. Ikado et al. (BELLE Collaboration), Phys. Rev. Lett. 97, 251802 (2006).
- B. Aubert et al. (BABAR Collaboration), Phys. Rev. D 76, 052002 (2007).
- W. S. Hou, Phys. Rev. D 48, 2342 (1993); A. G. Akeroyd and C. H. Chen, Phys. Rev. D 75, 075004 (2007).
- A. G. Akeroyd and S. Recksiegel, J. Phys. G 29, 2311 (2003); G. Isidori and P. Paradisi, Phys. Lett. B 639, 499 (2006); A. G. Akeroyd, C. H. Chen, and S. Recksiegel, arXiv:0803.3517 [hep-ph].
- C. H. Chen and C. Q. Geng, J. High Energy Phys. 0610, 053 (2006).
- A. Gray et al. (HPQCD), Phys. Rev. Lett. 95, 212001 (2005).
- For instance, see L. K. Gibbons, Ann. Rev. Nucl. Part. Sci. 48, 121 (1998).
- Y. Yang, H. Hao, and F. Su, Phys. Lett. B 618, 97 (2005).
- P. Ball and R. Zwicky, Phys. Rev. D 71, 014029 (2005).
- G. Eilam, I. E. Halperin, and R. R. Mendel, Phys. Lett. B 361, 137 (1995).
- T. Goldmann and W. J. Wilson, Phys. Rev. D 15, 709 (1977).
- C. Q. Geng, C. C. Lih, and W. M. Zhang, Phys. Rev. D 57, 5697 (1998).
[]
[ "Speeding up Memory-based Collaborative Filtering with Landmarks" ]
[ "Gustavo R Lima ", "· Carlos ", "E Mello ", "· Geraldo " ]
[]
[]
Recommender systems play an important role in many scenarios where users are overwhelmed with too many choices to make. In this context, Collaborative Filtering (CF) provides a simple and widely used approach for personalized recommendation. Memory-based CF algorithms mostly rely on similarities between pairs of users or items, which are subsequently employed in classifiers such as k-Nearest Neighbors (kNN) to generalize to unknown ratings. A major issue with this approach is building the similarity matrix: depending on the dimensionality of the rating matrix, the similarity computations may become computationally intractable. To overcome this issue, we propose to represent users by their distances to preselected users, namely landmarks. This procedure drastically reduces the computational cost associated with the similarity matrix. We evaluated our proposal on two distinct databases, and the results showed that our method consistently and considerably outperformed eight CF algorithms (including both memory-based and model-based ones) in terms of computational performance.
null
[ "https://arxiv.org/pdf/1705.07051v1.pdf" ]
2,258,054
1705.07051
a3f8b6f8eb9db517d09668a28ddaaec7f3ef5479
Speeding up Memory-based Collaborative Filtering with Landmarks

Gustavo R. Lima · Carlos E. Mello · Geraldo Zimbrao

Keywords: Recommender system · Collaborative filtering · Memory-based algorithms · Landmarks · Data reduction · Dimensionality reduction · Non-linear transformations

Abstract. Recommender systems play an important role in many scenarios where users are overwhelmed with too many choices to make. In this context, Collaborative Filtering (CF) provides a simple and widely used approach for personalized recommendation. Memory-based CF algorithms mostly rely on similarities between pairs of users or items, which are subsequently employed in classifiers such as k-Nearest Neighbors (kNN) to generalize to unknown ratings. A major issue with this approach is building the similarity matrix: depending on the dimensionality of the rating matrix, the similarity computations may become computationally intractable. To overcome this issue, we propose to represent users by their distances to preselected users, namely landmarks. This procedure drastically reduces the computational cost associated with the similarity matrix. We evaluated our proposal on two distinct databases, and the results show that our method consistently and considerably outperforms eight CF algorithms (including both memory-based and model-based) in terms of computational performance.

Introduction

The continuously improving network technology and the exponential growth of social networks have been connecting the whole world, making a huge volume of content, media, goods, services, and many other kinds of items available on the Internet. However, this phenomenon leads to the paradox of choice (Schwartz 2004).
It addresses the problem that people overwhelmed with too many choices tend to become more anxious, and eventually give up on completing the order. To tackle this issue, a massive effort has been made towards the development of data mining methods for recommender systems (Ricci et al 2011). This promising technology aims at helping users search and find items that are likely to be consumed, alleviating the burden of choice. In this context, many recommender systems have been designed to provide users with suggested items in a personalized manner.

A well-known and widely used approach for this kind of recommendation is Collaborative Filtering (CF) (Adomavicius and Tuzhilin 2005). It considers the history of purchases and users' tastes to identify items that are likely to be acquired. In general, this data is represented by a rating matrix, where each row corresponds to a user, each column is assigned to an item, and each cell holds the rating given by the corresponding user to the corresponding item. Thus, CF algorithms aim at predicting the missing ratings of the matrix, which are subsequently used for personalized item recommendations.

CF algorithms may be divided into two main classes: memory-based and model-based algorithms. The former class uses k-Nearest Neighbors (kNN) methods for rating predictions, and therefore relies on computing similarities between pairs of users or items according to their ratings (Koren and Bell 2011). The latter class employs matrix factorization techniques to obtain an approximation of the rating matrix, in which the unknown cells are filled with rating predictions (Koren et al 2009). Both memory-based and model-based algorithms have advantages and disadvantages.

In this work, we are interested in memory-based algorithms. This class of CF algorithms remains widely used in many real systems due to its simplicity. It provides an elegant way to integrate information about users and items beyond the ratings for refining similarities.
In addition, memory-based CF algorithms allow online recommendations, something required in many practical applications where data arrives constantly, new users sign up, and new products are offered (Abernethy et al 2007). Incorporating such information in an online fashion is highly desirable for making up-to-date predictions on the fly, avoiding re-optimization from scratch with each new piece of data.

The major issue with memory-based CF algorithms lies in their computational scalability as the rating matrix grows. As users are often represented by vectors of items (i.e. rows of the rating matrix), the larger the number of items, the higher the computational cost of computing similarities between users. Consequently, memory-based CF may become computationally intractable for a large number of users or items.

In this paper, we propose an alternative to improve the computational scalability of memory-based CF algorithms. Our proposal consists in representing users by their distances to preselected users, namely landmarks. Thus, instead of computing similarities between users represented by large (often sparse) vectors of ratings, our method calculates similarities through vectors of distances to fixed landmarks, obtaining an approximate similarity matrix for subsequent rating predictions. As the number of landmarks required for a good approximation is usually much smaller than the number of items, the proposed method drastically alleviates the cost associated with computing the similarity matrix. The results show that our proposal consistently and considerably outperforms the evaluated CF algorithms (including both memory-based and model-based) in terms of computational performance. Interestingly, it achieves better accuracy than the original memory-based CF algorithms with few landmarks.
The main contributions of this work are the following:

- A rating matrix reduction method to speed up memory-based CF algorithms.
- The proposal and investigation of 5 landmark selection strategies.
- An extensive comparison between our proposal and 8 CF algorithms, including both memory-based and model-based classes.

The work is organized in five sections, where this is the first one. Section 2 reviews the literature and presents the related work. Section 3 describes the recommendation problem definitions. It also introduces our proposal and presents the landmark selection strategies. Section 4 starts with the description of the databases and metrics employed in the experiments, follows by detailing the parameter tuning of the proposed method, and finishes by comparing our proposal against other CF algorithms. Finally, Section 5 points out conclusions and future work.

Related Work

The Collaborative Filtering (CF) approach consists in predicting whether a specific user would prefer an item rather than others based on ratings given by users (Adomavicius and Tuzhilin 2005). For this purpose, CF uses only a rating matrix R, where rows correspond to users, columns correspond to items, and each cell holds the rating value r_uv given by user u to item v. Thus, the recommendation problem lies in predicting the missing ratings of R, which is often very sparse.

Interestingly, although there are many algorithms in Supervised Learning (SL) for data classification and regression, these are not directly suitable for CF, since ratings are not represented in a shared vector space R^d. This happens because most users do not consume the same items, which prevents their representation in the same vector space R^d. Consequently, the CF problem is slightly different from SL. To overcome this issue, Braida et al. propose to build a vector space of latent factors to represent all item ratings given by users, and then apply SL techniques to predict unknown ratings.
The authors use Singular Value Decomposition (SVD) to obtain user and item latent factors, and then build a vector space which contains all item ratings given by users. Their scheme consistently outperforms many state-of-the-art algorithms (Braida et al 2015). Sarwar et al. also apply SVD on the rating matrix to reduce its dimensionality and transform it into a new feature vector space. Thus, predictions are generated by operations between latent factor matrices of users and items (Sarwar et al 2000).

Generally, dimensionality reduction techniques based on Matrix Factorization (MF) for CF are more efficient than other techniques, for instance Regularized SVD (Paterek 2007), Improved Regularized SVD (Paterek 2007), Probabilistic MF (Salakhutdinov and Mnih 2011) and Bayesian Probabilistic MF (Salakhutdinov and Mnih 2008). They have received great attention after the Netflix Prize and are known as model-based CF algorithms (Breese et al 1998). In contrast, memory-based CF algorithms are an adapted k-Nearest Neighbors (kNN) method, in which similarity is computed considering only co-rated items between users, i.e. the similarity between users is computed only over the vectors of co-rated items (Adomavicius and Tuzhilin 2005).

Although model-based CF algorithms usually provide higher accuracy than memory-based ones, the latter have been widely used (Beladev et al 2015; Elbadrawy and Karypis 2015; Li and Sun 2008; Pang et al 2015; Saleh et al 2015). This is due to their simplicity and their elegant way of integrating information about users and items beyond the ratings for refining similarities. Additionally, memory-based algorithms allow online recommendations, making up-to-date predictions on the fly, which avoids re-optimizing from scratch with each new piece of data (Abernethy et al 2007). For these reasons, many authors seek to improve memory-based CF accuracy and performance, for example in (Bobadilla et al 2013; Gao et al 2012; Luo et al 2013).
A well-known problem of memory-based CF algorithms lies in applying distance functions to users for calculating their similarities, which is computationally expensive. Often, the algorithm runtime increases with the number of users/items, becoming prohibitive on very large databases. Furthermore, finding a sub-matrix of R which contains all users and is not empty might be impossible due to data sparsity, i.e. it is difficult to find an item vector subspace in which all users are represented.

To tackle these issues, we propose a method to reduce the size of the rating matrix via landmarks. It consists in selecting n users as landmarks, and then representing all users by their similarities to these landmarks. Thus, instead of representing users in the item vector space, we propose to locate users in a landmark vector space whose dimensionality is much smaller.

The landmark technique is useful for improving algorithm runtime and was proposed by Silva and Tenenbaum in the Multidimensional Scaling (MDS) context (Silva and Tenenbaum 2002). The authors propose a Landmark MDS (LMDS) algorithm, which uses landmarks to reduce the computational costs of traditional MDS. LMDS builds a landmark set by selecting a few observations from the data; the landmark set represents all observations. Then, it computes the similarity matrix for this set to obtain a suitable landmark representation in a d-dimensional vector space. Finally, the other observations are mapped to this new space, considering their similarities to the landmarks (De Silva and Tenenbaum 2004).

The main advantage of LMDS over other techniques is the ability to trade accuracy for runtime. If one needs to decrease runtime, it is possible to sacrifice accuracy by reducing the size of the landmark set. Otherwise, if one needs to improve the algorithm's accuracy, it is also possible to increase the number of landmarks up to the database limit.
Therefore, a good LMDS characteristic is the ability to manage this trade-off between runtime and accuracy (Platt 2004). Lee and Choi (Lee and Choi 2009) argue that noise in the database harms LMDS accuracy, and then propose an adaptation of this algorithm, namely Landmark MDS Ensemble (LMDS Ensemble). They propose applying LMDS to different data partitions, and then combining the individual solutions in the same coordinate system. Their algorithm is less noise-sensitive but maintains the computational performance of LMDS.

Another pitfall of the landmark approach is choosing the most representative observations as landmarks, since the data representation depends on the similarities to these points. Several selection strategies have been proposed in the literature (Chen et al 2006; Crawford 2013, 2014; Orsenigo 2014; Shi et al 2015, 2016; Silva et al 2005), most of them related to selecting landmarks for Landmark Isomap, a variation of the nonlinear reduction method that improves scalability (Babaeian et al 2015; Shang et al 2011; Silva and Tenenbaum 2002; Sun et al 2014). Finally, Hu et al. (Hu et al 2009) tackle the problem of applying Linear Discriminant Analysis (LDA) on databases where the number of samples is smaller than the data dimensionality. They propose joining MDS and LDA in an algorithm named Discriminant Multidimensional Mapping (DMM), and also employ landmarks in DMM (LDMM) to improve scalability and make it feasible for very large databases.

Proposal

We now present some basic definitions about memory-based Collaborative Filtering (CF) algorithms and discuss how they scale with the rating matrix size. We then describe our proposal, which uses landmarks to improve the computational performance of computing the similarity matrix, and analyze its complexity. We also propose some selection strategies for the problem of choosing landmarks.

Problem Definition

For making predictions, memory-based CF algorithms consider similarities computed between pairs of users or pairs of items.
Here, we assume that similarity is obtained between pairs of users, namely user-based CF. The same can be done for pairs of items, namely item-based CF. User-based CF considers only the co-rated items to compute similarities between users, and predicts ratings for not-yet-rated items of a particular user (Adomavicius and Tuzhilin 2005). Thus, the items with the highest predicted ratings are recommended.

In order to formally define the rating prediction problem, let U, P, R be the set of users, the set of items and the rating matrix, respectively. Further, let V be the set of possible rating values in the recommender system. Thus, the rows of R represent users and the columns represent items. If a user u ∈ U rated an item v ∈ P with the value r_uv ∈ V, then the cell at row u and column v of the matrix R holds the value r_uv; otherwise it is empty. Consequently, the matrix R has dimension |U| × |P| and, because most of the ratings are not provided, it is typically very sparse (Adomavicius and Tuzhilin 2005). Let P_u denote the item subset rated by a particular user u, and P_uu' = P_u ∩ P_u' the subset of items co-rated by users u and u'.

Note that the recommender system aims at finding, for a particular user u, the item v ∈ P \ P_u in which user u is likely to be most interested. In other words, it estimates a function f : U × P → V that predicts the rating f(u, v) for a user u and an item v. We denote the predicted rating by r̂_uv (Ricci et al 2011). To estimate this function, user-based CF employs a similarity measure S : U × U → R to determine the similarity s_uu' between users u and u'. Thus, the predicted rating r̂_uv is obtained with the k-Nearest Neighbors (kNN) rule given by (1):

    r̂_uv = ū + ( Σ_{u' ∈ U\{u}} s_uu' · (r_u'v − ū') ) / ( Σ_{u' ∈ U\{u}} s_uu' ),    (1)

where ū and ū' denote the mean rating values of users u and u', respectively. The most costly procedure in user-based CF is to compute the user-user similarity matrix.
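As a concrete illustration, the mean-centered kNN rule of Eq. (1) can be sketched in Python. The dictionary-based data structures below are our own illustrative choices, not the paper's implementation:

```python
def predict_rating(u, v, ratings, sim, means):
    """Mean-centered user-based kNN prediction, following Eq. (1).

    ratings: dict (user, item) -> rating
    sim:     dict (u, u') -> similarity s_{uu'}
    means:   dict user -> mean rating of that user
    (Illustrative data structures, not from the paper's code.)
    """
    num = den = 0.0
    for (u2, v2), r in ratings.items():
        if u2 == u or v2 != v:
            continue                   # only other users who rated item v
        s = sim.get((u, u2), 0.0)
        num += s * (r - means[u2])     # similarity-weighted, mean-centered deviation
        den += s
    if den == 0:
        return means[u]                # no usable neighbors: fall back to the user's mean
    return means[u] + num / den
```

In practice the sum would run only over the k most similar neighbors; the sketch keeps all neighbors for brevity.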
As the similarity measure must be applied to each pair of users in the system, which are represented by item ratings, a typical user-based CF implementation performs two nested loops, as one may see in Algorithm 1. These loops iterate over the user set and select a pair of users u and u' to compute their similarity. Thus, the algorithm complexity reaches O(|U| × |U| × d), where d denotes the complexity of the similarity measure, in this case proportional to the number of items. Different similarity measures may be employed to build the user-user similarity matrix, and their complexity obviously depends on the number of operations performed. To compute the similarity between users u and u', the measure must iterate over the item set. Algorithm 2 computes the Cosine similarity.

Algorithm 1: Build the user-user similarity matrix
    Data: user set U, similarity measure d
    Result: user-user similarity matrix S
    for u ∈ U do
        for u' ∈ U \ {u} do
            S_uu' ← d(u, u')
        end
    end

Algorithm 2: Compute the Cosine similarity between users u and u'
    Data: users u and u', item set P, rating matrix R
    Result: Cosine similarity d_uu'
    x, y, z ← 0
    if |P_uu'| > 1 then
        for v ∈ P_uu' do
            z ← z + r_uv * r_u'v
            x ← x + r_uv^2
            y ← y + r_u'v^2
        end
        d_uu' ← z / (√x * √y)
    else
        d_uu' ← −∞
    end

Note that Algorithm 2 has a loop that iterates over the co-rated items of users u and u'. Therefore, the user-based CF algorithm performs three nested loops and, consequently, its complexity is O(|U| × |U| × |P|), which explains why its performance quickly decreases as the number of users increases.

Building the New User Space

In user-based CF, users are represented in the item vector space, where the components are the corresponding item ratings. Therefore, the user space dimensionality is |P|. To improve user-based CF performance, we propose representing users in a space whose dimensionality is much smaller than the original one. The new vector space basis consists of preselected users from the rating matrix R, namely landmarks.
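The co-rated restriction in Algorithm 2 can be written compactly in Python. The following sketch (our own illustration, with dictionaries mapping items to ratings) mirrors the pseudocode, including the guard that returns −∞ when fewer than two co-rated items exist:

```python
from math import sqrt

def cosine_corated(ratings_u, ratings_v):
    """Cosine similarity over co-rated items only (cf. Algorithm 2).

    ratings_u, ratings_v: dicts item -> rating for two users.
    Returns -inf when fewer than two co-rated items exist, as in the guard.
    """
    corated = ratings_u.keys() & ratings_v.keys()   # P_{uu'}
    if len(corated) < 2:
        return float("-inf")
    z = sum(ratings_u[i] * ratings_v[i] for i in corated)
    x = sum(ratings_u[i] ** 2 for i in corated)
    y = sum(ratings_v[i] ** 2 for i in corated)
    return z / (sqrt(x) * sqrt(y))
```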
The new user vector components are the similarities to each corresponding landmark. We select n users from R, according to some criterion, like the number of item ratings. These n users constitute the landmark set. Each landmark belongs to the original item vector space, with dimensionality |P|. To build the new user space, one applies to each user u ∈ U (including the landmarks) a nonlinear transformation, which provides a smaller-dimensional space. This transformation consists in computing the similarities between users and landmarks. These values form the components of the new user vector representation. Therefore, to improve user-based CF performance, one must choose n ≪ |P|. Additionally, the most representative landmarks should be preferred.

Figure 1 illustrates the proposed method on a toy example. In Figure 1(a), the rating matrix R contains the item ratings given by users A, B, C, D, E and F. Missing ratings are indicated with '-'. Users B and C are selected as landmarks, considering the number of their given ratings (highlighted in red). In Figure 1(b), the user-landmark similarity matrix is computed with the Euclidean distance as a similarity measure. Note that the rows of this matrix represent users, the two columns correspond to the landmarks B and C, and its cells hold the corresponding similarities between the user and each landmark. At this step, users are entirely represented by their distances to the landmarks. In other words, the landmarks constitute the basis of the new vector space and each user is positioned according to the landmarks.

[Figure 1: The procedures of the proposed algorithm on a toy example. The input is the rating matrix illustrated in (a). The first step selects landmarks and represents users in the new vector space, as shown in (b) and (e). Similarities between users are then computed in (c), and ratings are predicted in (d).]

An advantage of the new representation may be seen in the toy example: users A and D have only one co-rated item, and therefore computing their similarity would produce a low-accuracy value. However, the new space locates users A and D according to their distances to landmarks B and C, which are here computed with more than one co-rated item. Therefore, users A and B are positioned near each other, as may be seen in Figure 1(e). Once the new reduced user space is computed, the next step is to compute the user-user similarity matrix based on this new representation. In Figure 1(c), the Cosine similarity measure is applied. Finally, the missing ratings are predicted with the kNN approach, using k=2, as shown in Figure 1(d).

The computation of the user-user similarity matrix via landmarks is presented in Algorithm 3. The first part of this algorithm consists in selecting n landmarks based on some criterion defined by the function selectLandmarks. After choosing the landmarks, users are represented in the landmark vector space by calculating their similarities to the landmarks (the similarity measure d1). Finally, the last step builds the user-user similarity matrix for the new landmark representation (the similarity measure d2).

Algorithm 3: Build the user-user similarity matrix using landmarks
    Data: user set U, number of landmarks n, user-landmark similarity measure d1, user-user similarity measure d2
    Result: user-user similarity matrix S
    L ← selectLandmarks(n)
    H ← zeroMatrix(|U|, n)
    for u ∈ U do
        for l ∈ L do
            H_ul ← d1(u, l)
        end
    end
    for u ∈ U do
        for u' ∈ U \ {u} do
            S_uu' ← d2(u, u')
        end
    end

The difference between similarity measures d1 and d2 is that the former considers user ratings to compute similarities, while the latter considers user-landmark similarities. Algorithms 2 and 4 illustrate both cases, respectively.
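The two stages of Algorithm 3 (building the |U| × n user-landmark matrix H, then the user-user similarities in the reduced space) can be sketched as follows. The data structures and parameter names are illustrative choices of ours, not the paper's code:

```python
def landmark_similarity_matrix(users, landmark_ids, d1, d2):
    """Sketch of Algorithm 3.

    users:        dict user_id -> {item: rating}
    landmark_ids: the n preselected landmark user ids (a subset of users)
    d1: similarity between two rating dicts (user vs. landmark)
    d2: similarity between two landmark-space vectors (user vs. user)
    """
    # Step 1: represent every user by its similarities to the n landmarks.
    H = {u: [d1(ru, users[l]) for l in landmark_ids]
         for u, ru in users.items()}
    # Step 2: user-user similarities in the reduced n-dimensional space.
    S = {(u, u2): d2(H[u], H[u2])
         for u in users for u2 in users if u != u2}
    return H, S
```

Step 2 touches only n-dimensional vectors, which is where the speedup over the plain |P|-dimensional computation comes from.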
Algorithm 4: Compute the Cosine similarity between users u and u' represented in the landmark vector space
    Data: users u and u', landmark set L, user-landmark similarity matrix H
    Result: Cosine similarity S_uu'
    x, y, z ← 0
    for l ∈ L do
        if H_ul ≠ −∞ and H_u'l ≠ −∞ then
            z ← z + H_ul * H_u'l
            x ← x + H_ul^2
            y ← y + H_u'l^2
        end
    end
    S_uu' ← z / (√x * √y)

Choosing Landmarks

The choice of landmarks is a critical task, since it directly affects the resulting vector space and, consequently, influences the accuracy of user-based CF. To investigate this effect, we propose five landmark selection strategies, described as follows:

- Random: n users are randomly chosen as landmarks.
- Dist. of Ratings: randomly chooses n users as landmarks by considering the distribution of ratings, i.e. users with more ratings are more likely to be selected.
- Coresets: chooses at random n landmark candidates taking into account the rating distribution. Then, it computes the user similarities to these candidates and removes half of the most similar users. From the remaining users, n new landmark candidates are again randomly chosen and the half most similar users to the candidates are removed. The algorithm proceeds until no user remains in the database. This strategy is based on coresets (Feldman et al 2011).
- Coresets Random: similar to Coresets, but without considering the rating distribution; it just samples users uniformly at random.
- Popularity: ranks users/items in descending order by their number of ratings and selects the first n users as landmarks.

The proposed landmark selection strategies rely on different criteria. Three of them (Dist. of Ratings, Coresets and Popularity) consider the number of ratings as the criterion to select landmarks. Thus, we expect them to select more representative landmarks compared with the other two strategies (Random and Coresets Random). Additionally, Random and Dist.
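Two of the strategies above are simple enough to sketch directly; the following Python functions (our own illustration, with a dict-of-ratings representation we assume for clarity) implement Popularity and Dist. of Ratings:

```python
import random

def popularity_landmarks(users, n):
    """Popularity strategy: the n users with the most ratings become landmarks.
    users: dict user_id -> {item: rating} (illustrative representation)."""
    return sorted(users, key=lambda u: len(users[u]), reverse=True)[:n]

def dist_of_ratings_landmarks(users, n, seed=0):
    """Dist. of Ratings strategy: sample n landmarks without replacement,
    with probability proportional to each user's number of ratings."""
    rng = random.Random(seed)
    pool = list(users)
    chosen = []
    while pool and len(chosen) < n:
        u = rng.choices(pool, weights=[len(users[p]) for p in pool], k=1)[0]
        chosen.append(u)
        pool.remove(u)          # without replacement
    return chosen
```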
of Ratings are both the simplest, and consequently the fastest, strategies, since landmarks are selected with no data preprocessing. Popularity is an intermediate case, as one needs to sort users/items by their number of ratings, which requires more computation than random sampling. Finally, Coresets and Coresets Random are the most complex strategies. One must compute user similarities to n landmark candidates and remove the half most similar users from the entire set; this process proceeds until no user remains. Thus, both strategies require more computation than Popularity.

Complexity Analysis

In our proposal, users are represented in the landmark vector space. Once landmarks are chosen, the proposed algorithm performs three nested loops to compute the user-landmark similarity matrix. The first loop iterates over all users and the second one over the n landmarks. Then, for each user u and landmark l, the loop inside the similarity measure d1 iterates over the co-rated items of u and l and computes the corresponding similarity value. Therefore, building the user-landmark similarity matrix requires O(|U| × n × |P|) steps.

To compute the user-user similarity matrix, the algorithm performs three nested loops. The first two iterate over all users and the one inside the similarity measure d2 iterates over the n landmarks. Thus, it results in O(|U| × |U| × n) steps. Accordingly, the proposed algorithm's complexity is O(|U| × n × |P| + |U| × |U| × n) = O(|U| × n × (|P| + |U|)). Note that this complexity becomes smaller as the number n of landmarks decreases. By comparing this result with the original complexity of user-based CF, O(|U| × |U| × |P|), it turns out that O(|U| × n × (|P| + |U|)) ≤ O(|U| × |U| × |P|) if n ≤ (|U| × |P|) / (|U| + |P|), which is a very realistic assumption.
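To make the break-even point concrete, the bound n ≤ |U|·|P| / (|U| + |P|) can be computed directly. The helper below is our own illustration; the matrix sizes in the usage are made up, not taken from the paper's data sets:

```python
def breakeven_landmarks(num_users, num_items):
    """Largest landmark count n for which the landmark pipeline,
    O(|U| * n * (|P| + |U|)), is no more costly than plain user-based CF,
    O(|U| * |U| * |P|), i.e. n <= |U||P| / (|U| + |P|)."""
    return (num_users * num_items) // (num_users + num_items)
```

For a hypothetical 1000 × 1000 rating matrix this yields 500, far above the 10 to 100 landmarks explored in the experiments, which is why the assumption is realistic.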
Experiments

Methodology

In order to analyze the proposed method's performance, we conduct experiments on two well-known databases: MovieLens (Harper and Konstan 2016; Miller et al 2003) and Netflix (Bennett and Lanning 2007). Our objective was twofold: (1) parameter investigation and settings, i.e. to investigate the functioning of the proposed method with regard to its parameter settings, such as the number of landmarks, the similarity measure to build the landmark-user matrix, and the landmark selection strategy; and (2) comparative analysis, i.e. to compare the proposal with different state-of-the-art algorithms.

In (1), the parameter investigation and settings, we start by varying the number of landmarks from 10 up to 100 for each selection strategy. The idea is to evaluate the algorithm's prediction accuracy with different parameter settings and how these may affect it. We also evaluate different similarity measures, either to build the landmark-user/item matrix or the similarity matrix for rating predictions. The similarity measures were Euclidean, Cosine, and Pearson.

In (2), the comparative analysis, several algorithms from both the memory-based and model-based classes were compared. All experiments were carried out with 10-fold cross validation. We considered the mean absolute error (MAE) to measure the accuracy of rating predictions and the runtime in seconds to assess computational performance (Herlocker et al 2004).

Data sets

In order to pursue a fair evaluation, we consider two different data set sizes from the MovieLens and Netflix databases. In the former database, there were already two available data sets, MovieLens100k and MovieLens1M, with one hundred thousand (100k) and one million (1M) ratings, respectively (Herlocker et al 1999). In the latter database, Netflix, it was necessary to cut the original database by generating data sets with amounts of ratings equal to the two MovieLens data sets.
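The accuracy metric used throughout the experiments is straightforward to compute; a minimal sketch (our own, with an assumed dict representation of the test set) is:

```python
def mean_absolute_error(predicted, actual):
    """MAE over a test set: the average absolute prediction error.
    predicted, actual: dicts (user, item) -> rating; every test pair
    in `actual` must have a prediction in `predicted`."""
    return sum(abs(predicted[k] - actual[k]) for k in actual) / len(actual)
```

Under 10-fold cross validation, this is evaluated once per fold and the fold results are averaged.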
Thus, we extracted 100k and 1M ratings in chronological recording order from the original database, obtaining two data sets: Netflix100k and Netflix1M, respectively. The idea of these cuts was to preserve temporal characteristics. Table 1 shows the number of users and items, and the sparsity of the rating matrix, in the four data sets.

Parameter investigation and settings

We here aim to analyze and discuss how the proposed algorithm behaves under different parameter values.

Accuracy Analysis

The graphs in Figures 2 and 3 present MAE per number of landmarks on the data sets of 100k and 1M ratings, respectively. The set of landmarks was varied from 10 up to 100 and, at every 10 landmarks, we computed MAE. This procedure was conducted for the five landmark selection strategies: Random, Dist. of Ratings, Coresets, Coresets Random, and Popularity. In addition, these results were compared with the original version of the memory-based CF algorithm, i.e. user/item-based Collaborative Filtering (CF) with Cosine similarity, namely the baseline algorithm.

In this experiment, the proposed method employed the Euclidean distance to build the user-landmark matrix, and then obtained the user-user similarity matrix with the Cosine distance, reducing the dimensionality of user-based CF. Analogously, the same settings were adopted for item-based CF.

As one may observe, MAE decreases as the number of landmarks increases, and the proposed algorithm has outperformed the corresponding baseline algorithms with very few landmarks. This behavior was expected, since the more landmarks there are, the more information is supposedly retained in the user/item representation. Comparing landmark selection strategies, Popularity has produced the highest accuracy most of the time. There were a few cases in which Popularity did not outperform the others, as one may observe for user-based CF on MovieLens100k and item-based CF on MovieLens1M.
However, in these cases, no other strategy has consistently outperformed the others either. Random and Coresets Random have shown the worst performance. Their corresponding MAE values were greater than those of the other strategies, although they still remain below the baseline algorithms. We consider that all strategies have performed similarly, especially on the MovieLens database. Interestingly, in this database, the MAE difference between landmark selection strategies was very tight, less than 10^-2 for both user-based and item-based CF. From a recommender system perspective, this difference may indicate no greater improvements for the final recommendation list.

One may also note two distinct groups of landmark selection strategies in both Figure 2d and Figure 3d. One group uniformly selects landmarks at random (Random and Coresets Random), and another group is composed of the strategies that take into account the number of ratings (Dist. of Ratings, Coresets, and Popularity). The difference in accuracy between these groups was higher for few landmarks. This leads us to claim that it is preferable to choose landmarks with more ratings, so that one obtains more co-rated items and, consequently, more 'representativeness' in the new user space.

In Tables 2, 3, 4 and 5, we present the results obtained by the proposed method with different combinations of similarity measures to build both the user-landmark matrix and the user-user similarity matrix. Three distance measures were evaluated: Euclidean, Cosine and Pearson. We fixed the number of landmarks at 20 for the MovieLens data sets, and 30 for Netflix. One should note that Popularity with the Cosine distance to build both matrices (user-landmark and user-user) in user-based CF has achieved the best accuracy overall. In item-based CF, the best measure combination was Cosine and Pearson distances to build the item-landmark and item-item matrices, respectively.
Nevertheless, Popularity has performed with relatively similar accuracy for all combinations of similarity measures. This similar behavior holds for the other selection strategies, which differ in MAE by about 10^-2 to 10^-3, which is insignificant. Accordingly, we conclude that accuracy increases with the number of landmarks, that the most accurate landmark selection strategies are those based on the rating distribution, and that the choice of similarity measures does not bring significant advantages for accuracy. Additionally, it is possible to outperform the corresponding baseline algorithms with very few landmarks: 10 to 40 landmarks quickly improve accuracy.

Computational Performance Analysis

Tables 6, 7, 8 and 9 show the corresponding time required by the proposed algorithm for different parameter settings and data sets. The idea is to investigate the impact on computational performance with regard to the number of landmarks, the distance measures, and the selection strategies. Thus, we consider the runtime in seconds to build the similarity matrix and to compute rating predictions for the test set.

As one would expect, the time increases almost linearly with the number of landmarks. As the dimensionality of the user-landmark matrix grows, more computations are necessary to calculate the user-user similarity matrix. Analogously, the same behavior is observed for item-based CF. In terms of landmark selection strategy, one should note that the simpler strategies outperform the more complex ones. Random was by far the fastest selection strategy, followed in order by Dist. of Ratings, Popularity, Coresets Random, and Coresets.

The times spent by the baseline algorithms are presented in Table 10. Our proposal may achieve a reduction of up to 99.22% in terms of runtime. Random, Dist. of Ratings and Popularity have performed faster than the corresponding baseline algorithms for any number of landmarks between 10 and 100 on both data sets of 100k ratings.
Coresets and Coresets Random made the proposed algorithm slower due to their own computational cost. Consequently, the proposal becomes slower than the baseline algorithms after 80 landmarks on MovieLens100k for user-based CF, and after 100 landmarks for item-based CF. Interestingly, on the 1M-sized data sets all baseline algorithms took more time than the proposed algorithm for any parameter setting (number of landmarks and selection strategy). Thus, the proposal may scale well to very large databases. Tables 11, 12, 13 and 14 present the computational performance with regard to the different distance measures for similarity. To build the user-landmark and user-user matrices, the fastest distance combinations were Euclidean-Euclidean and Cosine-Euclidean. The same behavior can be observed for item-based CF. Pearson presented the worst performance due to its higher computational cost, which becomes more pronounced on the data sets of 1M ratings. Therefore, any of these distance measures may be applied to small databases without great influence on the computational performance; nevertheless, Euclidean and Cosine should be preferred for very large databases. It is also worth highlighting that our proposal considerably reduced the runtime compared to the baseline algorithms, even with the Pearson distance.

Comparative Analysis

Here we compare the proposed algorithm, in terms of accuracy and computational performance, to state-of-the-art CF algorithms. In this experiment, our proposal is set with Popularity for landmark selection and Cosine similarity to build both the reduced matrix and the similarity matrix. Additionally, we used 20 landmarks for the MovieLens data sets and 30 for Netflix, and both user-based and item-based CF were set with 13 neighbors for rating prediction. As one should note, most memory-based CF algorithms have higher MAE than model-based ones, mainly on the data sets of 1M ratings.
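The prediction step shared by the memory-based algorithms compared here, a similarity-weighted average over the k nearest neighbors (k = 13 above), can be sketched as follows; the data layout and names are illustrative, not the paper's implementation:

```python
def predict_rating(user, item, ratings, sim, k=13):
    # ratings: dict mapping (user, item) -> rating.
    # sim: dict mapping (user, neighbor) -> similarity score.
    neighbors = [(sim[(user, v)], r)
                 for (v, i), r in ratings.items()
                 if i == item and v != user and (user, v) in sim]
    neighbors.sort(reverse=True)          # most similar raters first
    top = neighbors[:k]
    denom = sum(abs(s) for s, _ in top)
    if denom == 0:
        return None                       # no usable neighbors
    return sum(s * r for s, r in top) / denom

ratings = {(1, 10): 4.0, (2, 10): 2.0}
sim = {(0, 1): 0.9, (0, 2): 0.1}
p = predict_rating(0, 10, ratings, sim)   # weighted toward user 1's rating of 4.0
```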
This was expected, since model-based CF is often superior in terms of accuracy. Although Landmarks kNN is classified as memory-based, it outperformed some model-based algorithms such as IRSVD, PMF and SVD++, and yielded higher accuracy for item-based CF on the data sets of 100k ratings, as one may see in Figure 4. However, this behavior does not hold on the data sets of 1M ratings, on which the model-based CF algorithms outperformed our proposal. Among the memory-based CF algorithms, Euclidean kNN achieved the highest MAE, while Cosine kNN and Pearson kNN reached intermediate values. Among the model-based algorithms, BPMF showed the lowest MAE on the data sets of 100k ratings, while RSVD outperformed all other model-based CF algorithms on the data sets of 1M ratings. Figure 5 clearly distinguishes the two groups of CF approaches, i.e. memory-based and model-based. The first provided higher MAE than the proposed method, Landmarks kNN; the second beat our proposal by a noticeable margin. Therefore, our proposal outperformed the memory-based CF algorithms on all data sets, and it yielded accuracy similar to some model-based techniques on the data sets of 100k ratings.

Computational Performance Analysis

Table 15 presents the comparative performance among the CF algorithms. The values indicate how many times each CF algorithm is slower than our proposal (in bold). Compared with the memory-based CF algorithms, our proposal is at least 8 times faster than the fastest of them. This difference becomes smaller against the model-based CF algorithms, among which PMF performs only 1.2 times slower than the proposed method on Netflix1M; the other model-based CF algorithms are far slower. Therefore, we conclude that our proposal consistently and considerably outperformed the compared state-of-the-art CF algorithms in terms of computational performance.
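The "times slower" entries in Table 15 are plain runtime ratios against the proposal; a trivial sketch of that computation (the numbers below are illustrative, not the measured ones):

```python
def slowdown_factor(algorithm_seconds, proposal_seconds):
    # How many times slower a competing algorithm is than the proposal.
    return algorithm_seconds / proposal_seconds

factor = slowdown_factor(87.0, 10.0)  # an algorithm taking 87 s vs. a 10 s proposal
```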
Fig. 6: Each CF algorithm is represented by a point in the graphs, which were divided into four quadrants in order to better discern algorithm performance. Note that user/item-based CF with Landmarks (Landmarks kNN), PMF and BPMF are located in the first quadrant, which means they perform faster than the others while having comparable accuracy. The graphs were generated using the 1M rating data sets.

Compromise Between Accuracy and Runtime

In order to compare the CF algorithms from different standpoints, we plot the accuracy of each algorithm on the data sets of 1M ratings against its corresponding runtime. As one may see in Figure 6, the x and y axes indicate the accuracy (measured with MAE) and the runtime logarithm (in log-seconds), respectively. Each algorithm is therefore represented by a point in a 2D space. Each graph in Figure 6 has been divided into four quadrants so as to better discern the algorithm performances. The first quadrant is the desired one, since it indicates the lowest values for both MAE and runtime, i.e. the best compromise between accuracy and computational performance. Note that the proposed algorithm, PMF and BPMF lie within the first quadrant in all graphs. Additionally, our proposal performed faster than any other algorithm, including PMF and BPMF. Regarding accuracy, one should observe that the model-based CF algorithms clearly yielded higher accuracy than the similarity-based techniques. Regularized SVD, Improved Regularized SVD and Bayesian Probabilistic MF performed similarly, in the sense that their accuracies and computational performances were comparable. The same may be verified for both Cosine and Pearson (Cosine kNN and Pearson kNN, respectively). This behavior is observed in Figures 6a and 6c, which correspond to user-based CF, and, analogously, in Figures 6b and 6d for item-based CF. It is remarkable how consistently and considerably our proposal outperforms the other CF algorithms in computational performance.
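The quadrant reading of Figure 6 can be made concrete with a small classifier: given thresholds splitting the MAE axis and the runtime axis, the "first quadrant" holds algorithms below both. The numbering convention and threshold values below are our illustration, not values from the paper:

```python
def quadrant(mae, runtime, mae_split, time_split):
    # 1: accurate and fast (the desired corner), 2: accurate but slow,
    # 3: neither accurate nor fast, 4: fast but inaccurate.
    accurate = mae <= mae_split
    fast = runtime <= time_split
    if accurate and fast:
        return 1
    if accurate:
        return 2
    if fast:
        return 4
    return 3

q_proposal = quadrant(0.70, 30.0, mae_split=0.72, time_split=100.0)
q_slow_knn = quadrant(0.75, 500.0, mae_split=0.72, time_split=100.0)
```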
Furthermore, it provides rating predictions as accurate as any other memory-based technique, and may offer an interesting compromise between accuracy and computational performance when compared with model-based CF algorithms.

Conclusion

In this paper, we presented a proposal to improve memory-based CF computational performance via rating matrix reduction with landmarks. It consists of representing users by their similarities to a few preselected users, namely landmarks. Instead of modeling users by rating vectors and building a user-user similarity matrix from them, we proposed to locate users by their similarities to landmarks and to derive the user-user similarity matrix from this reduced representation. A small number of landmarks thus leads to a great reduction of the rating matrix and, consequently, decreases the time spent computing the posterior user-user similarity matrix. The proposed method has three parameters that influence the new space representation, and consequently the CF algorithm's accuracy and runtime: (1) the number of landmarks, (2) the distance measure used to compute the user-landmark matrix, and (3) the distance measure used to compute the user-user similarity matrix. After investigating different parameter settings, we found that accuracy and runtime increase with the number of landmarks. Besides, all evaluated distance measures (Euclidean, Cosine and Pearson) yielded similar accuracies. Another important component of the proposed algorithm is how to select landmarks. Five different selection strategies were proposed: Random, Dist. of Ratings, Coresets, Coresets Random, and Popularity. The most accurate strategy was Popularity, while Random and Dist. of Ratings were the fastest ones. In order to conduct a fair comparison of the proposed algorithm, we selected eight CF algorithms, covering both memory-based and model-based approaches, and compared their performance on the MovieLens and Netflix databases.
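The pipeline summarized above (pick landmarks, build the user-landmark matrix, then compute user-user similarities in the reduced space) can be sketched as follows. This is a minimal dense-matrix illustration under our own simplifications, with a popularity-style selection stand-in and cosine similarity throughout; it is not the authors' implementation:

```python
import numpy as np

def landmark_similarity(R, n_landmarks):
    """R: (n_users, n_items) rating matrix with 0 for missing entries.
    Returns an (n_users, n_users) similarity matrix computed in the
    reduced user-landmark space (cosine used throughout for brevity)."""
    # Popularity-style stand-in: the users with the most ratings become landmarks.
    counts = (R > 0).sum(axis=1)
    landmarks = np.argsort(counts)[::-1][:n_landmarks]
    # User-landmark matrix: cosine similarity of every user to each landmark.
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Rn = R / norms
    UL = Rn @ Rn[landmarks].T                 # shape (n_users, n_landmarks)
    # User-user similarity in the reduced space (cosine again).
    uln = np.linalg.norm(UL, axis=1, keepdims=True)
    uln[uln == 0] = 1.0
    ULn = UL / uln
    return ULn @ ULn.T

R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 0., 4.]])
S = landmark_similarity(R, n_landmarks=2)
# Users 0 and 1 share tastes, as do users 2 and 3; S reflects that.
```

Only the small user-landmark matrix is carried into the pairwise step, which is where the reported speedup comes from.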
Two different cuts of each database were considered (100k and 1M ratings). We took MAE as the accuracy measure and the runtime in seconds as the computational performance measure. The results showed that our proposal consistently outperformed the other algorithms in terms of runtime. It also improved CF scalability, since its runtime increased almost linearly with the number of landmarks. Furthermore, it yielded the highest accuracy overall among the memory-based CF algorithms. Concluding, our proposal offers a very simple and efficient manner to reduce the cost of similarity computations for memory-based CF algorithms, conferring a great speedup without loss of accuracy. As future work, a theoretical investigation should be addressed so as to determine the number of landmarks that guarantees accuracy bounds; we want to determine lower bounds for the number of landmarks given an approximation error. Besides, it is also important to investigate how landmarks can be used to improve model-based CF algorithms.

1. Memory-based algorithms: (a) k-Nearest Neighbor (kNN) with Euclidean (Adomavicius and Tuzhilin 2005); (b) kNN with Cosine (Adomavicius and Tuzhilin 2005); (c) kNN with Pearson (Adomavicius and Tuzhilin 2005).
2. Model-based algorithms: (a) Regularized Singular Value Decomposition (SVD) (Paterek 2007); (b) Improved Regularized Singular Value Decomposition (Paterek 2007); (c) Probability Matrix Factorization (MF) (Salakhutdinov and Mnih 2011); (d) Bayesian Probability Matrix Factorization (Salakhutdinov and Mnih 2008); and (e) SVD++ (Koren 2008; Koren et al 2009).

Fig. 2: The figure shows how MAE behaves as the number of landmarks increases for CF on the 100k rating data sets.
Fig. 3: The figure shows how MAE behaves as the number of landmarks increases for CF on the 1M rating data sets.
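MAE, the accuracy measure above, is the mean absolute difference between predicted and observed ratings; a minimal sketch:

```python
def mae(predicted, observed):
    # Mean Absolute Error over paired predictions and ground-truth ratings.
    assert len(predicted) == len(observed)
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

error = mae([3.5, 4.0, 2.0], [4.0, 3.0, 2.0])  # errors 0.5, 1.0, 0.0 over 3 pairs
```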
Fig. 4: The figure compares proposal MAE against the chosen baselines in 100k rating data sets.

The comparison considers 8 recommender algorithms: kNN with Euclidean, kNN with Cosine, kNN with Pearson, Regularized Singular Value Decomposition (RSVD), Improved Regularized Singular Value Decomposition (IRSVD), Probability Matrix Factorization (PMF), Bayesian Probability Matrix Factorization (BPMF), and SVD++. The proposed algorithm is here referred to as Landmarks kNN.

4.4.1 Accuracy Analysis

Figures 4 and 5 show the accuracies achieved by the proposal and the other CF algorithms. The horizontal line crossing the boxplot graphs indicates the MAE median of 10-fold cross validation obtained with Landmarks kNN.

Fig. 5: The figure compares proposal MAE against the chosen baselines in 1M rating data sets.

Fig. 6: This figure illustrates Accuracy per Time of CF algorithms.

Table 1: Data set characteristics.
Data set: #ratings | #users | #items | sparsity(%)
MovieLens100k: 100k | 943 | 1,682 | 6.3
Netflix100k: 100k | 1,490 | 2,380 | 2.82
MovieLens1M: 1M | 6,040 | 3,952 | 4.19
Netflix1M: 1M | 8,782 | 4,577 | 2.48

[Figure 2: panels (a) user-based and (b) item-based CF on MovieLens100k, (c) user-based and (d) item-based CF on Netflix100k; x-axis: Number of Landmarks, y-axis: MAE; curves: CF Cosine, Random, Dist. of Ratings, Coresets, Coresets Random, Popularity.]

Table 2: MAE of the user/item-based CF on MovieLens100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 0.725 0.708* | 0.724 0.705 | 0.724 0.705 | 0.723 0.711* | 0.723 0.706
Euclidean / Cosine: 0.719* 0.710 | 0.718 0.704 | 0.717 0.704 | 0.719 0.715 | 0.715 0.702
Euclidean / Pearson: 0.721 0.711 | 0.717 0.705 | 0.714 0.704 | 0.718 0.715 | 0.713 0.700
Cosine / Euclidean: 0.724 0.715 | 0.715 0.703* | 0.710 0.698* | 0.716* 0.717 | 0.705 0.690
Cosine / Cosine: 0.720 0.717 | 0.711* 0.705 | 0.707* 0.699 | 0.719 0.717 | 0.701** 0.690
Cosine / Pearson: 0.719* 0.715 | 0.713 0.705 | 0.708 0.702 | 0.719 0.718 | 0.703 0.688**
Pearson / Euclidean: 0.723 0.721 | 0.716 0.711 | 0.709 0.704 | 0.721 0.718 | 0.703 0.696
Pearson / Cosine: 0.722 0.718 | 0.715 0.707 | 0.712 0.703 | 0.722 0.725 | 0.703 0.696
Pearson / Pearson: 0.728 0.717 | 0.722 0.711 | 0.716 0.708 | 0.729 0.726 | 0.709 0.698

Table 3: MAE of the user/item-based CF on Netflix100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 0.786 0.770* | 0.784 0.767 | 0.785 0.766 | 0.787 0.771* | 0.786 0.764
Euclidean / Cosine: 0.784 0.771 | 0.774 0.758 | 0.776 0.758 | 0.779 0.778 | 0.770 0.754
Euclidean / Pearson: 0.784 0.778 | 0.776 0.761 | 0.776 0.760 | 0.781 0.777 | 0.770 0.746
Cosine / Euclidean: 0.776 0.773 | 0.763* 0.750* | 0.761 0.743* | 0.774 0.779 | 0.757 0.740
Cosine / Cosine: 0.775 0.774 | 0.764 0.754 | 0.759 0.747 | 0.772* 0.779 | 0.752** 0.739
Cosine / Pearson: 0.774* 0.775 | 0.767 0.753 | 0.758* 0.742 | 0.776 0.782 | 0.755 0.730**
Pearson / Euclidean: 0.781 0.784 | 0.777 0.761 | 0.769 0.752 | 0.779 0.786 | 0.763 0.745
Pearson / Cosine: 0.787 0.784 | 0.777 0.762 | 0.771 0.751 | 0.783 0.788 | 0.768 0.743
Pearson / Pearson: 0.791 0.786 | 0.782 0.763 | 0.780 0.753 | 0.787 0.794 | 0.775 0.747

Table 4: MAE of the user/item-based CF on MovieLens1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 0.704 0.670 | 0.700 0.671 | 0.699 0.672 | 0.701 0.671 | 0.697 0.677
Euclidean / Cosine: 0.698 0.666* | 0.691 0.660* | 0.688 0.661 | 0.694 0.665* | 0.682 0.665
Euclidean / Pearson: 0.695* 0.669 | 0.691 0.661 | 0.688 0.662 | 0.695 0.666 | 0.681 0.666
Cosine / Euclidean: 0.698 0.677 | 0.689* 0.665 | 0.684 0.660 | 0.692* 0.671 | 0.676 0.659
Cosine / Cosine: 0.697 0.681 | 0.689* 0.665 | 0.682* 0.659 | 0.694 0.675 | 0.673** 0.651
Cosine / Pearson: 0.703 0.679 | 0.690 0.663 | 0.684 0.656* | 0.695 0.675 | 0.673** 0.648**
Pearson / Euclidean: 0.706 0.681 | 0.693 0.670 | 0.688 0.663 | 0.698 0.679 | 0.679 0.658
Pearson / Cosine: 0.704 0.684 | 0.694 0.668 | 0.689 0.661 | 0.698 0.679 | 0.679 0.655
Pearson / Pearson: 0.712 0.684 | 0.699 0.668 | 0.692 0.662 | 0.705 0.682 | 0.680 0.657

Table 5: MAE of the user/item-based CF on Netflix1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 0.741 0.734 | 0.738 0.730 | 0.737 0.730 | 0.740 0.735 | 0.731 0.728
Euclidean / Cosine: 0.738 0.727* | 0.730 0.716 | 0.728 0.714 | 0.738 0.730 | 0.719 0.711
Euclidean / Pearson: 0.739 0.728 | 0.731 0.715 | 0.726 0.713 | 0.739 0.728* | 0.717 0.709
Cosine / Euclidean: 0.736 0.738 | 0.723 0.705 | 0.717 0.702 | 0.729* 0.740 | 0.711 0.701
Cosine / Cosine: 0.736 0.737 | 0.720* 0.701* | 0.715* 0.696 | 0.729* 0.737 | 0.708** 0.693
Cosine / Pearson: 0.733* 0.731 | 0.722 0.701* | 0.716 0.695* | 0.731 0.735 | 0.710 0.691**
Pearson / Euclidean: 0.739 0.738 | 0.728 0.704 | 0.721 0.699 | 0.734 0.740 | 0.714 0.697
Pearson / Cosine: 0.742 0.734 | 0.728 0.705 | 0.722 0.696 | 0.737 0.740 | 0.714 0.692
Pearson / Pearson: 0.745 0.740 | 0.733 0.707 | 0.725 0.700 | 0.741 0.738 | 0.716 0.697

Table 6: User/Item-based CF runtime, in seconds, on MovieLens100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Landmarks: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
10: 0.4 0.7 | 0.4 0.9 | 1.5 2.2 | 1.4 1.9 | 0.5 1.0
20: 0.8 1.3 | 0.8 1.6 | 2.6 3.9 | 2.6 3.2 | 0.9 1.8
30: 1.2 2.0 | 1.2 2.5 | 3.7 5.7 | 3.6 4.5 | 1.3 2.6
40: 1.6 2.7 | 1.6 3.2 | 4.7 7.1 | 4.6 5.8 | 1.7 3.5
50: 2.0 3.4 | 2.0 4.0 | 5.7 8.8 | 5.6 7.2 | 2.1 4.4
60: 2.4 4.1 | 2.5 4.9 | 6.7 10.3 | 6.5 8.5 | 2.6 5.3
70: 2.8 4.8 | 2.9 5.7 | 7.8 11.8 | 7.7 9.5 | 3.0 6.2
80: 3.2 5.5 | 3.3 6.5 | 8.7 13.3 | 8.4 10.7 | 3.4 7.0
90: 3.6 6.2 | 3.7 7.3 | 9.5 14.7 | 9.4 11.7 | 3.8 7.8
100: 4.0 6.9 | 4.1 8.1 | 10.6 16.2 | 10.2 13.1 | 4.2 8.6

Table 7: User/Item-based CF runtime, in seconds, on Netflix100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Landmarks: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
10: 0.9 1.2 | 0.9 1.5 | 3.0 3.6 | 2.8 3.0 | 1.0 1.7
20: 1.6 2.2 | 1.8 2.7 | 5.5 6.5 | 5.1 5.3 | 1.8 3.2
30: 2.4 3.2 | 2.6 4.1 | 8.0 9.4 | 7.5 7.7 | 2.8 4.6
40: 3.2 4.3 | 3.5 5.4 | 10.4 12.4 | 9.7 10.2 | 3.6 6.1
50: 4.0 5.2 | 4.4 6.7 | 12.7 15.2 | 12.1 12.5 | 4.5 7.6
60: 4.8 6.5 | 5.2 8.2 | 15.2 18.2 | 14.0 14.8 | 5.4 9.0
70: 5.5 7.4 | 6.0 9.4 | 17.2 20.8 | 16.3 17.1 | 6.3 10.5
80: 6.4 8.5 | 6.8 11.0 | 19.5 24.6 | 18.4 19.6 | 7.2 12.2
90: 7.1 9.7 | 7.8 12.2 | 21.7 26.7 | 20.2 22.0 | 8.1 13.3
100: 8.0 10.6 | 8.6 13.3 | 23.7 29.3 | 22.4 24.5 | 9.0 14.8

Table 8: User/Item-based CF runtime, in seconds, on MovieLens1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Landmarks: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
10: 14.4 7.7 | 15.3 8.6 | 32.0 23.5 | 30.8 21.6 | 15.9 8.9
20: 27.2 14.6 | 28.8 16.3 | 61.0 46.5 | 59.5 42.6 | 29.7 16.8
30: 41.2 21.8 | 42.8 24.7 | 92.4 69.4 | 88.2 64.9 | 44.7 25.5
40: 54.8 30.0 | 57.2 32.9 | 123.0 92.7 | 119.9 84.8 | 59.7 33.9
50: 68.5 36.9 | 72.0 41.0 | 152.7 114.4 | 148.5 105.6 | 75.6 42.2
60: 82.1 45.2 | 88.2 49.2 | 185.2 136.2 | 176.0 125.0 | 88.6 51.8
70: 95.3 53.3 | 99.9 58.0 | 210.7 157.0 | 203.2 143.5 | 102.6 59.6
80: 107.9 60.7 | 114.3 66.5 | 241.3 178.6 | 233.1 163.0 | 117.7 67.9
90: 126.2 68.3 | 131.3 74.5 | 271.7 199.0 | 261.2 179.7 | 134.0 76.4
100: 136.8 75.5 | 142.8 82.6 | 295.3 220.1 | 286.1 200.7 | 147.3 84.5

Table 9: User/Item-based CF runtime, in seconds, on Netflix1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Landmarks: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
10: 26.8 9.8 | 29.8 11.2 | 58.9 35.6 | 56.0 30.0 | 31.1 12.0
20: 50.4 18.1 | 55.5 21.6 | 113.2 69.2 | 105.5 60.5 | 57.7 23.4
30: 74.9 27.2 | 82.2 32.2 | 170.9 103.3 | 160.1 88.4 | 86.7 33.8
40: 103.0 36.2 | 110.7 42.6 | 227.2 137.3 | 216.8 118.2 | 114.1 45.0
50: 127.5 45.4 | 138.6 53.5 | 285.6 165.4 | 268.7 144.2 | 143.7 54.9
60: 153.1 54.3 | 169.3 63.9 | 340.2 197.9 | 316.3 173.1 | 170.6 66.4
70: 177.7 63.9 | 194.7 74.2 | 394.0 231.8 | 366.6 204.9 | 199.8 79.8
80: 201.5 74.2 | 220.9 86.2 | 447.9 263.5 | 415.6 232.5 | 231.6 91.6
90: 227.3 83.6 | 250.3 97.5 | 499.7 286.9 | 462.3 250.5 | 257.0 98.1
100: 253.9 89.8 | 277.0 104.3 | 552.7 313.5 | 508.6 278.6 | 286.1 111.6

Table 10: The runtime, in seconds, of user/item-based CF with Cosine in the databases.
CF Type: MovieLens100k | Netflix100k | MovieLens1M | Netflix1M
User-based: 7.9 | 24.3 | 1250.9 | 3219.8
Item-based: 15.2 | 42.6 | 758.2 | 1260.0

Table 11: User/Item-based CF runtime, in seconds, on MovieLens100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 0.70 1.19** | 0.71 1.39* | 2.59 3.66 | 2.50 3.08 | 0.75 1.47*
Euclidean / Cosine: 0.82 1.37 | 0.86 1.64 | 2.69 3.96 | 2.56 3.15 | 0.86 1.79
Euclidean / Pearson: 1.20 2.03 | 1.25 2.55 | 3.17 4.88 | 3.02 3.77 | 1.30 2.82
Cosine / Euclidean: 0.67** 1.21 | 0.70* 1.39* | 2.28* 3.35* | 2.20* 2.81* | 0.71* 1.47*
Cosine / Cosine: 0.78 1.32 | 0.82 1.64 | 2.40 3.63 | 2.31 2.86 | 0.84 1.81
Cosine / Pearson: 1.16 2.05 | 1.21 2.56 | 2.81 4.56 | 2.72 3.54 | 1.25 2.81
Pearson / Euclidean: 1.01 1.39 | 1.03 1.67 | 2.88 3.90 | 2.78 3.24 | 1.08 1.81
Pearson / Cosine: 1.11 1.53 | 1.19 1.90 | 3.07 4.19 | 2.95 3.33 | 1.24 2.11
Pearson / Pearson: 1.49 2.16 | 1.59 2.76 | 3.47 5.14 | 3.29 3.86 | 1.68 3.08

Table 12: User/Item-based CF runtime, in seconds, on Netflix100k. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 1.44** 2.92** | 1.55* 3.46* | 5.39* 8.61 | 5.08* 7.34 | 1.59* 3.78*
Euclidean / Cosine: 1.65 3.07 | 1.84 3.88 | 5.64 9.20 | 5.31 7.36 | 1.92 4.45
Euclidean / Pearson: 2.35 4.11 | 2.67 5.84 | 6.61 11.15 | 5.99 8.48 | 2.81 6.73
Cosine / Euclidean: 2.27 2.96 | 2.38 3.51 | 7.66 8.60* | 7.17 7.24* | 2.47 3.80
Cosine / Cosine: 2.52 3.08 | 2.79 3.98 | 7.90 9.12 | 7.31 7.45 | 2.94 4.49
Cosine / Pearson: 3.71 4.10 | 4.11 5.83 | 9.30 11.24 | 8.60 8.31 | 4.29 6.74
Pearson / Euclidean: 1.94 3.30 | 2.15 4.11 | 5.97 9.30 | 5.57 7.67 | 2.27 4.56
Pearson / Cosine: 2.11 3.42 | 2.42 4.39 | 6.31 9.68 | 5.93 7.67 | 2.59 5.03
Pearson / Pearson: 2.77 4.15 | 3.19 5.90 | 7.09 11.32 | 6.48 8.46 | 3.48 6.80

Table 13: User/Item-based CF runtime, in seconds, on MovieLens1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 22.44* 13.58 | 23.43** 14.56 | 55.21 44.98 | 54.03 41.87 | 23.68* 14.81
Euclidean / Cosine: 26.95 15.28 | 28.88 16.70 | 61.24 47.32 | 59.00 43.15 | 29.66 17.09
Euclidean / Pearson: 42.55 19.67 | 44.80 22.49 | 77.83 53.14 | 74.95 48.52 | 46.42 23.26
Cosine / Euclidean: 22.93 12.52** | 23.47 13.52* | 53.58* 40.34* | 53.67* 37.60* | 24.16 13.59*
Cosine / Cosine: 27.87 13.16 | 29.69 15.02 | 62.05 41.78 | 59.87 38.35 | 29.80 15.57
Cosine / Pearson: 42.16 17.98 | 45.60 20.50 | 76.33 47.99 | 73.77 43.94 | 47.09 21.46
Pearson / Euclidean: 29.71 19.12 | 31.55 21.38 | 64.25 52.21 | 62.16 47.79 | 32.72 22.64
Pearson / Cosine: 33.61 20.14 | 36.72 23.46 | 69.50 54.25 | 66.20 48.47 | 38.36 24.80
Pearson / Pearson: 48.09 25.52 | 52.28 29.31 | 85.78 60.48 | 81.24 53.62 | 55.14 30.85

Table 14: User/Item-based CF runtime, in seconds, on Netflix1M. 'UCF' and 'ICF' stand for User-based CF and Item-based CF, respectively.
Users-Landmarks / User-User: Random (UCF ICF) | Dist. of Ratings (UCF ICF) | Coresets (UCF ICF) | Coresets Random (UCF ICF) | Popularity (UCF ICF)
Euclidean / Euclidean: 64.06 25.07** | 67.69 27.85* | 153.63 96.31* | 146.65 84.85* | 68.62 28.92*
Euclidean / Cosine: 75.96 26.21 | 81.72 30.56 | 170.05 99.91 | 159.74 86.23 | 84.92 32.87
Euclidean / Pearson: 114.99 31.49 | 126.05 40.08 | 215.21 109.13 | 196.14 92.30 | 132.18 42.85
Cosine / Euclidean: 60.80** 25.93 | 64.15* 28.97 | 145.08* 97.36 | 138.86* 85.60 | 66.03* 30.00
Cosine / Cosine: 71.33 26.40 | 78.23 31.39 | 159.55 101.11 | 148.97 87.01 | 81.33 33.58
Cosine / Pearson: 108.43 33.81 | 121.19 41.11 | 203.26 110.32 | 189.93 91.68 | 126.64 43.99
Pearson / Euclidean: 80.55 33.13 | 87.49 42.02 | 174.98 111.45 | 165.95 92.20 | 91.35 44.94
Pearson / Cosine: 89.80 34.73 | 101.65 43.92 | 189.86 114.41 | 174.55 92.30 | 107.27 47.98
Pearson / Pearson: 126.18 38.95 | 145.39 52.65 | 238.83 122.86 | 219.20 97.15 | 156.31 57.40

Table 15: The table presents how many times the corresponding algorithm is slower than the proposal (shown in bold in the original).
For instance, Euclidean kNN is 8.7 times slower than Landmarks kNN on MovieLens100k.
CF Technique: User-based MovieLens (100k 1M), Netflix (100k 1M) | Item-based MovieLens (100k 1M), Netflix (100k 1M)
Euclidean kNN: 8.7 39.5 8.9 39.2 | 8.3 44.8 9.1 37.6
Cosine kNN: 8.8 39.7 9.0 39.1 | 8.4 45.7 9.2 37.8
Pearson kNN: 17.1 76.3 15.7 70.8 | 14.2 78.5 13.1 56.1
Landmarks kNN: 1.0 1.0 1.0 1.0 | 1.0 1.0 1.0 1.0
RSVD: 49.2 16.6 15.8 6.6 | 23.3 33.5 9.8 15.4
IRSVD: 70.9 23.3 22.8 9.0 | 33.9 46.2 13.8 21.2
PMF: 8.3 3.1 2.8 1.2 | 4.1 6.5 1.7 2.7
BPMF: 50.3 10.1 24.2 5.0 | 24.8 19.8 14.5 12.1
SVD++: 437.1 297.8 177.9 134.0 | 161.1 828.6 85.6 541.2

The best result for each landmark selection strategy is highlighted in bold and marked with an asterisk ('*'). The best result overall is also in bold but marked with a double asterisk ('**').
J Bennett, S Lanning, Proceedings of KDD cup and workshop. KDD cup and workshop35Bennett J, Lanning S (2007) The netflix prize. In: Proceedings of KDD cup and workshop, vol 2007, p 35 A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm. J Bobadilla, F Ortega, A Hernando, G Glez-De Rivera, Knowledge-Based Systems. 51Bobadilla J, Ortega F, Hernando A, Glez-de Rivera G (2013) A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm. Knowledge-Based Systems 51:27-34 Transforming collaborative filtering into supervised learning. F Braida, C E Mello, M B Pasinato, G Zimbrão, Expert Systems with Applications. 4210Braida F, Mello CE, Pasinato MB, Zimbrão G (2015) Transforming collaborative filtering into supervised learning. Expert Systems with Applications 42(10):4733-4742 Empirical analysis of predictive algorithms for collaborative filtering. J S Breese, D Heckerman, C Kadie, Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence. the Fourteenth conference on Uncertainty in artificial intelligenceMorgan Kaufmann Publishers IncBreese JS, Heckerman D, Kadie C (1998) Empirical analysis of predictive algorithms for collaborative filtering. In: Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, Morgan Kaufmann Publishers Inc., pp 43-52 Improved nonlinear manifold learning for land cover classification via intelligent landmark selection. Y Chen, M Crawford, J Ghosh, 2006 IEEE International Symposium on Geoscience and Remote Sensing. IEEEChen Y, Crawford M, Ghosh J (2006) Improved nonlinear manifold learning for land cover classification via intelligent landmark selection. In: 2006 IEEE International Symposium on Geoscience and Remote Sensing, IEEE, pp 545- 548 Selection of landmark points on nonlinear manifolds for spectral unmixing using local homogeneity. 
J Chi, M M Crawford, IEEE Geoscience and Remote Sensing Letters. 104Chi J, Crawford MM (2013) Selection of landmark points on nonlinear manifolds for spectral unmixing using local homogeneity. IEEE Geoscience and Remote Sensing Letters 10(4):711-715 Active landmark sampling for manifold learning based spectral unmixing. J Chi, M M Crawford, IEEE Geoscience and Remote Sensing Letters. 1111Chi J, Crawford MM (2014) Active landmark sampling for manifold learning based spectral unmixing. IEEE Geo- science and Remote Sensing Letters 11(11):1881-1885 User-specific feature-based similarity models for top-n recommendation of new items. De Silva, V Tenenbaum, J B , ACM Transactions on Intelligent Systems and Technology (TIST). 6333Stanford University Elbadrawy A, Karypis GTechnical reportSparse multidimensional scaling using landmark pointsDe Silva V, Tenenbaum JB (2004) Sparse multidimensional scaling using landmark points. Tech. rep., Technical report, Stanford University Elbadrawy A, Karypis G (2015) User-specific feature-based similarity models for top-n recommendation of new items. ACM Transactions on Intelligent Systems and Technology (TIST) 6(3):33 Scalable training of mixture models via coresets. D Feldman, M Faulkner, A Krause, Advances in neural information processing systems. Feldman D, Faulkner M, Krause A (2011) Scalable training of mixture models via coresets. In: Advances in neural information processing systems, pp 2142-2150 A novel two-level nearest neighbor classification algorithm using an adaptive distance metric. Y Gao, J Pan, Ji G Yang, Z , Knowledge-Based Systems. 26Gao Y, Pan J, Ji G, Yang Z (2012) A novel two-level nearest neighbor classification algorithm using an adaptive distance metric. Knowledge-Based Systems 26:103-110 The movielens datasets: History and context. F M Harper, J A Konstan, ACM Transactions on Interactive Intelligent Systems (TiiS). 5419Harper FM, Konstan JA (2016) The movielens datasets: History and context. 
ACM Transactions on Interactive Intelligent Systems (TiiS) 5(4):19 An algorithmic framework for performing collaborative filtering. J L Herlocker, J A Konstan, A Borchers, J Riedl, Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, ACM. the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, ACMHerlocker JL, Konstan JA, Borchers A, Riedl J (1999) An algorithmic framework for performing collaborative filtering. In: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 230-237 Evaluating collaborative filtering recommender systems. J L Herlocker, J A Konstan, L G Terveen, J T Riedl, ACM Transactions on Information Systems (TOIS). 221Herlocker JL, Konstan JA, Terveen LG, Riedl JT (2004) Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS) 22(1):5-53 An incremental dimensionality reduction method on discriminant information for pattern classification. X Hu, Z Yang, L Jing, Pattern Recognition Letters. 3015Hu X, Yang Z, Jing L (2009) An incremental dimensionality reduction method on discriminant information for pattern classification. Pattern Recognition Letters 30(15):1416-1423 Factorization meets the neighborhood: a multifaceted collaborative filtering model. Y Koren, Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. the 14th ACM SIGKDD international conference on Knowledge discovery and data miningACMKoren Y (2008) Factorization meets the neighborhood: a multifaceted collaborative filtering model. In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, pp 426-434 Advances in collaborative filtering. Y Koren, R Bell, Recommender systems handbook. SpringerKoren Y, Bell R (2011) Advances in collaborative filtering. 
[]
[ "On Noncommutative Minisuperspace and the Friedmann equations", "On Noncommutative Minisuperspace and the Friedmann equations" ]
[ "W Guzmán \nInstituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico\n", "M Sabido \nInstituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico\n", "J Socorro \nInstituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico\n" ]
[ "Instituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico", "Instituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico", "Instituto de Física de la Universidad de Guanajuato\nA.P. E-143C.P. 37150León, GuanajuatoMéxico" ]
[]
In this paper we present a noncommutative version of scalar field cosmology. We find the noncommutative Friedmann equations as well as the noncommutative Klein-Gordon equation. Interestingly, the noncommutative contributions are only present up to second order in the noncommutative parameter. Finally we conclude that if we want a noncommutative minisuperspace with a constant noncommutative parameter as a viable phenomenological model, the noncommutative parameter must be very small.
10.1016/j.physletb.2011.02.012
[ "https://arxiv.org/pdf/0812.4251v1.pdf" ]
119,167,259
0812.4251
74d7fe54e31a4e5f5bfa9bdddb8a5b261bf11413
On Noncommutative Minisuperspace and the Friedmann equations 22 Dec 2008 W Guzmán, M Sabido, J Socorro (Instituto de Física de la Universidad de Guanajuato, A.P. E-143, C.P. 37150, León, Guanajuato, México) PACS numbers: 02.40.Gh, 04.60.Kz, 98.80.Jk, 98.80.Qc In this paper we present a noncommutative version of scalar field cosmology. We find the noncommutative Friedmann equations as well as the noncommutative Klein-Gordon equation. Interestingly, the noncommutative contributions are only present up to second order in the noncommutative parameter. Finally we conclude that if we want a noncommutative minisuperspace with a constant noncommutative parameter as a viable phenomenological model, the noncommutative parameter must be very small. I. INTRODUCTION The initial interest in noncommutative field theory [1] slowly but steadily permeated into the realm of gravity, from which several approaches to noncommutative gravity [2] were proposed. All of these formulations showed that the end result of a noncommutative theory of gravity is a highly nonlinear theory, and finding solutions to the corresponding noncommutative field equations is technically very challenging. Even though working with a full noncommutative theory of gravity looks like a fruitless ordeal, several attempts were made to understand the effects of noncommutativity on different aspects of the universe. In some cases the effects of noncommutativity on the gravitational degrees of freedom were ignored [3], but interesting results were obtained in connection with scalar field cosmologies.
One particularly interesting proposal concerning noncommutative cosmology was presented in [4]. Starting from the fact that noncommutative deformations modify the commutative fields, the authors conjectured that the effects of the full noncommutative theory of gravity should be reflected in the minisuperspace variables. This was achieved by introducing the Moyal product of functions in the Wheeler-DeWitt equation, in the same manner as is done in noncommutative quantum mechanics (NCQM). The model analyzed was the Kantowski-Sachs cosmology; the analysis was carried out at the quantum level, and the authors showed that new states of the universe can be created as a consequence of introducing this kind of deformation in the quantum phase space. Several works followed this main idea [5,6]. Although the noncommutative deformations of the minisuperspace were originally analyzed at the quantum level by introducing an effective noncommutativity on the minisuperspace, classical noncommutative formulations have also been proposed. In [5], for example, the authors considered classical noncommutative relations in the phase space for the Kantowski-Sachs cosmological model and established the classical noncommutative equations of motion. For scalar field cosmology the classical minisuperspace is deformed and a scalar field is used as the matter component of the universe. In [7] the study focuses on the consequences that the noncommutative deformation has on the slow-roll parameters when an exponential potential is considered; it is found that the noncommutative deformation provides a mechanism that ends inflation. In all previous work based on a noncommutative minisuperspace a very explicit lapse function is used; this makes the comparison with the usual cosmic time a bit cumbersome, and the only option is to analyze effects that are independent of the chosen gauge.
The main idea of this classical noncommutativity is based on the assumption that modifying the Poisson brackets of the classical theory gives the noncommutative equations of motion; this is usually done by hand and is not justified in any way. The main purpose of this paper is to construct the noncommutative equations for noncommutative cosmology. We will work with an FRW universe whose matter content is a scalar field with an arbitrary potential. This model has been used to explain several aspects of our universe, like inflation, dark energy and dark matter; the main reasons are the flexibility of scalar fields and the simplicity of their dynamics. Noncommutativity in the minisuperspace will be introduced by modifying the symplectic structure, using the formalism of Hamiltonian manifolds. Once this is achieved, noncommutative equivalents of the Friedmann equations are derived. Interestingly, the noncommutative deformations only appear up to second order in the noncommutative parameter. Furthermore, if we want to consider noncommutative-minisuperspace-based cosmology as a viable phenomenological model, the resulting equations seem to favor two very restrictive possibilities: an extremely small value for the noncommutative minisuperspace parameter, or a very high degree of fine tuning in the parameters of the scalar field potential. The paper is organized as follows. In Section II a short description of Hamiltonian manifolds is presented, in Section III the noncommutative equations for scalar field cosmology are presented, and Section IV is devoted to discussion and outlook. II. CLASSICAL DEFORMATION OF THE PHASE SPACE In order to work with the noncommutative deformations of the minisuperspace, we start by analyzing deformations of classical mechanics; to achieve the deformation we need an appropriate formulation of classical dynamics. For our purposes we will use the symplectic formalism of classical mechanics.
It is well known that Hamiltonian classical mechanics can be formulated on a 2n-dimensional differential manifold M with a symplectic structure. This means that there exists a differential 2-form ω which is closed and non-degenerate; the pair (M, ω) is called a symplectic manifold. On the Hamiltonian manifold, Hamilton's function H satisfies
$$ i_{X_H}\omega = -dH, \tag{1} $$
where $X_H$ is called the Hamiltonian vector field. Specifying local coordinates on M, $x^\mu = \{q^i, p_i\}$, the above condition takes an explicit dependence on the 2-form ω,
$$ \frac{dx^\mu(t)}{dt} = \omega^{\mu\nu}\frac{\partial H}{\partial x^\nu}, \tag{2} $$
where $\omega^{\mu\nu}$ are the components of $\omega^{-1}$ in the local coordinates $x^\mu$. On the symplectic manifold there is a general expression for the Poisson bracket of two functions on M based on Hamiltonian fluxes, $\{f, g\} = \omega(X_f, X_g)$, which in local coordinates has the familiar form
$$ \{f, g\} = \frac{\partial f}{\partial x^\mu}\,\omega^{\mu\nu}\,\frac{\partial g}{\partial x^\nu}. \tag{3} $$
It is easy to check that the last equation generates the commutation relations $\{x^\mu, x^\nu\} = \omega^{\mu\nu}$. If we consider the canonical symplectic structure $\omega_c$ defined by $\omega_c = dp_i \wedge dq^i$, where $i = 1, \dots, n$, we recover the usual Poisson brackets, and equations (2) are just Hamilton's equations of classical mechanics. Darboux's theorem establishes that every symplectic structure can be brought to the canonical one by a suitable choice of local coordinates in the neighborhood of any point $x \in M$. However, it is possible to find new effects if we consider a non-canonical symplectic structure; for example, a magnetic field can appear by considering the appropriate ω (see [8]). This formalism of classical mechanics gives the mathematical framework to construct the noncommutative deformation of the minisuperspace. Using this formalism we can calculate the deformed Poisson brackets, from which we will determine the corresponding equations of motion; the resulting algebra is consistent with NCQM.
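As a worked step, plugging the coordinate functions $f = x^\mu$, $g = x^\nu$ into Eq. (3) recovers the coordinate commutation relations quoted in the text:

```latex
\{x^\mu, x^\nu\}
  = \frac{\partial x^\mu}{\partial x^\rho}\,\omega^{\rho\sigma}\,
    \frac{\partial x^\nu}{\partial x^\sigma}
  = \delta^{\mu}_{\ \rho}\,\omega^{\rho\sigma}\,\delta^{\nu}_{\ \sigma}
  = \omega^{\mu\nu}.
```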
Since the deformation is constructed in the tangent bundle TM instead of on the symplectic manifold M, all the original symmetries are left intact. This feature is attractive because the classical symmetries used to construct the commutative theory remain present in the deformed theory. III. NONCOMMUTATIVE COSMOLOGICAL EQUATIONS Cosmology presents an attractive arena for noncommutative models, both at the quantum and at the classical level. One of the features of noncommutative field theories is UV/IR mixing, which effectively mixes short scales with long scales; from this fact one may expect that even if the noncommutativity is present only at a very small scale, through UV/IR mixing its effects might be present at a later time of the universe. Furthermore, the presence of noncommutativity could be related to a minimal size; this idea comes from the analogy with quantum mechanics, where an uncertainty relation between momenta and coordinates is present. Let us start by introducing the phase space for a homogeneous and isotropic universe with the Friedmann-Robertson-Walker metric
$$ ds^2 = -N^2(t)\,dt^2 + e^{2\alpha}\left(dr^2 + r^2 d\Omega\right), \tag{4} $$
where we have considered a flat universe, $a(t) = e^{\alpha}$ is the scale factor of the universe and $N(t)$ is the lapse function; finally, we will use a scalar field φ as the matter content of the model. The phase space and the Hamiltonian function are obtained from the action
$$ S = \int dx^4 \sqrt{-g}\left[ R + \frac{1}{2}\, g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi + V(\phi)\right], \tag{5} $$
where we have used units $8\pi G = 1$. The Hamiltonian function is calculated as in classical mechanics and is given by
$$ H = e^{-3\alpha}\left[\frac{1}{12}P_\alpha^2 - \frac{1}{2}P_\phi^2 - e^{6\alpha}V(\phi)\right], \tag{6} $$
where $V(\phi)$ is the scalar potential; we also set $N(t) = 1$, which means that we will be using the usual cosmic time. The phase space coordinates for this model are $\{\alpha, \phi; P_\alpha, P_\phi\}$. Using Eq. (6) together with Eq. (2) and the canonical 2-form $\omega_c$, we find the equations of motion
$$ \dot\alpha = \frac{1}{6}e^{-3\alpha}P_\alpha, \qquad \dot P_\alpha = 6 e^{3\alpha} V(\phi), \tag{7} $$
$$ \dot\phi = -e^{-3\alpha}P_\phi, \qquad \dot P_\phi = e^{3\alpha}\frac{dV(\phi)}{d\phi}. $$
From the equations for α and φ and the Hamiltonian we construct the Friedmann equation
$$ 3H^2 = \frac{1}{2}\dot\phi^2(t) + V(\phi); \tag{8} $$
the Klein-Gordon equation follows from Hamilton's equations for φ and $P_\phi$,
$$ \ddot\phi + 3H\dot\phi = -\frac{dV(\phi)}{d\phi}. \tag{9} $$
In order to find the effects of noncommutativity on the cosmological equations of motion, we apply the symplectic formalism on the phase space to the FRW cosmology with the scalar field. Let us first consider the 2-form $\omega_{nc} = \omega_c + \theta\, dp_\alpha \wedge dp_\phi$; evidently, if θ is constant, $\omega_{nc}$ is closed and invertible, and thus $\omega_{nc}$ and the cosmological phase space define a symplectic manifold. The components of $\omega^{\mu\nu}_{nc}$ are
$$ \omega^{\mu\nu}_{nc} = \begin{pmatrix} 0 & \theta & 1 & 0 \\ -\theta & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}. \tag{10} $$
From Eq. (3), the Poisson commutation relations are
$$ \{\alpha, \phi\} = \theta, \qquad \{p_\alpha, p_\phi\} = 0, \tag{11} $$
$$ \{\alpha, p_\alpha\} = 1, \qquad \{\phi, p_\phi\} = 1. $$
This particular choice of ω is inspired by and consistent with the effective noncommutativity used in quantum cosmology [4,7]. This algebra is the basis for the noncommutative cosmology, but the question of the validity of this deformed version of the dynamics remains. In order to establish the validity we turn to the symplectic formalism: we can trivially see that the 2-form ω is exact, which together with Darboux's theorem ensures that the new 2-form $\omega_{nc}$ gives the correct equations of motion. Using the new algebra we easily calculate the deformed equations that govern the dynamics,
$$ \dot\alpha = \{\alpha, H\} = \frac{1}{6}e^{-3\alpha}P_\alpha - \theta e^{3\alpha}\frac{dV(\phi)}{d\phi}, \tag{12} $$
$$ \dot\phi = \{\phi, H\} = -e^{-3\alpha}P_\phi + 6\theta e^{3\alpha}V(\phi); $$
we omit the equations for the momenta and the Hamiltonian, as they remain unchanged under the noncommutative deformation. In order to arrive at Eq. (12) we used the formulas
$$ \{\alpha, f(\alpha,\phi)\} = \theta\frac{\partial f}{\partial \phi}, \qquad \{\phi, f(\alpha,\phi)\} = -\theta\frac{\partial f}{\partial \alpha}, \tag{13} $$
which are calculated from the noncommutative relations (11).
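The bracket algebra of Eqs. (10)-(13) is easy to check numerically. The following is a minimal sketch (not from the paper): it evaluates Eq. (3) with the matrix of Eq. (10) by central finite differences, using an arbitrary value of θ and an arbitrary test function.

```python
# Finite-difference check of the deformed Poisson brackets, Eqs. (10)-(13).
# Coordinates are ordered x = (alpha, phi, p_alpha, p_phi); the value of
# theta and the test function f are arbitrary illustrative choices.

THETA = 0.3
# Components omega^{mu nu}_{nc} of Eq. (10).
OMEGA = [[0.0,    THETA, 1.0, 0.0],
         [-THETA, 0.0,   0.0, 1.0],
         [-1.0,   0.0,   0.0, 0.0],
         [0.0,   -1.0,   0.0, 0.0]]

def partial(f, x, mu, h=1e-6):
    """Central finite difference of f with respect to x[mu]."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def bracket(f, g, x):
    """Poisson bracket {f,g} = (df/dx^mu) omega^{mu nu} (dg/dx^nu), Eq. (3)."""
    return sum(partial(f, x, m) * OMEGA[m][n] * partial(g, x, n)
               for m in range(4) for n in range(4))

alpha = lambda x: x[0]
phi   = lambda x: x[1]
f     = lambda x: x[0] ** 2 * x[1]      # a test function f(alpha, phi)
x0 = [0.7, -0.2, 1.1, 0.5]              # an arbitrary phase-space point

# Eq. (13): {alpha, f} = theta df/dphi and {phi, f} = -theta df/dalpha
print(abs(bracket(alpha, f, x0) - THETA * x0[0] ** 2) < 1e-6)
print(abs(bracket(phi, f, x0) + THETA * 2 * x0[0] * x0[1]) < 1e-6)
```

Both checks print `True`; the same routine also reproduces $\{\alpha,\phi\} = \theta$ directly from the coordinate functions.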
Using Eq. (12) and the Hamiltonian we arrive at the deformed Friedmann equation
$$ 3H^2 = \frac{1}{2}\dot\phi^2 + V(\phi) - 6\theta a^3\left(H\frac{dV}{d\phi} + \dot\phi V\right) - 3(\theta a^3)^2\left[\left(\frac{dV}{d\phi}\right)^2 - 6V^2\right]. \tag{14} $$
The Klein-Gordon equation for this non-canonical 2-form can be calculated from equations (6) and (12), giving
$$ \ddot\phi + 3H\dot\phi = -\frac{dV}{d\phi} + 6\theta a^3\left(\dot\phi\frac{dV}{d\phi} + 6HV\right). \tag{15} $$
We can also clearly see that in the limit θ → 0 we recover the usual equations of scalar field cosmology. These are the noncommutative Friedmann equations for scalar field cosmology; they are derived for an arbitrary potential of the scalar field. IV. DISCUSSION AND OUTLOOK In this short paper we have constructed a model of noncommutative scalar field cosmology. The basic assumption is that the dynamics can be constructed from a new closed and non-degenerate differential 2-form $\omega_{nc}$ on the Hamiltonian manifold (M, ω) constructed from the minisuperspace. This gives a modified Poisson algebra among the minisuperspace variables that is consistent with the assumptions taken in noncommutative quantum cosmology [4,5,6]. The modified equations have the correct commutative limit when the noncommutative parameter vanishes. An intriguing feature is that the corrections only appear up to second order in θ; from this observation we can see that even if the noncommutative parameter were large, the effective noncommutative equations would have rather simple modifications. Another simplification arises for the exponential potential: for example, the quadratic term in θ in Eq. (14) vanishes if we take an exponential potential $V(\phi) = V_0 e^{-\lambda\phi}$ and choose $\lambda = \sqrt{6}$, so with a high degree of fine tuning the equations are further simplified. Furthermore, there would be epochs in which the terms in the brackets multiplied by the noncommutative parameter may vanish, giving dynamics similar to the commutative universe, but again this will only be achieved under very particular conditions on the potential.
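Equation (14) can be spot-checked numerically: choosing the momenta so that the Hamiltonian constraint of Eq. (6) holds and evaluating $H = \dot\alpha$ and $\dot\phi$ from Eq. (12), both sides of Eq. (14) must agree to rounding error. The sketch below uses an exponential potential with invented parameter values purely for illustration.

```python
import math

# Numerical spot-check of the deformed Friedmann equation (14), using the
# deformed equations of motion (12) and the Hamiltonian constraint H = 0.
# Potential V = V0 exp(-lam*phi) and all numbers are invented illustrations.
V0, lam, theta = 1.3, 0.8, 0.05
alpha, phi, P_phi = 0.2, 0.4, 0.9

V  = V0 * math.exp(-lam * phi)
dV = -lam * V
a3 = math.exp(3 * alpha)                # a^3 = e^{3 alpha}

# Choose P_alpha so the constraint (6) holds: P_a^2/12 - P_phi^2/2 = e^{6a} V.
P_alpha = math.sqrt(12 * (P_phi ** 2 / 2 + a3 ** 2 * V))

Hub     = math.exp(-3 * alpha) * P_alpha / 6 - theta * a3 * dV   # Eq. (12)
phi_dot = -math.exp(-3 * alpha) * P_phi + 6 * theta * a3 * V     # Eq. (12)

lhs = 3 * Hub ** 2
rhs = (0.5 * phi_dot ** 2 + V
       - 6 * theta * a3 * (Hub * dV + phi_dot * V)
       - 3 * (theta * a3) ** 2 * (dV ** 2 - 6 * V ** 2))
print(abs(lhs - rhs) < 1e-10)
```

The check succeeds for any choice of $(\alpha, \phi, P_\phi, \theta)$ once $P_\alpha$ is fixed by the constraint, since Eq. (14) is an exact algebraic consequence of Eqs. (6) and (12).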
To study the effects of noncommutativity on dark energy, dark matter or inflation, we only need to solve equations (14) and (15) for the particular potential that explains each of the aspects mentioned before. Even if the noncommutative terms look simple, analytical solutions to the equations are difficult to find, but a complete analysis can be done numerically. Unfortunately things are not as simple as that: a closer look at the noncommutative corrections shows that they are weighted by the product $\theta a^3$; being proportional to the volume of the universe, the noncommutative corrections would dominate the dynamics at late times. It seems that in order to have some plausible evolution, the minisuperspace noncommutative parameter should be very small, of the order of the inverse of the current volume of the universe. Taking this into account, the effects of noncommutativity would almost disappear at the early epochs of the universe and would be relevant at the current epoch. This might seem awkward, but scale mixing is a feature that appears in noncommutative field theory, so this might be an effect of the UV/IR mixing. In conclusion, noncommutative versions of the Friedmann equations were constructed, and we argued that the only way these equations can be phenomenologically sensible is by using very specific and fine-tuned potentials or an extremely small value of θ, rendering noncommutativity irrelevant at very early stages of the universe, with its effects appearing at older stages of the cosmological evolution. Then, in order to believe that minisuperspace noncommutativity with a constant noncommutative parameter is phenomenologically viable, we only have one option: that the noncommutative parameter is almost zero.
This might be an unattractive result, as one would expect the effects of noncommutativity to be present at early times or small scales and to disappear as we go to a larger universe; this picture can be realized if the noncommutativity parameter changes in time. Research in this direction is in progress and will be reported elsewhere. Acknowledgments This work was partially supported by CONACYT grants 47641, 51306, 62253, CONCYTEG grant 07-16-K662-062 A01 and PROMEP grants UGTO-CA-3 and DINPO 38.07. [1] N. Seiberg and E. Witten, JHEP 9909, 032 (1999); A. Connes, M. R. Douglas and A. Schwarz, JHEP 9802, 003 (1998); M. R. Douglas and N. A. Nekrasov, Rev. Mod. Phys. 73, 977 (2001). [2] H. Garcia-Compean, O. Obregon, C. Ramirez and M. Sabido, Phys. Rev. D 68, 044015 (2003); H. Garcia-Compean, O. Obregon, C. Ramirez and M. Sabido, Phys. Rev. D 68, 045010 (2003); P. Aschieri, M. Dimitrijevic, F. Meyer and J. Wess, Class. Quant. Grav. 23, 1883 (2006); X. Calmet and A. Kobakhidze, Phys. Rev. D 72, 045010 (2005); L. Alvarez-Gaume, F. Meyer and M. A. Vazquez-Mozo, Nucl. Phys. B 753, 92 (2006); S. Estrada-Jimenez, H. Garcia-Compean, O. Obregon and C. Ramirez, Phys. Rev. D 78, 124008 (2008). [3] S. Alexander, R. Brandenberger and J. Magueijo, Phys. Rev. D 67, 081301(R) (2003); S. Koh and R. H. Brandenberger, JCAP 0706, 021 (2007). [4] H. Garcia-Compean, O. Obregon and C. Ramirez, Phys. Rev. Lett. 88, 161301 (2002). [5] G. D. Barbosa and N. Pinto-Neto, Phys. Rev. D 70, 103512 (2004). [6] L. O. Pimentel and C. Mora, Gen. Rel. Grav. 37, 817 (2005); L. O. Pimentel and O. Obregon, Gen. Rel. Grav. 38, 553 (2006); M. Aguero, J. A. Aguilar S., C. Ortiz, M. Sabido and J. Socorro, Int. J. Theor. Phys. 46, 2928 (2007); W. Guzman, C. Ortiz, M. Sabido, J. Socorro and M. A. Aguero, Int. J. Mod. Phys. D 16, 1625 (2007); B. Vakili, N. Khosravi and H. R. Sepangi, Class. Quant. Grav. 24, 931 (2007). [7] W. Guzman, M. Sabido and J. Socorro, Phys. Rev. D 76, 087302 (2007). [8] V. Guillemin and S. Sternberg, Cambridge, UK: Univ. Pr. (1990), 468 p.
[]
[ "Regime Switching Volatility Calibration by the Baum-Welch Method", "Regime Switching Volatility Calibration by the Baum-Welch Method" ]
[ "Sovan Mitra ", "Welch " ]
[]
[]
Regime switching volatility models provide a tractable method of modelling stochastic volatility. Currently the most popular method of regime switching calibration is the Hamilton filter. We propose using the Baum-Welch algorithm, an established technique from Engineering, to calibrate regime switching models instead. We demonstrate the Baum-Welch algorithm and discuss the significant advantages that it provides compared to the Hamilton filter. We provide computational results of calibrating the Baum-Welch filter to S&P 500 data and validate its performance in and out of sample.
10.1016/j.cam.2010.04.022
[ "https://arxiv.org/pdf/0904.1500v1.pdf" ]
2,781,430
0904.1500
6889d35da12392dca3ffbdefcf1dfc0585198477
Regime Switching Volatility Calibration by the Baum-Welch Method 9 Apr 2009 Sovan Mitra Keywords: regime switching, stochastic volatility, calibration, Hamilton filter, Baum-Welch. Regime switching volatility models provide a tractable method of modelling stochastic volatility. Currently the most popular method of regime switching calibration is the Hamilton filter. We propose using the Baum-Welch algorithm, an established technique from engineering, to calibrate regime switching models instead. We demonstrate the Baum-Welch algorithm and discuss the significant advantages that it provides compared to the Hamilton filter. We provide computational results of calibrating the Baum-Welch filter to S&P 500 data and validate its performance in and out of sample. Introduction and Outline Regime switching (also known as hidden Markov model (HMM)) volatility models provide a tractable method of modelling stochastic volatility. Currently the most popular method of regime switching calibration is the Hamilton filter. However, regime switching calibration has been tackled in engineering (particularly for speech processing) for some time using the Baum-Welch algorithm (BW), where it is the most popular and standard method of HMM calibration. A review of the Baum-Welch algorithm can be found in [Lev05], [JR91]. The BW algorithm is increasingly being applied beyond engineering applications (for instance in bioinformatics [BEDE04]) but has hardly been applied to financial modelling, especially to regime switching stochastic volatility models. Unlike the Hamilton filter, the BW algorithm is capable of determining the entire set of HMM parameters from a sequence of observation data. Furthermore, BW is a complete estimation method since it also provides the required optimisation method to determine the parameters by MLE. The outline of the paper is as follows.
First we introduce regime switching volatility models and the Hamilton filter. In the next section we introduce the Baum-Welch method, describing the algorithm even for multivariate Gaussian mixture observations. We then conduct a numerical experiment to verify the Baum-Welch method's ability to detect regimes for the S&P 500 index. We end with a conclusion. Regime Switching Volatility Model and Calibration Regime Switching Volatility Wiener process driven stochastic volatility models capture price and volatility dynamics more successfully than previous volatility models. Specifically, such models successfully capture the short term volatility dynamics. However, for longer term dynamics and fundamental economic changes (e.g. the "credit crunch"), no mechanism existed to address the change in volatility dynamics, and it has been empirically shown that volatility is related to long term and fundamental conditions. Bekaert in [BHL06] claims that volatility changes are caused by economic reforms; for example, on Black Wednesday the pound sterling was withdrawn from the ERM (European Exchange Rate Mechanism), causing a sudden change in the value of the pound sterling [BR02]. Schwert [Sch89] empirically shows that volatility increases during financial crises. A class of models that addresses fundamental and long term volatility modelling is the regime switching model (or hidden Markov model), e.g. as discussed in [Tim00], [EvdH97]. In fact, Schwert suggests in [Sch89] that volatility changes during the Great Depression can be accounted for by a regime change, such as in Hamilton's regime switching model [Ham89]. Regime switching is considered a tractable method of modelling price dynamics and does not violate Fama's "Efficient Market Hypothesis" [Fam65], which claims that price processes must follow a Markov process. Hamilton [Ham89] was the first to introduce regime switching models, which were applied specifically to model fundamental economic changes.
For regime switching models, generally the return distribution rather than the continuous time process is specified. A typical example of a regime switching model is Hardy's model [Har01]:
$$ \log\left(X(t+1)/X(t)\right)\,\big|\,i \;\sim\; N(u_i, \varphi_i), \qquad i \in \{1, \dots, R\}, \tag{1} $$
where
• $\varphi_i$ and $u_i$ are constant for the duration of the regime;
• i denotes the current regime (also called the Markov state or hidden Markov state);
• R denotes the total number of regimes;
• a transition matrix A is specified.
The regime process $q_t$ satisfies the Markov property:
$$ p(q_{t+1} = j \mid q_1, q_2, \dots, q_t = i) = p(q_{t+1} = j \mid q_t = i), \tag{2} $$
where
• $q_t$ is the Markov state (or regime) at time t of X(t);
• i and j are specific Markov states.
As time passes the process may remain in the same state or change to another state (known as a state transition). The state transition probability matrix (also known as the transition kernel or stochastic matrix) A, with elements $a_{ij}$, tells us the probability of the process changing to state j given that we are now in state i, that is $a_{ij} = p(q_{t+1} = j \mid q_t = i)$. Note that $a_{ij}$ is subject to the standard probability constraints:
$$ 0 \le a_{ij} \le 1, \quad \forall i, j, \tag{3} $$
$$ \sum_{j=1}^{\infty} a_{ij} = 1, \quad \forall i. \tag{4} $$
We assume that all probabilities are stationary in time. From the definition of a Markov model the following proposition follows: Proposition 1. A Markov model is completely defined once the following parameters are known:
• R, the total number of regimes or (hidden) states;
• the state transition probability matrix A of size R×R, with elements $a_{ij} = p(q_{t+1} = j \mid q_t = i)$, where i refers to the matrix row number and j to the column number of A;
• the initial (t=1) state probabilities $\pi_i = p(q_1 = i)$, ∀i.
A hidden Markov model is completely defined once the following parameters are known:
• R, the total number of (hidden) states or regimes;
• A, the (hidden) state transition matrix of size R × R, with elements $a_{ij} = p(q_{t+1} = j \mid q_t = i)$;
• the initial (t=1) state probabilities $\pi_i = p(q_1 = i)$, ∀i;
• B, the observation matrix, where each entry is $b_j(O_t) = p(O_t \mid j)$ for observation $O_t$. Here $b_j(O_t)$ is typically defined to follow some continuous distribution, e.g.
$b_j(O_t) \sim N(u_j, \varphi_j)$. Current Calibration Method: Hamilton Filter For a regime switching process the conditional observations $(O_t \mid O_{t-1}), (O_{t-1} \mid O_{t-2}), (O_{t-2} \mid O_{t-3}), \dots$ are independent, so the general likelihood function L(Θ) is:
$$ L(\Theta) = f(O_1 \mid \Theta)\, f(O_2 \mid \Theta, O_1)\, f(O_3 \mid \Theta, O_1, O_2) \cdots f(O_T \mid \Theta, O_1, O_2, \dots, O_{T-1}), $$
where $f(O_{(.)} \mid \Theta)$ is the probability of $O_{(.)}$ given model parameters Θ. By the properties of logarithms we have:
$$ \log(L(\Theta)) = \log(f(O_1 \mid \Theta)) + \log(f(O_2 \mid \Theta, O_1)) + \cdots \tag{5} $$
$$ \qquad + \log(f(O_T \mid \Theta, O_1, O_2, \dots, O_{T-1})). \tag{6} $$
Hamilton proposes a likelihood function for regime switching models, the Hamilton filter. As an example, if we assume a two regime model with each regime having a lognormal return distribution, we wish to determine the parameters $\Theta = \{u_1, u_2, \varphi_1, \varphi_2, a_{12}, a_{21}\}$. Note that in this simple HMM $a_{22} = 1 - a_{12}$ and $a_{11} = 1 - a_{21}$, therefore we do not need to estimate $a_{22}, a_{11}$ in Θ. To obtain $f(O_t \mid \Theta)$ in equation (6) for t > 1, Hamilton showed it could be calculated by a recursive filter. We observe the relation:
$$ f(O_t \mid \Theta, O_1, O_2, \dots, O_{t-1}) = \sum_{q_t=1}^{2} \sum_{q_{t-1}=1}^{2} f(q_t, q_{t-1}, O_t \mid \Theta, O_1, \dots, O_{t-1}). \tag{7} $$
Now using the relation
$$ p(O, Q \mid \Theta) = p(O \mid \Theta, Q)\, p(Q \mid \Theta), \tag{8} $$
where $Q = q_1 q_2 \dots$ represents some arbitrary state sequence, we make the substitution
$$ f(q_t, q_{t-1}, O_t \mid \Theta, O_1, \dots, O_{t-1}) = p(q_{t-1} \mid \Theta, O_1, \dots, O_{t-1}) \times p(q_t \mid q_{t-1}, \Theta) \times f(O_t \mid q_t, \Theta). \tag{9} $$
Therefore
$$ f(O_t \mid \Theta, O_1, O_2, \dots, O_{t-1}) = \sum_{q_t=1}^{2} \sum_{q_{t-1}=1}^{2} p(q_{t-1} \mid \Theta, O_1, \dots, O_{t-1}) \times p(q_t \mid q_{t-1}, \Theta) \times f(O_t \mid q_t, \Theta), \tag{10} $$
where
• $p(q_t \mid q_{t-1}, \Theta) = p(q_t = j \mid q_{t-1} = i, \Theta)$ represents the transition probability $a_{ij}$ we wish to estimate;
• $f(O_t \mid q_t = i, \Theta) = p_i(O_t)$ where $p_i(\cdot) \sim N(u_i, \varphi_i)$ is the Gaussian probability density function for state i, whose parameters $u_i, \varphi_i$ we wish to estimate.
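For reference, the data-generating process of Eq. (1), whose parameters the filter estimates, can be simulated in a few lines. The sketch below is illustrative only: all parameter values are invented, and $\varphi_i$ is treated here as the standard deviation of regime i.

```python
import math
import random

# Minimal simulation of the regime switching model of Eq. (1): within regime
# i the log-return is N(u_i, phi_i) and the regime follows a Markov chain
# with transition matrix A.  Illustrative values only; phi[i] is taken here
# as the standard deviation of regime i.
def simulate(x0, u, phi, A, T, rng):
    """Return the price path X(0..T) and the regime sequence q(0..T)."""
    q, xs, qs = 0, [x0], [0]                 # start in regime 1 (index 0)
    for _ in range(T):
        q = rng.choices(range(len(A)), weights=A[q])[0]
        xs.append(xs[-1] * math.exp(rng.gauss(u[q], phi[q])))
        qs.append(q)
    return xs, qs

u, phi = [0.001, -0.002], [0.01, 0.04]       # "calm" vs. "turbulent" regime
A = [[0.98, 0.02], [0.05, 0.95]]             # persistent regimes
xs, qs = simulate(100.0, u, phi, A, 250, random.Random(7))
```

A quick sanity check of the simulator: with an identity transition matrix the process never leaves its initial regime.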
The parameters $\Theta = \{u_1, u_2, \varphi_1, \varphi_2, a_{12}, a_{21}\}$ are obtained by maximising the likelihood function using some chosen search method. To calculate $f(O_t \mid \Theta, O_1, O_2, \dots, O_{t-1})$ we require the probability $p(q_{t-1} \mid \Theta, O_1, O_2, \dots, O_{t-1})$ in equation (10) (summed over the two different values of $q_{t-1}$ in the summations in equation (10)). This can be achieved through recursion; that is, the probability $p(q_{t-1} \mid \Theta, O_1, \dots, O_{t-1})$ can be obtained from $p(q_{t-2} \mid \Theta, O_1, \dots, O_{t-2})$:
$$ p(q_{t-1} \mid \Theta, O_1, O_2, \dots, O_{t-1}) = \frac{\sum_{i=1}^{2} f(q_{t-1}, q_{t-2} = i, O_{t-1} \mid \Theta, O_1, \dots, O_{t-2})}{f(O_{t-1} \mid \Theta, O_1, \dots, O_{t-2})}. \tag{11} $$
The denominator of equation (11) is obtained from the previous period of $f(O_t \mid \Theta, O_1, O_2, \dots, O_{t-1})$ (in other words $f(O_{t-1} \mid \Theta, O_1, O_2, \dots, O_{t-2})$), so by inspecting equation (10) we can see that it is a function of $p(q_{t-2} \mid \Theta, O_1, \dots, O_{t-2})$. The numerator of equation (11) is obtained by calculating equation (9) for the previous time period, which is also a function of $p(q_{t-2} \mid \Theta, O_1, \dots, O_{t-2})$. To start the recursion of equation (11) we require the unconditional state probabilities
$$ \eta_j = \lim_{t\to\infty} p(q_t = j \mid q_1 = i), \quad \forall i, \; j = 1, 2, \dots, R, \tag{12} $$
where
$$ \sum_{j=1}^{R} \eta_j = 1, \qquad \eta_j > 0. \tag{13} $$
The probability $\eta_j$ tells us, in the long run (t → ∞), the (unconditional) probability of being in state j, and this probability is independent of the initial state (at time t = 1). An important interpretation of $\eta_j$ is as the fraction of time spent in state j in the long run. Therefore the probability of state j is simply $\eta_j$, and so
$$ f(O_1 \mid \Theta) = f(q_1 = 1, O_1 \mid \Theta) + f(q_1 = 2, O_1 \mid \Theta), \tag{14} $$
where
$$ f(q_1 = i, O_1 \mid \Theta) = \eta_i\, p_i(O_1). \tag{15} $$
We can therefore calculate $p(q_1 = i \mid O_1, \Theta)$:
$$ p(q_1 = i \mid O_1, \Theta) = \frac{f(q_1 = i, O_1 \mid \Theta)}{f(O_1 \mid \Theta)} = \frac{\eta_i\, p_i(O_1)}{\eta_1 p_1(O_1) + \eta_2 p_2(O_1)}. \tag{16} $$
Furthermore, it can be proved for a two state HMM that
$$ \eta_1 = a_{21}/(a_{12} + a_{21}), \qquad \eta_2 = 1 - \eta_1. \tag{17} $$
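A minimal sketch of the complete two-regime filter log-likelihood, assuming Gaussian densities for the log-returns and initialising with the unconditional probabilities $\eta_i$ of Eq. (17); this is an illustration of Eqs. (6)-(17), not the author's code.

```python
import math

def npdf(x, u, var):
    """Gaussian density with mean u and variance var."""
    return math.exp(-(x - u) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def hamilton_loglik(obs, u, var, a12, a21):
    """Log-likelihood of a 2-regime Gaussian model via the Hamilton filter.

    obs are log-returns; regime i has density N(u[i], var[i]).
    Initialisation uses the unconditional probabilities eta, Eq. (17)."""
    A = [[1.0 - a12, a12], [a21, 1.0 - a21]]
    eta1 = a21 / (a12 + a21)
    # t = 1: Eqs. (14)-(16)
    w = [eta1 * npdf(obs[0], u[0], var[0]),
         (1.0 - eta1) * npdf(obs[0], u[1], var[1])]
    f = w[0] + w[1]
    loglik, p = math.log(f), [w[0] / f, w[1] / f]
    for o in obs[1:]:
        # Eq. (9): joint density of (q_{t-1} = i, q_t = j, O_t)
        joint = [[p[i] * A[i][j] * npdf(o, u[j], var[j])
                  for j in range(2)] for i in range(2)]
        f = sum(sum(row) for row in joint)                       # Eq. (10)
        p = [(joint[0][j] + joint[1][j]) / f for j in range(2)]  # Eq. (11)
        loglik += math.log(f)
    return loglik
```

A useful check of the implementation: when the two regimes have identical parameters, the filter log-likelihood must collapse to the i.i.d. Gaussian log-likelihood, regardless of the transition probabilities.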
Therefore p(q_1 = i|O_1, Θ) can be obtained from estimating the parameter set Θ = {u_1, u_2, ϕ_1, ϕ_2, a_12, a_21}, which is obtained by a chosen search method.

The advantages of Hamilton's filter method are, firstly, that we do not need to specify or determine the initial probabilities, so there are fewer parameters to estimate (compared to the alternative Baum-Welch method); therefore the MLE parameter optimisation will be over a lower-dimensional search space. Secondly, the MLE equation is simpler to understand and so easier to implement compared to other calibration methods.

Baum-Welch Algorithm

The Baum-Welch (BW) is a complete estimation method since it also provides the required optimisation method to determine the parameters by MLE. We will now explain the BW algorithm, and to do so we must first explain the forward algorithm.

Forward Algorithm

The forward algorithm calculates p(O|M), the probability of a fixed or observed sequence O = O_1 O_2 ... O_T, given all the HMM parameters denoted by M = {A, B, π}. We recall from the definition of HMM that the probability of each observation p(O_t) will change depending on the state at time t (q_t). Hence the most straightforward way to calculate p(O|M) is:

p(O|M) = Σ_{all Q} p(O, Q|M) (18)
       = Σ_{all Q} p(O|M, Q) p(Q|M) (19)
       = Σ_{all Q} π_{q_1} b_{q_1}(O_1) a_{q_1 q_2} b_{q_2}(O_2) ... a_{q_{T−1} q_T} b_{q_T}(O_T), (20)

where

p(O|M, Q) = b_{q_1}(O_1) b_{q_2}(O_2) ... b_{q_T}(O_T). (21)

Here "all Q" means all possible state sequences q_1 q_2 ... q_T that could account for observation sequence O.

To overcome the computational difficulty of calculating p(O|M) in equation (20) we apply the forward algorithm, which uses recursion (dynamic programming). The forward algorithm only requires computations of the order R²T and so is significantly faster than calculating equation (20) for large R and T.
Let us define the forward variable κ_t(i):

κ_t(i) = p(O_1 O_2 ... O_t, q_t = i|M). (22)

If we can determine κ_T(i) we can calculate p(O|M), since:

p(O|M) = Σ_{i=1}^{R} κ_T(i). (23)

Now κ_{t+1}(j) can be expressed in terms of κ_t(i), therefore we can calculate κ_{t+1}(j) by recursion:

κ_{t+1}(j) = [Σ_{i=1}^{R} κ_t(i) a_ij] b_j(O_{t+1}), 1 ≤ t ≤ T − 1. (24)

Therefore the recursive algorithm is as follows:

1. Initialisation:

   κ_1(i) = π_i b_i(O_1), 1 ≤ i ≤ R. (25)

2. Recursion:

   κ_{t+1}(j) = [Σ_{i=1}^{R} κ_t(i) a_ij] b_j(O_{t+1}), 1 ≤ t ≤ T − 1. (26)

3. Termination: t + 1 = T. Final output:

   p(O|M) = Σ_{i=1}^{R} κ_T(i). (27)

At t = 1 no sequence exists, but we initialise the recursion with π_i to determine κ_1(i).

Baum-Welch Algorithm

Having explained the forward algorithm we can now explain the BW algorithm. No method of analytically finding the globally optimal M exists; however, it has been theoretically proven that BW is guaranteed to find the local optimum [Rab89]. Let us define ψ_t(i, j):

ψ_t(i, j) = p(q_t = i, q_{t+1} = j | O, M). (28)

The variable ψ_t(i, j) is the probability of being in state i at time t and state j at time t+1, given the HMM parameters M and the observed observation sequence O. We can re-express ψ_t(i, j) as:

ψ_t(i, j) = p(q_t = i, q_{t+1} = j | O, M) (29)
          = p(q_t = i, q_{t+1} = j, O | M) / p(O|M). (30)

Now we can re-express equation (30) using the forward variable κ_t(i) = p(O_1 O_2 ... O_t, q_t = i|M) and, analogously, the so-called backward variable ̺_{t+1}(i):

̺_t(i) = p(O_{t+1} O_{t+2} ... O_T |q_t = i, M), (31)

so that

̺_{t+1}(i) = p(O_{t+2} O_{t+3} ... O_T |q_{t+1} = i, M). (32)

The backward variable ̺_t(i) is the probability of the partial observed observation sequence from time t+1 to the end T, given M and that the state at time t is i. It is calculated by a recursive method similar to the forward variable, using the backward algorithm (see [Rab89] for more details).
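The forward recursion (25)-(27), together with the analogous backward recursion for ̺_t(i), can be sketched as follows; the array layout B[t, j] = b_j(O_{t+1}) (emission likelihood of the t-th observation) is an assumption of this sketch:

```python
import numpy as np

def forward(pi, A, B):
    """Forward algorithm, equations (25)-(27).
    pi: (R,) initial state probabilities; A: (R, R) transition matrix;
    B[t, j]: likelihood of the t-th observation under state j.
    Runs in O(R^2 T) rather than the O(R^T) direct sum of equation (20)."""
    kappa = np.zeros_like(B)
    kappa[0] = pi * B[0]                       # initialisation, eq. (25)
    for t in range(1, len(B)):
        kappa[t] = (kappa[t - 1] @ A) * B[t]   # recursion, eq. (26)
    return kappa[-1].sum(), kappa              # termination, eq. (27)

def backward(A, B):
    """Backward algorithm for rho_t(i) of equation (31)."""
    rho = np.ones_like(B)
    for t in range(len(B) - 2, -1, -1):
        rho[t] = A @ (B[t + 1] * rho[t + 1])
    return rho
```

A useful sanity check is the identity Σ_i κ_t(i) ̺_t(i) = p(O|M) for every t.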
Hence we can rewrite ψ_t(i, j) as

ψ_t(i, j) = κ_t(i) a_ij b_j(O_{t+1}) ̺_{t+1}(j) / p(O|M). (33)

We can also rewrite the denominator p(O|M) in terms of the forward and backward variables, so that ψ_t(i, j) is entirely expressed in terms of κ_t(i), a_ij, b_j(O_{t+1}) and ̺_{t+1}(j):

p(O|M) = Σ_{i=1}^{R} Σ_{j=1}^{R} κ_t(i) a_ij b_j(O_{t+1}) ̺_{t+1}(j). (34)

Now let us define Γ_t(i):

Γ_t(i) = p(q_t = i|O, M) (35)
       = Σ_{j=1}^{R} ψ_t(i, j). (36)

Equation (36) can be understood from the definition of ψ_t(i, j) in equation (29): summing ψ_t(i, j) over all j must give p(q_t = i|O, M), the probability of being in state i at time t, given the observation sequence O and model M. Now if we sum Γ_t(i) from t = 1 to T−1 it gives us Υ(i), the expected number of transitions made from state i:

Υ(i) = Σ_{t=1}^{T−1} Γ_t(i). (37)

If we sum Γ_t(i) from t = 1 to T it gives us ϑ(i), the expected number of times state i is visited:

ϑ(i) = Σ_{t=1}^{T} Γ_t(i). (38)

We are now in a position to estimate M. The variable a_ij is estimated as the expected number of transitions from state i to state j divided by the expected number of transitions from state i:

ā_ij = Σ_{t=1}^{T−1} ψ_t(i, j) / Υ(i). (39)

The variable π_i is estimated as the expected number of times in state i at time t = 1:

π̄_i = Γ_1(i). (40)

The variable b_j(s) is estimated as the expected number of times in state j observing a particular signal s, divided by the expected number of times in state j:

b̄_j(s) = Σ_{t=1}^{T} Γ_t(j)′ / ϑ(j), (41)

where Γ_t(j)′ is Γ_t(j) with the condition O_t = s.

We can now describe our BW algorithm:

1. Initialisation: input initial values of M (otherwise randomly initialise) and calculate p(O|M) using the forward algorithm.

2. Estimate new values of M: iterate until convergence:

   (a) Using the current M, calculate the variables κ_t(i) and ̺_{t+1}(j) by the forward and backward algorithms, and then calculate ψ_t(i, j) as in equation (33).

Since the BW algorithm has been proven to always converge to the local optimum, the BW will output the local optimum.
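Step (a) of the iteration and the re-estimation of ā_ij via equations (33)-(39) can be sketched in vectorised form. This is an illustrative sketch, not the thesis implementation; the emission array B[t, j] = b_j(O_{t+1}) is assumed precomputed:

```python
import numpy as np

def bw_reestimate_A(pi, A, B):
    """One Baum-Welch re-estimation of the transition matrix via
    equations (33)-(39). B[t, j] holds the likelihood of the t-th
    observation under state j."""
    T, R = B.shape
    # forward variables kappa_t(i), equation (22)
    kappa = np.zeros((T, R))
    kappa[0] = pi * B[0]
    for t in range(1, T):
        kappa[t] = (kappa[t - 1] @ A) * B[t]
    # backward variables rho_t(i), equation (31)
    rho = np.ones((T, R))
    for t in range(T - 2, -1, -1):
        rho[t] = A @ (B[t + 1] * rho[t + 1])
    pO = kappa[-1].sum()                       # p(O | M)
    # psi_t(i, j), equation (33), stacked over t = 1..T-1
    psi = (kappa[:-1, :, None] * A[None, :, :]
           * (B[1:] * rho[1:])[:, None, :]) / pO
    gamma = psi.sum(axis=2)                    # Gamma_t(i), equation (36)
    # expected i->j transitions over expected transitions from i, eq. (39)
    return psi.sum(axis=0) / gamma.sum(axis=0)[:, None]
```

With uninformative (uniform) emissions the re-estimate leaves A unchanged, which is a convenient unit test.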
We also note that correct choice of R is important, since p(O|M) changes as M changes for a fixed O; however, this disadvantage is common to all MLE methods.

Multivariate Gaussian Mixture Baum-Welch Calibration

To account for the variety of empirical distributions possible for various assets, and to capture asymmetric properties arising from volatility (such as fat tails), we model each regime's distribution by a two-component multivariate Gaussian mixture (GM), which is a mixture of two multinormal distributions.

Definition 2. (Multinormal Distribution) Let X = (X_1, X_2, ..., X_n) be an n-dimensional random vector where each dimension is a random variable. Let u = (u_1, u_2, ..., u_n) represent an n-dimensional vector of means, and Σ represent an n × n covariance matrix. We say X follows a multinormal distribution if

X ∼ N_n(u, Σ), (42)

which may alternatively be written as

(X_1, ..., X_n)^T ∼ N_n((u_1, ..., u_n)^T, Σ), where Σ = (ϕ_kl), k, l = 1, ..., n. (43)

The probability density of X is

p(X) = (2π)^{−n/2} det(Σ)^{−1/2} exp(−(1/2)(X − u)^T Σ^{−1} (X − u)), (44)

where det(Σ) denotes the determinant of Σ.

Definition 3. (Multivariate Gaussian Mixture) A multivariate Gaussian mixture consists of a mixture of K multinormal distributions, spanning n dimensions. It is defined by:

X ∼ c_1 N_n(u_1, Σ_1) + ... + c_K N_n(u_K, Σ_K), (45)

where c_k are weights and

Σ_{k=1}^{K} c_k = 1, c_k ≥ 0. (46)

The term p_gmm(X) denotes the density of a multivariate Gaussian mixture variable X and is defined as

p_gmm(X) = Σ_{k=1}^{K} c_k p_k(X), (47)

where p_k(X) ∼ N_n(u_k, Σ_k).

If we model a stochastic process X by a Gaussian mixture for each regime, then for a given regime j we have:

X ∼ c_j1 N_n(u_j1, Σ_j1) + ... + c_jK N_n(u_jK, Σ_jK). (48)

The density of X for a given regime j, p_gmm(X)_j, is:

p_gmm(X)_j = Σ_{k=1}^{K} c_jk p_jk(X), (49)

where

• p_jk(X) ∼ N_n(u_jk, Σ_jk);
• c_jk are weights for each regime j and Σ_{k=1}^{K} c_jk = 1, ∀j. (50)
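Equations (44) and (47) can be sketched directly (an illustrative sketch; the function names are not from the thesis):

```python
import numpy as np

def multinormal_pdf(x, u, Sigma):
    """Density of X ~ N_n(u, Sigma), equation (44)."""
    n = len(u)
    d = x - u
    quad = d @ np.linalg.solve(Sigma, d)       # (X-u)^T Sigma^{-1} (X-u)
    norm_const = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

def gm_pdf(x, weights, means, covs):
    """Gaussian-mixture density p_gmm(X), equation (47)."""
    return sum(c * multinormal_pdf(x, u, S)
               for c, u, S in zip(weights, means, covs))
```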
Note that the dimensions n of the multivariate distribution are independent of the number of mixture components K. For an n-asset portfolio X(t) = (X_1(t), X_2(t), ..., X_n(t)), where X_i(t) represents the stock price of asset i, with each asset following a Gaussian mixture, the portfolio returns would be modelled by:

dX/X ∼ c_j1 N_n(u_j1, Σ_j1) + c_j2 N_n(u_j2, Σ_j2). (51)

For practical calibration purposes we set the multivariate observation vector O_t to annual log returns:

O_t = log(X(t + ∆t)/X(t)), (52)

where ∆t = 1 year.

Combining GM with HMM gives us a GM-HMM (Gaussian mixture HMM) model, and the BW algorithm can be adapted to it: Gaussian mixture BW (GM-BW). For O_t, our observation (vector) at time t, we model b_j(O) by a GM:

b_j(O) = p_gmm(O)_j. (53)

The BW algorithm for calculating Ā and π̄_i remains the same; for B we have a GM. We would like to obtain the GM mixture coefficients c_jk, mean vectors u_jk and covariance matrices Σ_jk, whose estimates are c̄_jk, ū_jk and Σ̄_jk respectively. These can be incorporated within the BW algorithm as detailed by Rabiner [Rab89]:

ū_jk = Σ_{t=1}^{T} Γ_t(j, k) O_t / Σ_{t=1}^{T} Γ_t(j, k), (54)

Σ̄_jk = Σ_{t=1}^{T} Γ_t(j, k) (O_t − ū_jk)(O_t − ū_jk)^T / Σ_{t=1}^{T} Γ_t(j, k), (55)

c̄_jk = Σ_{t=1}^{T} Γ_t(j, k) / Σ_{t=1}^{T} Σ_{k=1}^{K} Γ_t(j, k), (56)

where

Γ_t(j, k) = [κ_t(j) ̺_t(j) / Σ_{j=1}^{N} κ_t(j) ̺_t(j)] × [c_jk p_jk(O_t) / p_gmm(O_t)_j]. (57)

Advantages of Baum-Welch Calibration

The BW algorithm has significant advantages over the Hamilton filter. Firstly, the Hamilton filter requires observation data to be taken from the invariant distribution in order to estimate the parameters (see equation (12)). Obtaining observations from the invariant distribution implies the number of state transitions approaches a large limit, so the method is not suited to Markov chains that have run for only a short time. Furthermore, the time to reach the invariant distribution increases with the number of regimes R and the number of Gaussian mixtures K.
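The per-component split in equation (57) can be sketched as follows, assuming the state posterior κ_t(j)̺_t(j)/Σ_j κ_t(j)̺_t(j) has already been computed; the function name is illustrative:

```python
import numpy as np

def mixture_responsibility(gamma_tj, c, comp_dens):
    """Gamma_t(j, k) of equation (57): the state posterior gamma_tj
    (shape (R,)) is split across the K mixture components of each state
    in proportion to c_jk * p_jk(O_t).
    c, comp_dens: arrays of shape (R, K)."""
    w = c * comp_dens                              # c_jk * p_jk(O_t)
    return gamma_tj[:, None] * w / w.sum(axis=1, keepdims=True)
```

Summing the result over k recovers the state posterior, which matches equation (36)'s marginalisation logic.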
Psaradakis and Sola [PS98] investigated the finite sample properties of the Hamilton filter for financial data. They concluded that samples of at least 400 observations are required for a simple two state regime switching model where each state's observation is modelled by a normal distribution. Secondly, the Hamilton filter has no method of estimating the initial state probabilities whereas the BW is able to take account of and estimate initial state probabilities. This has a number of important consequences: 1. BW does not require observations from the invariant distribution and so can be calibrated to data of any observation length. 4. we cannot determine the most likely state sequence that accounts for a given observation sequence and HMM, which can be obtained by the Viterbi algorithm. The Viterbi algorithm tells us the most likely state sequence for a given observation sequence and HMM parameters M (see Forney [FJ73] for more information). 5. without the initial state probabilities, we cannot simulate state sequences since the initial state radically alters the state sequence and its influence on the state sequence increases as the sequence size decreases. Consequently we cannot validate a model's feasibility by simulation. Note that BW estimates initial state probabilities independently of the transition probabilities, whereas in the Hamilton filter η i is a function of estimated transition probabilities. Hence BW is able to independently estimate more HMM parameters than the Hamilton filter. Finally, the BW algorithm is a complete HMM estimation method whereas the Hamilton filter is not. Hamilton's method provides no method or guidance as to the optimisation algorithm to apply for finding the parameters from the non-linear filter, yet the solutions can be significantly influenced by the non-convex optimisation method applied. The BW algorithm includes an estimation method for the full HMM and a numerical optimisation scheme. 
Additionally, the BW method is guaranteed to find the local optimum.

Numerical Experiment: Baum-Welch GM-HMM Calibration Results

In this section we calibrate a 2-state regime switching model with 2 Gaussian components to S&P 500 data from 1976-1996. We fitted the model:

dX/X ∼ c_j1 N_1(u_j1, ϕ_j1) + c_j2 N_1(u_j2, ϕ_j2), j ∈ {1, 2}. (58)

We set our observation to annual log returns:

O_t = log(X(t + ∆t)/X(t)), (59)

where ∆t = 1 year and X(t) is the stock price.

Procedure

Due to GM-BW's wide usage in engineering, it has already been implemented by numerous authors. We chose K. Murphy's Matlab implementation [Mur08] because it is considered one of the most standard and cited GM-BW programs. It also offers many useful features that are unavailable in other implementations, e.g. the Viterbi algorithm for obtaining state sequences.

The GM-BW algorithm finds the best GM-HMM parameters that maximise the likelihood of the observations; however, this involves searching a nonconvex search space, and BW only finds the local optimum. Theoretically, the globally optimal parameters can be determined by initialising the GM-BW algorithm over every possible starting point; the globally optimal parameters are then those that give the highest likelihood. However, the GM-BW algorithm finds the locally optimal M̄ = (Ā, B̄, π̄) (where B̄ is parameterised by c̄_jk, ϕ̄_jk and ū_jk), so that the calibration problem has a nonconvex solution search space over thirteen dimensions. Due to the high dimensionality of the parameter estimation problem for either the univariate or multivariate case, determining the optimal parameters by initialising through different starting points is impractical. Instead we concentrated our effort on finding good initial parameter estimates for M. This was to significantly increase the probability of finding the best GM-BW solutions, particularly as initialisation strongly influences the GM-BW optimisation [LDLK04].
π Initialisation

To initialise π, we assign a probability of 0.5 to state one if the first observed return is positive, with the probability increasing in value the more positive it is.

A Initialisation

Given that the HMM structure was chosen for its ability to capture long term and fundamental properties, we can initialise A based on the long term and fundamental properties we expect it to possess:

A =
  0.6  0.4
  0.7  0.3
(60)

Each state represents an economic regime, so A can be interpreted as follows. The economy in the long term follows an upward drift, so we would expect that it is more likely the HMM remains in state one rather than goes to state two, given it is already in state one; hence we assign probability 0.6. Similarly, if the HMM is in state two we would expect it is far more likely to return to state one than state two, to capture the cyclical behaviour, and in the long term the economy follows an upward trend; hence we assign probabilities 0.7 and 0.3.

GM Initialisation (B)

The GM distribution fitting strongly affects the GM-BW algorithm optimisation, hence it must be satisfactorily initialised for GM-BW to provide acceptable results. However, it is well known that GM distribution fitting in general (without any regime switching) is a non-trivial problem:

• there is a large set of parameters to estimate;
• there exists the issue of uniqueness, that is, for a given non-parametric distribution there does not always exist a unique set of GM parameter values;
• the flexibility of GM distributions to model virtually any unimodal or bimodal distribution means that they incorporate rather than reject any noise in the data into the distribution, so GM fitting is highly sensitive to noise;
• parameter estimation is further complicated by regime switching and the fact that we cannot identify with certainty the (hidden) state associated with each observation.
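The initialisation heuristics of this section can be sketched as follows. The text does not pin down how the first return maps to π, so a logistic squashing is assumed here; the function name and packaging are illustrative:

```python
import numpy as np

def initialise_hmm(returns):
    """Sketch of the initialisation heuristics above (assumptions noted).
    returns: 1-D array of annual log returns."""
    # pi: probability of state one is 0.5 for a zero first return and
    # grows as the first return becomes more positive (logistic assumed).
    p1 = 1.0 / (1.0 + np.exp(-returns[0]))
    pi = np.array([p1, 1.0 - p1])
    # A: fixed prior reflecting upward drift and cyclicality, eq. (60).
    A = np.array([[0.6, 0.4],
                  [0.7, 0.3]])
    # Provisionally split observations into regimes by sign of return,
    # as done before fitting a GM to each "regime's" data.
    state_one = returns[returns > 0]
    state_two = returns[returns <= 0]
    return pi, A, state_one, state_two
```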
Rather than randomly initialise the GM parameters (as is done in Murphy's program), we approximately divided the data into each regime by classifying positive returns as belonging to state one, otherwise they belong to state two. We then fit a GM to each "regime's" data using a GM fitting program used by Lund University (stixbox).

Results and Discussion

We present the results of the Baum-Welch GM-HMM calibration to S&P 500 data from 1976-1996. The regime sequence concurs with the empirical observations: negative or low returns were categorised into state two, whereas other returns were classed as state one. The results also illustrate the behaviour of the transition matrix A: generally remaining in state one (up state) whilst occasionally entering state two, in which case it quickly reverts back to state one. Hence the GM-HMM is able to capture the key characteristics of the economy, namely an upward trend and cyclicity.

The Viterbi algorithm was also applied to out-of-sample data from 1997-2007. Again the GM-HMM was able to satisfactorily classify the years for each state, as shown in table 6.

Conclusions

This paper has shown the advantages of Baum-Welch calibration over the standard Hamilton filter method. Not only does the Baum-Welch method offer a complete calibration procedure, but it is also able to estimate the full set of HMM parameters, unlike the Hamilton filter. We have also validated the usage of the Baum-Welch method through numerical experiments on S&P 500 data in and out of sample.

A hidden Markov model is simply a Markov model where we assume (as a modeller) we do not observe the Markov states. Instead of observing the Markov states (as in standard Markov models) we detect observations or time series data, where each observation is assumed to be a function of the hidden Markov state, thus enabling statistical inferences about the HMM.
Note that in a HMM it is the states which must be governed by a Markov process, not the observations, and throughout the thesis we will assume one observation occurs after one state transition.

Proposition 2. A hidden Markov model is fully defined when the parameter set {A, B, π} is known; all the HMM parameters are denoted by M = {A, B, π}.

In equation (20), b_(.)(O_(.)) is defined in proposition 2, and p(O|M, Q) is the probability of the observed sequence O, given that it is along one single state sequence Q = q_1 q_2 ... q_T, for HMM M. We must sum equation (20) over all possible state sequences Q, requiring R^T computations, and so this is computationally infeasible even for small R and T.

Using observation sequence O, the BW algorithm iteratively calculates the HMM parameters M = {A, B, π}. Specifically, BW estimates M = {a_ij, b_j(·), π_i} ∀i, j, denoted respectively by M̄ = {ā_ij, b̄_j(·), π̄_i}, such that it maximises the likelihood p(O|M̄).

(b) Using the ψ_t(i, j) calculated in (a), determine new estimates M̄ using equations (36)-(41).
(c) Calculate p(O|M̄) with the new M̄ values using the forward algorithm.
(d) Go to step 3 if two consecutive calculations of p(O|M̄) are equal (or converge within a specified range); otherwise repeat the iteration: go to (a).

We consider the new estimate M̄_n to be a better estimate than the previous estimate M̄_p if p(O|M̄_n) > p(O|M̄_p), with both probabilities calculated via the forward algorithm. In other words, we prefer the M̄ that increases the probability of observation O occurring. If p(O|M̄_n) > p(O|M̄_p) then the iterative calculation is repeated with M̄_n as the input. Note that at the end of step two, if the algorithm re-iterates, then inputting the new M̄ at step 2a means we will get a new set of M̄ after executing 2b. The iteration is stopped when p(O|M̄_n) = p(O|M̄_p), or the two are arbitrarily close, and at this point the BW algorithm finishes.
Here Γ_t(j, k) is the probability at time t of being in state j with the k-th mixture component accounting for O_t. Using the same logic as in section 3 (for non-mixture distributions) we can understand equations (54)-(56); for example, c̄_jk is the expected number of times the HMM's k-th component is in state j divided by the expected number of times in state j.

It is worth noting that a well-known problem in maximum likelihood estimation of GM is that observations with low variances give extremely high likelihoods, in which case the likelihood function does not converge [MT07]. To overcome this problem in the univariate case, Messina and Toscani [MT07] implement Ridolfi's and Idier's [RI02] penalised maximum likelihood function, which limits the likelihood value of observations. This is beneficial in [MT07] because the observation time scales are of the order of days, and therefore the variance of samples may approach zero. For our applications we calibrate the GM-HMM to annual return data, therefore the samples are unlikely to approach variances anywhere near zero.

2. The Hamilton filter cannot fully define the entire HMM model, since the initial state probabilities are one of the key HMM parameters in the definition (see the HMM definition in section 2).

3. We cannot determine the probability of observation sequences p(O|M), since we require the initial state probabilities. This can be understood from the forward algorithm.

Note that economic variables other than stock returns, such as inflation, can also be modelled using regime switching models. Regime switching has been developed by various researchers. For example, Kim and Yoo [KY95] develop a multivariate regime switching model for coincident economic indicators.
Honda [Hon03] determines the optimal portfolio choice in terms of utility for assets following GBM but with continuous-time regime-switching mean returns. Alexander and Kaeck [AK08] apply regime switching to credit default swap spreads, and Durland and McCurdy [DM94] propose a model with a transition matrix that specifies state durations.

For Hardy's model the regime changes discretely in monthly time steps but stochastically, according to a Markov process. Due to the ability of regime switching models to capture long term and fundamental changes, regime switching models are primarily focussed on modelling the long term behaviour, rather than the continuous time dynamics. Therefore regime switching models switch regimes over time periods of months, rather than switching in continuous time. Examples of regime switching models that model dynamics over shorter time periods are Valls-Pereira et al. [VPHS04], who propose a regime switching GARCH process, while Hamilton and Susmel [HS94] give a regime switching ARCH process.

The theory of Markov models (MM) and Hidden Markov models (HMM) provides methods of mathematically modelling the time-varying dynamics of certain statistical processes, requiring a weak set of assumptions yet allowing us to deduce a significant number of properties. MM and HMM model a stochastic process (or any system) as a set of states, with each state possessing a set of signals or observations. The models have been used in diverse applications such as economics [SSS02], queuing theory [SF06], engineering [TG01] and biological modelling [MGPG06]. Following Taylor [TK84] we define a Markov model:

Definition 1. A Markov model is a stochastic process X(t) with a countable set of states that possesses the Markov property:

p(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, ..., X(t_1) = x_1) = p(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n).

The observations O_1, O_2, ..., O_T are statistically independent.
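A minimal simulation of a finite-state Markov chain in the sense of Definition 1 can be sketched as follows (illustrative only; not part of the thesis):

```python
import numpy as np

def simulate_chain(A, pi, T, rng):
    """Simulate T steps of a finite-state Markov chain with transition
    matrix A (rows sum to 1) and initial distribution pi."""
    states = np.empty(T, dtype=int)
    states[0] = rng.choice(len(pi), p=pi)
    for t in range(1, T):
        # Markov property: next state depends only on the current state
        states[t] = rng.choice(A.shape[1], p=A[states[t - 1]])
    return states
```

For example, a chain with a deterministic two-cycle transition matrix alternates between its two states regardless of the random seed.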
Note that for Markov models we assume the conditional observations (O_t|O_{t−1}), (O_{t−1}|O_{t−2}), (O_{t−2}|O_{t−3}), ... are independent.

In the financial mathematics and economics literature the standard calibration method for regime switching models is the Hamilton filter [Ham89], which works by maximum likelihood estimation (MLE). MLE is a method of estimating a set of parameters of a statistical model (Θ) given some time series or empirical observations O_1, O_2, ..., O_T. MLE determines Θ by firstly determining the likelihood function L(Θ), then maximising L(Θ) by varying Θ through a search or an optimisation method. A statistical model with known parameter values can determine the probability of an observation sequence O = O_1 O_2 ... O_T. MLE does the opposite: we numerically adjust the parameter values of our model Θ such that we maximise the probability of the observation sequence O = O_1 O_2 ... O_T. To achieve this the MLE method makes two assumptions:

1. In maximising L(Θ) the local optimum is also the global optimum (although this is generally not true in reality).

2. The optimal values for Θ are in a search space of the same dimensions as Θ.

Hamilton in [Ham94] gives a survey of various MLE maximisation techniques, such as the Newton-Raphson method. To start the recursion at p(q_1 = i|O_1, Θ) we require f(O_1|Θ). Hamilton assumes the Markov chain has been running sufficiently long that we can make the following assumption about our observations O_1, O_2, ..., O_T. Technically, Hamilton assumes the observations O_1, O_2, ..., O_T are all drawn from the Markov chain's invariant distribution. If a Markov chain has been running for a sufficiently long time, this property of Markov chains can be applied (see equation (12)).

Given the HMM M, κ_t(i) is the probability of the joint observation up to time t of O_1 O_2 ... O_t and that the state at time t is i, i.e. q_t = i.
If we can determine κ_T(i) we can calculate p(O|M), since p(O|M) = Σ_{i=1}^{R} κ_T(i). The recursion of equation (24) can be understood as follows: κ_t(i) a_ij is the probability of the joint event that O_1 ... O_t is observed, the state at time t is i, and state j is reached at time t+1. If we sum this probability over all R possible states i, we get the probability of j at t+1 accompanied by all previous observations O_1 O_2 ... O_t only. Thus to get κ_{t+1}(j) we must multiply by b_j(O_{t+1}), so that we have all observations O_1 ... O_{t+1}.

Table 1: Initial State Probabilities (π̄_i)

  State (i)   Probability
  1           1 × 10^-6
  2           1 − 1 × 10^-6

Ā =
  0.78  0.22
  0.82  0.18

Table 2: Mixture Means ū_jk (%/year)

  Component (k)   State 1   State 2
  N_1              13.0      -4.8
  N_2              28.0       1.4
  Overall          14.8      -4.7

Table 3: Mixture Standard Deviations √ϕ̄_jk (%/year)

  Component (k)   State 1   State 2
  N_1               4.5       5.6
  N_2              28.0     110.0
  Overall          11.6      13.3

Table 4: Weighting Matrix (c̄_jk)

From table 2 we can infer that the BW algorithm has attributed state two as the down state, since its overall mean is negative, unlike state one. Using Markowitz's variance measure of risk [Mar52], it is encouraging that we can conclude that the risk level in state two is higher than in state one (see table 3), since a declining economy (state two) is a riskier economic state. Additionally, an increase in variance (and therefore volatility) with lower returns is consistent with the leverage effect. The initial state probabilities π̄ strongly suggest the HMM starts in state two, and this is consistent with the data, as the first year's return (see table 5) is relatively low (1.2%).

The transition matrix Ā is similar to the transition matrix theoretically postulated (for an economy); thus it is consistent with our theoretical expectations. The matrix Ā tells us that, given the model is in state one, it is likely to stay in that state most of the time, only transitioning to state two 22% of the time.
Additionally, in state two the model is most likely to return to state one (probability 0.78), thus capturing the cyclical nature of the economy.

To validate the quality of the GM-HMM calibration in terms of state sequence generation, we ran the Viterbi algorithm. Note that calibration under the Hamilton filter does not provide or enable state sequence estimation. The Viterbi algorithm gave the following state sequence result for in-sample data 1976-96 in table 5:

Table 4: Weighting Matrix (c̄_jk)

  Component (k)   State 1   State 2
  N_1              0.88      0.99
  N_2              0.12      0.01

Table 5: In Sample Regime Sequence Results

  Year   Regime   Empirical Annual Return (%)
  1976   2          1.2
  1977   2        -13.4
  1978   1         11.3
  1979   1         13.4
  1980   1         12.6
  1981   2         -7.3
  1982   1         18.8
  1983   1         11.7
  1984   1          9.5
  1985   1         16.5
  1986   1         25.8
  1987   2         -6.5
  1988   1         14.6
  1989   1         10.1
  1990   1          4.4
  1991   1         17.3
  1992   1          7.1
  1993   1          9.3
  1994   2         -2.4
  1995   1         30.2
  1996   1         21.2

Table 6: Out of Sample Regime Sequence Results

  Year   Regime   Empirical Annual Return (%)
  1997   1         22.1
  1998   1         26.6
  1999   1          8.6
  2000   2         -2.1
  2001   2        -18.9
  2002   1        -27.8
  2003   1         27.9
  2004   1          4.3
  2005   1          8.0
  2006   1         11.6
  2007   2         -2.6

Note: following discussions with Prof. Rabiner [Rab08] on the equation for p(O, Q|Θ), it was concluded that the equation for p(O, Q|Θ) in Rabiner's paper [Rab89] is incorrect.

References

C. Alexander and A. Kaeck. Regime dependent determinants of credit default swap spreads. Journal of Banking and Finance, 32(6):637-648, 2008.

P. Boufounos, S. El-Difrawy, and D. Ehrlich. Basecalling using hidden Markov models. Journal of the Franklin Institute, 341(1-2):23-36, 2004.

G. Bekaert, C.R. Harvey, and C. Lundblad.
Growth volatility and financial liberalization. Journal of International Money and Finance, 25(3):370-403, 2006.

L. Bauwens, S. Laurent, and J.V.K. Rombouts. Multivariate GARCH Models: A Survey. Journal of Applied Econometrics, 21(1):79-109, 2006.

C. Brooks and A.G. Rew. Testing for non-stationarity and cointegration allowing for the possibility of a structural break: an application to EuroSterling interest rates. Economic Modelling, 19(1):65-90, 2002.

I. Buckley, D. Saunders, and L. Seco. Portfolio optimization when asset returns have the Gaussian mixture distribution. European Journal of Operational Research, 185(3):1434-1461, 2007.

J.M. Durland and T.H. McCurdy. Duration-Dependent Transitions in a Markov Model of US GNP Growth. Journal of Business and Economic Statistics, 12(3):279-288, 1994.

R.J. Elliott and J. van der Hoek. An application of hidden Markov models to asset allocation problems. Finance and Stochastics, 1(3):229-238, 1997.

E.F. Fama. The behaviour of stock market prices. Journal of Business, 38(1):34-105, 1965.

G.D. Forney Jr. The Viterbi algorithm. Proceedings of the IEEE, 61(3):268-278, 1973.

J.D. Hamilton. A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle. Econometrica, 57(2):357-384, 1989.

J.D. Hamilton. A Quasi-Bayesian Approach to Estimating Parameters for Mixtures of Normal Distributions. Journal of Business & Economic Statistics, 9(1):27-39, 1991.

J.D. Hamilton. Time series analysis. Princeton, 1994.

M.R. Hardy. A Regime-Switching Model of Long-Term Stock Returns. North American Actuarial Journal, 5(2):41-53, 2001.

T. Honda. Optimal portfolio choice for unobservable and regime-switching mean returns. Journal of Economic Dynamics and Control, 28(1):45-78, 2003.

J.D. Hamilton and R. Susmel. Autoregressive Conditional Heteroskedasticity and Changes in Regime. Journal of Econometrics, 64(1-2):307-33, 1994.

B.H. Juang and L.R. Rabiner. Hidden Markov models for speech recognition. Technometrics, 33(3):251-272, 1991.

M.J. Kim and J.S. Yoo. New index of coincident indicators: A multivariate Markov switching factor model approach. Journal of Monetary Economics, 36(3):607-630, 1995.

N. Liu, R.I.A. Davis, B.C. Lovell, and P.J. Kootsookos. Effect of initial HMM choices in multiple sequence training for gesture recognition. Information Technology: Coding and Computing, 1(1):5-7, 2004.

S.E. Levinson. Mathematical Models for Speech Technology. Wiley, 2005.

H. Markowitz. Portfolio Selection. The Journal of Finance, 7(1):77-91, 1952.

C. Melodelima, L. Guéguen, D. Piau, and C. Gautier. A computational prediction of isochores based on hidden Markov models. Gene, 385(1):41-49, 2006.

E. Messina and D. Toscani. Hidden Markov models for scenario generation. IMA Journal of Management Mathematics, 19(4):379-401, 2007.

K. Murphy. Hidden markov model (hmm) toolbox for matlab.

Z. Psaradakis and M. Sola. Finite-sample properties of the maximum likelihood estimator in autoregressive models with Markov switching. Journal of Econometrics, 86(2):369-386, 1998.

L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.

L.R. Rabiner. Private Communication, 2008.

A. Ridolfi and J. Idier. Penalized Maximum Likelihood Estimation for Normal Mixture Distributions. School of Computer and Information Sciences, Ecole Polytechnique Federale de Lausanne, 200285, 2002.

G.W. Schwert. Why Does Stock Volatility Change Over Time? Journal of Finance, 44(5):1115-1153, 1989.

T. Salih and K. Fidanboylu. Modeling and analysis of queuing handoff calls in single and two-tier cellular networks. Computer Communications, 29(17):3580-3590, 2006.

M. Sola, F. Spagnolo, and N. Spagnolo. A test for volatility spillovers. Economics Letters, 76(1):77-84, 2002.

E. Trentin and M. Gori. A survey of hybrid ANN/HMM models for automatic speech recognition. Neurocomputing, 37(1-4):91-126, 2001.

A. Timmermann. Moments of Markov switching models. Journal of Econometrics, 96(1):75-111, 2000.

H.M. Taylor and S. Karlin. An introduction to stochastic modeling. Academic Press, San Diego, 1984.

P.L. Valls-Pereira, S. Hwang, and S.E. Satchell. How persistent is volatility? An answer with stochastic volatility models with Markov regime switching state equations. Journal of Business Finance and Accounting, 34(5-6), 2004.
An answer with stochastic volatility models with Markov regime switching state equations. Journal of Business Finance and Accounting, 34(5-6):1002-1024, 2004.
Stochastic neural field equations: a rigorous footing

O. Faugeras · J. Inglis
NeuroMathComp and ToSCA teams, INRIA, Sophia Antipolis, France

J. Math. Biol. 71 (2015). DOI: 10.1007/s00285-014-0807-6. arXiv:1311.5446
Received: 22 November 2013 / Revised: 28 April 2014 / Published online: 29 July 2014

Abstract We here consider a stochastic version of the classical neural field equation that is currently actively studied in the mathematical neuroscience community. Our goal is to present a well-known rigorous probabilistic framework in which to study these equations in a way that is accessible to practitioners currently working in the area, and thus to bridge some of the cultural/scientific gaps between probability theory and mathematical biology. In this way, the paper is intended to act as a reference that collects together relevant rigorous results about notions of solutions and well-posedness, which although may be straightforward to experts from SPDEs, are largely unknown in the neuroscientific community, and difficult to find in a very large body of literature. Moreover, in the course of our study we provide some new specific conditions on the parameters appearing in the equation (in particular on the neural field kernel) that guarantee the existence of a solution.

Keywords Stochastic neural field equations · Spatially correlated noise · Multiplicative noise · Stochastic integro-differential equation · Existence and uniqueness

Mathematics Subject Classification 60H20 · 60H30 · 92C20

1 Introduction

Neural field equations have been widely used to study spatiotemporal dynamics of cortical regions.
Arising as continuous spatial limits of discrete models, they provide a step towards an understanding of the relationship between the macroscopic spatially structured activity of densely populated regions of the brain, and the underlying microscopic neural circuitry. The discrete models themselves describe the activity of a large number of individual neurons with no spatial dimensions. Such neural mass models have been proposed by Lopes da Silva et al. (1974, 1976) to account for oscillatory phenomena observed in the brain, and were later put on a stronger mathematical footing in the study of epileptic-like seizures in Jansen and Rit (1995). When taking the spatial limit of such discrete models, one typically arrives at a nonlinear integro-differential equation, in which the integral term can be seen as a nonlocal interaction term describing the spatial distribution of synapses in a cortical region. Neural field models build on the original work of Wilson and Cowan (1972, 1973) and Amari (1977), and are known to exhibit a rich variety of phenomena including stationary states, traveling wave fronts, pulses and spiral waves. For a comprehensive review of neural field equations, including a description of their derivation, we refer to Bressloff (2012).

More recently several authors have become interested in stochastic versions of neural field equations (see for example Bressloff 2009, 2010; Bressloff and Webber 2012; Bressloff and Wilkerson 2012; Kilpatrick and Ermentrout 2013), in order to (amongst other things) model the effects of fluctuations on wave front propagation.
In particular, in Bressloff and Webber (2012) a multiplicative stochastic term is added to the neural field equation, resulting in a stochastic nonlinear integro-differential equation of the form

dY(t, x) = [−Y(t, x) + ∫_R w(x, y) G(Y(t, y)) dy] dt + σ(Y(t, x)) dW(t, x),  (1.1)

for x ∈ R, t ≥ 0, and some functions G (referred to as the nonlinear gain function), σ (the diffusion coefficient) and w (the neural field kernel, sometimes also called the connectivity function). Here (W(t, x))_{x∈R, t≥0} is a stochastic process (notionally a "Gaussian random noise") that depends on both space and time, and which may possess some spatial correlation. Of course the first step towards understanding (1.1) rigorously is defining what we mean by a solution. This is in fact not completely trivial and is somewhat glossed over in the neuroscientific literature. The main point is that any solution must involve an object of the form

σ(Y(t, x)) dW(t, x)  (1.2)

which must be precisely defined. Of course, in the case where there is no spatial dimension, the theory of such stochastic integrals is widely disseminated, but for integrals with respect to space-time white noise (for example) it is far less well-known. It is for this reason that we believe it to be extremely worthwhile making a detailed review of how to give sense to these objects, and moreover to solutions to (1.1) when they exist, in a way that is accessible to practitioners. Although such results are quite well-known in probability theory, the body of literature is very large and generalistic, posing a daunting prospect for a mathematical neuroscientist looking to apply a specific result. The fact that the equation fits into well-studied frameworks also opens up opportunities to apply existing abstract results (for example large deviation principles; see Remark 2.3).
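To make (1.1) concrete before the rigorous development, the following is a minimal Euler-Maruyama discretization of the equation on a bounded one-dimensional grid. All concrete choices here (Gaussian kernel w, sigmoid gain G, linear diffusion coefficient σ(y) = λy, noise taken uncorrelated in space, and every parameter value) are illustrative assumptions for the sketch, not choices made in the paper:

```python
import numpy as np

def simulate_neural_field(T=1.0, nt=200, L=10.0, nx=64, lam=0.1, seed=0):
    """Euler-Maruyama sketch of dY = (-Y + int w(x,y) G(Y(y)) dy) dt + sigma(Y) dW."""
    rng = np.random.default_rng(seed)
    dt = T / nt
    x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
    dx = x[1] - x[0]
    W = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # assumed Gaussian kernel w(x, y)
    G = lambda y: 1.0 / (1.0 + np.exp(-y))              # sigmoid nonlinear gain
    sigma = lambda y: lam * y                           # assumed multiplicative diffusion coefficient
    Y = np.zeros(nx)                                    # initial condition Y_0 = 0
    for _ in range(nt):
        drift = -Y + (W @ G(Y)) * dx                    # quadrature of the integral term
        noise = rng.normal(0.0, np.sqrt(dt), nx)        # spatially uncorrelated increments
        Y = Y + drift * dt + sigma(Y) * noise
    return x, Y

x, Y = simulate_neural_field()
```

With lam = 0 the scheme reduces to a deterministic quadrature of the neural field equation and relaxes monotonically toward a positive stationary state; the noisy run fluctuates around it.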
There are in fact two distinct approaches to defining and interpreting the quantity (1.2), both of which allow one to build up a theory of stochastic partial differential equations (SPDEs). Although (1.1) does not strictly classify as an SPDE (since there is no derivative with respect to the spatial variable), both approaches provide a rigorous underlying theory upon which to base a study of such equations. The first approach generalizes the theory of stochastic processes in order to give sense to solutions of SPDEs as random processes that take their values in a Hilbert space of functions [as presented by Da Prato and Zabczyk (1992) and more recently by Prévôt and Röckner (2007)]. With this approach, the quantity (1.2) is interpreted as a Hilbert space-valued integral, i.e. "∫ B(Y(t)) dW(t)", where (Y(t))_{t≥0} and (W(t))_{t≥0} take their values in a Hilbert space of functions, and B(Y(t)) is an operator between Hilbert spaces (depending on σ). The second approach is that of Walsh [as described in Walsh (1986)], which, in contrast, takes as its starting point a PDE with a random and highly irregular "white-noise" term. This approach develops integration theory with respect to a class of random measures, so that (1.2) can be interpreted as a random field in both t and x. In the theory of SPDEs, there are advantages and disadvantages of taking both approaches. This is also the case with regards to the stochastic neural field Eq. (1.1), as described in the conclusion below (Sect. 5), and it is for this reason that we here review both approaches. Taking the functional approach of Da Prato and Zabczyk is perhaps more straightforward for those with knowledge of stochastic processes, and the existing general results can be applied more directly in order to obtain, for example, existence and uniqueness.
This was the path taken in Kuehn and Riedler (2014) where the emphasis was on large deviations, though in a much less general setup than we consider here (see Remark 2.3). However, it can certainly be argued that solutions constructed in this way may be "non-physical", since the functional theory tends to ignore any spatial regularity properties (solutions are typically L²-valued in the spatial direction). We argue that the approach of Walsh is more suited to looking for "physical" solutions that are at least continuous in the spatial dimension. A comparison of the two approaches in a general setting is presented in Dalang and Quer-Sardanyons (2011) or Jetschke (1982, 1986), and in our setting in Sect. 4 below. Our main conclusion is that in typical cases of interest for practitioners, the approaches are equivalent (see Example 4.2), but one or the other may be more suited to a particular need. To reiterate, the main aim of this article is to present a review of an existing theory, which is accessible to readers unfamiliar with stochastic partial differential equations, that puts the study of stochastic neural field equations on a rigorous mathematical footing. As a by-product we will be able to give general conditions on the functions G, σ and w that, as far as we know, do not appear anywhere else in the literature and guarantee the existence of a solution to (1.1) in some sense. Moreover, these conditions are weak enough to be satisfied for all typical choices of functions made by practitioners (see Sects. 2.6, 2.7 and 2.8). By collecting all these results in a single place, we hope this will provide a reference for practitioners in future works. The layout of the article is as follows. We first present in Sect. 2 the necessary material in order to consider the stochastic neural field Eq. (1.1) as an evolution equation in a Hilbert space.
This involves introducing the notion of a Q-Wiener process taking values in a Hilbert space and stochastic integration with respect to Q-Wiener processes. A general existence result from Da Prato and Zabczyk (1992) is then applied in Sect. 2.5 to yield a unique solution to (1.1) interpreted as a Hilbert space valued process. The second part of the paper switches track, and describes Walsh's theory of stochastic integration (Sect. 3.1), with a view of giving sense to a solution to (1.1) as a random field in both time and space. To avoid dealing with distribution-valued solutions, we in fact consider a Gaussian noise that is smoothed in the spatial direction (Sect. 3.2), and show that, under some weak conditions, the neural field equation driven by such a smoothed noise has a unique solution in the sense of Walsh that is continuous in both time and space (Sect. 3.3). We finish with a comparison of the two approaches in Sect. 4, and summarize our findings in a conclusion (Sect. 5).

Notation: Throughout the article (Ω, F, P) will be a probability space, and L²(Ω, F, P) will be the space of square-integrable random variables on (Ω, F, P). We will use the standard notation B(T) to denote the Borel σ-algebra on T for any topological space T. The Lebesgue space of p-integrable (with respect to the Lebesgue measure) functions over R^N for N ∈ ℕ = {1, 2, ...} will be denoted by L^p(R^N), p ≥ 1, as usual, while L^p(R^N, ρ), p ≥ 1, will be the Lebesgue space weighted by a measurable function ρ : R^N → R_+.

2 Stochastic neural field equations as evolution equations in Hilbert spaces

As stated in the introduction, the goal of this section is to provide the theory and conditions needed to interpret the solution to (1.1) as a process (Y(t))_{t≥0} that takes its values in a Hilbert space of functions, i.e. for each t ≥ 0, Y(t) is a function of the spatial variable x.
This is in order to try and cast the problem into the well-known theoretical framework of stochastic evolution equations in Hilbert spaces, as detailed in Da Prato and Zabczyk (1992). In particular we will look for solutions to

dY(t) = (−Y(t) + F(Y(t))) dt + "B(Y(t)) dW(t)",  t ≥ 0,  (2.1)

such that Y(t) ∈ L²(R^N, ρ) for some measurable ρ : R^N → R_+ (to be determined), where F is now an operator on L²(R^N, ρ) given by

F(Y(t))(x) = ∫_{R^N} w(x, y) G(Y(t, y)) dy,  x ∈ R^N.

Here w : R^N × R^N → R is the neural field kernel, and G : R → R is the nonlinear gain function. Note that we have made a slight generalization here in comparison with (1.1) in that we in fact work on R^N, rather than R. The term B(Y(t)) dW(t) represents a stochastic differential term that must be made sense of as a differential in the Hilbert space L²(R^N, ρ). This is done with the help of Sects. 2.1 and 2.2 below.

Notation: In this section we will also need the following basic notions from functional analysis. Let U and H be two separable Hilbert spaces. We will write L₀(U, H) to denote the space of all bounded linear operators from U to H with the usual norm¹ (with the shorthand L₀(H) when U = H), and L₂(U, H) for the space of all Hilbert-Schmidt operators from U to H, i.e. those bounded linear operators B : U → H such that

Σ_{k≥1} ‖B(e_k)‖²_H < ∞,

for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U. Finally, a bounded linear operator Q : U → U will be said to be trace-class if

Tr(Q) := Σ_{k≥1} ⟨Q(e_k), e_k⟩_U < ∞,

again for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U.

2.1 Hilbert space valued Q-Wiener processes

The purpose of this section is to provide a basic understanding of how we can generalize the idea of an R^d-valued Wiener process to one that takes its values in an infinite dimensional Hilbert space, which for convenience we fix to be U = L²(R^N) (this is simply for the sake of being concrete).
In the finite dimensional case, it is well-known that R^d-valued Wiener processes are characterized by their d × d covariance matrices, which are symmetric and non-negative. The basic idea is that in the infinite dimensional setup the covariance matrices are replaced by covariance operators, which are linear, non-negative, symmetric and bounded. Indeed, let Q : U → U be a non-negative, symmetric bounded linear operator on U. To avoid introducing extra embeddings, we also suppose Tr(Q) < ∞. Then, completely analogously to the finite dimensional case, there exists a sequence of non-negative real numbers (λ_k)_{k≥1} which are eigenvalues of Q, associated with a sequence of eigenfunctions {e_k}_{k≥1} (i.e. Q e_k = λ_k e_k) that form a complete orthonormal basis for U. Moreover, since Tr(Q) < ∞, it holds that

Σ_{k=1}^∞ λ_k < ∞.

(¹ The norm of B ∈ L₀(U, H) is classically defined as sup_{x≠0} ‖Bx‖_H / ‖x‖_U.)

By a Q-Wiener process W = (W(t))_{t≥0} on U we will simply mean that W(t) can be expanded as

W(t) = Σ_{k=1}^∞ √λ_k β_k(t) e_k,  (2.2)

where (β_k(t))_{t≥0}, k = 1, 2, ..., are mutually independent standard real-valued Brownian motions. We note that W(t) exists as a U-valued square-integrable random variable, i.e. W(t) ∈ L²(Ω, F, P). Equation (2.2) shows the role played by Q: the eigenvectors e_k are functions that determine "where" the noise "lives" in U, while the eigenvalues λ_k determine its dimensionality and relative strength. As an example of a covariance operator, let us compute the covariance operator of W. An easy computation based on (2.2) and the elementary properties of the standard real-valued Brownian motion shows that

E[⟨W(s), g⟩_U ⟨W(t), h⟩_U] = (s ∧ t) ⟨Qg, h⟩_U  for all g, h ∈ U.  (2.3)

It turns out that W is white in both space and time. The whiteness in time is apparent from the above expression. The whiteness in space is shown explicitly in Sect. 2.7.
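The expansion (2.2) and the covariance identity (2.3) can be checked numerically: truncating (2.2) at K modes and testing (2.3) against the coordinate functions g = e_i, h = e_j, the identity predicts E[⟨W(s), e_i⟩⟨W(t), e_j⟩] = (s ∧ t) λ_i δ_ij. The eigenvalues λ_k = 1/k² and all grid and sample sizes below are illustrative assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 5, 10000                          # truncation level and Monte Carlo sample size
lam = 1.0 / np.arange(1, K + 1) ** 2     # assumed eigenvalues of Q (summable, so trace-class)
s, t, nt = 0.5, 1.0, 50
dt = t / nt
# beta_k: independent standard Brownian motions, simulated on a grid of step dt
incr = rng.normal(0.0, np.sqrt(dt), size=(M, K, nt))
beta = np.cumsum(incr, axis=2)           # beta[:, :, j] is beta_k((j + 1) dt)
# coordinates of the truncated expansion (2.2): <W(t), e_k> = sqrt(lam_k) beta_k(t)
Ws = np.sqrt(lam) * beta[:, :, nt // 2 - 1]   # time s = 0.5
Wt = np.sqrt(lam) * beta[:, :, -1]            # time t = 1.0
emp = (Ws[:, :, None] * Wt[:, None, :]).mean(axis=0)   # empirical E[<W(s),e_i><W(t),e_j>]
theory = min(s, t) * np.diag(lam)                      # (s ^ t) <Q e_i, e_j>
```

The empirical matrix should be close to diagonal, with entries (s ∧ t) λ_i, up to Monte Carlo error.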
2.2 Stochastic integration with respect to Q-Wiener processes

The second point is that we would like to be able to define stochastic integration with respect to these Hilbert space valued Wiener processes. In particular we must determine for which integrands this can be done [exactly as in Da Prato and Zabczyk (1992)]. As above, let U = L²(R^N), Q : U → U a non-negative, symmetric bounded linear operator on U such that Tr(Q) < ∞, and W = (W(t))_{t≥0} be a Q-Wiener process on U [given by (2.2)]. Unfortunately, in order to define stochastic integrals with respect to W, we need a couple of technical definitions from functional analysis. This is simply in order to control the convergence of the infinite series that appear in the construction, as we will see in the example below. Indeed, let Q^{1/2}(U) be the subspace of U, which is a Hilbert space under the inner product

⟨u, v⟩_{Q^{1/2}(U)} := ⟨Q^{−1/2} u, Q^{−1/2} v⟩_U,  u, v ∈ Q^{1/2}(U).

Q^{1/2}(U) is in fact simply the space generated by the orthonormal basis {√λ_k e_k} whenever {e_k} is the orthonormal basis for U consisting of eigenfunctions of Q. Moreover, let H = L²(R^N, ρ) for some measurable ρ : R^N → R_+ (again this is just for the sake of concreteness; one could instead take any separable Hilbert space). It turns out that the space L₂(Q^{1/2}(U), H) of all Hilbert-Schmidt operators from Q^{1/2}(U) into H plays an important role in the theory of stochastic integration with respect to W, and for this reason we detail the following simple but illuminating example.

Example 2.1 Let B : U → H be a bounded linear operator from U to H, i.e. B ∈ L₀(U, H). Then, by definition,

‖B‖²_{L₂(Q^{1/2}(U),H)} = Σ_{k=1}^∞ ‖B(Q^{1/2}(e_k))‖²_H
  ≤ ‖B‖²_{L₀(U,H)} Σ_{k=1}^∞ ‖Q^{1/2}(e_k)‖²_U
  = ‖B‖²_{L₀(U,H)} Σ_{k=1}^∞ ⟨Q^{1/2}(e_k), Q^{1/2}(e_k)⟩_U
  = ‖B‖²_{L₀(U,H)} Σ_{k=1}^∞ ⟨Q(e_k), e_k⟩_U
  = ‖B‖²_{L₀(U,H)} Tr(Q) < ∞,

since Tr(Q) < ∞, where {e_k}_{k≥1} is again a complete orthonormal system for U.
In other words, B ∈ L₀(U, H) ⇒ B ∈ L₂(Q^{1/2}(U), H). The main point of the section is the following. According to the construction detailed in Chapter 4 of Da Prato and Zabczyk (1992), we have that for a (random) process (Φ(t))_{t≥0} the integral

∫_0^t Φ(s) dW(s)  (2.4)

has a sense as an element of H when Φ(s) ∈ L₂(Q^{1/2}(U), H), Φ(s) is knowable at time s, and if

P( ∫_0^t ‖Φ(s)‖²_{L₂(Q^{1/2}(U),H)} ds < ∞ ) = 1.

Now in view of Example 2.1, the take-away message is simply that the stochastic integral (2.4) has a sense in H if Φ(s) : U → H is a bounded linear operator, i.e. is in L₀(U, H) for all s ∈ [0, t], and if the norm of Φ(s) is bounded on [0, t]. In fact this is the only knowledge that will be needed below.

2.3 The stochastic neural field equation: interpretation in the language of Hilbert space valued processes

With the previous two sections in place, we can now return to (2.1) and interpret it (and in particular the noise term) in a rigorous way. Indeed, as above, let W be an L²(R^N)-valued Q-Wiener process, with Q a non-negative, symmetric bounded linear operator on L²(R^N) such that Tr(Q) < ∞ (trace-class). The rigorous interpretation of (2.1) as an equation for a process (Y(t))_{t≥0} taking its values in the Hilbert space L²(R^N, ρ) is then

dY(t) = (−Y(t) + F(Y(t))) dt + B(Y(t)) dW(t),  Y(0) = Y₀ ∈ L²(R^N, ρ),  (2.5)

where B is a map from L²(R^N, ρ) into the space of bounded linear operators L₀(L²(R^N), L²(R^N, ρ)). Note that if B is such a map, then the integrated noise term of this equation has a sense thanks to Sect. 2.2. We in fact work with a general map B satisfying a Lipschitz condition (see below), but we keep in mind the following example which provides the link with the diffusion coefficient σ in (1.1):

B(h)(u)(x) = σ(h(x)) ∫_{R^N} ϕ(x − y) u(y) dy,  x ∈ R^N,  (2.6)

for h ∈ L²(R^N, ρ) and u ∈ L²(R^N), where σ and ϕ are some functions that must be chosen to ensure the conditions stated below are satisfied.
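The bound obtained in Example 2.1, ‖B‖²_{L₂(Q^{1/2}(U),H)} ≤ ‖B‖²_{L₀(U,H)} Tr(Q), has a transparent finite-dimensional analogue: for matrices it reads ‖B Q^{1/2}‖²_F ≤ ‖B‖²_op Tr(Q), with the Frobenius norm playing the role of the Hilbert-Schmidt norm and the spectral norm that of the operator norm. A quick numerical check of this matrix analogue (the dimension and the random operators below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
B = rng.normal(size=(n, n))                       # stand-in for a bounded operator B : U -> H
lam = 1.0 / np.arange(1, n + 1) ** 2              # non-negative eigenvalues with finite sum
V, _ = np.linalg.qr(rng.normal(size=(n, n)))      # orthonormal eigenvectors of Q
Q = V @ np.diag(lam) @ V.T                        # non-negative, symmetric, "trace-class"
Qhalf = V @ np.diag(np.sqrt(lam)) @ V.T           # square root Q^{1/2}
hs_sq = np.linalg.norm(B @ Qhalf, "fro") ** 2     # Hilbert-Schmidt norm squared of B Q^{1/2}
op_sq = np.linalg.norm(B, 2) ** 2                 # operator (spectral) norm squared of B
bound = op_sq * np.trace(Q)
```

Note also that Tr(Q) equals the sum of the eigenvalues regardless of the orthonormal system used to compute it, mirroring the "for some (and hence all)" in the definition above.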
We detail potential choices of σ and ϕ (and their significance from a modeling point of view, in particular how ϕ controls the spatial correlation) in Sect. 2.7 below. To summarize, we are here concerned with the solvability of (2.5) in L²(R^N, ρ) (for some measurable ρ : R^N → R_+ to be determined), where

F(h)(x) = ∫_{R^N} w(x, y) G(h(y)) dy,  x ∈ R^N, h ∈ L²(R^N, ρ),  (2.7)

and B : L²(R^N, ρ) → L₀(L²(R^N), L²(R^N, ρ)). To this end, we make the following two Lipschitz assumptions on B and the nonlinear gain function G:

• B : H → L₀(U, H) is such that

‖B(g) − B(h)‖_{L₀(U,H)} ≤ C_σ ‖g − h‖_H,  g, h ∈ L²(R^N, ρ),

where U = L²(R^N) and H = L²(R^N, ρ) for notational simplicity;

• G : R → R is bounded and globally Lipschitz, i.e. such that there exists a constant C_G with

sup_{a∈R} |G(a)| ≤ C_G  and  |G(a) − G(b)| ≤ C_G |a − b|  for all a, b ∈ R.

Typically the nonlinear gain function G is taken to be a sigmoid function, for example G(a) = (1 + e^{−a})^{−1}, a ∈ R, which certainly satisfies this assumption.

2.4 Discussion of conditions on the neural field kernel w and ρ

Of particular interest to us are the conditions on the neural field kernel w which will allow us to prove existence and uniqueness of a solution to (2.5) by quoting a standard result from Da Prato and Zabczyk (1992). In Kuehn and Riedler (2014, footnote 1) it is suggested that the condition

∫_{R^N} ∫_{R^N} |w(x, y)|² dx dy < ∞  (C1)

together with symmetry of w is enough to ensure that there exists a unique L²(R^N)-valued solution to (2.5). However, the problem is that it does not follow from (C1) that the operator F is stable on the space L²(R^N). For instance, suppose that in fact G ≡ 1 (so that G is trivially globally Lipschitz). Then for h ∈ L²(R^N) (and assuming w ≥ 0) we have that

‖F(h)‖²_{L²(R^N)} = ∫_{R^N} ‖w(x, ·)‖²_{L¹(R^N)} dx.  (2.8)

The point is that we can choose positive w such that (C1) holds, while (2.8) is not finite.
For example in the case N = 1 we could take w(x, y) = (1 + |x|)^{−1}(1 + |y|)^{−1} for x, y ∈ R. In such a case the Eq. (2.5) is ill-posed: if Y(t) ∈ L²(R) then F(Y(t)) is not guaranteed to be in L²(R), which in turn implies that Y(t) ∉ L²(R)! With this in mind we argue two points. Firstly, if we want a solution in L²(R^N), we must make the additional strong assumption that

(y ↦ w(x, y)) ∈ L¹(R^N) for all x ∈ R^N, and ‖w(x, ·)‖_{L¹(R^N)} ∈ L²(R^N).  (C2)

Indeed, below we will show that (C1) together with (C2) are enough to yield the existence of a unique L²(R^N)-valued solution to (2.5). On the other hand, if we don't want to make the strong assumptions that (C1) and (C2) hold, then we have to work instead in a weighted space L²(R^N, ρ), in order to ensure that F is stable. In this case, we will see that if

∃ ρ_w ∈ L¹(R^N) s.t. ∫_{R^N} |w(x, y)| ρ_w(x) dx ≤ Λ_w ρ_w(y) for all y ∈ R^N,  (C1')

for some Λ_w > 0, and

(y ↦ w(x, y)) ∈ L¹(R^N) for all x ∈ R^N, and sup_{x∈R^N} ‖w(x, ·)‖_{L¹(R^N)} ≤ C_w  (C2')

for some constant C_w, then we can prove the existence of a unique L²(R^N, ρ_w)-valued solution to (2.5). Condition (C1') is in fact a non-trivial eigenvalue problem, and it is not straightforward to see whether it is satisfied for a given function w. However, we chose to state the theorem below in a general way, and then below provide some important examples of when it can be applied. We will discuss these abstract conditions from a modeling point of view below. However, we first present the existence and uniqueness result.

2.5 Existence and uniqueness

Theorem 2.2 Suppose that the neural field kernel w either
(i) satisfies conditions (C1) and (C2); or
(ii) satisfies conditions (C1') and (C2').
If (i) holds set ρ_w ≡ 1, while if (ii) holds let ρ_w be the function appearing in condition (C1'). Then, whenever Y₀ is an L²(R^N, ρ_w)-valued random variable with finite p-moments for all p ≥ 2, the neural field Eq.
(2.5) has a unique solution taking values in the space L²(R^N, ρ_w). To be precise, there exists a unique L²(R^N, ρ_w)-valued process (Y(t))_{t≥0} such that for all T > 0

P( ∫_0^T ‖Y(s)‖²_{L²(R^N,ρ_w)} ds < ∞ ) = 1,

and

Y(t) = e^{−t} Y₀ + ∫_0^t e^{−(t−s)} F(Y(s)) ds + ∫_0^t e^{−(t−s)} B(Y(s)) dW(s),  P-a.s.

Moreover, (Y(t))_{t≥0} has a continuous modification, and satisfies the bounds

sup_{t∈[0,T]} E[ ‖Y(t)‖^p_{L²(R^N,ρ_w)} ] ≤ C_T^{(p)} (1 + E[ ‖Y₀‖^p_{L²(R^N,ρ_w)} ]),  T > 0,  (2.9)

for all p ≥ 2, while for p > 2,

E[ sup_{t∈[0,T]} ‖Y(t)‖^p_{L²(R^N,ρ_w)} ] ≤ C_T^{(p)} (1 + E[ ‖Y₀‖^p_{L²(R^N,ρ_w)} ]),  T > 0.  (2.10)

Proof We simply check the hypotheses of Da Prato and Zabczyk (1992, Theorem 7.4) (a standard reference in the theory) in both cases (i) and (ii). This involves showing that
(a) F : L²(R^N, ρ_w) → L²(R^N, ρ_w);
(b) the operator B(h) ∈ L₂(Q^{1/2}(U), H) for all h ∈ H [recalling that U = L²(R^N) and H = L²(R^N, ρ_w)];
(c) F and B are globally Lipschitz.

(a): F maps L²(R^N, ρ_w) into L²(R^N, ρ_w). In case (i) this holds since ρ_w ≡ 1 and for any h ∈ L²(R^N)

‖F(h)‖²_{L²(R^N)} = ∫_{R^N} ( ∫_{R^N} w(x, y) G(h(y)) dy )² dx ≤ C_G² ∫_{R^N} ‖w(x, ·)‖²_{L¹(R^N)} dx < ∞,

by assumption (C2). Similarly in case (ii) for any h ∈ L²(R^N, ρ_w)

‖F(h)‖²_{L²(R^N,ρ_w)} = ∫_{R^N} ( ∫_{R^N} w(x, y) G(h(y)) dy )² ρ_w(x) dx ≤ C_G² sup_{x∈R^N} ‖w(x, ·)‖²_{L¹(R^N)} ‖ρ_w‖_{L¹(R^N)} < ∞.

Hence in either case F in fact maps L²(R^N, ρ_w) into a metric ball in L²(R^N, ρ_w).

(b): To show (b) in both cases, we know by Example 2.1 that for h ∈ H, B(h) ∈ L₂(Q^{1/2}(U), H) whenever B(h) ∈ L₀(U, H), which is true by assumption.

(c): To show (c), we first want F : L²(R^N, ρ_w) → L²(R^N, ρ_w) to be globally Lipschitz. To this end, for any g, h ∈ L²(R^N, ρ_w), we see that in either case

‖F(g) − F(h)‖²_{L²(R^N,ρ_w)} = ∫_{R^N} |F(g) − F(h)|²(x) ρ_w(x) dx
  ≤ ∫_{R^N} ( ∫_{R^N} |w(x, y)| |G(g(y)) − G(h(y))| dy )² ρ_w(x) dx
  ≤ C_G² ∫_{R^N} ( ∫_{R^N} |w(x, y)| |g(y) − h(y)| dy )² ρ_w(x) dx,

where we have used the Lipschitz property of G.
Now in case (i) it clearly follows from the Cauchy-Schwarz inequality that

‖F(g) − F(h)‖²_{L²(R^N)} ≤ C_G² ( ∫_{R^N} ∫_{R^N} |w(x, y)|² dx dy ) ‖g − h‖²_{L²(R^N)},

so that by condition (C1), F is indeed Lipschitz. In case (ii), by Cauchy-Schwarz and the specific property of ρ_w given by (C1'), we see that

‖F(g) − F(h)‖²_{L²(R^N,ρ_w)} ≤ C_G² sup_{x∈R^N} ‖w(x, ·)‖_{L¹(R^N)} ∫_{R^N} |g(y) − h(y)|² ( ∫_{R^N} |w(x, y)| ρ_w(x) dx ) dy
  ≤ C_G² Λ_w sup_{x∈R^N} ‖w(x, ·)‖_{L¹(R^N)} ‖g − h‖²_{L²(R^N,ρ_w)},

so that again F is Lipschitz. Since we have assumed that B : H → L₀(U, H) is Lipschitz, we are done.

Remark 2.3 (Large Deviation Principle) The main focus of Kuehn and Riedler (2014) was a large deviation principle for the stochastic neural field Eq. (2.5) with small noise, but in a less general situation than we consider here. In particular, the authors only considered the neural field equation driven by a simple additive noise, white in both space and time. We would therefore like to remark that in our more general case, and under much weaker conditions than those imposed in Kuehn and Riedler (2014) (our conditions are for example satisfied for a connectivity function w that is homogeneous, as we will see in Example 2 below), an LDP result for the solution identified by the above theorem still holds and can be quoted from the literature. Indeed, such a result is presented in Peszat (1994, Theorem 7.1). The main conditions required for the application of this result have essentially already been checked above (global Lipschitz properties of F and B), and it thus remains to check conditions (E.1)-(E.4) as they appear in Peszat (1994). In fact these are trivialities, since the strongly continuous contraction semigroup S(t) is generated by the identity in our case.

2.6 Discussion of conditions on w and ρ in practice

Our knowledge about the kinds of neural field kernels that are found in the brains of mammals is still quite limited.
Since visual perception is the most active area of research, it should not come as a surprise that it is in cortical regions involved in visual perception that this knowledge is the most extensive, and in particular in the primary visual area called V1 in humans. In models of this region it is usually assumed that w is the sum of two parts: a local part w_loc corresponding to local neuronal connections, and a non-local part w_lr corresponding to longer range connections. As suggested in Lund et al. (2003), Mariño et al. (2005), w_loc is well approximated by a Gaussian function (or a difference of such functions, see below):
$$w_{loc}(x, y) = K \exp\left(-|x - y|^2/2\beta_{loc}^2\right), \quad x, y \in \mathbb{R}^N,\ K > 0, \qquad (2.11)$$
where β_loc is the extent of the local connectivity. Hence w_loc is isotropic and homogeneous. In fact for practitioners, a very common assumption on w is that it is homogeneous and in L¹(ℝ^N), which thus concentrates on modeling the local interactions (Bressloff and Webber 2012; Bressloff and Wilkerson 2012; Folias and Bressloff 2004; Kilpatrick and Ermentrout 2013; Owen et al. 2007). However, when w is homogeneous it is clear that neither (C1) nor (C2) of the above theorem are satisfied, and so we instead must try to show that (C1') is satisfied [(C2') trivially holds], and look for solutions in a weighted L² space. This is done in the second example below. Long range connectivity is best described by assuming N = 2. It is built upon the existence of maps of orientation sensitivity in which the preferred visual orientation at each point x is represented by a function θ(x) ∈ [0, π). This function is smooth except at countably many points called the pinwheels where it is undefined⁴. Depending on the species, the long range connections feature an anisotropy, meaning that they tend to align themselves with the preferred orientation at x.
One way to take this into account is to introduce the function
$$A(\chi, x) = \exp\left[-\left((1-\chi)^2 x_1^2 + x_2^2\right)/2\beta_{lr}^2\right],$$
where x = (x₁, x₂), χ ∈ [0, 1), and β_lr is the extent of the long range connectivity. When χ = 0 there is no anisotropy (as for the macaque monkey for example) and when χ ∈ (0, 1) there is some anisotropy (as for the tree shrew, for example). Let R_α represent the rotation by angle α around the origin. The long range neural field kernel is then defined by (Baker and Cowan 2009; Bressloff 2003)
$$w_{lr}(x, y) = \varepsilon_{lr}\, A\left(\chi, R_{-2\theta(x)}(x - y)\right) \cdot G_{\beta_\theta}(\theta(x) - \theta(y)),$$
where ε_lr ≪ 1 and G_{β_θ} is the one-dimensional Gaussian density with 0 mean and variance β_θ². Note that w_lr is not homogeneous, even in the case χ = 0, because θ(x) − θ(y) is not a function of x − y. It is easy to verify that w_lr ∈ L²(ℝ²). Combining the local and non-local parts, one then writes for the neural field kernel of the primary visual area:
$$w_{pva}(x, y) = w_{loc}(x - y) + w_{lr}(x, y). \qquad (2.12)$$
In view of our results, in the case where w = w_pva, since the first part is homogeneous while the second is non-homogeneous but is in L²(ℝ²), we need a combination of the results above. Indeed, the homogeneous part dictates to work in L²(ℝ², ρ_{w_loc}) (ρ_{w_loc} ∈ L¹(ℝ²)). The second kernel dictates to work in L²(ℝ²). But L²(ℝ²) ⊂ L²(ℝ², ρ_{w_loc}), because, as shown in Example 2 below, ρ_{w_loc} can be chosen to be bounded, and hence there is no problem. Another commonly used type of (homogeneous) neural field kernel, when modeling excitatory and inhibitory populations of neurons, is the so-called "Mexican hat" kernel defined by
$$w_{mh}(x, y) = K_1 \exp\left(-|x - y|^2/2\beta_1^2\right) - K_2 \exp\left(-|x - y|^2/2\beta_2^2\right), \quad x, y \in \mathbb{R}^N, \qquad (2.13)$$
for some K₁, K₂ > 0. If β₂ > β₁ and K₁ > K₂ for example, this is locally excitatory and remotely inhibitory. It is also important to mention the role of ρ_w from a modeling perspective.
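For concreteness, the connectivity kernels (2.11)–(2.13) above can be sketched numerically as follows. All parameter values and the synthetic orientation-preference map `theta` are illustrative stand-ins (a real θ would come from measured orientation maps, and is not specified in the text).

```python
import numpy as np

def w_loc(x, y, K=1.0, beta_loc=1.0):
    """Local Gaussian connectivity (2.11); x, y are points of R^2."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return K * np.exp(-d @ d / (2.0 * beta_loc**2))

def w_mh(x, y, K1=2.0, K2=1.0, beta1=1.0, beta2=2.0):
    """'Mexican hat' kernel (2.13): locally excitatory, remotely inhibitory."""
    r2 = np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return K1 * np.exp(-r2 / (2.0 * beta1**2)) - K2 * np.exp(-r2 / (2.0 * beta2**2))

def theta(x):
    """Toy orientation-preference map taking values in [0, pi)."""
    return np.arctan2(x[1], x[0]) % np.pi

def w_lr(x, y, eps_lr=0.1, chi=0.5, beta_lr=3.0, beta_theta=0.3):
    """Anisotropic long-range kernel; chi = 0 recovers the isotropic case."""
    a = -2.0 * theta(x)                      # rotation angle -2*theta(x)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    u = R @ (np.asarray(x, float) - np.asarray(y, float))
    A = np.exp(-((1.0 - chi) ** 2 * u[0] ** 2 + u[1] ** 2) / (2.0 * beta_lr**2))
    dth = theta(x) - theta(y)
    # One-dimensional Gaussian density with mean 0, variance beta_theta^2
    G = np.exp(-dth**2 / (2.0 * beta_theta**2)) / np.sqrt(2.0 * np.pi * beta_theta**2)
    return eps_lr * A * G

def w_pva(x, y):
    """Primary-visual-area kernel (2.12): homogeneous plus long-range part."""
    return w_loc(x, y) + w_lr(x, y)
```

Note that `w_loc` depends only on x − y (homogeneous), while `w_lr` does not, since θ(x) − θ(y) is not a function of x − y, matching the remarks above.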
The first point is that in the case where w is homogeneous, it is very natural to look for solutions that live in L²(ℝ^N, ρ) for some ρ ∈ L¹(ℝ^N), rather than in L²(ℝ^N). This is because in the deterministic case (see Ermentrout and McLeod 1993), solutions of interest are of the form of traveling waves, which are constant at ∞, and thus are not integrable. Moreover, we emphasize that in Theorem 2.2 and the examples in the next section we identify a single ρ_w ∈ L¹(ℝ^N) so that the standard existence result of Prato and Zabczyk (1992) can be directly applied through Theorem 2.2. We do not claim that this is the only weight ρ for which the solution can be shown to exist in L²(ℝ^N, ρ) (see also Example 2 below).

Remark 2.4 If we replace the spatial coordinate space ℝ^N by a bounded domain D ⊂ ℝ^N, so that the neural field Eq. (2.5) describes the activity of a neuron found at position x ∈ D, then checking the conditions as done in Theorem 2.2 becomes rather trivial (under appropriate boundary conditions). Indeed, by doing this one can see that there exists a unique L²(D)-valued solution to (2.5) under the condition (C2') only (with ℝ^N replaced by D). Although working in a bounded domain seems more physical (since any physical section of cortex is clearly bounded), the unbounded case is still often used, see Bressloff and Webber (2012) or the review Bressloff (2012), and is mathematically more interesting. The problem in passing to the unbounded case stems from the fact that the nonlocal term in (2.5) naturally 'lives' in the space of bounded functions, while according to the theory the noise naturally lives in an L² space. These are not compatible when the underlying space is unbounded.

2.7 Discussion of the noise term in (2.5)

It is important to understand the properties of the noise term in the neural field Eq. (2.5) which we now know has a solution in some sense.
As mentioned above, one particular form of the noise operator B that is of special importance from a modeling point of view is given by (2.6), i.e.
$$B(h)(u)(x) = \sigma(h(x)) \int_{\mathbb{R}^N} \varphi(x - y)\, u(y)\,dy, \quad x \in \mathbb{R}^N, \qquad (2.14)$$
for h ∈ L²(ℝ^N, ρ) and u ∈ L²(ℝ^N), and some functions σ and ϕ. This is because such noise terms are spatially correlated depending on ϕ (as we will see below) and make the link with the original Eq. (1.1) considered in Bressloff and Webber (2012), where spatial correlations are important. An obvious question is then for which choices of σ and ϕ can we apply the above results? In particular we need to check that B(h) is a bounded linear operator from L²(ℝ^N) to L²(ℝ^N, ρ) for all h ∈ L²(ℝ^N, ρ), and that B is Lipschitz (assuming as usual that ρ ∈ L¹(ℝ^N)). To this end, suppose ϕ ∈ L²(ℝ^N) and that there exists a constant C_σ such that
$$|\sigma(a) - \sigma(b)| \le C_\sigma |a - b|, \quad \text{and} \quad |\sigma(a)| \le C_\sigma(1 + |a|), \quad \forall a, b \in \mathbb{R}. \qquad (2.15)$$
In other words σ : ℝ → ℝ is assumed to be Lipschitz and of linear growth. Then for any h ∈ L²(ℝ^N, ρ) and u ∈ L²(ℝ^N),
$$\|B(h)(u)\|^2_{L^2(\mathbb{R}^N,\rho)} = \int_{\mathbb{R}^N} \sigma^2(h(x)) \left(\int_{\mathbb{R}^N} \varphi(x - y)\, u(y)\,dy\right)^2 \rho(x)\,dx \le 2\, C_\sigma^2\, \|u\|^2_{L^2(\mathbb{R}^N)}\, \|\varphi\|^2_{L^2(\mathbb{R}^N)} \left(\|\rho\|_{L^1(\mathbb{R}^N)} + \|h\|^2_{L^2(\mathbb{R}^N,\rho)}\right).$$
Thus B(h) is indeed a bounded linear operator from L²(ℝ^N) to L²(ℝ^N, ρ). Moreover, a similar calculation yields the Lipschitz property of B, so that the above results can be applied. In particular our results hold when σ(a) = λa, for some λ ∈ ℝ. This is important because it is this choice of σ that is used for the simulations carried out in Bressloff and Webber (2012, Section 2.3). To see the spatial correlation in the noise term in (2.5) when B has the form (2.14) with ϕ ∈ L²(ℝ^N), consider the case σ ≡ 1 (so that the noise is purely additive).
Then
$$\int_0^t B(Y(s))\,dW(s) = \int_0^t B\,dW(s) =: X(t), \quad t \ge 0,$$
where
$$B(u)(x) = \int_{\mathbb{R}^N} \varphi(x - y)\, u(y)\,dy, \quad x \in \mathbb{R}^N,\ u \in L^2(\mathbb{R}^N),$$
and X(t) is a well-defined L²(ℝ^N, ρ)-valued process since B is bounded from L²(ℝ^N) into L²(ℝ^N, ρ) (see Sect. 2.2). Moreover, by Theorem 5.2⁵ of Prato and Zabczyk (1992), (X(t))_{t≥0} is Gaussian with mean zero and
$$\mathrm{Cov}\,(X(t)X(s)) = s \wedge t\; B Q B^*, \quad s, t \ge 0,$$
where B* : L²(ℝ^N, ρ) → L²(ℝ^N) is the adjoint of B, in the sense that for all g, h ∈ L²(ℝ^N, ρ),
$$E\left[\langle g, X(s)\rangle_{L^2(\mathbb{R}^N,\rho)}\, \langle h, X(t)\rangle_{L^2(\mathbb{R}^N,\rho)}\right] = s \wedge t\; \langle B Q B^* g, h\rangle_{L^2(\mathbb{R}^N,\rho)}.$$
That is, for any g, h ∈ L²(ℝ^N, ρ),
$$\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} E\left[X(s,x) X(t,y)\right] g(x) h(y) \rho(x)\rho(y)\,dx\,dy = s \wedge t\; \langle Q B^* h, B^* g\rangle_{L^2(\mathbb{R}^N)} = s \wedge t \int_{\mathbb{R}^N} Q B^* g(z)\, B^* h(z)\,dz = s \wedge t \int_{\mathbb{R}^N} Q^{\frac12} B^* g(z)\, Q^{\frac12} B^* h(z)\,dz. \qquad (2.16)$$
Now, by definition, for u ∈ L²(ℝ^N) and f ∈ L²(ℝ^N, ρ),
$$\int_{\mathbb{R}^N} u(y)\, B^*(f)(y)\,dy = \int_{\mathbb{R}^N} B(u)(x)\, f(x)\, \rho(x)\,dx = \int_{\mathbb{R}^N} u(y) \int_{\mathbb{R}^N} \varphi(x - y)\, f(x)\, \rho(x)\,dx\,dy,$$
so that
$$B^*(f)(y) = \int_{\mathbb{R}^N} \varphi(x - y)\, f(x)\, \rho(x)\,dx.$$
Using this in (2.16), we see that
$$\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} E\left[X(s,x) X(t,y)\right] g(x) h(y) \rho(x)\rho(y)\,dx\,dy = s \wedge t \int_{\mathbb{R}^N} \left(\int_{\mathbb{R}^N} Q^{\frac12}\varphi(x - z)\, g(x)\, \rho(x)\,dx\right)\left(\int_{\mathbb{R}^N} Q^{\frac12}\varphi(y - z)\, h(y)\, \rho(y)\,dy\right) dz,$$
for all g, h ∈ L²(ℝ^N, ρ), since Q is a linear operator and is self-adjoint. We can then conclude that
$$E\left[X(s,x) X(t,y)\right] = s \wedge t \int_{\mathbb{R}^N} Q^{\frac12}\varphi(x - z)\, Q^{\frac12}\varphi(y - z)\,dz = (s \wedge t)\, c(x - y), \qquad (2.17)$$
where $c(x) = Q^{\frac12}\varphi * Q^{\frac12}\tilde{\varphi}(x)$ and $\tilde{\varphi}(x) = \varphi(-x)$. Hence (X(t))_{t≥0} is white in time but stationary and colored in space with covariance function (s ∧ t)c(x). We remark that the manipulations above are certainly not new [they are for example used in Brzeźniak and Peszat (1999)], but they illustrate nicely the spatial correlation property of the noise we consider. We conclude that (2.14) is exactly the rigorous interpretation of the noise described in Bressloff and Webber (2012), when interpreting a solution to the stochastic neural field equation as a process taking values in L²(ℝ^N, ρ_w).

Remark 2.5 Note that in the case where B is the identity, X(t) = W(t).
We can, at least formally, carry out the above computation with ϕ = δ 0 and find that E [ W (s, x)W (t, y) ] = (s ∧ t)Qδ 0 (x − y), (2.18) which yields for any g, h ∈ L 2 (R N ) E W (s), g L 2 (R N ) W (t), h L 2 (R N ) = R N R N E [ W (s, x)W (t, y) ] g(x)h(y) dxdy = (s ∧ t) Qg, h L 2 (R N ) , which is Eq. (2.3). Equation (2.18) is the reason why we stated in Sect. 2.1 that W was a white noise in space and time. Examples As mentioned we now present two important cases where the conditions (C1') and (C2') are satisfied. For convenience, in both cases we in fact show that (C1') is satisfied for some ρ w ∈ L 1 (R N ) that is also bounded. Example 1: |w| defines a compact integral operator. Suppose that -given ε > 0, there exists δ > 0 and R > 0 such that for all θ ∈ R N with |θ | < δ (i) for almost all x ∈ R N , R N \B(0,R) |w(x, y)|dy < ε, R N |w(x, y + θ) − w(x, y)|dy < ε,(ii) for almost all y ∈ R N , R N \B(0,R) |w(x, y)|dx < ε, R N |w(x + θ, y) − w(x, y)|dx < ε, where B(0, R) denotes the ball of radius R in R N centered at the origin; -There exists a bounded subset Ω ⊂ R N of positive measure such that inf y∈Ω Ω |w(x, y)|dx > 0, or inf x∈Ω Ω |w(x, y)|dy > 0; w satisfies (C2') and moreover ∀y ∈ R N (x → w(x, y)) ∈ L 1 (R N ), and sup y∈R N w(·, y) L 1 (R N ) < ∞. We claim that these assumptions are sufficient for (C1') so that we can apply Theorem 2.2 in this case. Indeed, let X be the Banach space of functions in L 1 (R N ) ∩ L ∞ (R N ) equipped with the norm · X = max{ · L 1 (R N ) , · L ∞ (R N ) }. Thanks to the last point above, we can well-define the map J : X → X by J h(y) = R N |w(x, y)|h(x)dx, h ∈ X. Moreover, it follows from (Eveson, 1995, Corollary 5.1) that the first condition we have here imposed on w is in fact necessary and sufficient for both the operators J : L 1 (R N ) → L 1 (R N ) and J : L ∞ (R N ) → L ∞ (R N ) to be compact. We therefore clearly also have that the condition is necessary and sufficient for the operator J : X → X to be compact. 
Note now that the space K of positive functions in X is a cone in X such that J (K) ⊂ K, and that the cone is reproducing (i.e. X = { f − g : f, g ∈ K}). If we can show that r (J ) is strictly positive, we can thus finally apply the Krein-Rutman Theorem [see for example (Du, 2006, Theorem 1.1)] to see that r (J ) is an eigenvalue with corresponding non-zero eigenvector ρ ∈ K. To show that r (J ) > 0, suppose first of all that there exists a bounded Ω ⊂ R N of positive measure such that inf y∈Ω Ω |w(x, y)|dx > 0. Define h = 1 on Ω, 0 elsewhere, so that h X = max{1, |Ω|}. Then, trivially, J h X ≥ sup y∈R N Ω |w(x, y)|dx ≥ inf y∈Ω Ω |w(x, y)|dx=:m > 0, by assumption. Replacing h by h = h/ max{1, |Ω|} yields h X = 1 and J h X ≥ m/ max{1, |Ω|}. Thus J ≥ m/ max{1, |Ω|}. Similarly J 2 h X ≥ sup y∈R N R N |w(x 1 , y)| ⎛ ⎝ Ω |w(x 2 , x 1 )|dx 2 ⎞ ⎠ dx 1 ≥ R N |w(x 1 , y)| ⎛ ⎝ Ω |w(x 2 , x 1 )|dx 2 ⎞ ⎠ dx 1 , ∀y ∈ R N ≥ inf x 1 ∈Ω ⎛ ⎝ Ω |w(x 2 , x 1 )|dx 2 ⎞ ⎠ Ω |w(x 1 , y)|dx 1 , ∀y ∈ R N . Therefore J 2 h X ≥ m 2 , so that J 2 ≥ m 2 / max{1, |Ω|}. In fact we have J k ≥ m k / max{1, |Ω|} for all k ≥ 1, so that, by the spectral radius formula, r (J ) ≥ m > 0. The case where inf x∈Ω Ω |w(x, y)|dy > 0 holds instead is proved similarly, by instead taking h = 1/|Ω| on Ω (0 elsewhere) and working with the L 1 (R N ) norm of J h in place of the L ∞ (R N ) norm. We have thus found a non-negative, non-zero function ρ = ρ w ∈ L 1 (R N ) ∩ L ∞ (R N ) such that R N |w(x, y)|ρ w (x)dx = r (J )ρ w (y), ∀y ∈ R N , so that (C1') is satisfied. Example 2: Homogeneous case. Suppose that -w is homogeneous i.e w(x, y) = w(x − y) for all x, y ∈ R N ; -w ∈ L 1 (R N ) and is continuous; -R N |x| 2N |w(x)|dx < ∞. These conditions are satisfied for many typical choices of the neural field kernel in the literature [e.g. the "Mexican hat" kernel Bressloff et al. 2001;Faye et al. 2011;Owen et al. 2007; Veltz and Faugeras 2010 and (2.13) above]. 
However, it is clear that we are not in the case of the previous example, since for any R > 0 sup x∈R N R N \B(0,R) |w(x − y)|dy = w L 1 (R N ) , which is not uniformly small. We thus again show that (C1') is satisfied in this case so that [since (C2') is trivially satisfied] Theorem 2.2 yields the existence of a unique L 2 (R N , ρ w )-valued solution to (2.5). In order to do this, we use the Fourier transform. Let v = |w|, so that v is continuous and in L 1 (R N ). Let Fv be the Fourier transform of v i.e. Fv(ξ ):= R N e −2πi x.ξ v(x)dx, ξ ∈ R N . Therefore Fv is continuous and bounded by sup ξ ∈R N |Fv(ξ )| ≤ v L 1 (R N ) = w L 1 (R N ) . Now let w = w L 1 (R N ) + 1, and z(x):=e −|x| 2 /2 , x ∈ R N , so that z is in the Schwartz space of smooth rapidly decreasing functions, which we denote by S(R N ). Then defineρ (ξ ):= Fz(ξ ) w − Fv(ξ ) . (2.19) We note that the denominator is continuous and strictly bounded away from 0 (indeed by construction w − Fv(ξ ) ≥ 1 for all ξ ∈ R N ). Thusρ is continuous, bounded and in L 1 (R N ) (since Fz ∈ S(R N ) by the standard stability result for the Fourier transform on S(R N )). We now claim that F −1ρ (x) ∈ L 1 (R N ), where the map F −1 is defined by F −1 g(x):= R N e 2πi x.ξ g(ξ )dξ, g ∈ L 1 (R N ). Indeed, we note that for any k ∈ {1, . . . , N }, ∂ 2N k Fv(ξ ) = (−2πi) 2N R N e −2πi x.ξ x 2N k v(x)dx, which is well-defined and bounded thanks to our assumption on the integrability of x → |x| 2N |w(x)|. Since Fz is rapidly decreasing, we can thus see that the functionρ(ξ ) is 2N times differentiable with respect to every component and ∂ 2N kρ (ξ ) is absolutely integrable for every k ∈ {1, . . . N }. Finally, since F −1 (∂ 2N kρ )(x) = (2πi) 2N x 2N k F −1ρ (x) for each k ∈ {1, . . . , N }, we have that |F −1ρ (x)| ≤ N k=1 |F −1 (∂ 2N kρ )(x)| (2π) 2N N k=1 x 2N k ≤ N N −1 N k=1 ∂ 2N kρ L 1 (R N ) (2π) 2N |x| 2N , for all x ∈ R N . Thus there exists a constant K such that |F −1ρ (x)| ≤ K /|x| 2N . 
Moreover, since we also have the trivial bound |F −1ρ (x)| ≤ ρ L 1 (R N ) , for all x ∈ R N , it follows that |F −1ρ (x)| ≤ K /(1 + |x| 2N ), by adjusting the constant K . Since this is integrable over R N , the claim is proved. Now, by the classical Fourier Inversion Theorem (which is applicable sinceρ and F −1ρ are both in L 1 (R N )), we thus have that F F −1ρ (ξ ) =ρ(ξ ), for all ξ ∈ R N . By setting ρ(x) = F −1ρ (x), we see that w Fρ(ξ ) − Fρ(ξ )Fv(ξ ):=Fz(ξ ). We may finally again apply the inverse Fourier transform F −1 to both sides, so that by the Inversion Theorem again (along with the standard convolution formula) it holds that w ρ(y) − R N v(x − y)ρ(x)dx = e − |y| 2 2 , y ∈ R N . It then follows that R N |w(x − y)|ρ(x)dx ≤ w ρ(y), y ∈ R N , as claimed. Moreover, Eq. (2.19) shows thatρ(ξ ) is in Schwartz space, hence so is ρ, implying that it is bounded. Note that Eq. (2.19) provides a way of explicitly computing one possible function ρ w appearing in condition (C1') in the cases where the neural field kernel is homogeneous [for example given by (2.11) and (2.13)]. That particular function can be varied for example by changing the function z and/or the constant w . Stochastic neural fields as Gaussian random fields In this section we take an alternative approach, and try to give sense to a solution to the stochastic neural field Eq. (1.1) as a random field, using Walsh's theory of integration. This approach generally takes as its starting point a deterministic PDE, and then attempts include a term which is random in both space and time. With this in mind, consider first the well studied deterministic neural field equation ∂ t Y (t, x) = −Y (t, x) + R N w(x, y)G(Y (t, y))dy, x ∈ R N , t ≥ 0. 
(3.1) Under some conditions on the neural field kernel w (boundedness, condition (C2') above and L 1 -Lipschitz continuity), this equation has a unique solution (t, x) → Y (t, x) that is bounded and continuous in x and continuously differentiable in t, whenever x → Y (0, x) is bounded and continuous (Potthast 2010). The idea then is to directly add a noise term to this equation, and try and give sense to all the necessary objects in order to be able to define what we mean by a solution. Indeed, consider the following stochastic version of (3.1), ∂ t Y (t, x) = −Y (t, x) + R N w(x, y)G(Y (t, y))dy + σ (Y (t, x))Ẇ (t, x) (3.2) whereẆ is a "space-time white noise". Informally we may think of the objectẆ (t, x) as the random distribution which, when integrated against a test function h ∈ L 2 (R + × R N )Ẇ (h):= ∞ 0 R N h(t, x)Ẇ (t, x)dtdx, h ∈ L 2 (R + × R N ), yields a zero-mean Gaussian random field (Ẇ (h)) h∈L 2 (R + ×R N ) with covariance E Ẇ (g)Ẇ (h) = ∞ 0 R N g(t, x)h(t, x)dxdt, g, h ∈ L 2 (R + × R N ). The point is that with this interpretation of space-time white noise, since Eq. (3.2) specifies no regularity in the spatial direction (the map x → Y (t, x) is simply assumed to be Lebesgue measurable so that the integral makes sense), it is clear that any solution will be distribution-valued in the spatial direction, which is rather unsatisfactory. Indeed, consider the extremely simple linear case when G ≡ 0 and σ ≡ 1, so that (3.2) reads ∂ t Y (t, x) = −Y (t, x) +Ẇ (t, x). (3.3) Formally, the solution to this equation is given by Y (t, x) = e −t Y (0, x) + t 0 e −(t−s)Ẇ (s, x)ds, t ≥ 0, x ∈ R N , and since the integral is only over time it is clear (at least formally) that x → Y (t, x) is a distribution for all t ≥ 0. 
This differs significantly from the usual SPDE situation, when one would typically have an equation such as (3.3) where a second order differential operator in space is applied to the first term on the right-hand side (leading to the much studied stochastic heat equation). In such a case, the semigroup generated by the second order differential operator can be enough to smooth the space-time white noise in the spatial direction, leading to solutions that are continuous in both space and time [at least when the spatial dimension is 1-see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3)]. Of course one can develop a theory of distribution-valued processes [as is done in Walsh (1986, Chapter 4)] to interpret solutions of (3.2) in the obvious way: one says that the random field (Y (t, x) s, y))dydxds ) t≥0,x∈R N is a (weak) solution to (3.2) if for all φ ∈ C ∞ 0 (R N ) it holds that R N φ(x)Y (t, x)dx = e −t R N φ(x)Y (0, x)dx + t 0 R N e −(t−s) φ(x) R N w(x, y)G(Y (+ t 0 R N e −(t−s) φ(x)σ (Y (s, x))Ẇ (s, x)dxds, for all t ≥ 0. Here all the integrals can be well-defined, which makes sense intuitively if we think ofẆ (t, x) as a distribution. In fact it is more common to write t 0 R N e −(t−s) φ(x)W (dsdx) for the stochastic integral term, once it has been rigorously defined. However, we argue that it is not worth developing this theory here, since distribution valued solutions are of little interest physically. It is for this reason that we instead look for other types of random noise to add to the deterministic Eq. (3.1) which in particular will be correlated in space that will produce solutions that are real-valued random fields, and are at least Hölder continuous in both space and time. 
In the theory of SPDEs, when the spatial dimension is 2 or more, the problem of an equation driven by space-time white noise having no real-valued solution is a well-known and much studied one [again see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3) for a discussion of this]. To get around the problem, a common approach (Dalang and Frangos 1998;Ferrante and Sanz-Solé 2006;Sanz-Solé and Sarrà 2002) is to consider random noises that are smoother than white noise, namely a Gaussian noise that is white in time but has a smooth spatial covariance. Such random noise is known as either spatially colored or spatially homogeneous white-noise. One can then formulate conditions on the covariance function to ensure that real-valued Hölder continuous solutions to the specific SPDE exist. It should also be mentioned, as remarked in Dalang and Frangos (1998), that in trying to model physical situations, there is some evidence that white-noise smoothed in the spatial direction is more natural, since spatial correlations are typically of a much larger order of magnitude than time correlations. In the stochastic neural field case, since we have no second order differential operator, our solution will only ever be as smooth as the noise itself. We therefore look to add a noise term to (3.1) that is at least Hölder continuous in the spatial direction instead of pure white noise, and then proceed to look for solutions to the resulting equation in the sense of Walsh. The section is structured as follows. First we briefly introduce Walsh's theory of stochastic integration, for which the classical reference is Walsh (1986). This theory will be needed to well-define the stochastic integral in our definition of a solution to the neural field equation. 
We then introduce the spatially smoothed space-time white noise that we will consider, before finally applying the theory to analyze solutions of the neural field equation driven by this spatially smoothed noise under certain conditions. Walsh's stochastic integral We will not go into the details of the construction of Walsh's stochastic integral, since a very nice description is given by D. Khoshnevisan in [see also Walsh (1986)]. Instead we present the bare essentials needed in the following sections. The elementary object of study is the centered Gaussian random field 6 W :=(Ẇ (A)) A∈B(R + ×R N ) indexed by A ∈ B(R + × R N ) (where R + :=[0, ∞)) with covariance function E Ẇ (A)Ẇ (B) = |A ∩ B|, A, B, ∈ B(R + × R N ), (3.4) where |A ∩ B| denotes the Lebesgue measure of A ∩ B. We say thatẆ is a white noise on R + × R N . We then define the white noise process W : =(W t (A)) t≥0,A∈B(R N ) by W t (A):=Ẇ ([0, t] × A), t ≥ 0. (3.5) Now define the norm f 2 W :=E ⎡ ⎢ ⎣ T 0 R N | f (t, x)| 2 dtdx ⎤ ⎥ ⎦ , (3.6) for any (random) function f that is knowable 7 at time t given (W s (A)) s≤t,A∈B(R N ) . Then let P W be the set of all such functions f for which f W < ∞. The point is that this space forms the set of integrands that can be integrated against the white noise process according to Walsh's theory. Indeed, we have then following theorem (Walsh 1986, Theorem 2.5). Theorem 3.1 For all f ∈ P W , t ∈ [0, T ] and A ∈ B(R N ), t 0 A f (s, x)W (dsdx) can be well-defined in L 2 (Ω, F, P). Moreover for all t ∈ (0, T ] and A, B ∈ B(R N ), E[ t 0 A f (s, x)W (dsdx)] = 0 and E ⎡ ⎣ t 0 A f (s, x)W (dsdx) t 0 B f (s, x)W (dsdx) ⎤ ⎦ = E ⎡ ⎣ t 0 A∩B f 2 (s, x)dxdt ⎤ ⎦ . 6 Recall that a collection of random variables X = {X (θ )} θ∈ indexed by a set is a Gaussian random field on if (X (θ 1 ), . . . , X (θ k )) is a k-dimensional Gaussian random vector for every θ 1 , . . . , θ k ∈ . It is characterized by its mean and covariance functions. 
7 Precisely we consider functions f such that (t, x, ω) → f (t, x, ω) is measurable with respect to the σalgebra generated by linear combinations of functions of the form X (ω)1 (a,b] (t)1 A (x), where a, b ∈ R + , A ∈ B(R N ) , and X : Ω → R is bounded and measurable with respect to the σ -algebra generated by (W s (A)) s≤a,A∈B(R N ) . The following inequality will also be fundamental: Theorem 3.2 (Burkhölder's inequality) For all p ≥ 2 there exists a constant c p (with c 2 = 1) such that for all f ∈ P W , t ∈ (0, T ] and A ∈ B(R N ), E ⎡ ⎣ t 0 A f (s, x)W (dsdx) p ⎤ ⎦ ≤ c p E ⎡ ⎢ ⎢ ⎣ ⎛ ⎜ ⎝ T 0 R N | f (t, x)| 2 dtdx ⎞ ⎟ ⎠ p 2 ⎤ ⎥ ⎥ ⎦ . Spatially smoothed space-time white noise Let W = (W t (A)) t≥0,A∈B(R N ) be a white-noise process as defined in the previous section. For ϕ ∈ L 2 (R N ), we can well-define the (Gaussian) random field (W ϕ (t, x)) t≥0,x∈R N for any T > 0 by W ϕ (t, x):= t 0 R N ϕ(x − y)W (dsdy). (3.7) To see this one just needs to check that ϕ(x − ·) ∈ P W for every x, where P W is as above. The function ϕ(x − ·) is clearly completely determined by W for each x (since it is non-random) and for every T > 0 ϕ(x − ·) 2 W = E ⎡ ⎢ ⎣ T 0 R N |ϕ(x − z)| 2 dtdz ⎤ ⎥ ⎦ = T ϕ 2 L 2 (R N ) < ∞, so that the integral in (3.7) is indeed well-defined in the sense of the above construction. Moreover, by Theorem 3.1 the random field (W ϕ (t, x)) t≥0,x∈R N has spatial covariance E[W ϕ (t, x)W ϕ (t, y)] = E ⎡ ⎢ ⎣ t 0 R N ϕ(x − z) W (dsdz) t 0 R N ϕ(y − z) W (dsdz) ⎤ ⎥ ⎦ = t 0 R N ϕ(x − z)ϕ(y − z)dzds = tϕ ϕ(x − y), where denotes the convolution operator as usual, and ϕ(x) = ϕ(−x). Thus the random field (W ϕ (t, x)) t≥0,x∈R N is spatially correlated. The regularity in time of this process is the same as that of a Brownian path: Lemma 3.3 For any x ∈ R N , the path t → W ϕ (t, x) has an η-Hölder continuous modification for any η ∈ (0, 1/2). 
Proof For x ∈ R N , s, t ≥ 0 with s ≤ t and any p ≥ 2 we have by Burkhölder's inequality (Theorem 3.2 above) that E W ϕ (t, x) − W ϕ (s, x) p ≤ c p ϕ 2 L 2 (R N ) (t − s) p 2 . The result follows from the standard Kolmogorov continuity theorem [see for example Theorem 4.3 of Dalang et al. (2009, Chapter 1)]. More importantly, if we impose some (very weak) regularity on ϕ then W ϕ inherits some spatial regularity: Lemma 3.4 Suppose that there exists a constant C ϕ such that ϕ − τ z (ϕ) L 2 (R N ) ≤ C ϕ |z| α , ∀z ∈ R N , (3.8) for some α ∈ (0, 1], where τ z indicates the shift by z operator (so that τ z (ϕ)(y):=ϕ(y + z) for all y, z ∈ R N ). Then for all t ≥ 0, the map x → W ϕ (t, x) has an η-Hölder continuous modification, for any η ∈ (0, α). Proof For x, x ∈ R N , t ≥ 0, and any p ≥ 2 we have (again by Burkhölder's inequality) that E W ϕ (t, x) − W ϕ (t, x) p ≤ t p 2 c p ⎛ ⎜ ⎝ R N |ϕ(x − y) − ϕ( x − y)| 2 dy ⎞ ⎟ ⎠ p 2 = t p 2 c p ⎛ ⎜ ⎝ R N |ϕ(y) − ϕ(y + x − x)| 2 dy ⎞ ⎟ ⎠ p 2 ≤ t p 2 c p C p ϕ |x − x| pα . The result follows by Kolmogorov's continuity theorem. Remark 3.5 The condition (3.8) with α = 1 is true if and only if the function ϕ is in the Sobolev space W 1,2 (R N ) (Brezis 2010, Proposition 9.3). When α < 1 the set of functions ϕ ∈ L 2 (R N ) which satisfy (3.8) defines a Banach space denoted by N α,2 (R N ) which is known as the Nikolskii space. This space is closely related to the more familiar fractional Sobolev space W α,2 (R N ) though they are not identical. We refer to Simon (1990) for a detailed study of such spaces and their relationships. An example of when (3.8) holds with α = 1/2 is found by taking ϕ to be an indicator function. It is in this way we see that (3.8) is a rather weak condition. The stochastic neural field equation driven by spatially smoothed space-time white noise We now have everything in place to define and study the solution to the stochastic neural field equation driven by a spatially smoothed space-time white noise. 
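Before turning to the equation itself, the spatially smoothed noise W^ϕ of (3.7) can be simulated on a grid: the white-noise masses over space-time cells are independent Gaussians, and W^ϕ is their spatial convolution with ϕ. The Gaussian ϕ, the grid, the periodic wrap-around, and the sample counts below are illustrative choices; the sketch checks the covariance identity Var[W^ϕ(T, x)] = T(ϕ ⋆ ϕ̃)(0) = T‖ϕ‖²_{L²} derived above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial grid and a smoothing kernel phi in L^2 (a Gaussian bump;
# all sizes and parameters here are illustrative choices).
L, nx = 20.0, 256
dx = L / nx
x = np.arange(nx) * dx - L / 2.0
phi = np.exp(-x**2 / 2.0) / (2.0 * np.pi) ** 0.25

# The white-noise masses W([0,T] x cell_j) are independent N(0, T*dx)
# variables; W_phi(T, x_n) is their spatial convolution with phi.
# (Circular convolution via FFT; ifftshift centers phi at the origin.)
T, nsamples = 1.0, 4000
W_cells = rng.normal(0.0, np.sqrt(T * dx), size=(nsamples, nx))
phi_hat = np.fft.fft(np.fft.ifftshift(phi))
W_phi = np.real(np.fft.ifft(phi_hat * np.fft.fft(W_cells, axis=-1), axis=-1))

# Covariance formula: Var[W_phi(T,x)] = T*(phi*phi~)(0) = T*||phi||_{L^2}^2
var_emp = W_phi.var(axis=0).mean()
var_th = T * np.sum(phi**2) * dx
print(round(var_emp, 3), round(var_th, 3))
```

The empirical and theoretical variances agree up to Monte Carlo and discretization error, and neighboring sites of `W_phi` are visibly correlated over the support of ϕ ⋆ ϕ̃, in contrast to raw space-time white noise.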
Indeed, consider the equation ∂ t Y (t, x) = −Y (t, x) + R N w(x, y)G(Y (t, y))dy + σ (Y (t, x)) ∂ ∂t W ϕ (t, x), (3.9) with initial condition Y (0, x) = Y 0 (x) for x ∈ R N and t ≥ 0, where (W ϕ (t, x)) t≥0,x∈R N is the spatially smoothed space-time white noise defined by (3.7) for some ϕ ∈ L 2 (R N ). As above, we will impose Lipschitz assumptions on σ and G, by supposing that • σ : R → R is globally Lipschitz [exactly as in (2.15)] i.e. there exists a constant C σ such that |σ (a) − σ (b)| ≤ C σ |a − b|, and |σ (a)| ≤ C σ (1 + |a|), ∀a, b ∈ R; • G : R → R is bounded and globally Lipschitz (exactly as above) i.e. such that there exists a constant C G with sup a∈R |G(a)| ≤ C G and |G(a) − G(b)| ≤ C G |a − b|, ∀a, b ∈ R. Although the above equation is not well-defined ( ∂ ∂t W ϕ (t, x) does not exist), we will interpret a solution to (3.9) in the following way. Definition 3.6 By a solution to (3.9) we will mean a real-valued random field (Y (t, x)) t≥0,x∈R N such that Y (t, x) = e −t Y 0 (x) + t 0 e −(t−s) R N w(x, y)G(Y (s, y))dyds + t 0 R N e −(t−s) σ (Y (s, x))ϕ(x − y)W (dsdy), (3.10) almost surely for all t ≥ 0 and x ∈ R N , where the stochastic integral term is understood in the sense described in Sect. 3.1. Once again we are interested in the conditions on the neural field kernel w that allow us to prove the existence of solutions in this new sense. Recall that in Sect. 2 we either required conditions (C1) and (C2) or (C1') and (C2') to be satisfied. The difficulty was to keep everything well-behaved in the Hilbert space L 2 (R N ) (or L 2 (R N , ρ)). However, when looking for solutions in the sense of random fields (Y (t, x)) t≥0,x∈R N such that (3.10) is satisfied, such restrictions are no longer needed, principally because we no longer have to concern ourselves with the behavior in space at infinity. Indeed, in this section we simply work with the condition (C2') i.e. 
that ∀x ∈ R N (y → w(x, y)) ∈ L 1 (R N ), and sup x∈R N w(x, ·) L 1 (R N ) ≤ C w , for some constant C w . Using the standard technique of a Picard iteration scheme [closely following Walsh (1986, Theorem 3.2)] and the simple properties of the Walsh stochastic integral stated in Sect. 3.1, we can prove the following: Theorem 3.7 Suppose that the map x → Y 0 (x) is Borel measurable almost surely, and that sup x∈R N E |Y 0 (x)| 2 < ∞. Suppose moreover that the neural field kernel w satisfies condition (C2'). Then there exists an almost surely unique predictable random field (Y (t, x)) t≥0,x∈R N which is a solution to (3.9) in the sense of Definition 3.6 such that sup t∈[0,T ],x∈R N E |Y (t, x)| 2 < ∞, (3.11) for any T > 0. Proof The proof proceeds in a classical way, but where we are careful to interpret all stochastic integrals as described in Sect. 3.1, and so we provide the details. Uniqueness: Suppose that (Y (t, x)) t≥0,x∈R N and (Z (t, x)) t≥0,x∈R N are both solutions to (3.9) in the sense of Definition 3.6. Let s, y)) − G (Z (s, y))]dyds s, y)) − G (Z (s, y))|dyds D(t, x) = Y (t, x) − Z (t, x) for x ∈ R N and t ≥ 0. Then we have D(t, x) = t 0 e −(t−s) R N w(x, y)[G(Y (+ t 0 R N e −(t−s) [σ (Y (s, x)) − σ (Z (s, x))]ϕ(x − y)W (dsdy). Therefore E |D(t, x)| 2 ≤ 2E ⎡ ⎢ ⎣ ⎛ ⎜ ⎝ t 0 e −(t−s) R N |w(x, y)||G(Y (⎞ ⎟ ⎠ 2 ⎤ ⎥ ⎦ +2E ⎡ ⎢ ⎣ ⎛ ⎜ ⎝ t 0 R N e −(t−s) [σ (Y (s, x)) − σ (Z (s, x))]ϕ(x − y)W (dsdy) ⎞ ⎟ ⎠ 2 ⎤ ⎥ ⎦ ≤ 2t t 0 e −2(t−s) E ⎡ ⎢ ⎣ ⎛ ⎜ ⎝ R N |w(x, y)||G(Y (s, y)) − G(Z (s, y))|dy ⎞ ⎟ ⎠ 2 ⎤ ⎥ ⎦ ds +2 t 0 R N e −2(t−s) E |σ (Y (s, x)) − σ (Z (s, x))| 2 |ϕ(x − y)| 2 dsdy, where we have used Cauchy-Schwarz and Burkhölder's inequality (Theorem 3.2) with p = 2. Thus, using the Lipschitz property of σ and G, E |D(t, x)| 2 ≤ 2tC 2 G t 0 e −2(t−s) E ⎡ ⎢ ⎣ ⎛ ⎜ ⎝ R N |w(x, y)||D(s, y)|dy ⎞ ⎟ ⎠ 2 ⎤ ⎥ ⎦ ds + 2C 2 σ ϕ 2 L 2 (R N ) t 0 e −2(t−s) E |D(s, x)| 2 ds. 
By the Cauchy-Schwarz inequality once again

E[|D(t,x)|²] ≤ 2tC_G² ‖w(x,·)‖_{L¹(ℝ^N)} ∫_0^t e^{−2(t−s)} ∫_{ℝ^N} |w(x,y)| E[|D(s,y)|²] dy ds + 2C_σ² ‖ϕ‖²_{L²(ℝ^N)} ∫_0^t e^{−2(t−s)} E[|D(s,x)|²] ds.

Let H(s) := sup_{x∈ℝ^N} E[|D(s,x)|²], which is finite since we are assuming Y and Z satisfy (3.11). Writing K = 2 max{C_σ², C_G²}, we have

E[|D(t,x)|²] ≤ K (tC_w² + ‖ϕ‖²_{L²(ℝ^N)}) ∫_0^t e^{−2(t−s)} H(s) ds  ⇒  H(t) ≤ K (tC_w² + ‖ϕ‖²_{L²(ℝ^N)}) ∫_0^t H(s) ds.

An application of Gronwall's lemma then yields sup_{s≤t} H(s) = 0 for all t ≥ 0. Hence Y(t,x) = Z(t,x) almost surely for all t ≥ 0, x ∈ ℝ^N.

Existence: Let Y^0(t,x) = Y_0(x). Then define iteratively for n ∈ ℕ_0, t ≥ 0, x ∈ ℝ^N,

Y^{n+1}(t,x) := e^{−t} Y_0(x) + ∫_0^t e^{−(t−s)} ∫_{ℝ^N} w(x,y) G(Y^n(s,y)) dy ds + ∫_0^t ∫_{ℝ^N} e^{−(t−s)} σ(Y^n(s,x)) ϕ(x−y) W(ds dy).   (3.12)

We first check that the stochastic integral is well-defined, under the assumption that

sup_{t∈[0,T], x∈ℝ^N} E(|Y^n(t,x)|²) < ∞,   (3.13)

for any T > 0, which we know is true for n = 0 by assumption, and which we show by induction is also true for each integer n ≥ 1 below. To this end, for any T > 0,

E[ ∫_0^T ∫_{ℝ^N} e^{−2(t−s)} σ²(Y^n(s,x)) ϕ²(x−y) ds dy ] ≤ 2C_σ² ‖ϕ‖²_{L²(ℝ^N)} ∫_0^T (1 + E[|Y^n(s,x)|²]) ds ≤ 2C_σ² ‖ϕ‖²_{L²(ℝ^N)} T (1 + sup_{t∈[0,T], x∈ℝ^N} E[|Y^n(t,x)|²]) < ∞.

This shows that the integrand in the stochastic integral is in the space P_W (for all T > 0), which in turn implies that the stochastic integral in the sense of Walsh is indeed well-defined (by Theorem 3.1).

Now define D^n(t,x) := Y^{n+1}(t,x) − Y^n(t,x) for n ∈ ℕ_0, t ≥ 0 and x ∈ ℝ^N. Then exactly as in the uniqueness calculation we have

E[|D^n(t,x)|²] ≤ 2tC_G² C_w ∫_0^t e^{−2(t−s)} ∫_{ℝ^N} |w(x,y)| E[|D^{n−1}(s,y)|²] dy ds + 2C_σ² ‖ϕ‖²_{L²(ℝ^N)} ∫_0^t E[|D^{n−1}(s,x)|²] e^{−2(t−s)} ds.

This implies that, by setting H^n(s) = sup_{x∈ℝ^N} E[|D^n(s,x)|²],

H^n(t) ≤ K^n (tC_w² + ‖ϕ‖²_{L²(ℝ^N)})^n ∫_0^t ∫_0^{t_1} ⋯ ∫_0^{t_{n−1}} H^0(t_n) dt_n ⋯ dt_1,   (3.14)

for all n ∈ ℕ_0 and t ≥ 0. Now, similarly, we can find a constant C_t such that

E[|D^0(s,x)|²] ≤ C_t (1 + sup_{x∈ℝ^N} E[|Y_0(x)|²]),

for any x ∈ ℝ^N and s ∈ [0,t], so that for s ∈ [0,t],

H^0(s) = sup_{x∈ℝ^N} E[|D^0(s,x)|²] ≤ C_t (1 + sup_{x∈ℝ^N} E[|Y_0(x)|²]).

Using this in (3.14) we see that

H^n(t) ≤ C_t K^n (tC_w² + ‖ϕ‖²_{L²(ℝ^N)})^n (1 + sup_{x∈ℝ^N} E[|Y_0(x)|²]) t^n / n!,

for all t ≥ 0. This is sufficient to see that (3.13) holds uniformly in n. By completeness, for each t ≥ 0 and x ∈ ℝ^N there exists Y(t,x) ∈ L²(Ω, F, P) such that Y(t,x) is the limit in L²(Ω, F, P) of the sequence of square-integrable random variables (Y^n(t,x))_{n≥1}. Moreover, the convergence is uniform on [0,T] × ℝ^N, i.e.

sup_{t∈[0,T], x∈ℝ^N} E[|Y^n(t,x) − Y(t,x)|²] → 0.

From this we can see that (3.11) is satisfied for the random field (Y(t,x))_{t≥0, x∈ℝ^N}. It remains to show that (Y(t,x))_{t≥0, x∈ℝ^N} satisfies (3.10) almost surely. By the above uniform convergence, we have that

E[| ∫_0^t ∫_{ℝ^N} e^{−(t−s)} [σ(Y^n(s,x)) − σ(Y(s,x))] ϕ(x−y) W(ds dy) |²] → 0,

and

E[| ∫_0^t e^{−(t−s)} ∫_{ℝ^N} w(x,y) [G(Y^n(s,y)) − G(Y(s,y))] dy ds |²] → 0,

uniformly for all t ≥ 0 and x ∈ ℝ^N. Thus taking the limit as n → ∞ in (3.12) [in the L²(Ω, F, P) sense] proves that (Y(t,x))_{t≥0, x∈ℝ^N} does indeed satisfy (3.10) almost surely.

In a very similar way, one can also prove that the solution remains L^p-bounded whenever the initial condition is L^p-bounded for any p > 2. Moreover, this also allows us to conclude that the solution has time continuous paths for all x ∈ ℝ^N.

Theorem 3.8 Suppose that we are in the situation of Theorem 3.7, but in addition we have that sup_{x∈ℝ^N} E[|Y_0(x)|^p] < ∞ for some p > 2. Then the solution (Y(t,x))_{t≥0, x∈ℝ^N} to (3.9) in the sense of Definition 3.6 is L^p-bounded on [0,T] × ℝ^N for any T, i.e.

sup_{t∈[0,T], x∈ℝ^N} E[|Y(t,x)|^p] < ∞.

If the initial condition has finite p-moments for all p > 2, then t ↦ Y(t,x) has an η-Hölder continuous version, for any η ∈ (0, 1/2) and any x ∈ ℝ^N.
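The exponent η ∈ (0, 1/2) in Theorem 3.8 comes from Kolmogorov's continuity theorem; the version invoked here (a standard fact, stated for reference) reads:

```latex
% Kolmogorov's continuity criterion, in the form used for t \mapsto Y(t,x):
\mathbb{E}\,|Y(t,x) - Y(s,x)|^p \le C\,|t-s|^{1+\gamma}
\quad (p, C, \gamma > 0)
\;\Longrightarrow\;
t \mapsto Y(t,x) \text{ has an } \eta\text{-H\"older modification for every } \eta \in (0, \gamma/p).
```

With the moment bound E[|Y(t,x) − Y(s,x)|^p] ≤ C (t − s)^{p/2} established in the proof, γ = p/2 − 1 and hence η < 1/2 − 1/p; since all p-moments of Y_0 are assumed finite, p can be taken arbitrarily large, giving every η ∈ (0, 1/2).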
Proof The proof of the first part of this result uses similar techniques as in the proof of Theorem 3.7 in order to bound E[|Y(t,x)|^p] uniformly in t ∈ [0,T] and x ∈ ℝ^N. In particular, we use the form of Y(t,x) given by (3.10), Burkhölder's inequality (see Theorem 3.2), Hölder's inequality and Gronwall's lemma, as well as the conditions imposed on w, σ, G and ϕ. For the time continuity, we again use similar techniques to achieve the bound

E[|Y(t,x) − Y(s,x)|^p] ≤ C_T^{(p)} (1 + sup_{r∈[0,T], y∈ℝ^N} E[|Y(r,y)|^p]) (t − s)^{p/2},

for all s, t ∈ [0,T] with s ≤ t and x ∈ ℝ^N, for some constant C_T^{(p)}. The results then follow from Kolmogorov's continuity theorem once again.

Spatial regularity of solution

As mentioned in the introduction to this section, the spatial regularity of the solution (Y(t,x))_{t≥0, x∈ℝ^N} to (3.9) is of interest. In particular we would like to find conditions under which it is at least continuous in space. As we saw in Lemma 3.4, under the weak condition on ϕ given by (3.8), the spatially smoothed space-time white noise is continuous in space. We here show that, under this assumption together with a Hölder continuity type condition on the neural field kernel w, the solution (Y(t,x))_{t≥0, x∈ℝ^N} inherits the spatial regularity of the driving noise. It is worth mentioning that the neural field equation fits into the class of degenerate diffusion SPDEs (indeed there is no diffusion term), and that regularity theory for such equations is an area that is currently very active [see for example Hofmanová (2013) and references therein]. However, in our case we are not concerned with any kind of sharp regularity results [in contrast to those found in Dalang and Sanz-Solé (2009) for the stochastic wave equation], and simply want to assert that for most typical choices of neural field kernels w made by practitioners, the random field solution to the neural field equation is at least regular in space.
The results of this section are simple applications of standard techniques to prove continuity in space of random field solutions to SPDEs, as is done for example in Walsh (1986, Corollary 3.4). The condition we introduce on w is the following:

∃ K_w ≥ 0 s.t. ‖w(x,·) − w(x̃,·)‖_{L¹(ℝ^N)} ≤ K_w |x − x̃|^α, ∀ x, x̃ ∈ ℝ^N,   (C3')

for some α ∈ (0, 1].

Remark 3.9 This condition is certainly satisfied for all typical choices of neural field kernel w. In particular, any smooth rapidly decaying function will satisfy (C3').

Theorem 3.10 (Regularity) Suppose that we are in the situation of Theorem 3.7 and sup_{x∈ℝ^N} E[|Y_0(x)|^p] < ∞ for all p ≥ 2. Suppose moreover that there exists α ∈ (0, 1] such that:
– w satisfies (C3');
– ϕ satisfies (3.8), i.e. ‖ϕ − τ_z(ϕ)‖_{L²(ℝ^N)} ≤ C_ϕ |z|^α for all z ∈ ℝ^N, where τ_z indicates the shift by z ∈ ℝ^N operator;
– x ↦ Y_0(x) is α-Hölder continuous.

Then (Y(t,x))_{t≥0, x∈ℝ^N} has a modification such that (t,x) ↦ Y(t,x) is (η_1, η_2)-Hölder continuous, for any η_1 ∈ (0, 1/2) and η_2 ∈ (0, α).

Proof Let (Y(t,x))_{t≥0, x∈ℝ^N} be the mild solution to (3.9), which exists and is unique by Theorem 3.7. The stated regularity in time is given in Theorem 3.8. It thus remains to prove the regularity in space. Let t ≥ 0 and x ∈ ℝ^N. Then by (3.10)

Y(t,x) = e^{−t} Y_0(x) + I_1(t,x) + I_2(t,x),   (3.15)

for all t ≥ 0 and x ∈ ℝ^N, where

I_1(t,x) = ∫_0^t e^{−(t−s)} ∫_{ℝ^N} w(x,y) G(Y(s,y)) dy ds  and  I_2(t,x) = ∫_0^t ∫_{ℝ^N} e^{−(t−s)} σ(Y(s,x)) ϕ(x−y) W(ds dy).

Now let p ≥ 2. The aim is to estimate E[|Y(t,x) − Y(t,x̃)|^p] for x, x̃ ∈ ℝ^N and then to use Kolmogorov's theorem to get the stated spatial regularity. To this end, we have that

E[|I_1(t,x) − I_1(t,x̃)|^p] ≤ E[( ∫_0^t ∫_{ℝ^N} |w(x,y) − w(x̃,y)| |G(Y(s,y))| dy ds )^p] ≤ C_G^p t^p ‖w(x,·) − w(x̃,·)‖^p_{L¹(ℝ^N)} ≤ C_G^p t^p K_w^p |x − x̃|^{pα},   (3.16)

where we have used (C3').
Moreover, by Hölder's and Burkhölder's inequalities once again, we see that

E[|I_2(t,x) − I_2(t,x̃)|^p] ≤ 2^{p−1} E[| ∫_0^t ∫_{ℝ^N} e^{−(t−s)} [σ(Y(s,x)) − σ(Y(s,x̃))] ϕ(x−y) W(dy ds) |^p]
  + 2^{p−1} E[| ∫_0^t ∫_{ℝ^N} e^{−(t−s)} σ(Y(s,x̃)) [ϕ(x−y) − ϕ(x̃−y)] W(dy ds) |^p]
≤ 2^{p−1} c_p E[( ∫_0^t ∫_{ℝ^N} |σ(Y(s,x)) − σ(Y(s,x̃))|² |ϕ(x−y)|² dy ds )^{p/2}]
  + 2^{p−1} c_p E[( ∫_0^t ∫_{ℝ^N} |σ(Y(s,x̃))|² |ϕ(x−y) − ϕ(x̃−y)|² dy ds )^{p/2}],

for all x, x̃ ∈ ℝ^N and p ≥ 2. Thus

E[|I_2(t,x) − I_2(t,x̃)|^p] ≤ 2^{p−1} c_p C_σ^p t^{p/2−1} ‖ϕ‖^p_{L²(ℝ^N)} ∫_0^t E[|Y(s,x) − Y(s,x̃)|^p] ds
  + 2^{2(p−1)} c_p C_σ^p t^{p/2} ‖ϕ − τ_{x−x̃}(ϕ)‖^p_{L²(ℝ^N)} (1 + sup_{s∈[0,T], y∈ℝ^N} E[|Y(s,y)|^p]),   (3.17)

where we note that the right-hand side is finite thanks to Theorem 3.8. Returning to (3.15) and using estimates (3.16) and (3.17), we see that there exists a constant C_T^{(p)} (depending on T, p, C_G, K_w, C_σ, C_ϕ, ‖ϕ‖_{L²(ℝ^N)} as well as sup_{s∈[0,T], y∈ℝ^N} E[|Y(s,y)|^p]) such that

E[|Y(t,x) − Y(t,x̃)|^p] ≤ C_T^{(p)} [ E[|Y_0(x) − Y_0(x̃)|^p] + |x − x̃|^{pα} + ∫_0^t E[|Y(s,x) − Y(s,x̃)|^p] ds ]
  ≤ C_T^{(p)} [ |x − x̃|^{pα} + ∫_0^t E[|Y(s,x) − Y(s,x̃)|^p] ds ],

where the last line follows from our assumptions on Y_0 and by adjusting the constant C_T^{(p)}. This bound holds for all t ≥ 0, x, x̃ ∈ ℝ^N and p ≥ 2. The proof is then completed using Gronwall's inequality, and Kolmogorov's continuity theorem once again.

Comparison of the two approaches

The purpose of this section is to compare the two different approaches taken in Sects. 2 and 3 above to give sense to the stochastic neural field equation. Such a comparison of the two approaches in a general setting has existed for a long time in the probability literature [see for example Jetschke (1982, 1986), or more recently Dalang and Quer-Sardanyons (2011)], but we provide a proof of the main result (Theorem 4.1) in the Appendix for completeness.
Our starting point is the random field solution, given by Theorem 3.7. Suppose that the conditions of Theorem 3.7 are satisfied [i.e. ϕ ∈ L²(ℝ^N), σ : ℝ → ℝ Lipschitz, G : ℝ → ℝ Lipschitz and bounded, w satisfies (C2'), and the given assumptions on the initial condition]. Then, by that result, there exists a unique random field (Y(t,x))_{t≥0, x∈ℝ^N} such that

Y(t,x) = e^{−t} Y_0(x) + ∫_0^t e^{−(t−s)} ∫_{ℝ^N} w(x,y) G(Y(s,y)) dy ds + ∫_0^t ∫_{ℝ^N} e^{−(t−s)} σ(Y(s,x)) ϕ(x−y) W(ds dy),   (4.1)

where

sup_{t∈[0,T], x∈ℝ^N} E[|Y(t,x)|²] < ∞,   (4.2)

for all T > 0, and we say that (Y(t,x))_{t≥0, x∈ℝ^N} is the random field solution to the stochastic neural field equation. It turns out that this random field solution is equivalent to the Hilbert space valued solution constructed in Sect. 2, in the following sense.

Theorem 4.1 Suppose the conditions of Theorem 3.7 and Theorem 3.8 are satisfied. Moreover suppose that condition (C1') is satisfied for some ρ_w ∈ L¹(ℝ^N). Then the random field (Y(t,x))_{t≥0} satisfying (4.1) and (4.2) is such that (Y(t))_{t≥0} := (Y(t,·))_{t≥0} is the unique L²(ℝ^N, ρ_w)-valued solution to the stochastic evolution equation

dY(t) = [−Y(t) + F(Y(t))] dt + B(Y(t)) dW(t),  t ∈ [0,T],   (4.3)

constructed in Theorem 2.2, where B : H → L_0(U, H) (with U = L²(ℝ^N) and H = L²(ℝ^N, ρ_w)) is given by (2.14), i.e.

B(h)(u)(x) := σ(h(x)) ∫_{ℝ^N} ϕ(x−y) u(y) dy,  h ∈ H, u ∈ U.

Example 4.2 We finish this section with an example illustrating the above result, and the applicability of the two approaches. Indeed, we make the same choices for the neural field kernel w and noise term as in Bressloff and Webber (2012), by taking

w(x,y) = (1/2β) e^{−|x−y|/β},  x, y ∈ ℝ^N,  and  σ(a) = λa,  a ∈ ℝ,

where β and λ are constants. As noted in Sect. 2.6, β determines the range of the local synaptic connections.
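The two kernel conditions used for Example 4.2 can also be checked numerically in dimension N = 1. The sketch below is illustrative only and not from the paper: the value β = 0.5, the quadrature grid, the test points, and the Lipschitz constant K_w = 1/β for (C3') are our own assumptions.

```python
import numpy as np

# Illustrative numerical check (N = 1, arbitrary beta = 0.5) that the
# exponential kernel of Example 4.2 satisfies
#   (C2'): ||w(x, .)||_{L1} is the same constant (= 1) for every x, and
#   (C3'): ||w(x, .) - w(xt, .)||_{L1} <= |x - xt| / beta   (alpha = 1, K_w = 1/beta).
beta = 0.5
y, dy = np.linspace(-40.0, 40.0, 200001, retstep=True)

def w(x):
    """Row y -> w(x, y) = exp(-|x - y| / beta) / (2 beta)."""
    return np.exp(-np.abs(x - y) / beta) / (2.0 * beta)

# (C2'): the L1 norm in y is independent of x and equals 1.
norms = [float(np.sum(w(x)) * dy) for x in (-3.0, 0.0, 1.7)]
assert all(abs(n - 1.0) < 1e-4 for n in norms), norms

# (C3'): shift-Lipschitz bound in L1 with constant K_w = 1 / beta.
pairs = [(0.0, 0.3), (-1.0, 0.5)]
l1_dists = [float(np.sum(np.abs(w(x) - w(xt))) * dy) for x, xt in pairs]
assert all(d <= abs(x - xt) / beta + 1e-4
           for d, (x, xt) in zip(l1_dists, pairs))
print("kernel checks passed")
```

For this kernel the shift distance can in fact be computed in closed form, ‖w(x,·) − w(x̃,·)‖_{L¹} = 2(1 − e^{−|x−x̃|/(2β)}), which is ≤ |x − x̃|/β, so the numerical check simply confirms the analytic bound.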
Then, first of all, it is clear that condition (C2') is satisfied (indeed ‖w(x,·)‖_{L¹(ℝ^N)} is constant) and σ is Lipschitz and of linear growth, so that (assuming the initial condition has finite moments) Theorems 3.7 and 3.8 can be applied to yield a unique random field solution (Y(t,x))_{t≥0} to the stochastic neural field equation. Moreover, by Example 2 in Sect. 2.8, we also see that (C1') is satisfied. Thus Theorem 2.2 can also be applied to construct a Hilbert space valued solution to the stochastic neural field equation (Eq. (4.3)). By Theorem 4.1, the solutions are equivalent.

Conclusion

We have here explored two rigorous frameworks in which stochastic neural field equations can be studied in a mathematically precise fashion. Both these frameworks are useful in the mathematical neuroscience literature: the approach of using the theory of Hilbert space valued processes is adopted in Kuehn and Riedler (2014), while the random field framework is more natural for Bressloff, Ermentrout and their associates in Bressloff and Webber (2012), Bressloff and Wilkerson (2012), and Kilpatrick and Ermentrout (2013). It turns out that the constructions are equivalent (see Sect. 4) when all the conditions are satisfied (which we emphasize is certainly the case for all usual modeling choices of the neural field kernel w and noise terms made in the literature; see Sects. 2.6, 2.7 and Example 4.2). However, there are still some advantages and disadvantages to taking one approach over the other, depending on the purpose. For example, an advantage of the construction of a solution as a stochastic process taking values in a Hilbert space, carried out in Sect. 2, is that it allows one to consider more general diffusion coefficients. Moreover, it is easy to apply results from a large body of literature taking this approach (for example LDP results; see Remark 2.3).
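With the concrete choices of Example 4.2 the random field solution can also be explored numerically. The following is an illustrative Euler-Maruyama sketch of (3.9) on a periodic one-dimensional grid; it is not from the paper, and the choices G = tanh, a Gaussian smoothing kernel ϕ, and all numerical parameters (domain, grid, time step, β, λ) are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Euler-Maruyama sketch (not from the paper) for (3.9) with the
# Example 4.2 choices w(x, y) = exp(-|x-y|/beta)/(2 beta) and sigma(a) = lam * a.
# Assumed for the demo: G = tanh, Gaussian smoothing kernel phi, periodic 1-D grid.
L, n = 20.0, 256                      # domain length, number of grid points
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
beta, lam = 0.5, 0.05
dt, n_steps = 0.01, 500

def pdist(u):
    """Periodic distance on the circle of circumference L."""
    return np.minimum(np.abs(u), L - np.abs(u))

W_mat = np.exp(-pdist(x[:, None] - x[None, :]) / beta) / (2 * beta)
Phi_mat = np.exp(-pdist(x[:, None] - x[None, :]) ** 2)  # phi(x - y) on the grid

Y = np.exp(-x ** 2)                   # deterministic initial bump Y_0
for _ in range(n_steps):
    drift = -Y + (W_mat @ np.tanh(Y)) * dx
    dW = rng.normal(0.0, np.sqrt(dt / dx), size=n)  # cell-averaged white noise
    noise = (Phi_mat @ dW) * dx       # smoothed increment int phi(x - y) W(ds dy)
    Y = Y + drift * dt + lam * Y * noise

assert Y.shape == (n,) and bool(np.all(np.isfinite(Y)))
print(float(np.max(np.abs(Y))))
```

Since ‖w(x,·)‖_{L¹} = 1 and |tanh| ≤ 1, the drift relaxes Y towards a bounded set, and with λ small the simulated paths stay bounded, consistent with the moment bound (3.11).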
A disadvantage is that we have to be careful to impose conditions which control the behavior of the solution in space at infinity and guarantee the integrability of the solution. In particular we require that the connectivity function w either satisfies the strong conditions (C1) and (C2), or the weaker but harder to check conditions (C1') and (C2'). On the other hand, the advantage of the random field approach developed in Sect. 3 is that one no longer needs to control what happens at infinity. We therefore require fewer conditions on the connectivity function w to ensure the existence of a solution [(C2') is sufficient; see Theorem 3.7]. Moreover, with this approach, it is easier to write down conditions that guarantee the existence of a solution that is continuous in both space and time (as opposed to the Hilbert space approach, where spatial regularity is somewhat hidden). However, in order to avoid non-physical distribution valued solutions, we had to impose a priori some extra spatial regularity on the noise (see Sect. 3.2).

The most important point is to relate the stochastic integrals that appear in the two different formulations of a solution. To this end, define

I(t,x) := ∫_0^t ∫_{ℝ^N} e^{−(t−s)} σ(Y(s,x)) ϕ(x−y) W(ds dy),  x ∈ ℝ^N, t ≥ 0,

to be the Walsh integral that appears in the random field solution (4.1). Our aim is to show that

I(t,·) = ∫_0^t e^{−(t−s)} B(Y(s)) dW(s),   (6.1)

where the integral on the right-hand side is the H-valued stochastic integral which appears in the solution to (4.3).

Step 1: Adapting Proposition 2.6 of Dalang and Quer-Sardanyons (2011) very slightly, we have that the Walsh integral I(t,x) can be written as the integral with respect to the cylindrical Wiener process W = {W_t(u) : t ≥ 0, u ∈ U} with covariance Id_U. Precisely, we have

I(t,x) = ∫_0^t g_s^{t,x} dW_s,  for all t ≥ 0, x ∈ ℝ^N,  where  g_s^{t,x}(y) := e^{−(t−s)} σ(Y(s,x)) ϕ(x−y),  y ∈ ℝ^N,

which is in L²(Ω × [0,T]; U) for any T > 0 thanks to (4.2). By definition, the integral with respect to the cylindrical Wiener process W is given by

∫_0^t g_s^{t,x} dW_s = Σ_{k=1}^∞ ∫_0^t ⟨g_s^{t,x}, e_k⟩_U dβ_k(s),

where {e_k}_{k=1}^∞ is a complete orthonormal basis for U, and (β_k(t))_{t≥0} := (W_t(e_k))_{t≥0} are independent real-valued Brownian motions. This series is convergent in L²(Ω).

Step 2: Fix arbitrary T > 0. As in Section 3.5 of Dalang and Quer-Sardanyons (2011), we can consider the process {W(t), t ∈ [0,T]} defined by

W(t) = Σ_{k=1}^∞ β_k(t) J(e_k),   (6.2)

where J : U → U is a Hilbert-Schmidt operator. W(t) takes its values in U, where it is a Q(= JJ*)-Wiener process with Tr(Q) < ∞ [Proposition 3.6 of Dalang and Quer-Sardanyons (2011)]. We define J(u) := Σ_k √λ_k ⟨u, e_k⟩_U e_k for a sequence of positive real numbers (λ_k)_{k≥1} such that Σ_k λ_k < ∞. Now define Φ_s^{t,x}(u) = ⟨g_s^{t,x}, u⟩_U, which takes values in ℝ. Proposition 3.10 of Dalang and Quer-Sardanyons (2011) tells us that the process {Φ_s^{t,x}, s ∈ [0,T]} defines a predictable process with values in L_2(U, ℝ) and

∫_0^t Φ_s^{t,x} dW(s) = I(t,x),   (6.3)

where the integral on the left is defined according to Sect. 2.2, with values in ℝ.

Step 3: We now note that the original Walsh integral I(·,·) ∈ L²(Ω × [0,T]; H). Indeed, because of Burkhölder's inequality with p = 2,

‖I‖²_{L²(Ω×[0,T];H)} = E[ ∫_0^T ‖I(t,·)‖²_H dt ] = ∫_0^T ∫_{ℝ^N} E[|I(t,x)|²] ρ_w(x) dx dt ≤ ‖ϕ‖²_{L²(ℝ^N)} ∫_0^T ∫_0^t ∫_{ℝ^N} e^{−2(t−s)} E[σ²(Y(s,x))] ds ρ_w(x) dx dt,

which is finite, again thanks to (4.2). Hence I(t,·) takes values in H, and we can therefore write

I(t,·) = Σ_{j=1}^∞ ⟨I(t,·), f_j⟩_H f_j = Σ_{j=1}^∞ ⟨ ∫_0^t Φ_s^{t,·} dW(s), f_j ⟩_H f_j,

by (6.3), where {f_j}_{j=1}^∞ is a complete orthonormal basis in H. Moreover, by using (6.2),

I(t,·) = Σ_{j=1}^∞ ( ∫_{ℝ^N} ( ∫_0^t Φ_s^{t,x} dW(s) ) f_j(x) ρ_w(x) dx ) f_j = Σ_{j=1}^∞ ( ∫_{ℝ^N} ( Σ_{k=1}^∞ ∫_0^t Φ_s^{t,x}(√λ_k e_k) dβ_k(s) ) f_j(x) ρ_w(x) dx ) f_j.   (6.4)

Finally, consider the H-valued stochastic integral ∫_0^t e^{−(t−s)} B(Y(s)) dW(s), where B : H → L_0(U, H) is given above. Then similarly

∫_0^t e^{−(t−s)} B(Y(s)) dW(s) = Σ_{j=1}^∞ ⟨ ∫_0^t e^{−(t−s)} B(Y(s)) dW(s), f_j ⟩_H f_j = Σ_{j=1}^∞ ( ∫_{ℝ^N} ( Σ_{k=1}^∞ ∫_0^t e^{−(t−s)} √λ_k B(Y(s))(e_k)(x) dβ_k(s) ) f_j(x) ρ_w(x) dx ) f_j.

Here, by definition, for x ∈ ℝ^N and 0 ≤ s ≤ t,

e^{−(t−s)} √λ_k B(Y(s))(e_k)(x) = ∫_{ℝ^N} e^{−(t−s)} σ(Y(s,x)) ϕ(x−y) √λ_k e_k(y) dy = e^{−(t−s)} σ(Y(s,x)) ⟨ϕ(x−·), √λ_k e_k⟩_U = Φ_s^{t,x}(√λ_k e_k),

which proves (6.1) by comparison with (6.4).

Step 4: To conclude, it suffices to note that the pathwise integrals in (4.1) and in the H-valued solution to (4.3) coincide as elements of H. Indeed, it is clear that, by definition of F,

∫_0^t e^{−(t−s)} ∫_{ℝ^N} w(·,y) G(Y(s,y)) dy ds = ∫_0^t e^{−(t−s)} F(Y(s)) ds,

where the latter is an element of H.

274 O. Faugeras, J. Inglis

The covariance operator C : U → U of W is defined as E[⟨W(s), g⟩_U ⟨W(t), h⟩_U] = (s ∧ t)⟨Cg, h⟩_U for all g, h ∈ U.
Technically this means that Φ(s) is measurable with respect to the σ-algebra generated by all left-continuous processes that are known at time s when (W(u))_{u≤s} is known (these processes are said to be adapted to the filtration generated by W).
This would be for an infinite size cortex. The cortex is in effect of finite size, but the spatial extents of w_loc and w_lr are very small with respect to this size, and hence the model in which the cortex is ℝ² is acceptable.
This can also be obtained by applying the operator B to the representation (2.2) of W.
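The construction in Step 2 has a simple finite-dimensional picture. The sketch below is our own illustration (not from the paper): take U = ℝ^M, {e_k} the standard basis, and an arbitrary summable sequence λ_k = 1/k²; then W(t) = Σ_k β_k(t) J(e_k) is a Q-Wiener process with Q = JJ* = diag(λ_k) and Tr(Q) = Σ_k λ_k < ∞.

```python
import numpy as np

# Finite-dimensional illustration of Step 2 (assumed setup: U = R^M,
# standard basis e_k, lambda_k = 1/k^2, so sum_k lambda_k < infinity).
rng = np.random.default_rng(1)
M = 50
lam = 1.0 / np.arange(1, M + 1) ** 2
J = np.diag(np.sqrt(lam))              # J(u) = sum_k sqrt(lam_k) <u, e_k> e_k

Q = J @ J.T                            # covariance operator Q = J J^*
assert np.allclose(Q, np.diag(lam))    # diagonal in this basis
trace_Q = float(np.trace(Q))           # Tr(Q) = sum_k 1/k^2, finite
assert trace_Q < np.pi ** 2 / 6 + 1e-9 # partial sums stay below pi^2 / 6

# One sample path of W(t) = sum_k beta_k(t) J(e_k) on [0, 1] via increments:
dt, n_steps = 1e-3, 1000
W = np.zeros(M)
for _ in range(n_steps):
    dbeta = rng.normal(0.0, np.sqrt(dt), size=M)  # independent BM increments
    W = W + J @ dbeta                              # = sum_k dbeta_k J(e_k)
assert W.shape == (M,) and bool(np.all(np.isfinite(W)))
print(trace_Q)
```

The finite trace is what distinguishes the Q-Wiener process W from the cylindrical Wiener process W of Step 1, whose "covariance" Id_U has infinite trace.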
This is a family of random variables such that for each u ∈ U, (W_t(u))_{t≥0} is a Brownian motion with variance t‖u‖²_U, and for all s, t ≥ 0 and u_1, u_2 ∈ U, E[W_t(u_1) W_s(u_2)] = (s ∧ t)⟨u_1, u_2⟩_U. See for example Dalang and Quer-Sardanyons (2011), Section 2.1.

Acknowledgments The authors are grateful to James Maclaurin for suggesting the use of the Fourier transform in Example 2 on page 18, to Etienne Tanré for discussions, and to the referees for their useful suggestions and references.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix

Proof (of Theorem 4.1) The proof of the result involves some technical definition chasing, and in fact is contained in Dalang and Quer-Sardanyons (2011), though rather implicitly (but see also Jetschke 1982, 1986). It is for this reason that we carry out the proof explicitly in our situation, by closely following Dalang and Quer-Sardanyons (2011, Proposition 4.10).

References

Amari SI (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27(2):77-87
Baker T, Cowan J (2009) Spontaneous pattern formation and pinning in the primary visual cortex. J Physiol Paris 103(1-2):52-68
Bressloff P (2003) Spatially periodic modulation of cortical patterns by long-range horizontal connections. Phys D Nonlinear Phenom 185(3-4):131-157
Bressloff P (2009) Stochastic neural field theory and the system-size expansion. SIAM J Appl Math 70:1488-1521
Bressloff P (2010) Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E 82(5):051903
Bressloff P (2012) Spatiotemporal dynamics of continuum neural fields. J Phys A Math Theor 45(3):033001
Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M (2001) Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos Trans R Soc Lond B 306(1407):299-330
Bressloff P, Webber M (2012) Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst 11(2):708-740
Bressloff PC, Folias SE (2004) Front bifurcations in an excitatory neural network. SIAM J Appl Math 65(1):131-151
Bressloff PC, Wilkerson J (2012) Traveling pulses in a stochastic neural field model of direction selectivity. Front Comput Neurosci 6(90)
Brezis H (2010) Functional analysis, Sobolev spaces and Partial Differential Equations. Springer, Berlin
Brzeźniak Z, Peszat S (1999) Space-time continuous solutions to SPDE's driven by a homogeneous Wiener process. Studia Math 137(3):261-299
Dalang R, Khoshnevisan D, Mueller C, Nualart D, Xiao Y (2009) In: Khoshnevisan and Firas Rassoul-Agha (eds) A minicourse on stochastic partial differential equations, Lecture Notes in Mathematics, vol 1962. Springer, Berlin. Held at the University of Utah, Salt Lake City
Dalang RC, Frangos NE (1998) The stochastic wave equation in two spatial dimensions. Ann. Probab. 26(1):187-212
Dalang RC, Quer-Sardanyons L (2011) Stochastic integrals for spde's: a comparison. Expo. Math. 29(1):67-109
Dalang RC, Sanz-Solé M (2009) Hölder-Sobolev regularity of the solution to the stochastic wave equation in dimension three. Mem. Am. Math. Soc. 199(931):vi+70
Du Y (2006) Order structure and topological methods in nonlinear partial differential equations, vol 1. Series in Partial Differential Equations and Applications. World Scientific Publishing Co., Pte. Ltd., Hackensack
Ermentrout G, McLeod J (1993) Existence and uniqueness of travelling waves for a neural network. In: Proceedings of the Royal Society of Edinburgh, vol 123, pp 461-478
Eveson SP (1995) Compactness criteria for integral operators in L∞ and L1 spaces. Proc. Am. Math. Soc. 123(12):3709-3716
Faye G, Chossat P, Faugeras O (2011) Analysis of a hyperbolic geometric model for visual texture perception. J Math Neurosci 1(4)
Ferrante M, Sanz-Solé M (2006) SPDEs with coloured noise: analytic and stochastic approaches. ESAIM Probab. Stat. 10:380-405 (electronic)
Folias SE, Bressloff PC (2004) Breathing pulses in an excitatory neural network. SIAM J Appl Dyn Syst 3(3):378-407
Hofmanová M (2013) Degenerate parabolic stochastic partial differential equations. Stoch Process Appl 123(12):4294-4336
Jansen BH, Rit VG (1995) Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern 73:357-366
Jetschke G (1982) Different approaches to stochastic parabolic differential equations. In: Proceedings of the 10th Winter School on Abstract Analysis, pp 161-169
Jetschke G (1986) On the equivalence of different approaches to stochastic partial differential equations. Math Nachr 128(1):315-329
Kilpatrick ZP, Ermentrout B (2013) Wandering bumps in stochastic neural fields. SIAM J Appl Dyn Syst 12(1):61-94
Kuehn C, Riedler MG (2014) Large deviations for nonlocal stochastic neural fields. J Math Neurosci 4(1)
Lopes da Silva F, Hoeks A, Zetterberg L (1974) Model of brain rhythmic activity. Kybernetik 15:27-37
Lopes da Silva F, van Rotterdam A, Barts P, van Heusden E, Burr W (1976) Model of neuronal populations. The basic mechanism of rhythmicity. In: Corner MA, Swaab DF (eds) Progress in brain research. Elsevier, Amsterdam, pp 281-308
Lund JS, Angelucci A, Bressloff PC (2003) Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cereb Cortex 12:15-24
Mariño J, Schummers J, Lyon D, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M (2005) Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci 8(2):194-201
Owen M, Laing C, Coombes S (2007) Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J Phys 9(10):378-401
Pardoux E (2007) Stochastic partial differential equations. Lectures given in Fudan University, Shanghaï
Peszat S (1994) Large deviation principle for stochastic evolution equations. Probab Theory Relat Fields 98(1):113-136
Potthast R, Beim Graben P (2010) Existence and properties of solutions for neural field equations. Math Methods Appl Sci 33(8):935-949
Prato GD, Zabczyk J (1992) Stochastic equations in infinite dimensions. Cambridge University Press, Cambridge
Prévôt C, Röckner M (2007) A concise course on stochastic partial differential equations. Lecture Notes in Mathematics. Springer, Berlin
Sanz-Solé M, Sarrà M (2002) Hölder continuity for the stochastic heat equation with spatially correlated noise. In: Seminar on Stochastic Analysis, Random Fields and Applications, III (Ascona, 1999), Progr. Probab., vol 52. Birkhäuser, Basel, pp 259-268
Simon J (1990) Sobolev, Besov and Nikol'skiǐ fractional spaces: embeddings and comparisons for vector valued spaces on an interval. Ann. Math. Pura Appl. 4(157):117-148
Veltz R, Faugeras O (2010) Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst 9(3):954-998
Walsh JB (1986) École d'été de probabilités de Saint-Flour, XIV-1984, Lecture Notes in Mathematics. An introduction to stochastic partial differential equations. Springer, Berlin, pp 265-439
Wilson H, Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 12:1-24
Wilson H, Cowan J (1973) A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern 13(2):55-80
[]
Title: Fundamental and Real-World Challenges in Economics
Authors: Dirk Helbing (ETH Zurich, Clausiusstr. 50, CLU E1, 8092 Zurich, Switzerland; Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA; Collegium Budapest - Institute for Advanced Study, Szentháromság u. 2, 1014 Budapest, Hungary) and Stefano Balietti (ETH Zurich, Clausiusstr. 50, CLU E1, 8092 Zurich, Switzerland)
DOI: 10.2139/ssrn.1680262
arXiv: 1012.4446 (https://arxiv.org/pdf/1012.4446v1.pdf)
Corpus ID: 125638726
Fundamental and Real-World Challenges in Economics

Dirk Helbing (ETH Zurich, Clausiusstr. 50, CLU E1, 8092 Zurich, Switzerland; Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA; Collegium Budapest - Institute for Advanced Study, Szentháromság u. 2, 1014 Budapest, Hungary) and Stefano Balietti (ETH Zurich, Clausiusstr. 50, CLU E1, 8092 Zurich, Switzerland)

20 Dec 2010, arXiv:1012.4446v1 [q-fin.GN]

In the same way as the Hilbert Program was a response to the foundational crisis of mathematics [1], this article tries to formulate a research program for the socio-economic sciences. The aim of this contribution is to stimulate research in order to close serious knowledge gaps in mainstream economics that the recent financial and economic crisis has revealed.
By identifying weak points of conventional approaches in economics, we identify the scientific problems which need to be addressed. We expect that solving these questions will bring scientists in a position to give better decision support and policy advice. We also indicate what kinds of insights can be contributed by scientists from other research fields such as physics, biology, computer and social science. In order to make quick progress and gain a systemic understanding of the whole interconnected socio-economic-environmental system, using the data, information and computer systems available today and in the near future, we suggest multi-disciplinary collaboration as the most promising research approach.

I. INTRODUCTION

"How did economists get it so wrong?" Facing the financial crisis, this question was brilliantly articulated by the Nobel prize winner of 2008, Paul Krugman, in the New York Times [2]. A number of prominent economists even see a failure of academic economics [3]. Remarkably, the following declaration has been signed by more than 2000 scientists [4]: "Few economists saw our current crisis coming, but this predictive failure was the least of the field's problems. More important was the profession's blindness to the very possibility of catastrophic failures in a market economy ... the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth ... economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations ... Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong.
They turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets - especially financial markets - that can cause the economy's operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don't believe in regulation. ... When it comes to the all-too-human problem of recessions and depressions, economists need to abandon the neat but wrong solution of assuming that everyone is rational and markets work perfectly."

Apparently, it has not always been like this. DeLisle Worrell writes: "Back in the sixties ... we were all too aware of the limitations of the discipline: it was static where the world was dynamic, it assumed competitive markets where few existed, it assumed rationality when we knew full well that economic agents were not rational ... economics had no way of dealing with changing tastes and technology ... Econometrics was equally plagued with intractable problems: economic observations are never randomly drawn and seldom independent, the number of excluded variables is always unmanageably large, the degrees of freedom unacceptably small, the stability of significance tests seldom unequivocably established, the errors in measurement too large to yield meaningful results ..." [5].

In the following, we will try to identify the scientific challenges that must be addressed to come up with better theories in the near future. This comprises practical challenges, i.e. the real-life problems that must be faced (see Sec. II), and fundamental challenges, i.e. the methodological advances that are required to solve these problems (see Sec. III). After this, we will discuss which contributions can be made by related scientific disciplines such as econophysics and the social sciences.

The intention of this contribution is constructive.
It tries to stimulate a fruitful scientific exchange, in order to find the best way out of the crisis. According to our perception, the economic challenges we are currently facing can only be mastered by large-scale, multi-disciplinary efforts and by innovative approaches [6]. We fully recognize the large variety of non-mainstream approaches that has been developed by "heterodox economists". However, the research traditions in economics seem to be so powerful that these are not paid much attention to. Besides, there is no agreement on which of the alternative modeling approaches would be the most promising ones, i.e. the heterogeneity of alternatives is one of the problems which slows down their success. This situation clearly implies institutional challenges as well, but these go beyond the scope of this contribution and will therefore be addressed in the future.

II. REAL-WORLD CHALLENGES

For decades, if not for hundreds of years, the world has been facing a number of recurrent socio-economic problems, which are obviously hard to solve. Before addressing related fundamental scientific challenges in economics, we will therefore point out practical challenges one needs to pay attention to. This basically requires classifying the multitude of problems into packages of interrelated problems. Probably, such classification attempts are subjective to a certain extent. At least, the list presented below differs from the one elaborated by Lomborg et al. [7], who identified the following top ten problems: air pollution, security/conflict, disease control, education, climate change, hunger/malnutrition, water sanitation, barriers to migration and trade, transnational terrorism and, finally, women and development. The following (non-ranked) list, in contrast, is more focused on socio-economic factors rather than resource and engineering issues, and it is more oriented at the roots of problems rather than their symptoms:

Some of these challenges are interdependent.

III. FUNDAMENTAL CHALLENGES

In the following, we will try to identify the fundamental theoretical challenges that need to be addressed in order to understand the above practical problems and to draw conclusions regarding possible solutions.

The most difficult part of scientific research is often not to find the right answer. The problem is to ask the right questions. In this context it can be a problem that people are trained to think in certain ways. It is not easy to leave these ways and see the problem from a new angle, thereby revealing a previously unnoticed solution. Three factors contribute to this:

1. We may overlook the relevant facts because we have not learned to see them, i.e. we do not pay attention to them. The issue is known from internalized norms, which prevent people from considering possible alternatives.

2. We know the stylized facts, but may not have the right tools at hand to interpret them. It is often difficult to make sense of patterns detected in data. Turning data into knowledge is quite challenging.

3. We know the stylized facts and can interpret them, but may not take them seriously enough, as we underestimate their implications. This may result from misjudgements or from herding effects, i.e. from a tendency to follow traditions and majority opinions.

In fact, most of the issues discussed below have been pointed out before, but it seems that this did not have an effect on mainstream economics so far or on what decision-makers know about economics. This is probably because mainstream theory has become a norm [8], and alternative approaches are sanctioned as norm-deviant behavior [9,10]. As we will try to explain, the following fundamental issues are not just a matter of approximations (which often lead to the right understanding, but wrong numbers). Rather, they concern fundamental errors in the sense that certain conclusions following from them are seriously misleading.
As the recent financial crisis has demonstrated, such errors can be very costly. However, it is not trivial to see what dramatic consequences factors such as dynamics, spatial interactions, randomness, non-linearity, network effects, differentiation and heterogeneity, irreversibility or irrationality can have.

A. Homo economicus

Despite criticisms by several Nobel prize winners such as Reinhard Selten (1994), Joseph Stiglitz and George Akerlof (2001), or Daniel Kahneman (2002), the paradigm of the homo economicus, i.e. of the "perfect egoist", is still the dominating approach in economics. It assumes that people would have quasi-infinite memory and processing capacities and would determine the best one among all possible alternative behaviors by strategic thinking (systematic utility optimization), and would implement it into practice without mistakes. The Nobel prize winner of 1976, Milton Friedman, supported the hypothesis of homo economicus by the following argument: "irrational agents will lose money and will be driven out of the market by rational agents" [11]. More recently, Robert E. Lucas Jr., the Nobel prize winner of 1995, used the rationality hypothesis to narrow down the class of empirically relevant equilibria [12].

The rational agent hypothesis is very charming, as its implications are clear and it is possible to derive beautiful and powerful economic theorems and theories from it. The best way to illustrate homo economicus is maybe a company that is run by using optimization methods from operations research, applying supercomputers. Another example is that of professional chess players, who try to anticipate the possible future moves of their opponents. Obviously, in both examples, the future course of actions cannot be fully predicted, even if there are no random effects and mistakes. It is, therefore, no wonder that people have repeatedly expressed doubts regarding the realism of the rational agent approach [13,14].
Bertrand Russell, for example, claimed: "Most people would rather die than think". While this seems to be a rather extreme opinion, the following scientific arguments must be taken seriously:

1. Human cognitive capacities are bounded [16,17]. Already phone calls or conversations can reduce people's attention to events in the environment a lot. Also, the abilities to memorize facts and to perform complicated logical analyses are clearly limited.

2. In case of NP-hard optimization problems, even supercomputers are facing limits, i.e. optimization jobs cannot be performed in real-time anymore. Therefore, approximations or simplifications such as the application of heuristics may be necessary. In fact, psychologists have identified a number of heuristics, which people use when making decisions [18].

3. People perform strategic thinking mainly in important new situations. In normal, everyday situations, however, they seem to pursue a satisficing rather than optimizing strategy [17]. Meeting a certain aspiration level rather than finding the optimal strategy can save time and energy spent on problem solving. In many situations, people even seem to perform routine choices [14], for example, when evading other pedestrians in counterflows.

4. There is a long list of cognitive biases which question rational behavior [19]. For example, individuals favor taking small risks (which are perceived as "chances", as the participation in lotteries shows), but they avoid large risks [20]. Furthermore, non-exponential temporal discounting may lead to paradoxical behaviors [21] and requires one to rethink how future expectations must be modeled.

5. Most individuals have a tendency towards other-regarding behavior and fairness [22,23]. For example, the dictator game [24] and other experiments [25] show that people tend to share, even if there is no reason for this.
Leaving a tip for the waiter in a restaurant that people visit only once is a typical example (particularly in countries where tipping is not common) [26]. Such behavior has often been interpreted as a sign of social norms. While social norms can certainly change the payoff structure, it has been found that the overall payoffs resulting from them do not need to create a user or system optimum [27-29]. This suggests that behavioral choices may be irrational in the sense of non-optimal. A typical example is the existence of unfavorable norms, which are supported by people although nobody likes them [30].

6. Certain optimization problems can have an infinite number of local optima or Nash equilibria, which makes it impossible to decide what is the best strategy [31].

7. Convergence towards the optimal solution may require such a huge amount of time that the folk theorem becomes useless. This can make it practically impossible to play the best response strategy [32].

8. The optimal strategy may be deterministically chaotic, i.e. sensitive to arbitrarily small details of the initial condition, which makes the dynamic solution unpredictable on the long run ("butterfly effect") [33,34]. This fundamental limit of predictability also implies a limit of control, two circumstances that are even more true for non-deterministic systems with a certain degree of randomness.

In conclusion, although the rational agent paradigm (the paradigm of homo economicus) is theoretically powerful and appealing, there are a number of empirical and theoretical facts which suggest deficiencies. In fact, most methods used in financial trading (such as technical analysis) are not well compatible with the rational agent approach. Even if an optimal solution exists, it may be undecidable for practical reasons or for theoretical ones [35,36].
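The sensitivity to initial conditions invoked in point 8 can be made concrete with the logistic map, a standard textbook example of deterministic chaos (our own illustrative sketch, not a model from this article): two trajectories that start almost identically become macroscopically different after a few dozen iterations.

```python
# Logistic map x_{n+1} = 4 x (1 - x): a classic illustration of the
# "butterfly effect". Purely illustrative; parameter choices are ours.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-10, 60)  # perturbed by 10^-10

# The initial gap grows roughly by a factor 2 per step (Lyapunov exponent
# ln 2), so after ~35 steps the trajectories have fully decorrelated.
early_gap = abs(a[1] - b[1])
late_gap = max(abs(x - y) for x, y in zip(a[35:], b[35:]))
print(early_gap, late_gap)
```

An optimizer whose "best strategy" follows such a dynamic cannot predict its own long-run behavior, which is exactly the limit of predictability and control the text refers to.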
This is also relevant for the following challenges, as boundedly rational agents may react inefficiently and with delays, which questions the efficient market hypothesis, the equilibrium paradigm, and other fundamental concepts, calling for the consideration of spatial, network, and time-dependencies, heterogeneity and correlations etc. It will be shown that these points can have dramatic implications regarding the predictions of economic models.

B. The efficient market hypothesis

The efficient market hypothesis (EMH) was first developed by Eugene Fama [37] in his Ph.D. thesis and rapidly spread among leading economists, who used it as an argument to promote laissez-faire policies. The EMH states that current prices reflect all publicly available information and (in its stronger formulation) that prices instantly change to reflect new public information.

The idea of self-regulating markets goes back to Adam Smith [38], who believed that "the free market, while appearing chaotic and unrestrained, is actually guided to produce the right amount and variety of goods by a so-called 'invisible hand'." Furthermore, "by pursuing his own interest, [the individual] frequently promotes that of the society more effectually than when he intends to promote it" [39]. For this reason, Adam Smith is often considered to be the father of free market economics. Curiously enough, however, he also wrote a book on "The Theory of Moral Sentiments" [40]. "His goal in writing the work was to explain the source of mankind's ability to form moral judgements, in spite of man's natural inclinations towards self-interest. Smith proposes a theory of sympathy, in which the act of observing others makes people aware of themselves and the morality of their own behavior... [and] seek the approval of the 'impartial spectator' as a result of a natural desire to have outside observers sympathize with them" [38]. Such a reputation-based concept would be considered today as indirect reciprocity [41].
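Operationally, the EMH implies that past price changes should not help predict future ones, i.e. returns should be serially uncorrelated. The sketch below (our own illustration, not code from the article) simulates such an idealized "efficient" return series and verifies that the lag-1 autocorrelation is close to zero, which is the benchmark against which empirical deviations are judged.

```python
import random

random.seed(42)

# Idealized "efficient" market: returns are i.i.d. noise, so past
# returns carry no information about future ones.
returns = [random.gauss(0.0, 0.01) for _ in range(20000)]

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

ac = lag1_autocorr(returns)
print(round(ac, 4))  # close to 0: returns are linearly unpredictable
```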
Of course, there are criticisms of the efficient market hypothesis [42], and the Nobel prize winner of 2001, Joseph Stiglitz, even believes that "There is no invisible hand" [43]. The following list gives a number of empirical and theoretical arguments questioning the efficient market hypothesis:

1. Examples of market failures are well-known and can result, for example, in cases of monopolies or oligopolies, if there is not enough liquidity or if information symmetry is not given.

2. While the concept of the "invisible hand" assumes something like an optimal self-organization [44], it is well-known that this requires certain conditions, such as symmetrical interactions. In general, however, self-organization does not necessarily imply system-optimal solutions. Stop-and-go traffic [45] or crowd disasters [46] are two obvious examples for systems in which individuals competitively try to reach individually optimal outcomes, but where the optimal solution is dynamically unstable.

3. The limited processing capacity of boundedly rational individuals implies potential delays in their responses to sensorial inputs, which can cause such instabilities [47]. For example, a delayed adaptation in production systems may contribute to the occurrence of business cycles [48]. The same applies to the labor market of specially skilled people, which cannot adjust on short time scales. Even without delayed reactions, however, the competitive optimization of individuals can lead to suboptimal individual results, as the "tragedy of the commons" in public goods dilemmas demonstrates [49,50].

4. Bubbles and crashes, or more generally, extreme events in financial markets should not occur, if the efficient market hypothesis was correct (see next subsection).

5. Collective social behavior such as "herding effects" as well as deviations of human behavior from what is expected from rational agents can lead to such bubbles and crashes [51], or can further increase their size through feedback effects [52]. Cyclical feedbacks leading to oscillations are also known from the beer game [53] or from business cycles [48].

C. Equilibrium paradigm

The efficient market paradigm implies the equilibrium paradigm. This becomes clear, if we split it up into its underlying hypotheses:

1. The market can be in equilibrium, i.e. there exists an equilibrium.

2. There is one and only one equilibrium.

3. The equilibrium is stable, i.e. any deviations from the equilibrium due to "fluctuations" or "perturbations" tend to disappear eventually.

4. The relaxation to the equilibrium occurs at an infinite rate.

Note that, in order to act like an "invisible hand", the stable equilibrium (Nash equilibrium) furthermore needs to be a system optimum, i.e. to maximize the average utility. This is true for coordination games, when interactions are well-mixed and exploration behavior as well as transaction costs can be neglected [54]. However, it is not fulfilled by so-called social dilemmas [49].

Let us discuss the evidence for the validity of the above hypotheses one by one:

1. A market is a system of extremely many dynamically coupled variables. Theoretically, it is not obvious that such a system would have a stationary solution. For example, the system could behave periodic, quasi-periodic, chaotic, or turbulent [81-83, 85-87, 94]. In all these cases, there would be no convergence to a stationary solution.

2. If a stationary solution exists, it is not clear that there are no further stationary solutions. If many variables are non-linearly coupled, the phenomenon of multistability can easily occur [55].
That is, the solution to which the system converges may not only depend on the model parameters, but also on the initial condition, history, or perturbation size. Such facts are known as path-dependencies or hysteresis effects and are usually visualized by so-called phase diagrams [56].

3. In systems of non-linearly interacting variables, the existence of a stationary solution does not necessarily imply that it is stable, i.e. that the system will converge to this solution. For example, the stationary solution could be a focal point with orbiting solutions (as for the classical Lotka-Volterra equations [57]), or it could be unstable and give rise to a limit cycle [58] or a chaotic solution [33], for example (see also item 1). In fact, experimental results suggest that volatility clusters in financial markets may be a result of over-reactions to deviations from the fundamental value [59].

4. An infinite relaxation rate is rather unusual, as most decisions and related implementations take time [15,60].

The points listed in the beginning of this subsection are also questioned by empirical evidence. In this connection, one may mention the existence of business cycles [48] or unstable orders and deliveries observed in the experimental beer game [53]. Moreover, bubbles and crashes have been found in financial market games [61]. Today, there seems to be more evidence against than for the equilibrium paradigm. In the past, however, most economists assumed that bubbles and crashes would not exist (and many of them still do). The following quotes are quite typical for this kind of thinking (from [62]):

In 2004, the Federal Reserve chairman of the U.S., Alan Greenspan, stated that the rise in house values was "not enough in our judgment to raise major concerns". In July 2005, when asked about the possibility of a housing bubble and the potential for this to lead to a recession in the future, the present U.S.
Federal Reserve chairman Ben Bernanke (then Chairman of the Council of Economic Advisors) said: "It's a pretty unlikely possibility. We've never had a decline in housing prices on a nationwide basis. So, what I think is more likely is that house prices will slow, maybe stabilize, might slow consumption spending a bit. I don't think it's going to drive the economy too far from it's full path though." As late as May 2007, Bernanke stated that the Federal Reserve "do not expect significant spillovers from the subprime market to the rest of the economy".

According to the classical interpretation, sudden changes in stock prices result from new information, e.g. from innovations ("technological shocks"). The dynamics in such systems has, for example, been described by the method of comparative statics (i.e. a series of snapshots). Here, the system is assumed to be in equilibrium in each moment, but the equilibrium changes adiabatically (i.e. without delay), as the system parameters change (e.g. through new facts). Such a treatment of system dynamics, however, has certain deficiencies:

1. The approach cannot explain changes in or of the system, such as phase transitions ("systemic shifts"), when the system is at a critical point ("tipping point").

2. It does not allow one to understand innovations and other changes as results of an endogenous system dynamics.

3. It cannot describe effects of delays or instabilities, such as overshooting, self-organization, emergence, systemic breakdowns or extreme events (see Sec. III D).

4. It does not allow one to study effects of different time scales. For example, when there are fast autocatalytic (self-reinforcing) effects and slow inhibitory effects, this may lead to pattern formation phenomena in space and time [63,64]. The formation of settlements, where people agglomerate in space, may serve as an example [65,66].

5. It ignores long-term correlations such as memory effects.

6. It neglects frictional effects, which are often proportional to change ("speed") and occur in most complex systems. Without friction, however, it is difficult to understand entropy and other path-dependent effects, in particular irreversibility (i.e. the fact that the system may not be able to get back to the previous state) [67]. For example, the unemployment rate has the property that it does not go back to the previous level in most countries after a business cycle [68].

D. Prevalence of linear models

Comparative statics is, of course, not the only method used in economics to describe the dynamics of the system under consideration. As in physics and other fields, one may use a linear approximation around a stationary solution to study the response of the system to fluctuations or perturbations [69]. Such a linear stability analysis allows one to study whether the system will return to the stationary solution (which is the case for a stable [Nash] equilibrium) or not (which implies that the system will eventually be driven into a new state or regime). In fact, the great majority of statistical analyses use linear models to fit empirical data (also when they do not involve time-dependencies). It is known, however, that linear models have special features, which are not representative for the rich variety of possible functional dependencies, dynamics, and outcomes. Therefore, neglecting non-linearity has serious consequences:

1. As it was mentioned before, phenomena like multiple equilibria, chaos or turbulence cannot be understood by linear models. The same is true for self-organization phenomena or emergence. Additionally, in non-linearly coupled systems, usually "more is different", i.e. the system may change its behavior fundamentally as it grows beyond a certain size. Furthermore, the system is often hard to predict and difficult to control (see Sec. III H).

2.
Linear modeling tends to overlook that a strong coupling of variables, which would show a normally distributed behavior in separation, often leads to fat tail distributions (such as "power laws") [70,71]. This implies that extreme events are much more frequent than expected according to a Gaussian distribution. For example, when additive noise is replaced by multiplicative noise, a number of surprising phenomena may result, including noise-induced transitions [72] or directed random walks ("ratchet effects") [73].

3. Phenomena such as catastrophes [74] or phase transitions ("system shifts") [75] cannot be well understood within a linear modeling framework. The same applies to the phenomenon of "self-organized criticality" [79] (where the system drives itself to a critical state, typically with power-law characteristics) or cascading effects, which can result from network interactions (overcritically challenged network nodes or links) [77,78]. It should be added that the relevance of network effects resulting from the on-going globalization is often underestimated. For example, "the stock market crash of 1987, began with a small drop in prices which triggered an avalanche of sell orders in computerized trading programs, causing a further price decline that triggered more automatic sales." [80]

Therefore, while linear models have the advantage of being analytically solvable, they are often unrealistic. Studying nonlinear behavior, in contrast, often requires numerical computational approaches. It is likely that most of today's unsolved economic puzzles cannot be well understood through linear models, no matter how complicated they may be (in terms of the number of variables and parameters) [81-94].
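The contrast between additive and multiplicative noise can be checked in a few lines: summing independent Gaussian shocks yields a thin-tailed (Gaussian) outcome, while multiplying random growth factors yields a log-normal, whose extreme quantiles sit far further out relative to the typical spread. The simulation below is our own illustrative sketch (parameter values are arbitrary), not code from the article.

```python
import math
import random

random.seed(1)

N, STEPS = 20000, 30

def additive_outcome():
    # Sum of small independent Gaussian shocks -> approximately Gaussian.
    return sum(random.gauss(0.0, 0.1) for _ in range(STEPS))

def multiplicative_outcome():
    # Product of random growth factors -> log-normal (fat right tail).
    x = 1.0
    for _ in range(STEPS):
        x *= math.exp(random.gauss(0.0, 0.1))
    return x

add = sorted(additive_outcome() for _ in range(N))
mult = sorted(multiplicative_outcome() for _ in range(N))

def tail_ratio(xs):
    """How far the 99.9% quantile sits relative to the typical spread."""
    q50 = xs[len(xs) // 2]
    q90 = xs[int(0.90 * len(xs))]
    q999 = xs[int(0.999 * len(xs))]
    return (q999 - q50) / (q90 - q50)

print(tail_ratio(add), tail_ratio(mult))
```

The multiplicative series produces a clearly larger tail ratio, i.e. "extreme events are much more frequent than expected according to a Gaussian distribution", as stated above.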
The following list mentions some areas where the importance of nonlinear interdependencies is most likely underestimated:

• collective opinions, such as trends, fashions, or herding effects,
• the success of new (and old) technologies, products, etc.,
• cultural or opinion shifts, e.g. regarding nuclear power, genetically manipulated food, etc.,
• the "fitness" or competitiveness of a product, value, quality perceptions, etc.,
• the respect for copyrights,
• social capital (trust, cooperation, compliance, solidarity, ...),
• booms and recessions, bubbles and crashes,
• bank panics,
• community, cluster, or group formation,
• relationships between different countries, including war (or trade war) and peace.

E. Representative agent approach

Another common simplification in economic modeling is the representative agent approach, which is known in physics as the mean field approximation. Within this framework, time-dependencies and non-linear dependencies are often considered, but it is assumed that the interaction with other agents (e.g. of one company with all the other companies) can be treated as if this agent would interact with an average agent, the "representative agent".

Let us illustrate this with the example of the public goods dilemma. Here, everyone can decide whether to make an individual contribution to the public good or not. The sum of all contributions is multiplied by a synergy factor, reflecting the benefit of cooperation, and the resulting value is equally shared among all people. The prediction of the representative agent approach is that, due to the selfishness of agents, a "tragedy of the commons" would result [49]. According to this, everybody should free-ride, i.e. nobody should make a contribution to the public good and nobody would gain anything. However, if everybody would contribute, everybody could multiply his or her contribution by the synergy factor.
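The payoff logic just described can be written down directly (a small sketch of the standard textbook public goods game; the parameter values N = 4 and r = 3 are our own choice, not from the article): with N players, contributions c_i and synergy factor 1 < r < N, each player receives r·Σc_j/N − c_i, so free-riding always pays individually even though full cooperation pays more than full defection collectively.

```python
def public_goods_payoffs(contributions, r):
    """Standard public goods game: pooled contributions are multiplied
    by the synergy factor r and shared equally; each player pays their
    own contribution."""
    n = len(contributions)
    share = r * sum(contributions) / n
    return [share - c for c in contributions]

r = 3.0  # synergy factor with 1 < r < N = 4, so a social dilemma arises

everyone_cooperates = public_goods_payoffs([1, 1, 1, 1], r)  # each gets r - 1 = 2.0
everyone_defects = public_goods_payoffs([0, 0, 0, 0], r)     # each gets 0.0
one_free_rider = public_goods_payoffs([0, 1, 1, 1], r)       # defector 2.25, cooperators 1.25

print(everyone_cooperates, everyone_defects, one_free_rider)
```

The free-rider earns 2.25 instead of the 2.0 they would get by contributing, so defection is individually dominant; yet universal cooperation (2.0 each) beats universal defection (0.0 each), which is exactly the tragedy-of-the-commons structure described above.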
This example is particularly relevant, as society faces many public goods problems and would not work without cooperation. Everything from the creation of public infrastructures (streets, theaters, universities, libraries, schools, the World Wide Web, Wikipedia, etc.) over the use of environmental resources (water, forests, air, etc.) to social benefit systems (such as public health insurance), maybe even the creation and maintenance of a commonly shared language and culture, are public goods problems (although the last examples are often viewed as coordination problems). Even the process of creating public goods is a public good [95]. While it is a well-known problem that people tend to make unfair contributions to public goods or try to get a bigger share of them, individuals cooperate much more than one would expect according to the representative agent approach. If they did not, society could simply not exist. In economics, one tries to solve the problem by introducing taxes (i.e. another incentive structure) or a "shadow of the future" (i.e. a strategic optimization over infinite time horizons in accordance with the rational agent approach) [96,97]. Both come down to changing the payoff structure in a way that transforms the public goods problem into another one that does not constitute a social dilemma [98]. However, there are other solutions to the problem. When the realm of the mean-field approximation underlying the representative agent approach is left, and spatial or network interactions or the heterogeneity among agents are considered, a miracle occurs: cooperation can survive or even thrive through correlations and co-evolutionary effects [99-101]. A similar result is found for the public goods game with costly punishment. Here, the representative agent model predicts that individuals avoid investing into punishment, so that punishment efforts eventually disappear (and, as a consequence, cooperation as well).
However, this "second-order free-rider problem" is naturally resolved, and cooperation can spread, if one discards the mean-field approximation and considers the fact that interactions take place in space or social networks [56]. Societies can overcome the tragedy of the commons even without transforming the incentive structure through taxes. For example, social norms as well as group-dynamical and reputation effects can do so [102]. The representative agent approach implies just the opposite conclusion and cannot well explain the mechanisms on which society is built. It is worth pointing out that the relevance of public goods dilemmas is probably underestimated in economics. Partially related to Adam Smith's belief in an "invisible hand", one often assumes underlying coordination games and that they would automatically create a harmony between the individually and system optimal state in the course of time [54]. However, running a stable financial system and economy is most likely a public goods problem. Considering unemployment, recessions always go along with a breakdown of solidarity and cooperation. Efficient production clearly requires mutual cooperation (as the counter-example of countries with many strikes illustrates). The failure of the interbank market, when banks stop lending to each other, is a good example of the breakdown of both trust and cooperation. We must be aware that there are many other systems that would not work anymore if people lost their trust: electronic banking, e-mail and internet use, Facebook, eBusiness and eGovernance, for example. Money itself would not work without trust, as bank panics and hyperinflation scenarios show.
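The difference between the mean-field prediction and spatial interactions can be made concrete with a minimal spatial Prisoner's Dilemma in the spirit of Nowak and May's lattice games. The grid size, the temptation payoff B = 1.85, and the deterministic "imitate the best-scoring neighbor" rule are our illustrative choices:

```python
N = 25     # lattice size
B = 1.85   # temptation payoff: a defector exploiting a cooperator

def neighbors(r, c):
    """Moore neighborhood, clipped at the lattice boundary."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < N and 0 <= cc < N:
                yield rr, cc

def payoffs(grid):
    """C meeting C earns 1, D meeting C earns B, all other pairings earn 0."""
    p = [[0.0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            for rr, cc in neighbors(r, c):
                if grid[r][c] == "C" and grid[rr][cc] == "C":
                    p[r][c] += 1.0
                elif grid[r][c] == "D" and grid[rr][cc] == "C":
                    p[r][c] += B
    return p

def step(grid):
    """Synchronous update: every site imitates the strategy of the
    highest-scoring site in its neighborhood (itself included)."""
    p = payoffs(grid)
    new = [row[:] for row in grid]
    for r in range(N):
        for c in range(N):
            best = (r, c)
            for rr, cc in neighbors(r, c):
                if p[rr][cc] > p[best[0]][best[1]]:
                    best = (rr, cc)
            new[r][c] = grid[best[0]][best[1]]
    return new

# Full cooperation with a single defector in the center.
grid = [["C"] * N for _ in range(N)]
grid[N // 2][N // 2] = "D"
for _ in range(10):
    grid = step(grid)

cooperators = sum(row.count("C") for row in grid)
defectors = N * N - cooperators
```

A well-mixed analysis predicts that defection takes over completely, but on the lattice defection spreads only locally (at most one ring of cells per update), so cooperator clusters persist and both strategies coexist after ten updates — a toy illustration of how spatial correlations can rescue cooperation.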
Similarly, cheating customers by selling low-quality products, selling products at overrated prices, or manipulating their choices by advertisements rather than informing them objectively and on demand may create profits in the short run, but it affects the trust of customers (and their willingness to invest). The failure of the immunization campaign during the swine flu pandemic may serve as an example. Furthermore, people would probably spend more money if the products of competing companies were better compatible with each other. Therefore, in the long run, more cooperation among companies and with the customers would pay off and create additional value.

Besides providing a misleading picture of how cooperation comes about, there are a number of other deficiencies of the representative agent approach, which are listed below:

1. Correlations between variables are neglected, which is acceptable only for "well-mixing" systems. According to what is known from critical phenomena in physics, this approximation is valid only when the interactions take place in high-dimensional spaces or if the system elements are well connected. (However, as the example of the public goods dilemma showed, this case does not necessarily have beneficial consequences. Well-mixed interactions could rather cause a breakdown of social or economic institutions, and it is conceivable that this played a role in the recent financial crisis.)

2. Percolation phenomena, describing how far an idea, innovation, technology, or (computer) virus spreads through a social or business network, are not well reproduced, as they depend on details of the network structure, not just on the average node degree [103].

3. The heterogeneity of agents is ignored. For this reason, factors underlying economic exchange, perturbations, or systemic robustness [104] cannot be well described.
Moreover, as socio-economic differentiation and specialization imply heterogeneity, they cannot be understood as emergent phenomena within a representative agent approach. Finally, it is not possible to grasp innovation without the consideration of variability. In fact, according to evolutionary theory, the innovation rate would be zero if the variability was zero [105]. Furthermore, in order to explain innovation in modern societies, Schumpeter introduced the concept of the "political entrepreneur" [106], an extraordinarily gifted person capable of creating disruptive change and innovation. Such an extraordinary individual can, by definition, not be modeled by a "representative agent". One of the most important drawbacks of the representative agent approach is that it cannot explain the fundamental fact of economic exchange, since this requires one to assume a heterogeneity in resources or production costs, or to consider a variation in the value of goods among individuals. Ken Arrow, Nobel prize winner in 1972, formulated this point as follows [107]: "One of the things that microeconomics teaches you is that individuals are not alike. There is heterogeneity, and probably the most important heterogeneity here is heterogeneity of expectations. If we didn't have heterogeneity, there would be no trade." We close this section by mentioning that economic approaches which go beyond the representative agent approach can be found in Refs. [108,109].

F. Lack of micro-macro link and ecological systems thinking

Another deficiency of economic theory that needs to be mentioned is the lack of a link between micro- and macroeconomics. Neoclassical economics implicitly assumes that individuals make their decisions in isolation, using only the information received from static market signals.
Within this oversimplified framework, macro-aggregates are just projections of some representative agent behavior, instead of the outcome of complex interactions with asymmetric information among a myriad of heterogeneous agents. In principle, it should be understandable how the macroeconomic dynamics results from the microscopic decisions and interactions on the level of producers and consumers [81,110] (as it was possible in the past to derive micro-macro links for other systems with a complex dynamical behavior, such as interactive vehicle traffic [111]). It should also be comprehensible how the macroscopic level (the aggregate economic situation) feeds back on the microscopic level (the behavior of consumers and producers), and how the economy can be understood as a complex, adaptive, self-organizing system [112,113]. Concepts from evolutionary theory [114] and ecology [115] appear to be particularly promising [116]. This, however, requires a recognition of the importance of heterogeneity for the system (see the previous subsection). The lack of ecological thinking implies not only that the sensitive network interdependencies between the various agents in an economic system (as well as minority solutions) are not properly valued. It also causes deficiencies in the development and implementation of a sustainable economic approach based on recycling and renewable resources. Today, forestry science is probably the best developed scientific discipline concerning sustainability concepts [117]. The idea that economic growth is necessary to maintain social welfare is a serious misconception. From other scientific disciplines, it is well known that stable pattern formation is also possible for a constant (and potentially sustainable) inflow of energy [69,118].

G. Optimization of system performance

One of the great achievements of economics is that it has developed a multitude of methods to use scarce resources efficiently. A conventional approach to this is optimization.
In principle, there is nothing wrong about this approach. Nevertheless, there are a number of problems with the way it is usually applied.

1. One can only optimize for one goal at a time, while usually one needs to meet several objectives. This is mostly addressed by weighting the different goals (objectives), by executing a hierarchy of optimization steps (through ranking and prioritization), or by applying a satisficing strategy (requiring a minimum performance for each goal) [119,120]. However, when different optimization goals are in conflict with each other (such as maximizing the throughput and minimizing the queue length in a production system), a sophisticated time-dependent strategy may be needed [121].

2. There is no unique rule what optimization goal should be chosen. Low costs? High profit? Best customer satisfaction? Large throughput? Competitive advantage? Resilience? [122] In fact, the choice of the optimization function is arbitrary to a certain extent and, therefore, the result of optimization may vary largely. Goal selection requires strategic decisions, which may involve normative or moral factors (as in politics). In fact, one can often observe that, in the course of time, different goal functions are chosen. Moreover, note that the maximization of certain objectives such as resilience or "fitness" depends not only on factors that are under the control of a company. Resilience and "fitness" are functions of the whole system; in particular, they also depend on the competitors and the strategies chosen by them.

3. The best solution may be the combination of two bad solutions and may, therefore, be overlooked. In other words, there are "evolutionary dead ends", so that gradual optimization may not work. (This problem can be partially overcome by the application of evolutionary mechanisms [120].)

4. In certain systems (such as many transport, logistic, or production systems), optimization tends to drive the system towards instability, since the point of maximum efficiency is often in the neighborhood of, or even identical with, the point of breakdown of performance. Such breakdowns in capacity or performance can result from inefficiencies due to dynamic interaction effects. For example, when traffic flow reaches its maximum capacity, sooner or later it breaks down. As a consequence, the road capacity tends to drop during the time period when it is most urgently needed, namely during the rush hour [45,123].

5. Optimization often eliminates redundancies in the system and, thereby, increases the vulnerability to perturbations, i.e. it decreases robustness and resilience.

6. Optimization tends to eliminate heterogeneity in the system [80], while heterogeneity frequently supports adaptability and resilience.

7. Optimization is often performed with centralized concepts (e.g. by using supercomputers that process information collected all over the system). Such centralized systems are vulnerable to disturbances or failures of the central control unit. They are also sensitive to information overload, wrong selection of control parameters, and delays in adaptive feedback control. In contrast, decentralized control (with a certain degree of autonomy of local control units) may perform better when the system is complex and composed of many heterogeneous elements, when the optimization problem is NP-hard, when the degree of fluctuations is large, and when predictability is restricted to short time periods [77,124]. Under such conditions, decentralized control strategies can perform well by adapting to the actual local conditions, while being robust to perturbations. Urban traffic light control is a good example of this [121,125].

8. Furthermore, today's concept of quality control appears to be problematic.
It leads to a never-ending contest, requiring people and organizations to fulfil permanently increasing standards. This leads to over-emphasizing measured performance criteria, while non-measured success factors are neglected. Engagement in non-rewarded activities is discouraged, and innovation may be suppressed (e.g. when evaluating scientists by means of their h-index, which requires them to focus on a big research field that generates many citations in a short time). While so-called "beauty contests" are considered to produce the best results, they will eventually absorb more and more resources for this contest, while less and less time remains for the work that is actually to be performed when the contest is won. Besides, a large number of competitors have to waste considerable resources for these contests, which, of course, have to be paid by someone. In this way, private and public sectors (from physicians over hospitals and administrations up to schools and universities) are aching under the evaluation-related administrative load, while little time remains to perform the work that the corresponding experts have been trained for. It seems naïve to believe that this would not waste resources. Rather than making use of individual strengths, which are highly heterogeneous, today's way of evaluating performance enforces a large degree of conformity.

There are also some problems with parameter fitting, a method based on optimization as well. In this case, the goal function is typically an error function or a likelihood function. Calibration methods are often "blindly" applied in practice (by people who are not experts in statistics), which can lead to overfitting (the fitting of meaningless "noise"), to the neglect of collinearities (implying largely variable parameter values), or to inaccurate and problematic parameter determinations (when the data set is insufficient in size, for example, when large portfolios are to be optimized [126]).
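The overfitting problem can be illustrated with a toy calibration exercise: interpolating noisy observations of a linear law fits the "noise" perfectly in-sample but generalizes worse than the simple model. The data set, the noise level, and the interpolation-versus-line comparison are our illustrative choices:

```python
import random

random.seed(7)

TRUE_SLOPE = 2.0

def noisy_sample(x):
    """Observation of a linear law corrupted by Gaussian noise."""
    return TRUE_SLOPE * x + random.gauss(0.0, 0.3)

train_x = [i / 11 for i in range(12)]
train_y = [noisy_sample(x) for x in train_x]

def lagrange_interpolate(xs, ys, x):
    """Degree-11 polynomial passing exactly through all training points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = linear_fit(train_x, train_y)

def interp(x):
    return lagrange_interpolate(train_x, train_y, x)

def line(x):
    return a * x + b

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# In-sample: the interpolant reproduces the noisy data "perfectly".
train_mse_interp = mse(interp, train_x, train_y)

# Out-of-sample: fresh noisy observations between the training points.
test_x = [(i + 0.5) / 11 for i in range(11)]
test_y = [noisy_sample(x) for x in test_x]
test_mse_interp = mse(interp, test_x, test_y)
test_mse_line = mse(line, test_x, test_y)
```

The interpolant's in-sample error is essentially zero, yet out of sample it performs far worse than the simple linear fit, because it has calibrated itself to the noise rather than to the underlying law.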
As estimates for past data are not necessarily indicative of the future, making predictions with interpolation approaches can be quite problematic (see also Sec. III C for the challenge of time dependence). Moreover, classical calibration methods do not reveal inappropriate model specifications (e.g. linear ones, when non-linear models would be needed, or unsuitable choices of model variables). Finally, they do not identify unknown unknowns (i.e. relevant explanatory variables which have been overlooked in the modeling process).

H. Control approach

Managing economic systems is a particular challenge, not only for the reasons discussed in the previous section. As large economic systems belong to the class of complex systems, they are hard or even impossible to manage with classical control approaches [76,77]. Complex systems are characterized by a large number of system elements (e.g. individuals, companies, countries, ...), which have non-linear or network interactions causing mutual dependencies and responses. Such systems tend to behave dynamically rather than statically and probabilistically rather than deterministically. They usually show a rich, hardly predictable, and sometimes paradoxical system behavior. Therefore, they challenge our way of thinking [127], and their controllability is often overestimated (which is sometimes paraphrased as the "illusion of control") [80,128,129]. In particular, causes and effects are typically not proportional to each other, which makes it difficult to predict the impact of a control attempt. A complex system may be unresponsive to a control attempt, or the latter may lead to unexpected, large changes in the system behavior (so-called "phase transitions", "regime shifts", or "catastrophes") [75]. The unresponsiveness is known as the principle of Le Chatelier or Goodhart's law [130], according to which a complex system tends to counteract external control attempts.
However, regime shifts can occur when the system gets close to so-called "critical points" (also known as "tipping points"). Examples are sudden changes in public opinion (e.g. from pro- to anti-war mood, from a smoking tolerance to a public smoking ban, or from buying energy-hungry sport utility vehicles (SUVs) to buying environmentally friendly cars). Particularly in the case of network interactions, big changes may have small, no, or unexpected effects. Feedback loops, unwanted side effects, and circuli vitiosi are quite typical. Delays may cause unstable system behavior (such as bullwhip effects) [53], and over-critical perturbations can create cascading failures [78]. Systemic breakdowns (such as large-scale blackouts, bankruptcy cascades, etc.) are often a result of such domino or avalanche effects [77], and their probability of occurrence as well as their resulting damage are usually underestimated. Further examples are epidemic spreading phenomena or disasters with an impact on the socio-economic system. A more detailed discussion is given in Refs. [76,77]. Other factors contributing to the difficulty of managing economic systems are the large heterogeneity of system elements and the considerable level of randomness, as well as the possibility of chaotic or turbulent dynamics (see Sec. III D). Furthermore, the agents in economic systems are responsive to information, which can create self-fulfilling or self-destroying prophecy effects. Inflation may be viewed as an example of such an effect. Interestingly, in some cases one does not even know in advance which of these effects will occur. It is also not obvious that the control mechanisms are well designed from a cybernetic perspective, i.e. that we have sufficient information about the system and suitable control variables to make control feasible. For example, central banks do not have terribly many options to influence the economic system.
Among them are performing open-market operations (to control the money supply), adjustments in fractional-reserve banking (keeping only a limited deposit, while lending a large part of the assets to others), or adaptations in the discount rate (the interest rate charged to banks for borrowing short-term funds directly from a central bank). Nevertheless, the central banks are asked to meet multiple goals, such as:

• to guarantee well-functioning and robust financial markets,
• to support economic growth,
• to balance between inflation and unemployment,
• to keep exchange rates within reasonable limits.

Furthermore, the one-dimensional variable of "money" is also used to influence individual behavior via taxes (by changing behavioral incentives). It is questionable whether money can optimally meet all these goals at the same time (see Sec. III G). We believe that a computer, good food, friendship, social status, love, fairness, and knowledge can only to a certain extent be replaced by and traded against each other. Probably for this reason, social exchange comprises more than just material exchange [131-133]. It is conceivable that financial markets as well are trying to meet too many goals at the same time. This includes:

• to match supply and demand,
• to discover a fair price,
• to raise foreign direct investment (FDI),
• to couple local economies with the international system,
• to facilitate large-scale investments,
• to boost development,
• to share risk,
• to support a robust economy, and
• to create opportunities (to gamble, to become rich, etc.).

Therefore, it would be worth studying the system from a cybernetic control perspective. Maybe it would work better to separate some of these functions from each other rather than mixing them.

I. Human factors

Another aspect that tends to be overlooked in mainstream economics is the relevance of psychological and social factors such as emotions, creativity, social norms, herding effects, etc.
It would probably be wrong to interpret these effects just as a result of perception biases (see Sec. III A). Most likely, these human factors serve certain functions, such as supporting the creation of public goods [102] or collective intelligence [134,135]. As Bruno Frey has pointed out, economics should be seen from a social science perspective [136]. In particular, research on happiness has revealed that there are more incentives than just financial ones that motivate people to work hard [133]. Interestingly, there are quite a number of factors which promote volunteering [132]. It would also be misleading to judge emotions from the perspective of irrational behavior. They are a quite universal and relatively energy-consuming way of signalling. Therefore, they are probably more reliable than non-emotional signals. Moreover, they create empathy and, consequently, stimulate mutual support and a readiness for compromises. It is quite likely that this creates a higher degree of cooperativeness in social dilemma situations and, thereby, a higher payoff on average as compared to emotionless decisions, which often have drawbacks later on.

J. Information

Finally, there is no good theory that would allow one to assess the relevance of information in economic systems. Most economic models do not consider information as an explanatory variable, although information is actually a stronger driving force of urban growth and social dynamics than energy [137]. While we have an information theory to determine the number of bits required to encode a message, we are lacking a theory which would allow us to assess what kind of information is relevant or important, or what kind of information will change the social or economic world, or history. This may actually be largely dependent on the perception of pieces of information, and on normative or moral issues filtering or weighting information.
Moreover, we lack theories describing what will happen in cases of coincidence or contradiction of several pieces of information. When pieces of information interact, this can change their interpretation and, thereby, the decisions and behaviors resulting from them. That is one of the reasons why socio-economic systems are so hard to predict: "unknown unknowns", structural instabilities, and innovations cause emergent results and create a dynamics of surprise [138].

IV. ROLE OF OTHER SCIENTIFIC FIELDS

A. Econophysics, ecology, computer science

The problems discussed in the previous two sections pose interesting practical and fundamental challenges for economists, but also for other disciplines interested in understanding economic systems. Econophysics, for example, pursues a physical approach to economic systems, applying methods from statistical physics [81], network theory [139,140], and the theory of complex systems [85,87]. A contribution of physics appears quite natural, in fact, not only because of its tradition in detecting and modeling regularities in large data sets [141]. Physics also has a lot of experience in how to theoretically deal with problems such as time-dependence, fluctuations, friction, entropy, non-linearity, strong interactions, correlations, heterogeneity, and many-particle simulations (which can be easily extended towards multi-agent simulations). In fact, physics has influenced economic modeling already in the past. Macroeconomic models, for example, were inspired by thermodynamics. More recent examples of relevant contributions by physicists concern models of self-organizing conventions [54], of geographic agglomeration [65], of innovation spreading [142], or of financial markets [143], to mention just a few examples. One can probably say that physicists have been among the pioneers calling for new approaches in economics [81,87,143-147].
A particularly visionary book, beside Wolfgang Weidlich's work, was the "Introduction to Quantitative Aspects of Social Phenomena" by Elliott W. Montroll and Wade W. Badger, which addressed by mathematical and empirical analysis subjects as diverse as population dynamics, the arms race, speculation patterns in stock markets, congestion in vehicular traffic, as well as the problems of atmospheric pollution, city growth, and developing countries, already in 1974 [148]. Unfortunately, it is impossible in our paper to reflect the numerous contributions of the field of econophysics in any adequate way. The richness of scientific contributions is probably reflected best by the Econophysics Forum run by Yi-Cheng Zhang [149]. Many econophysics solutions are interesting, but so far they are not broad and mighty enough to replace the rational agent paradigm with its large body of implications and applications. Nevertheless, considering the relatively small number of econophysicists, there have been many promising results. The probably largest fraction of publications in econophysics in the last years had a data-driven or computer modeling approach to financial markets [143]. But econophysics has more to offer than the analysis of financial data (such as fluctuations in stock and foreign currency exchange markets), the creation of interaction models for stock markets, or the development of risk management strategies. Other scientists have focused on statistical laws underlying income and wealth distributions, non-linear market dynamics, macroeconomic production functions and conditions for economic growth or agglomeration, sustainable economic systems, business cycles, microeconomic interaction models, network models, the growth of companies, supply and production systems, logistic and transport networks, or innovation dynamics and diffusion. An overview of subjects is given, for example, by Ref.
[152] and the contributions to the annual spring workshop of the Physics of Socio-Economic Systems Division of the DPG [153]. To the dissatisfaction of many econophysicists, the transfer of knowledge often did not work very well or, if so, has not been well recognized [150]. Besides scepticism on the side of many economists with regard to novel approaches introduced by "outsiders", the limited resonance and level of interdisciplinary exchange in the past was also caused in part by econophysicists. In many cases, questions have been answered which no economist asked, rather than addressing puzzles economists are interested in. Apart from this, the econophysics work was not always presented in a way that linked it to the traditions of economics, pointed out deficiencies of existing models, and highlighted the relevance of the new approach well. Typical responses are: Why has this model been proposed and not another one? Why has this simplification been used (e.g. an Ising model of interacting spins rather than a rational agent model)? Why are existing models not good enough to describe the same facts? What is the relevance of the work compared to previous publications? What practical implications does the finding have? What kind of paradigm shift does the approach imply? Can existing models be modified or extended in a way that solves the problem without requiring a paradigm shift? Correspondingly, there have been criticisms not only by mainstream economists, but also by colleagues who are open to new approaches [151]. Therefore, we would like to suggest studying the various economic subjects from the perspective of the above-mentioned fundamental challenges, and contrasting econophysics models with traditional economic models, showing that the latter leave out important features. It is important to demonstrate what properties of economic systems cannot be understood for fundamental reasons within the mainstream framework (i.e.
cannot be dealt with by additional terms within the modeling class that is conventionally used). In other words, one needs to show why a paradigm shift is unavoidable, and this requires careful argumentation. We are not claiming that this has not been done in the past, but it certainly takes an additional effort to explain the essence of the econophysics approach in the language of economics, particularly as mainstream economics may not always provide suitable terms and frameworks to do this. This is particularly important, as the number of econophysicists is small compared to the number of economists, i.e. a minority wants to convince an established majority. To be taken seriously, one must also demonstrate a solid knowledge of related previous work of economists, to prevent the stereotypical reaction that the subject of the paper has been studied already a long time ago (tacitly implying that it does not require another paper or model to address what has already been looked at before). A reasonable and promising strategy to address the above fundamental and practical challenges is to set up multidisciplinary collaborations in order to combine the best of all relevant scientific methods and knowledge. It seems plausible that this will generate better models and higher impact than working in separation, and it will stimulate scientific innovation. Physicists can contribute with their experience in handling large data sets, in creating and simulating mathematical models, in developing useful approximations, and in setting up laboratory experiments and measurement concepts. Current research activities in economics do not seem to put enough focus on:

• modeling approaches for complex systems [154],
• computational modeling of what is not analytically tractable anymore, e.g. by agent-based models [155-157],
• testable predictions and their empirical or experimental validation [164],
• managing complexity and systems engineering approaches to identify alternative ways of organizing financial markets and economic systems [91,93,165], and
• an advance testing of the effectiveness, efficiency, safety, and systemic impact (side effects) of innovations, before they are implemented in economic systems.

This is in sharp contrast to mechanical, electrical, nuclear, chemical, and medical drug engineering, for example. Expanding the scope of economic thinking and paying more attention to these natural, computer, and engineering science aspects will certainly help to address the theoretical and practical challenges posed by economic systems. Besides physics, we anticipate that evolutionary biology, ecology, psychology, neuroscience, and artificial intelligence will also be able to make significant contributions to the understanding of the roots of economic problems and how to solve them. In conclusion, there are interesting scientific times ahead.

B. Social Sciences

It is a good question whether answering the above list of fundamental challenges will sooner or later solve the practical problems as well. We think this is a precondition, but it takes more, namely the consideration of social factors. In particular, the following questions need to be answered:

9. How can the formation of social norms and conventions, social roles and socialization, conformity and integration be understood?
10. How do language and culture evolve?
11. How to comprehend the formation of group identity and group dynamics? What are the laws of coalition formation, crowd behavior, and social movements?
12. How to understand social networks, social structure, stratification, organizations and institutions?
13. How do social differentiation, specialization, inequality and segregation come about?
14. How to model deviance and crime, conflicts, violence, and wars?
15.
15. How to understand social exchange, trading, and market dynamics?

We think that, despite the large amount of research performed on these subjects, they are still not fully understood. The ultimate goal would be to formulate mathematical models which would allow one to understand these issues as emergent phenomena based on first principles, e.g. as a result of (co-)evolutionary processes. Such first principles would be the basic facts of human capabilities and the kinds of interactions resulting from them, namely

1. birth, death, and reproduction,
2. the need of and competition for resources (such as food and water),
3. the ability to observe their environment (with different senses),
4. the capability to memorize, learn, and imitate,
5. empathy and emotions,
6. signaling and communication abilities,
7. constructive (e.g. tool-making) and destructive (e.g. fighting) abilities,
8. mobility and (limited) carrying capacity,
9. the possibility of social and economic exchange.

Such features can, in principle, be implemented in agent-based models [158][159][160][161][162][163]. Computer simulations of many interacting agents would allow one to study the phenomena emerging in the resulting (artificial or) model societies, and to compare them with stylized facts [163,168,169]. The main challenge, however, is not to program a seemingly realistic computer game. We are looking for scientific models, i.e. the underlying assumptions need to be validated, and this requires linking computer simulations with empirical and experimental research [170], and with massive (but privacy-respecting) mining of social interaction data [141]. In the ideal case, there would also be an analytical understanding in the end, as has recently been gained for interactive driver behavior [111].

1. Demographic change of the population structure (change of birth rate, migration, integration...)
2. Financial and economic (in)stability (government debts, taxation, and inflation/deflation; sustainability of social benefit systems; consumption and investment behavior...)
3. Social, economic and political participation and inclusion (of people of different gender, age, health, education, income, religion, culture, language, preferences; reduction of unemployment...)
4. Balance of power in a multi-polar world (between different countries and economic centers; also between individual and collective rights, political and company power; avoidance of monopolies; formation of coalitions; protection of pluralism, individual freedoms, minorities...)
5. Collective social behavior and opinion dynamics (abrupt changes in consumer behavior; social contagion, extremism, hooliganism, changing values; breakdown of cooperation, trust, compliance, solidarity...)
6. Security and peace (organized crime, terrorism, social unrest, independence movements, conflict, war...)
7. Institutional design (intellectual property rights; overregulation; corruption; balance between global and local, central and decentral control...)
8. Sustainable use of resources and environment (consumption habits, travel behavior, sustainable and efficient use of energy and other resources, participation in recycling efforts, environmental protection...)
9. Information management (cyber risks, misuse of sensitive data, espionage, violation of privacy; data deluge, spam; education and inheritance of culture...)
10. Public health (food safety; spreading of epidemics [flu, SARS, H1N1, HIV], obesity, smoking, or unhealthy diets...)

1. How to understand human decision-making? How to explain deviations from rational choice theory and the decision-theoretical paradoxes? Why are people risk averse?
2. How does consciousness and self-consciousness come about?
3. How to understand creativity and innovation?
4. How to explain homophily, i.e.
the fact that individuals tend to agglomerate, interact with and imitate similar others?
5. How to explain social influence, collective decision making, opinion dynamics and voting behavior?
6. Why do individuals often cooperate in social dilemma situations?
7. How do indirect reciprocity, trust and reputation evolve?
8. How do costly punishment, antisocial punishment, and discrimination come about?

Acknowledgements

The authors are grateful for partial financial support by the ETH Competence Center "Coping with Crises in Complex Socio-Economic Systems" (CCSS) through ETH Research Grant CH1-01 08-2 and by the Future and Emerging Technologies programme FP7-COSI-ICT of the European Commission through the project Visioneer (grant no.: 248438). They would like to thank Kenett Dror, Tobias Preis and Gabriele Tedeschi for feedback on the manuscript, as well as the participants of a Visioneer workshop in Zurich from January 13 to 15, 2010, for inspiring discussions, involving, besides the authors, Stefano Battiston, Guido Caldarelli, Anna Carbone, Giovanni Luca Ciampaglia, Andreas Flache, Imre Kondor, Sergi Lozano, Thomas Maillart, Amin Mazloumian, Tamara Mihaljev, Alexander Mikhailov, Ryan Murphy, Carlos Perez Roca, Stefan Reimann, Aki-Hiro Sato, Christian Schneider, Piotr Swistak, Gabriele Tedeschi, and Jiang Wu. Last but not least, we are grateful to Didier Sornette, Frank Schweitzer and Lars-Erik Cederman for providing some requested references.

References

P. Mancosu (ed.), From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920's (Oxford University Press, New York, 1998).
Paul Krugman, "How did economists get it so wrong?", The New York Times (September 2, 2009), see http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html
D. Colander et al., The Financial Crisis and the Systemic Failure of Academic Economics (Dahlem Report, Univ. of Copenhagen Dept. of Economics Discussion Paper No. 09-03, 2009), see http://papers.ssrn.com/sol3/papers.cfm?abstract id=1355882
DeLisle Worrell, What's wrong with economics. Address of the Governor of the Central Bank of Barbados on June 30, 2010, see http://www.bis.org/review/r100713c.pdf
D. Helbing, The FuturIcT knowledge accelerator: Unleashing the power of information for a sustainable future, see http://arxiv.org/abs/1004.4969 and http://www.futurict.eu
Bjorn Lomborg, Global Crises, Global Solutions: Costs and Benefits (Cambridge University Press, Cambridge, 2nd ed., 2009).
R. H. Nelson, Economics as Religion: from Samuelson to Chicago and Beyond (Pennsylvania State University, 2001).
F. A. Von Hayek, Individualism and Economic Order (University of Chicago Press, 1948).
F. A. Von Hayek, The Counter Revolution of Science (The Free Press of Glencoe, London, 1955).
M. Friedman, The Case of flexible exchange rates, in: Essays in Positive Economics (University of Chicago Press, Chicago, 1953).
R. Lucas, Adaptive behavior and economic theory. Journal of Business 59, S401-S426 (1986).
G. Gigerenzer and R. Selten (eds.), Bounded Rationality. The Adaptive Toolbox (MIT Press, Cambridge, MA, 2001).
H. Gintis, The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences (Princeton University Press, Princeton, 2009).
F. Caccioli and M. Marsili, On information efficiency and financial stability. Preprint, http://arxiv.org/abs/1004.5014
H. A. Simon, Models of Man (Wiley, 1957).
H. A. Simon, Models of Bounded Rationality, Vol. 1 (MIT Press, Boston, 1984).
G. Gigerenzer, P. M. Todd, and the ABC Research Group, Simple Heuristics That Make Us Smart (Oxford University Press, Oxford, 2000).
D. Kahneman, P. Slovic, and A. Tversky, Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, Cambridge, MA, 1982).
V. I. Yukalov and D. Sornette, Physics of risk and uncertainty in quantum decision making. European Physical Journal B 71, 533-548 (2009).
E. Fehr and K. M. Schmidt, A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics 114(3), 817-868 (1999).
J. Henrich, R. Boyd, S. Bowles, and C. Camerer, Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies (Oxford University, Oxford, 2004).
E. Hoffman, K. McCabe, and V. L. Smith, Social distance and other-regarding behavior in dictator games. The American Economic Review 86(3), 653-660 (1996).
R. O. Murphy and K. Ackermann, Measuring social value orientation, preprint (2010).
Ö. B. Bodvarsson and W. A. Gibson, Economics and restaurant gratuities: Determining tip rates. American Journal of Economics and Sociology 56(2), 187-203 (1997).
J. Elster, The Cement of Society: A Study of Social Order (Cambridge University Press, Cambridge, 1989).
C. Horne, The Rewards of Punishment. A Relational Theory of Norm Enforcement (Stanford University Press, Stanford, 2009).
D. Helbing, W. Yu, K.-D. Opp, and H. Rauhut, The formation of homogeneous norms in heterogeneous populations. Submitted (2010).
D. Centola, R. Willer, and M. Macy, The emperor's dilemma: A computational model of self-enforcing norms. American Journal of Sociology 110, 1009-1040 (2005).
J. M. Epstein and R. A. Hammond, Non-explanatory equilibria: An extremely simple game with (mostly) unattainable fixed points. Complexity 7(4), 18-22 (2002).
C. Borgs et al., The myth of the folk theorem. Games and Economic Behavior, in press, doi:10.1016/j.geb.2009.04.016
H. G. Schuster, Deterministic Chaos (Wiley VCH, Weinheim, 2005).
For example, three-body planetary motion has deterministic chaotic solutions, although it is a problem in classical mechanics, where the equations of motion optimize a Lagrangian functional.
K. Gödel, On Formally Undecidable Propositions of Principia Mathematica and Related Systems (Basic, New York, 1962).
A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 2.42, 230-265 (1936).
E. Fama, The Behavior of stock market prices. Journal of Business 38, 34-105.
Wikipedia article on Adam Smith, see http://en.wikipedia.org/wiki/Adam Smith, downloaded on July 14, 2010.
A. Smith (1776), An Inquiry into the Nature and Causes of the Wealth of Nations (University of Chicago Press, 1977).
A. Smith (1759), The Theory of Moral Sentiments.
M. A. Nowak and K. Sigmund, Evolution of indirect reciprocity. Nature 437, 1291-1298 (2005).
M. J. Mauboussin, Revisiting market efficiency: the stock market as a complex adaptive system. Journal of Applied Corporate Finance 14, 47-55 (2005).
J. Stiglitz, There is no invisible hand. The Guardian (December 20, 2002), see http://www.guardian.co.uk/education/2002/dec/20/highereducation.uk1
D. Helbing and T. Vicsek, Optimal self-organization. New Journal of Physics 1, 13.1-13.17 (1999).
D. Helbing, Traffic and related self-driven many-particle systems. Reviews of Modern Physics 73, 1067-1141 (2001).
A. Johansson, D. Helbing, H. Z. A-Abideen, and S. Al-Bosta, From crowd dynamics to crowd safety: A video-based analysis. Advances in Complex Systems 11(4), 497-527 (2008).
W. Michiels and S.-I. Niculescu, Stability and Stabilization of Time-Delay Systems (SIAM - Society for Industrial and Applied Mathematics, Philadelphia, 2007).
D. Helbing, U. Witt, S. Lämmer, and T. Brenner, Network-induced oscillatory behavior in material flow networks and irregular business cycles. Physical Review E 70, 056118 (2004).
G. Hardin, The tragedy of the commons. Science 162, 1243-1248 (1968).
E. Fehr and S. Gächter, Altruistic punishment in humans. Nature 415, 137-140 (2002).
T. Preis and H. E. Stanley, Switching phenomena in a system with no switches. Journal of Statistical Physics (JSTAT) 138, 431-446 (2010).
G. A. Akerlof and R. J. Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism (Princeton University Press, 2010).
J. D. Sterman, Testing behavioral simulation models by direct experiment. Management Science 33(12), 1572-1592 (1987).
D. Helbing, A mathematical model for behavioral changes by pair interactions. Pages 330-348 in: G. Haag, U. Mueller, and K. G. Troitzsch (eds.), Economic Evolution and Demographic Change. Formal Models in Social Sciences (Springer, Berlin, 1992).
H. N. Agiza, G. I. Bischib, and M. Kopel, Multistability in a dynamic Cournot game with three oligopolists. Mathematics and Computers in Simulation 51, 63-90 (1999).
D. Helbing, A. Szolnoki, M. Perc, and G. Szabo, Evolutionary establishment of moral and double moral standards through spatial interactions. PLoS Computational Biology 6(4), e1000758 (2010).
A. J. Lotka, Elements of Mathematical Biology (Dover, New York, 1956).
E. Hopf, Abzweigungen einer periodischen Lösung von einer stationären Lösung eines Differentialgleichungssystems. Math. Naturwiss. Klasse 94, 1ff. (1942).
D. Helbing, Dynamic decision behavior and optimal guidance through information services: Models and experiments. Pages 47-95 in: M. Schreckenberg and R. Selten (eds.), Human Behaviour and Traffic Networks (Springer, Berlin, 2004).
H. Gintis, The dynamics of general equilibrium. The Economic Journal 117, 1280-1309 (2007).
C. H. Hommes, Modeling the stylized facts in finance through simple nonlinear adaptive systems. Proceedings of the National Academy of Sciences of the USA (PNAS) 99, Suppl. 3, 7221-7228 (2002).
The quotes were presented by Jorgen Vitting Andersen in his talk "Predicting moments of crisis in physics and finance" during the workshop "Windows to Complexity" in Münster, Germany, on June 11, 2010.
A. M. Turing, The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London B 237, 37-72 (1952).
J. D. Murray, Lectures on Nonlinear Differential Equation-Models in Biology (Clarendon Press, Oxford, 1977).
W. Weidlich and M. Munz, Settlement formation I: A dynamic theory. Annals of Regional Science 24, 83-106 (2000); Settlement formation II: Numerical simulation. Annals of Regional Science 24, 177-196 (2000).
D. Helbing and T. Platkowski, Self-organization in space and induced by fluctuations. International Journal of Chaos Theory and Applications 5(4), 47-62 (2000).
J. Mimkes, Stokes integral of economic growth: Calculus and the Solow model. Physica A 389(8), 1665-1676 (2010).
O. J. Blanchard and L. H. Summers, Hysteresis and the European unemployment problem. NBER Macroeconomics Annual 1, 15-78 (1986).
M. Cross and H. Greenside, Pattern Formation and Dynamics in Nonequilibrium Systems (Cambridge University, 2009).
D. Sornette, Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder (Springer, Berlin, 2006).
A. Bunde, J. Kropp, and H.-J. Schellnhuber (eds.), The Science of Disasters (Springer, Berlin, 2002).
W. Horsthemke and R. Lefever, Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology (Springer, Berlin, 1983).
P. Reimann, Brownian motors: noisy transport far from equilibrium. Physics Reports 361, 57-265 (2002).
E. C. Zeeman (ed.), Catastrophe Theory (Addison-Wesley, London, 1977).
H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University, 1987).
D. Helbing and S. Lämmer, Managing complexity: an introduction. Pages 1-16 in: D. Helbing (ed.), Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008).
D. Helbing, Systemic risks in society and economics. SFI Working Paper 09-12-044, see http://www.santafe.edu/media/workingpapers/09-12-044.pdf
J. Lorenz, S. Battiston, and F. Schweitzer, Systemic risk in a unifying framework for cascading processes on networks. The European Physical Journal B 71(4), 441-460 (2009).
P. Bak, How Nature Works: The Science of Self-Organized Criticality (Springer, Berlin, 1999).
Knowledge@Wharton, Why economists failed to predict the financial crisis (May 13, 2009), see http://knowledge.wharton.upenn.edu/article.cfm?articleid=2234 or http://www.ftpress.com/articles/article.aspx?p=1350507
W. Weidlich, Physics and social science - the approach of synergetics. Physics Reports 204(1), 1-163 (1991).
T. Puu, Nonlinear Economic Dynamics (Lavoisier, 1991); Attractors, Bifurcations, & Chaos. Nonlinear Phenomena in Economics (Springer, Berlin, 2003).
H. W. Lorenz, Nonlinear Dynamical Equations and Chaotic Economy (Springer, Berlin, 1993).
P. Krugman, The Self-Organizing Economy (Blackwell, 1996).
F. Schweitzer (ed.), Self-Organization of Complex Structures: From Individual to Collective Dynamics (CRC Press, 1997); Modeling Complexity in Economic and Social Systems (World Scientific, 2002).
R. H. Day, Complex Economic Dynamics (MIT Press, Vol. 1: 1998; Vol. 2: 1999).
W. Weidlich, Sociodynamics: A Systematic Approach to Mathematical Modelling in the Social Sciences (CRC Press, 2000).
J. D. Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World (McGraw Hill, 2000).
W. A. Brock, Growth Theory, Non-Linear Dynamics and Economic Modelling (Edward Elgar, 2001).
S. Y. Auyang, Foundations of Complex-System Theories (Cambridge University, 1998).
M. Salzano and D. Colander (eds.), Complexity Hints for Economic Policy (Springer, 2007).
D. Delli Gatti, E. Gaffeo, M. Gallegati, G. Giulioni, and A. Palestrini, Emergent Economics (Springer, 2008).
M. Faggini and T. Lux (eds.), Coping with the Complexity of Economics (Springer, 2009).
J. B. Rosser, Jr., K. L. Cramer, and J. Madison (eds.), Handbook of Research on Complexity (Edward Elgar Publishers, 2009).
M. Olson, The Rise and Decline of Nations: Economic Growth, Stagflation, and Social Rigidities (Yale University Press, New Haven, 1982).
R. Axelrod, The Evolution of Cooperation (Basic Books, New York, 1984).
J. C. Harsanyi and R. Selten, A General Theory of Equilibrium Selection (MIT Press, Cambridge, MA, 1988).
D. Helbing and S. Lozano, Phase transitions to cooperation in the prisoner's dilemma. Physical Review E 81(5), 057102 (2010).
M. A. Nowak and R. M. May, Evolutionary games and spatial chaos. Nature 359, 826-829 (1992).
F. C. Santos, M. D. Santos, and J. M. Pacheco, Social diversity promotes the emergence of cooperation in public goods games. Nature 454, 213-216 (2008).
D. Helbing and W. Yu, The outbreak of cooperation among success-driven individuals under noisy conditions. Proceedings of the National Academy of Sciences USA (PNAS) 106(8), 3680-3685 (2009).
E. Ostrom, Governing the Commons. The Evolution of Institutions for Collective Action (Cambridge University, New York, 1990).
C. P. Roca, S. Lozano, A. Arenas, and A. Sanchez, Topological traps control flow on real networks: The case of coordination failures, submitted (2010).
D. Helbing, Modelling supply networks and business cycles as unstable transport phenomena. New Journal of Physics 5, 90.1-90.28 (2003).
M. Eigen, The selforganization of matter and the evolution of biological macromolecules. Naturwissenschaften 58, 465-523 (1971).
J. Schumpeter, The Theory of Economic Development (Harvard University Press, Cambridge, 1934).
Ken Arrow, in: D. Colander, R. P. F. Holt and J. Barkley Rosser (eds.), The Changing Face of Economics. Conversations with Cutting Edge Economists (The University of Michigan Press, Ann Arbor, 2004), p. 301.
M. Gallegati and A. Kirman, Beyond the Representative Agent (Edward Elgar, 1999).
A. Kirman, Economics with Heterogeneous Interacting Agents (Springer, 2001).
M. Aoki, Modeling Aggregate Behavior and Fluctuations in Economics (Cambridge University, 2002).
D. Helbing, collection of papers on An Analytical Theory of Traffic Flows in European Physical Journal B, see http://www.soms.ethz.ch/research/traffictheory
W. B. Arthur, S. N. Durlauf, and D. Lane (eds.), The Economy as An Evolving Complex System II (Santa Fe Institute Series, Westview Press, 1997).
L. E. Blume and S. N. Durlauf (eds.), The Economy as an Evolving Complex System III (Oxford University, Oxford, 2006).
U. Witt, Evolutionary economics. Pages 67-73 in: S. N. Durlauf and L. E. Blume (eds.), The New Palgrave Dictionary of Economics (Palgrave Macmillan, 2008).
R. M. May, S. A. Levin, and G. Sugihara, Ecology for bankers. Nature 451, 894-895 (2008).
E. D. Beinhocker, Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics (Harvard Business Press, 2007).
R. A. Young and R. L. Giese (eds.), Introduction to Forest Ecosystem Science and Management (Wiley, 2002).
H. Meinhardt, Models of Biological Pattern Formation (Academic Press, 1982).
M. Caramia and P. Dell'Olmo, Multi-objective Management in Freight Logistics: Increasing Capacity, Service Level and Safety with Optimization Algorithms (Springer, 2008).
Handbook of Research on Nature- Inspired Computing for Economics and Management (IGI Global, 2006). Self-control of traffic lights and vehicle flows in urban road networks. S Lämmer, D Helbing, Journal of Statistical Mechanics (JSTAT). 4019S. Lämmer and D. Helbing, Self-control of traffic lights and vehicle flows in urban road networks. Journal of Statistical Mechanics (JSTAT), P04019 (2008). Biologistics and the struggle for efficiency: Concepts and perspectives. D Helbing, Advances in Complex Systems. 126D. Helbing et al. Biologistics and the struggle for efficiency: Concepts and perspectives. Advances in Complex Systems 12(6), 533-548 (2009). Analytical calculation of critical perturbation amplitudes and critical densities by non-linear stability analysis of a simple traffic flow model. D Helbing, M Moussaid, European Physical Journal B. 694D. Helbing and M. Moussaid, Analytical calculation of criti- cal perturbation amplitudes and critical densities by non-linear stability analysis of a simple traffic flow model. European Physical Journal B 69(4), 571-581 (2009). Complexity cube for the characterization of complex production systems. K Windt, T Philipp, F Böse, International Journal of Computer Integrated Manufacturing. 212K. Windt, T. Philipp, and F. Böse (2007) Complexity cube for the characterization of complex production systems. Interna- tional Journal of Computer Integrated Manufacturing 21(2), 195-200. Verfahren zur Koordination konkurrierender Prozesse oder zur Steuerung des Transports von mobilen Einheiten innerhalb eines Netzwerkes (Method for coordination of concurrent processes for control of the transport of mobile units within a network. D Helbing, S Lämmer, WO/2006/122528D. Helbing and S. 
Lämmer, Verfahren zur Koordination konkurrierender Prozesse oder zur Steuerung des Trans- ports von mobilen Einheiten innerhalb eines Netzwerkes (Method for coordination of concurrent processes for con- trol of the transport of mobile units within a network), Patent WO/2006/122528 (2006). Noise sensitivity of portfolio selection under various risk measures. I Kondor, S Pafka, G Nagy, Journal of Banking & Finance. 315I. Kondor, S. Pafka, and G. Nagy, Noise sensitivity of portfolio selection under various risk measures. Journal of Banking & Finance 31(5), 1545-1573 (2007). The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. D Dorner, Basic; New YorkD. Dorner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. (Basic, New York, 1997). Complexity and the enterprise: The illusion of control. K Kempf, Managing Complexity: Insights, Concepts, Applications. D. HelbingBerlinSpringerK. Kempf, Complexity and the enterprise: The illusion of con- trol. Pages 57-87 in: D. Helbing (ed.) Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008). Nurturing breakthroughs: lessons from complexity theory. D Sornette, Journal of Economic Interaction and Coordination. 3D. Sornette, Nurturing breakthroughs: lessons from complex- ity theory. Journal of Economic Interaction and Coordination 3, 165-181 (2008). Monetary Relationships: A View from Threadneedle Street. (Papers in Monetary Economics, Reserve Bank of Australia. C A E Goodhart, Foundations of Economic Analysis (Harvard UniversityFor applications of Le Chatelier's principle to economics see also P. A.SamuelsonC. A. E. Goodhart, Monetary Relationships: A View from Threadneedle Street. (Papers in Monetary Economics, Reserve Bank of Australia, 1975). For applications of Le Chatelier's principle to economics see also P. A.Samuelson, Foundations of Economic Analysis (Harvard University, 1947). 
A P Fiske, Structures of Social Life: The Four Elementary Forms of Human Relations. The Free PressA. P. Fiske, Structures of Social Life: The Four Elementary Forms of Human Relations (The Free Press, 1993). Understanding and assessing the motivations of volunteers: a functional approach. E G Clary, Journal of Personality and Social Psychology. 746E. G. Clary et al., Understanding and assessing the motivations of volunteers: a functional approach. Journal of Personality and Social Psychology 74(6), 1516-1530 (1998). Happiness: A Revolution in Economics. B S Frey, MIT PressB. S. Frey, Happiness: A Revolution in Economics (MIT Press, 2008). The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business. J Surowiecki, Economies, Societies, and NationsDoubledayJ. Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations (Doubleday, 2004). Swarm Intelligence. Introduction and Applications. C. Blum and D. MerkleSpringerC. Blum and D. Merkle (eds.) Swarm Intelligence. Introduc- tion and Applications (Springer, 2008). B S Frey, Economics as a Science of Human Behaviour: Towards a New Social Science Paradigm. SpringerB. S. Frey, Economics as a Science of Human Behaviour: To- wards a New Social Science Paradigm (Springer, 1999). Growth, innovation, scaling and the pace of life in cities. L M A Bettencourt, J Lobo, D Helbing, C Khnert, G B West, Proceedings of the National Academy of Sciences USA. 104PNASL. M. A. Bettencourt, J. Lobo, D. Helbing, C. Khnert, and G. B. West, Growth, innovation, scaling and the pace of life in cities. Proceedings of the National Academy of Sciences USA (PNAS) 104, 7301-7306 (2007). R R Mcdaniel, Jr , D , Uncertainty and Surprise in Complex Systems. J. DriebeSpringerR. R. McDaniel, Jr. and D. J. Driebe (eds.) Uncertainty and Surprise in Complex Systems (Springer, 2005). Scale-free networks: a decade and beyond. 
A.-L Barabasi, Science. 325A.-L. Barabasi, Scale-free networks: a decade and beyond. Science 325, 412-413 (2009). Economic networks: the new challenges. F Schweitzer, Science. 325F. Schweitzer et al., Economic networks: the new challenges. Science 325, 422-425 (2009). . D Lazer, Computational social science. Science. 323D. Lazer et al., Computational social science. Science 323, 721-723 (2009). Hyperselection and innovation described by a stochastic model of technological evolution. E Bruckner, W Ebeling, M A J Montano, A Scharnhorst, Pages 79-90Evolutionary Economics and Chaos Theory: New Directions in Technology Studies. L. Leydesdorff, L. and P. van den BesselaarLondonPinterE. Bruckner, W. Ebeling, M. A. J. Montano, and A. Scharn- horst, Hyperselection and innovation described by a stochastic model of technological evolution; Pages 79-90 in: L. Leydes- dorff, L. and P. van den Besselaar (eds.) Evolutionary Eco- nomics and Chaos Theory: New Directions in Technology Studies (Pinter, London, 1994). R N Mantegna, H E Stanley, Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge University PressR. N. Mantegna and H. E. Stanley, Introduction to Econo- physics: Correlations and Complexity in Finance (Cambridge University Press, 2000). D Challet, M Marsili, Y.-C Zhang, Minority Games: Interacting Agents in Financial Markets. Oxford UniversityD. Challet, M. Marsili, and Y.-C. Zhang, Minority Games: Interacting Agents in Financial Markets (Oxford University, 2005). Economics needs a scientific revolution. J.-P Bouchaud, Nature. 4551181J.-P. Bouchaud, Economics needs a scientific revolution. Na- ture 455, 1181 (2008). The economy needs agent-based modelling. J D Farmer, D Foley, Nature. 460J. D. Farmer and D. Foley, The economy needs agent-based modelling. Nature 460, 685-686 (2009). Meltdown modelling. M Buchanan, Nature. 460M. Buchanan, Meltdown modelling. Nature 460, 680-682 (2009). Introduction to Quantitative Aspects of Social Phenomena. 
W Elliott, Wade W Montroll, Badger, Gordon and BreachElliott W. Montroll and Wade W. Badger, Introduction to Quantitative Aspects of Social Phenomena (Gordon and Breach, 1974). Y.-C Zhang, Econophysics Forum. Y.-C. Zhang, Econophysics Forum, see http://www.unifr.ch/ econophysics B M Roehner, Fifteen years of econophysics: worries, hopes and prospects. B. M. Roehner, Fifteen years of econophysics: worries, hopes and prospects (2010), see http://arxiv.org/ abs/1004.3229 Worrying trends in econophysics. M Gallegati, S Keen, T Lux, P Omerod, Physica A. 370M. Gallegati, S. Keen, T. Lux, and P. Omerod, Worrying trends in econophysics. Physica A 370, 1-6 (2006). Econophysics and Sociophysics: Trends and Perspectives. B. K. Chakrabarti, A. Chakraborti, and A. ChatterjeeWiley VCH, WeinheimB. K. Chakrabarti, A. Chakraborti, and A. Chatterjee (eds.) Econophysics and Sociophysics: Trends and Perspectives (Wi- ley VCH, Weinheim, 2006). Aims and Scope of the Physics of Socio-Economic Systems Division of the German Physical Society. Aims and Scope of the Physics of Socio-Economic Systems Division of the German Physical Society, see http://www.dpg-physik.de/dpg/gliederung/fv/soe/aims.html . For past events see http://www.dpg-physik.de/dpg/ gliederung/fv/soe/veranstaltungen/vergangene.html Pluralistic modeling of complex systems. D Helbing, D. Helbing, Pluralistic modeling of complex systems. Preprint http://arxiv.org/abs/1007.2818 (2010). Handbook of Computational Economics. L. Tesfatsion and K. L. JuddNorth-Holland2Agent-Based Computational EconomicsL. Tesfatsion and K. L. Judd (eds.) Handbook of Computa- tional Economics, Vol. 2: Agent-Based Computational Eco- nomics (North-Holland, 2006). Simulation modeling in organizational and management research. J R Harrison, Z Lin, G R Carroll, K M Carley, Academy of Management Review. 324J. R. Harrison, Z. Lin, G. R. Carroll, and K. M. Carley, Sim- ulation modeling in organizational and management research. 
Academy of Management Review 32(4), 1229-1245 (2007). Developing theory through simulation methods. J P Davis, K M Eisenhardt, C B Bingham, Management Review. 322AcademyJ. P. Davis, K. M. Eisenhardt, and C. B. Bingham, Developing theory through simulation methods. Academy of Management Review 32(2), 480-499 (2007). T Schelling, Micromotives and Macrobehavior. Norton, New YorkT. Schelling, Micromotives and Macrobehavior (Norton, New York, 1978). Understanding complex social dynamics: A plea for cellular automata based modelling. R Hegselmann, A Flache, Journal of Artificial Societies and Social Simulation. 13R. Hegselmann and A. Flache, Understanding complex social dynamics: A plea for cellular automata based modelling. Jour- nal of Artificial Societies and Social Simulation 1, no. 3, see http://www.soc.surrey.ac.uk/JASSS/1/3/1.html Platforms and methods for agentbased modeling. N Gilbert, S Bankes, PNAS. 993N. Gilbert and S. Bankes, Platforms and methods for agent- based modeling. PNAS 99(3), 7197-7198 (2002). From factors to actors: Computational sociology and agent-based modeling. M W Macy, R Willer, Annu. Rev. Sociol. 28M. W. Macy and R. Willer, From factors to actors: Computa- tional sociology and agent-based modeling. Annu. Rev. Sociol. 28, 143-166 (2002). Artificial societies. Multiagent systems and the micro-macro link in sociological theory. R K Sawyer, Sociological Methods & Research. 313R. K. Sawyer, Artificial societies. Multiagent systems and the micro-macro link in sociological theory. Sociological Methods & Research 31(3), 325-363 (2003). J M Epstein, Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton UniversityJ. M. Epstein, Generative Social Science: Studies in Agent- Based Computational Modeling (Princeton University, 2007). The Handbook of Experimental Economics. J H Kagel, A E Roth, Princeton UniversityJ. H. Kagel and A. E. Roth, The Handbook of Experimental Economics (Princeton University, 1997). 
D Helbing, Managing Complexity: Concepts, Insights, Applications. SpringerD. Helbing Managing Complexity: Concepts, Insights, Appli- cations (Springer, 2008). Designing Economic Mechanisms. L Hurwicz, S Reiter, Cambridge UniversityL. Hurwicz and S. Reiter, Designing Economic Mechanisms (Cambridge University, 2006). . W G Sullivan, E M Wicks, C P Koelling, Prentice HallEngineering EconomyW. G. Sullivan, E. M. Wicks, and C. P. Koelling, Engineering Economy (Prentice Hall, 2008). . D Helbing, Kluwer AcademicD. Helbing, Quantitative Sociodynamics (Kluwer Academic, 1995). Statistical physics of social dynamics. C Castellano, S Fortunato, V Loreto, Reviews of Modern Physics. 81C. Castellano, S. Fortunato, V. Loreto, Statistical physics of social dynamics. Reviews of Modern Physics 81, 591-646 (2009). The future of social experimenting. D Helbing, W Yu, Proceedings of the National Academy of Sciences USA. 12PNASD. Helbing and W. Yu (2010) The future of social ex- perimenting. Proceedings of the National Academy of Sciences USA (PNAS) 107(12), 5265-5266 (2010); see http://www.soms.ethz.ch/research/socialexperimenting for a longer version.
GHz nanomechanical resonator in an ultraclean suspended graphene p-n junction

Minkyung Jung,1,2,a) Peter Rickhaus,1,3 Simon Zihlmann,1 Alexander Eichler,3 Peter Makk,1,4,b) and Christian Schönenberger1

1) Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland
2) DGIST Research Institute, DGIST, 42988 Daegu, Korea
3) Institute for Solid State Physics, ETH Zurich, CH-8093 Zurich, Switzerland
4) Department of Physics, Budapest University of Technology and Economics, and Nanoelectronics Momentum Research Group, Hungarian Academy of Sciences, Budafoki ut 8, 1111 Budapest, Hungary

a) [email protected]
b) [email protected]

We demonstrate high-frequency mechanical resonators in ballistic graphene p-n junctions. Fully suspended graphene devices with two bottom gates exhibit ballistic bipolar behavior after current annealing. We determine the graphene mass density and built-in tension for different current annealing steps by comparing the measured mechanical resonant response to a simplified membrane model. In a graphene membrane with high built-in tension, but still of macroscopic size with dimensions 3 × 1 μm², a record resonance frequency of 1.17 GHz is observed after the final current annealing step. We further compare the resonance response measured in the unipolar regime with the one in the bipolar regime. Remarkably, the resonant signals are strongly enhanced in the bipolar regime.

DOI: 10.1039/c8nr09963d
arXiv: 1812.06412
Introduction

Owing to the exceptional mechanical properties of graphene, such as high strength, graphene-based nanoelectromechanical systems have stimulated intensive research activities in recent years. [1][2][3][4][5][6] For example, extremely high quality factors 7 as well as ultrasensitive mass and force sensors 8 have been demonstrated. In addition, the low mass density and the high maximal tension allow for extremely high fundamental resonance frequencies. This makes graphene an excellent candidate for exploring quantum physics, since it is possible to cool the resonator to the quantum mechanical ground state. Recently, bilayer and multilayer graphene have been successfully coupled to superconducting microwave resonators and optical cavities, and the interaction between light and nanomechanical motion via radiation pressure has been observed. [9][10][11][12][13][14] The coupling strength between cavities and graphene was sufficiently strong to observe cavity backaction cooling. Furthermore, owing to the large tunability of the resonance frequency, strong coupling to other resonators and parametric amplification have been demonstrated in recent works. 15,16

In previous works, graphene mechanical resonators were operated in the megahertz (MHz) range. A gigahertz (GHz) graphene mechanical resonator has not been demonstrated yet. However, such resonators are needed in order to reach the quantum regime without having to actively cool the resonator by opto-mechanical side-band cooling. [17][18] Furthermore, graphene mechanical resonators reported previously were operated in the unipolar regime, where charge is transported by either electrons or holes (n or p regime). In this work, we demonstrate a GHz mechanical resonator in a ballistic graphene p-n junction. Fully suspended graphene resonators were fabricated with two bottom gates, which are used to control the carrier type and density.
To increase the quality of the suspended graphene layer, it is current-annealed in a vacuum chamber at low temperature. 19,20 It is known that this procedure increases the electron mobility, yielding ballistic graphene devices. 21 It has been suggested that residues remaining from the device fabrication and other potentially charged adsorbates are desorbed during current annealing. 19 Using a frequency modulation (FM) technique, we measured the mechanical resonance response of the graphene p-n junction at different annealing steps. We extract the graphene mass density and built-in tension and confirm that mass is desorbed.

Results and discussion

A device schematic and measurement setup are shown in Fig. 1(a). Devices are fabricated by first defining an array of Ti/Au gates on an oxidized Si substrate. The gates are 45 nm thick, 600 nm wide, and spaced at a pitch of 600 nm. A 20 nm layer of MgO is then deposited to prevent an accidental gate leak. After covering the gate array with a nominally 1 μm thick resist layer (LOR 5A, MicroChem Corp.), an exfoliated graphene flake is transferred onto the LOR, aligned to the bottom gate array, by using a mechanical transfer technique. 50 nm of Pd source-drain contacts are deposited to define ohmic contacts to the graphene layer. Finally, the LOR layer underneath the graphene flake is e-beam exposed and developed in order to suspend the graphene. The fabrication process is outlined in detail in refs. 20 and 22. The device is then mounted on a circuit board which provides integrated radio frequency (RF) and DC line connectors. (d) Mechanical model for a graphene resonator simplified to a one-dimensional (1D) string under tension. L0 is the length between the source and drain contacts, F the electrostatic force and T the longitudinal tension in the graphene. δz denotes a small time-varying displacement relative to the equilibrium position z.
All measurements in this work were carried out in a vacuum chamber at a pressure of typically < 10⁻⁵ mbar at T ~ 8 K. We first performed DC conductance measurements of an ultraclean graphene p-n junction using a standard lock-in technique. Most as-fabricated devices exhibit a very weak dependence of the conductance on the gate voltage and do not show the suppressed conductance that is expected to arise at the charge-neutrality point (CNP). This is due to strong doping by resist residues. To remove these residues, the device is current-annealed in a vacuum chamber at 8 K until the CNP peak is significantly pronounced. 20 Figure 1(b) shows the differential conductance of device A in units of e²/h as a function of the two bottom gate voltages, labeled VG1 and VG2, for an applied source-drain voltage of VSD = 400 μV after the final current annealing step. The device exhibits four different conductance regions, p-p, n-n, p-n, and n-p, according to the carrier doping in the left and right regions, which is set by the two bottom gates. In the bipolar region (p-n and n-p) we observe conductance oscillations that can be attributed to Fabry-Pérot interference, which emerges when electron waves interfere with the reflected waves scattered from the p-n junction. The Fabry-Pérot pattern confirms that our graphene is in the ballistic regime. 21,23

The resonance of a vibrating graphene device at high frequencies is best detected by a mixing method. 24 When applying a small time-varying bias voltage V(t) = VAC cos(2πft) to the source, keeping the drain contact on ground, the current through the graphene device contains both a linear term I = GV and a squared term ∝ V², where G is the conductance of the graphene device. The squared term arises because the bias modulates the total charge in the device, and thereby modulates G as well. Hence, one can write I = (G + δG)V, where δG ∝ V. The term proportional to V² mixes a high-frequency signal down to a DC signal.
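This down-mixing is easy to verify numerically: if the conductance is modulated in phase with the bias, the time average of I = (G + δG)V is finite even though the bias itself averages to zero. A minimal sketch (all parameter values are illustrative, not taken from the experiment):

```python
import numpy as np

# Illustrative values (not the experimental parameters)
f = 100e6      # drive frequency (Hz)
V_AC = 1e-3    # AC bias amplitude (V)
G0 = 1e-4      # static conductance (S)
g = 5e-3       # in-phase conductance modulation per volt of bias (S/V)

# Sample exactly 100 drive periods
t = np.linspace(0, 100 / f, 200_000, endpoint=False)
V = V_AC * np.cos(2 * np.pi * f * t)   # time-varying bias, <V> = 0
G = G0 + g * V                         # conductance modulated by the bias
I = G * V                              # current through the device

I_dc = I.mean()                        # mixed-down (DC) component
# The linear term G0*V averages to zero; the V^2 term leaves g*V_AC^2/2.
print(I_dc, g * V_AC**2 / 2)
```

The surviving DC component is exactly the rectified squared term, which is what the lock-in detection scheme below exploits.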
The applied AC signal also exerts a time-varying force on the graphene membrane, driving a displacement, indicated in Fig. 1(d) by δz. This change in displacement additionally leads to a change in gate charge, and hence to a change in conductance δG. If the AC signal has a frequency f close to the resonance frequency f0 of the membrane, δz will increase, and so will δG. If an FM technique is applied, in which the carrier frequency is modulated with a modulation frequency fmod, the mixing current can be detected with a conventional lock-in technique synchronized to fmod. 7,25 The modulation of the mixing current appears due to the dependence of the vibration amplitude on frequency.

A circuit diagram of the measurement setup for the FM technique is shown in Fig. 1(a). The source electrode of the device is connected to a DC source (VSD) and an RF generator (VFM) via a bias tee. The drain contact is directed to an I/V converter, whose signal is detected in a lock-in amplifier. The graphene resonator is actuated electrostatically by applying a frequency-modulated signal with an amplitude VAC to the source electrode. The applied signal at the source electrode can be written as

VFM(t) = VAC cos(2πft + (Δf/fmod) sin(2πfmod t)),   (1)

where f is the carrier frequency, Δf is the frequency deviation, t is time, and fmod denotes the modulation frequency, which we have typically chosen to be 671 Hz. For a unipolar graphene doubly-clamped membrane, the amplitude of the mixing current can be expressed as 7,25

IMix = (1/2) |(∂G/∂VG) VG (1/C)(∂C/∂z) VAC Δf ∂Re[z(f)]/∂f|,   (2)

where G is the conductance of the graphene device, C(z) is the capacitance between the gate electrode and the graphene, and Re[z(f)] is the real part of the graphene oscillation amplitude. Equation (2) can be traced back to the mechanical oscillation of the membrane generating an AC contribution in the gate capacitance, which in turn induces an AC gate charge that modulates the conductance. It is based on the assumption that ∂G/∂z is proportional to the transconductance ∂G/∂VG.
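The phase in Eq. (1) can be checked numerically: its time derivative divided by 2π, the instantaneous frequency, swings between f − Δf and f + Δf at the modulation frequency. A minimal sketch with scaled-down, purely illustrative frequencies (the experiment uses fmod = 671 Hz and MHz-to-GHz carriers):

```python
import numpy as np

f = 1e5        # carrier frequency (Hz); scaled down for illustration
df = 1e3       # frequency deviation Δf (Hz), illustrative
f_mod = 100.0  # modulation frequency (Hz); 671 Hz in the experiment

t = np.linspace(0, 1 / f_mod, 100_001)   # one modulation period
phase = 2 * np.pi * f * t + (df / f_mod) * np.sin(2 * np.pi * f_mod * t)

# Instantaneous frequency = (1/2π) dφ/dt = f + Δf cos(2π f_mod t)
f_inst = np.gradient(phase, t) / (2 * np.pi)
print(f_inst.min(), f_inst.max())   # swings between f - Δf and f + Δf
```

Because the carrier frequency is swept back and forth across the mechanical resonance at fmod, any frequency-dependent response of the membrane shows up as a component of the mixing current at fmod, which the lock-in isolates.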
Since Re[z(f)] changes sign at the resonance frequency, its derivative ∂Re[z(f)]/∂f has a peak at resonance, 25 as shown in the inset of Fig. 1(c). We note also that IMix = 0 for VG = 0, which again is nicely seen in Fig. 1(c). The inset shows a line trace taken along the dashed arrow at VG1 = VG2 = 4.8 V in Fig. 1(c). A line shape with a pronounced peak at the mechanical resonance frequency of approximately 405 MHz is observed. We determine the mechanical quality factor Q to be 600 from the resonance line shape. 7,25 As shown in Fig. 1(c), we observe an additional resonant response at a slightly lower frequency, marked by the solid black arrow, which is mostly pronounced in the n-regime. This could be another flexural mode of the resonator, something that we also see in other devices (see e.g. Fig. 2(c)).

To interpret the experimental data, we model the behavior of the graphene resonators. We assume pure uniaxial strain, which allows us to apply a simplified model, shown in Fig. 1(d). In this model the membrane is reduced to a 1D string with length L. We also assume that there is only a single homogeneous gate instead of two gates as in our device. Furthermore, it is assumed that the force of the gate voltage acts only at the middle of the string. As a result, the resonant frequency f of the graphene membrane can be approximated as (see ESI for the derivation of the equation)

f = (1/(2L)) √[(1/(ρw)) (4T0 + (3/16) E w (C′VG²/(2T0))² − (1/2) C″VG² L)],   (3)

where L is the length of the suspended graphene membrane, w the width, and ρ the 2D mass density.

We here investigate the evolution of the resonant response as a function of the current annealing steps. We could not observe the CNP and resonant responses in the as-fabricated devices due to strong chemical doping by resist residues. As shown in Fig. 2(a), after the initial current annealing the device shows multiple conductance minima, indicating that the graphene sheet is not sufficiently clean.
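The lineshape behind the detection scheme of Eq. (2) can be illustrated with a textbook driven, damped harmonic oscillator: Re[z(f)] changes sign at f0, so its frequency derivative, which enters the mixing current, peaks sharply at resonance. A sketch with device-A-like numbers (f0 = 405 MHz, Q = 600) and unit drive amplitude:

```python
import numpy as np

f0, Q = 405e6, 600                      # resonance frequency and quality factor
f = np.linspace(0.98 * f0, 1.02 * f0, 20_001)
w, w0 = 2 * np.pi * f, 2 * np.pi * f0

# Displacement response of a driven, damped harmonic oscillator (unit drive)
z = 1.0 / (w0**2 - w**2 + 1j * w * w0 / Q)

# The FM mixing current of Eq. (2) is proportional to d Re[z(f)]/df
dRez_df = np.gradient(z.real, f)
f_peak = f[np.argmax(np.abs(dRez_df))]
print(f_peak / f0)   # the mixing signal peaks at the resonance frequency
```

This is why the FM mixing trace shows a single pronounced peak at the mechanical resonance, from which Q can be read off the linewidth.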
After further current annealing, the device shows a clean CNP, and Fabry-Pérot resonances appear in the conductance, as shown in Fig. 2(b) (the full conductance map is shown in Fig. 1(b)), confirming that the graphene sheet is clean and in the ballistic regime. After the final current annealing step, the resonance frequency increased to ~1.17 GHz at VG1 = 0 V, which is the highest resonance frequency reported for a graphene mechanical resonator. The inset of Fig. 2(d) shows the quality factor Q as a function of incident RF power.

Using the same method, we achieved a GHz graphene mechanical resonator in device B. Similar to Fig. 2, we display the conductance map as a function of VG1 and VG2 for the initial and final current annealing steps for device B in Fig. 3. The inset in Fig. 3(d) shows the extracted quality factor Q as a function of incident RF power. For the lowest power we obtain Q ≈ 1500. We then obtain for the quality-frequency product Q·f0 the value 1.8 × 10¹² s⁻¹, which brings this resonator at the measurement temperature of 8 K well into the quantum regime with Q·f0 > kBT/h, where optical side-band cooling could efficiently be applied to bring the resonator into the ground state. 18 Without additional cooling there are only 150 phonons in this resonator. It is worth noting that a decrease of Q with RF power has been observed before in graphene resonators 7 and might be due to nonlinear coupling between modes. 30,31 It is thus possible that the intrinsic quality factor of our device is significantly higher than what we measured. We further think that higher tension could increase it further. The RF power applied during the measurement is kept at P = -17.5 dBm. The mixing signal is much stronger in the bipolar regime (n-p or p-n) as compared to the unipolar one (n-n or p-p).

Next, we investigate the amplitude of the mixing current depending on the gate voltage sweep direction in device C. Until now, we have only looked into the unipolar gating condition.
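The quantum-regime estimates quoted above follow from simple arithmetic: the criterion Q·f0 > kBT/h and the thermal phonon occupation n ≈ kBT/(hf0). Checking with the stated values (T = 8 K, f0 = 1.17 GHz, Q ≈ 1500):

```python
kB = 1.380649e-23   # Boltzmann constant (J/K)
h = 6.62607015e-34  # Planck constant (J s)

T = 8.0             # measurement temperature (K)
f0 = 1.17e9         # resonance frequency (Hz)
Q = 1500            # quality factor at lowest drive power

Qf = Q * f0                   # quality-frequency product
kT_over_h = kB * T / h        # thermal frequency scale
n_phonon = kB * T / (h * f0)  # high-temperature limit of the Bose occupation

print(f"Q*f0 = {Qf:.2e} 1/s, kB*T/h = {kT_over_h:.2e} 1/s")
print(f"thermal phonon number = {n_phonon:.0f}")
# Q*f0 ~ 1.8e12 1/s exceeds kB*T/h ~ 1.7e11 1/s by an order of magnitude,
# and the occupation comes out near the ~150 phonons quoted in the text.
```
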
Now we will in addition look into the bipolar case, where a p-n junction resides in the graphene device. The conductance as a function of VG1 and VG2 after the final annealing step is reproduced in Fig. 4(a) and shows distinct conductance values for the unipolar and bipolar conditions, as explained in Fig. 1.

For comparison, we present the obtained parameters of the three graphene mechanical resonators described in this work in Table 1. It is seen that the resonance frequency f0 systematically increased with annealing, while the built-in tension was reduced. This is only compatible with a reduction of mass. Hence, current annealing does, as anticipated, desorb material, likely resist residues, from the graphene membrane.

Interestingly, the mixing signal amplitude scanned along the bipolar regime (dashed line B) is remarkably stronger than that scanned along the unipolar regime (Fig. 4(b) and (c)).
In our previous work, 34 we observed a strong photocurrent in the bipolar regime of a p-n graphene device when applying a microwave signal, while the photocurrent almost vanished in the unipolar regime. Electronhole pairs around the CNP cause a temperature gradient in the p-n junction towards the source and drain contacts, generating a photocurrent. The device used in this work has the same structure, so that the microwave used for the mechanical actuation of the graphene sheet can generate a photocurrent as well. There is both an AC and DC (rectified) photocurrent. The former should behave similar to an electrically induced AC current and should therefore yield a mixing current contribution that depends on the mechanical oscillator amplitude. In order to distinguish the two effects, a refined model is needed with which the exact amplitude of the graphene resonator can be calculated, which is beyond the current work. Conclusions We have demonstrated graphene mechanical resonators with very high frequencies of several 100 MHz to > 1 GHz. We have used ultraclean suspended and current-annealed graphene p-n junctions and determine the graphene mass density and built-in tension after different current annealing steps by fitting the measured resonance frequency dependence on gate voltage to a simplified resonator model. After the final current annealing step, the graphene mass density decreases and likely reaches the pure graphene mass density, indicating that virtually resist residues are removed. In a clean graphene membrane the fundamental mechanic resonance mode has been found to be 1.17 GHz at = 0 V. This large resonance frequency for a macroscopic membrane of size 3 × 1.2 m is only possible due to low mass density of graphene and the high tension that graphene can sustain. In this particular GHz case, the built-in tension is estimated to be T0/W ~ 15 N/m, corresponding to a strain of ~ 4 %. 
Furthermore, the graphene membrane with two electrically separated gates enables bipolar gating. In this bipolar regime (either p-n or n-p) we have found a strongly enhanced mixing current, while it is weak in the unipolar regime (either n-n or p-p). Our work shows that graphene p-n junctions could be useful for detecting mechanical resonance signals. Conflicts of interest There are no conflicts to declare. FIG. 1: (a) Schematic of a suspended graphene device with two bottom gates at voltages VG1 and VG2 and a diagram of the measurement circuit. A frequency-modulated signal VFM with a carrier frequency in the MHz to GHz regime is applied to the source. We measure the mixed-down current IMix through the graphene by a lock-in amplifier synchronized to the modulation frequency. (b) Differential electrical DC conductance in units of e²/h as a function of VG1 and VG2 at T = 8 K. Four regions are labeled according to carrier doping, p or n-type, in the left and right graphene regions, controlled by the respective bottom gates. (c) Mixing current IMix measured as a function of carrier frequency and gate voltage (VG1 = VG2). Inset: Mixing current vs frequency taken along the dashed line at VG = 4.8 V, which shows the resonance signal peak. Figure 1(c) shows a typical resonant response measured after the final current annealing to remove resist residues. The mixing current IMix is measured as a function of the two bottom gate voltages VG1 = VG2 and frequency f for a monolayer graphene resonator (Device A, Width/Length W/L = 4.1 μm/1.1 μm) at T = 8 K. The graphene resonant frequency shifts upwards as the gate voltage |VG1=G2| increases due to the tension induced by the gate voltage. Here T0 is the built-in tension in units of Newton (N), and E2D the 2D Young's modulus in units of force/meter (N/m). Note that the built-in tension is defined as T0 = T(z)|z=0, see Fig. 1(d). The primes denote derivatives with respect to z.
From the argument in the root we see that the negative term ∝ VG² dominates if T0 is large, resulting in a prominent negative dispersion of the resonant frequency. As the gate voltage increases, the VG⁴ term becomes dominant, leading to an upturn in the dispersion relation. On the other hand, if T0 is small, the VG⁴ term dominates also for small VG values and the resonant frequency shows a positive dispersion with VG from the beginning. By fitting the experimental data to this model, we aim to determine both ρ and T0 of our devices. To do so, we use in the following E2D = 335 N/m for the 2D Young's modulus. 3 This value is deduced from the graphite 3D modulus of 1 TPa, using for the graphene interlayer distance the value 0.335 nm. Details on the fitting procedure are given in the ESI. The mechanical resonant responses f0 for each current annealing step are measured as a function of VG1=G2 and displayed in Fig. 2(c) and (d) for the initial and final annealing steps, respectively. FIG. 2: Differential conductance as a function of VG1 and VG2 for Device A (W/L = 4.1 μm/1.1 μm) after (a) initial and (b) final current annealing steps. After the final current annealing, the Dirac peak is significantly narrower and pronounced at VG1 = VG2 ~ 0 V, indicating that resist residues are removed. Corresponding mechanical resonant responses for the initial and final current annealing steps are displayed in (c) and (d), respectively. The mechanical mixing current is measured as a function of frequency f and equal voltage of the two gates (measured along the dashed lines in panels (a) and (b)). The applied RF power at the device is P = -26 dBm. The solid red lines in (c) and (d) are fits to the equation of a membrane model with the effective graphene mass and built-in tension described in the text. After the initial current annealing, the resonant frequency shifts downwards with increasing |VG1=G2|, as shown in Fig. 2(c).
As mentioned above, this indicates that the built-in tension is significant. By fitting the data to our membrane model, we estimate the mass density and the built-in tension of the actual membrane to be ρ = 9.1ρ0 and T0/W = 4.2 N/m, respectively, where ρ0 = 7.4×10⁻⁷ kg/m² is the calculated mass density of monolayer graphene. The estimated mass density is an order of magnitude larger than that of a clean single layer of graphene, indicating that substantial resist residues still remained on the graphene surface. After the final current annealing, the resonant frequency increased significantly from 226 MHz for the initial annealing to 405 MHz for the final annealing, and the frequency shifts upwards as a function of |VG1=G2|, indicating that the built-in tension and the mass density decreased significantly. By fitting to Eq. (3) we obtain for the built-in tension T0/W = 1.5 N/m, assuming that the mass density ρ equals the graphene mass density ρ0 when the sample is clean. The built-in strain values converted from the tensions are estimated to be 1.2 % and 0.4 % after the initial and final current annealing, respectively, showing that the built-in strain significantly decreased after current annealing. This is probably due to the heat generated in the device during current annealing leading to a partial release of the built-in tension. FIG. 3: Differential conductance as a function of VG1 and VG2 for Device B (W/L = 3 μm/1.2 μm) after the initial and final current annealing steps, and the mechanical resonant responses as a function of f and VG2 at VG1 = 0 V (dashed lines in (a) and (b)), respectively. The corresponding mechanical resonant responses as a function of f and VG2 at VG1 = 0 V for each annealing step are displayed in Fig. 3(c) and (d), respectively. After the final annealing, we observe a remarkably large resonance frequency of f ~ 1.17 GHz. This is the highest mechanical resonance in a fully suspended graphene membrane to our knowledge.
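Both kinds of numbers quoted here follow from short arithmetic and can be checked directly. The sketch below is our own illustration, not code from the paper: the strain conversion uses the 2D Young's modulus E2D = 340 N/m quoted later in the text, and the frequency uses the textbook tensioned-membrane estimate f ≈ (1/2L)·√((T0/W)/ρ), which ignores the gate-induced tension and capacitive-softening terms of the full model and therefore only reproduces the order of magnitude.

```python
import math

E2D = 340.0      # N/m, 2D Young's modulus of graphene (value quoted in the text)
RHO0 = 7.4e-7    # kg/m^2, mass density of a clean graphene monolayer

def strain_percent(t0_per_w):
    """Built-in strain in percent from pre-tension per width: (T0/W) / E2D."""
    return 100.0 * t0_per_w / E2D

def f0_estimate(t0_per_w, rho, length):
    """Order-of-magnitude resonance frequency of a membrane under tension:
    f ~ (1/2L) * sqrt((T0/W) / rho). Gate-induced terms are ignored."""
    return math.sqrt(t0_per_w / rho) / (2.0 * length)

# Device A after final annealing: T0/W = 1.5 N/m, rho = rho0, L = 1.1 um
print(f"strain  = {strain_percent(1.5):.1f} %")                    # table quotes 0.4 %
print(f"f0 est. = {f0_estimate(1.5, RHO0, 1.1e-6)/1e6:.0f} MHz")   # measured: 405 MHz

# Device B after final annealing: T0/W = 15 N/m, rho = rho0, L = 1.2 um
print(f"strain  = {strain_percent(15.0):.1f} %")                   # table quotes 4.4 %
print(f"f0 est. = {f0_estimate(15.0, RHO0, 1.2e-6)/1e9:.1f} GHz")  # measured: 1.17 GHz
```

The strain values reproduce every entry of Table 1 to the quoted digit, while the frequency estimate overshoots the measured values by roughly 50 %, consistent with the softening contributions the simple formula leaves out.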
Unlike for device A, the frequency as a function of VG2 shifts downwards for each annealing step, indicating that the built-in tension is significant, also after the last annealing step. The data fitted to the membrane model confirms that the built-in tension T0 changes only slightly from T0/W = 16 N/m to T0/W = 15 N/m, while the mass density is reduced significantly from ρ = 3.3ρ0 to ρ = 1ρ0, increasing the resonance frequency up into the GHz regime. The built-in tension converts to a built-in strain of 4.7 % and 4.4 % for the initial and final current annealing steps, respectively. The large built-in strain of ~ 4 % and the low graphene mass density allow for a mechanical resonance frequency of > 1 GHz for a suspended membrane with μm dimensions. We think that this large built-in tension originates from the device fabrication. The LOR layer on which the graphene membrane is supported deforms as the device is cooled down. This creates a large built-in tension in the device. While the obtained strain of ~ 4 % appears large, comparable values of ~ 2-4 % have been reported before using Raman spectroscopy. 26,27 It is also important to emphasize that the strain value is obtained using a 2D Young's modulus of 340 N/m derived from a 3D Young's modulus of 1 TPa. There have also been reports of larger values for the graphene modulus, which could lower our estimated strain values. 3,28,29 Such a large built-in tension may also enhance the quality factor Q through 'dissipation dilution', a technique that has recently enabled ground-breaking nanomechanical devices made from silicon nitride. 32,33 FIG. 4: (a) Differential conductance as a function of VG1 and VG2 for Device C (W/L = 2 μm/1.5 μm) after final current annealing. (b, c) Resonant responses measured for unipolar (dashed line A) and bipolar gating (dashed line B). The red solid curve in (b) is a fit to the membrane model yielding ρ = 1ρ0 and T0/W = 2.8 N/m. This solid curve is superimposed also in (c) where a bipolar gate voltage is applied.
It is seen that the actual frequency shift as a function of VDiag in this experiment is lower. This can be explained by a weaker capacitive coupling due to the built-in p-n junction, as indicated by the dashed curve and explained further in the text. (d) Amplitude of the mixing current determined from the resonance frequency measurements for each gate voltage. Blue dashed lines indicate the CNPs in the left and right regions and are taken from the conductance measurement in (a). The microwave power during (b). In the bipolar regimes the conductance is markedly lower and Fabry-Pérot resonances are clearly visible, confirming that the graphene sheet is clean. The resonant responses measured along the unipolar and bipolar regimes indicated by black dashed lines A and B are displayed in Fig. 4(b) and (c), respectively. The red curve is a fit to the unipolar case in (b) using the membrane model with values ρ = 1ρ0 and T0/W = 2.8 N/m. The frequency response in the bipolar case is markedly smaller. We see that for the same values of gate voltages, the frequency shift in Fig. 4(c) is approximately 50 % of that in (b). The most likely reason for the reduced response in the bipolar case is the charge distribution. Unlike in the unipolar case, where the charge is homogeneously distributed, there is a depletion zone for charge in the middle of the device in the bipolar case. The dynamically added and removed charge appears therefore closer to the source and drain contacts, where the graphene membrane is clamped and where it is effectively stiffer. Consequently, the mechanical movement is reduced in the bipolar case, leading to a reduced response of the softening contribution in the frequency dispersion. Table 1: Summary of device geometry and parameters of three monolayer graphene mechanical resonators.
W, L denote the width and length of the graphene membrane, respectively, f0 the zero-applied-voltage resonance frequency, the mass density is given relative to the density of a clean monolayer graphene sheet, and T0/W is the zero-voltage pre-tension. In the last column the strain is given, obtained by dividing the pre-tension per width by the 2D Young's modulus of graphene, denoted by E2D.

Device  W [µm]  L [µm]  Annealing  f0 [MHz]      Mass density ratio [ρ/ρ0]  T0/W [N/m]  Strain [T0/E2DW, %]
A       4.1     1.1     Initial    230           9.1                        4.2         1.2
                        Final      405           1.0                        1.5         0.4
B       3       1.2     Initial    674           3.3                        16          4.7
                        Final      1170          1.0                        15          4.4
C       2       1.5     Initial    Not measured
                        Final      413           1.0                        2.8         0.8

As seen from Eq. (2), the mixing signal is proportional to both the transconductance dG/dVG and the oscillation amplitude d(Re[z(f)])/df of the resonator. If we assume the same oscillation amplitude at resonance for both the unipolar and bipolar regime, the mixing signal should follow the transconductance. This is what we observe qualitatively when we compare the experiment with numerically calculated transconductance plots (see ESI Fig. S3 for the comparison between the mixing current and calculated transconductance).
V. Singh, S. J. Bosman, B. H. Schneider, Y. M. Blanter, A. Castellanos-Gomez, and G. A. Steele, Nat. Nanotech., 2014, 9, 820-824.
P. Weber, J. Guttinger, I. Tsioutsios, D. E. Chang, and A. Bachtold, Nano Lett., 2014, 14, 2854-2860.
M. Huang, H. Yan, C. Chen, D. Song, T. F. Heinz, and J. Hone, Proc. Natl. Acad. Sci., 2009, 106, 7304-7308.
A. K. Geim, Science, 2009, 324, 1530-1534.
J. S. Bunch, A. M. van der Zande, S. S. Verbridge, I. W. Frank, D. M. Tanenbaum, J. M. Parpia, H. G. Craighead, and P. L. McEuen, Science, 2007, 315, 490-493.
C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science, 2008, 321, 385-388.
V. Singh, S. Sengupta, H. S. Solanki, R. Dhall, A. Allain, S. Dhara, P. Pant, and M. M. Deshmukh, Nanotechnol., 2010, 21, 165204-165211.
D. Garcia-Sanchez, A. M. van der Zande, A. San Paulo, B. Lassagne, P. L. McEuen, and A. Bachtold, Nano Lett., 2008, 8, 1399-1403.
A. Castellanos-Gomez, V. Singh, H. S. J. van der Zant, and G. A. Steele, Ann. Phys. (Berlin), 2015, 527, 27-44.
A. Eichler, J. Moser, J. Chaste, M. Zdrojek, I. Wilson-Rae, and A. Bachtold, Nat. Nanotechnol., 2011, 6, 339-342.
C. Chen, S. Rosenblatt, K. I. Bolotin, W. Kalb, P. Kim, I. Kymissis, H. L. Stormer, T. F. Heinz, and J. Hone, Nat. Nanotech., 2009, 4, 861-867.
X. Song, M. Oksanen, J. Li, P. J. Hakonen, and M. A. Sillanpaa, Phys. Rev. Lett., 2014, 113, 027404-027408.
P. Weber, J. Guttinger, A. Noury, J. Vergara-Cruz, and A. Bachtold, Nat. Commun., 2016, 7, 12496-12503.
V. Singh, O. Shevchuk, Y. M. Blanter, and G. A. Steele, Phys. Rev. B, 2016, 93, 245407-245411.
R. A. Barton, I. R. Storch, V. P. Adiga, R. Sakakibara, B. R. Cipriany, B. Ilic, S. P. Wang, P. Ong, P. L. McEuen, J. M. Parpia, and H. G. Craighead, Nano Lett., 2012, 12, 4681-4686.
T. Miao, S. Yeom, P. Wang, B. Standley, and M. Bockrath, Nano Lett., 2014, 14, 2982-2987.
J. P. Mathew, R. N. Patel, A. Borah, R. Vijay, and M. M. Deshmukh, Nat. Nanotech., 2016, 11, 747-751.
D. Zhu, X.-H. Wang, W.-C. Kong, G.-W. Deng, J.-T. Wang, H.-O. Li, G. Cao, M. Xiao, K.-L. Jiang, X.-C. Dai, G.-C. Guo, F. Nori, and G.-P. Guo, Nano Lett., 2017, 17, 915-921.
R. A. Norte, J. P. Moura, and S. Groblacher, Phys. Rev. Lett., 2016, 116, 147202-147207.
J. Moser, A. Barreiro, and A. Bachtold, Appl. Phys. Lett., 2007, 91, 163513-163515.
R. Maurand, P. Rickhaus, P. Makk, S. Hess, E. Tovari, C. Handschin, M. Weiss, and C. Schönenberger, Carbon, 2014, 79, 486-492.
A. L. Grushina, D.-K. Ki, and A. Morpurgo, Appl. Phys. Lett., 2013, 102, 223102-223105.
N. Tombros, A. Veligura, J. Junesch, J. J. Van den Berg, P. J. Zomer, M. Wojtaszek, I. J. V. Marun, H. T. Jonkman, and B. J. Van Wees, J. Appl. Phys., 2011, 109, 093702-093706.
P. Rickhaus, R. Maurand, M. H. Liu, M. Weiss, K. Richter, and C. Schönenberger, Nat. Commun., 2013, 4, 2342-2347.
B. Witkamp, M. Poot, and H. S. J. van der Zant, Nano Lett., 2006, 6, 2904-2908.
V. Gouttenoire, T. Barois, S. Perisanu, J.-L. Leclercq, S. T. Purcell, P. Vincent, and A. Ayari, Small, 2010, 6, 1060-1065.
T. M. G. Mohiuddin, A. Lombardo, R. R. Nair, A. Bonetti, G. Savini, R. Jalil, N. Bonini, D. M. Basko, C. Galiotis, N. Marzari, K. S. Novoselov, A. K. Geim, and A. C. Ferrari, Phys. Rev.
F. Memarian, A. Fereidoon, and M. D. Ganji, Superlattices Microstruct., 2015, 85, 348-356.
I. R. Storch, R. De Alba, V. P. Adiga, T. S. Abhilash, R. A. Barton, H. G. Craighead, J. M. Parpia, and P. L. McEuen, Phys. Rev. B, 2018, 98, 085408-085415.
J. Güttinger, A. Noury, P. Weber, A. M. Eriksson, C. Lagoin, J. Moser, C. Eichler, A. Wallraff, A. Isacsson, and A. Bachtold, Nat. Nanotech., 2017, 15, 631-636.
D. Midtvedt, A. Croy, A. Isacsson, Z. Qi, and H. S. Park, Phys. Rev. Lett., 2014, 112, 145503.
Y. Tsaturyan, A. Barg, E. S. Polzik, and A. Schliesser, Nat. Nanotech., 2017, 12, 776-783.
A. H. Ghadimi, S. A. Fedorov, N. J. Engelsen, M. J. Bereyhi, R. Schilling, D. J. Wilson, and T. J. Kippenberg, Science, 2018, 360, 764-768.
M. Jung, P. Rickhaus, S. Zihlmann, P. Makk, and C. Schönenberger, Nano Lett., 2016, 16, 6988-6993.
[]
[ "Synthesis of DNA Templated Tri-functional Electrically Conducting, Optical and Magnetic nano-chain of Ni core -Au shell for Bio-device", "Synthesis of DNA Templated Tri-functional Electrically Conducting, Optical and Magnetic nano-chain of Ni core -Au shell for Bio-device" ]
[ "Madhuri Mandal *e-mail:[email protected] \nMaterial Sciences Division\nS. N\nBose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700 098KolkataIndia\n", "Kalyan Mandal \nMaterial Sciences Division\nS. N\nBose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700 098KolkataIndia\n" ]
[ "Material Sciences Division\nS. N\nBose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700 098KolkataIndia", "Material Sciences Division\nS. N\nBose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700 098KolkataIndia" ]
[]
The synthesis of a tri-functional (electrically conducting, optical, and magnetic) nano-chain of Ni core-Au shell particles is discussed here. Our investigation indicates that such a material, attached to the biomolecule DNA, has great potential for medical instruments and bio-computer devices.
10.1063/1.3169707
[ "https://arxiv.org/pdf/0901.0087v1.pdf" ]
32,813,068
0901.0087
65b9e07f25f3dbf9aafe8e517532d7b36d6e0535
Synthesis of DNA Templated Tri-functional Electrically Conducting, Optical and Magnetic nano-chain of Ni core -Au shell for Bio-device Madhuri Mandal (*e-mail: [email protected]), Material Sciences Division, S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, 700 098 Kolkata, India; Kalyan Mandal, Material Sciences Division, S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, 700 098 Kolkata, India. DNA has a well-defined and predictable geometry and diverse, programmable hybridization properties. Therefore, various DNA assemblies, for example nanoscaled tiles, cubes, and machines, have been fabricated. 1,2,3 On the other hand, biological systems of very small size are very active at a very small scale. Manufacturing small structures with biological molecules may therefore enable many remarkable functions at this scale. Proper engineering of such systems, combining magnetic and optical properties, has possible applications in brain research, neuro-computation, prosthetics, biosensors, bio-computers, etc. For this we need to understand how DNA behaves when tiny magnetic and optical materials or nanoparticles are attached to it. Nowadays, the creation of three-dimensional, ordered, crystalline structures of metal nanoparticles guided by DNA is a major challenge for researchers. 4,5,6 Mirkin and co-workers 7 described a method of assembling colloidal gold nanoparticles into macroscopic aggregates using DNA as the linking element.
Nanowires of noble metals like gold, 7 silver, 8 palladium, 9 platinum, 10 and copper 11 have been metallized on DNA. Synthesis methods, however, required long processing times and high temperatures with multiple steps. People have also synthesized conductive gold on a DNA scaffold. 12 The synthesis of DNA-templated chain-like magnetic and optical materials will provide a new breakthrough in nanotechnology research. We have synthesized wires of gold-coated Ni magnetic nanoparticles by a DNA-directed method. For the first time, we report the synthesis of this kind of material. Our material has magnetic, conducting and optical properties together. This kind of material can be monitored by its optical properties on the one hand and by its magnetic properties on the other. This kind of material will also find application in versatile fields such as computer devices, drug delivery, hyperthermia treatment, SERS, biosensors, etc. All the reagents used were 99.9% pure, and DNAse-, RNAse-free ultra-pure water was used throughout the synthesis method. The aqueous DNA solution has an absorption band at ~260 nm (curve a, Figure 1). The mixing of Ni-sulfate with DNA is shown as curve b in Figure 1. Magnetic hysteresis loops for Ni nanoparticles attached on DNA and after coating by gold are shown in Figure 3 'a' and 'b'. ~0.003 g of sample was taken for the magnetic measurement. The magnetic hysteresis loops indicate the super-paramagnetic nature of the Ni-attached DNA chain. Curve 'b' indicates both superparamagnetic and paramagnetic character in one hysteresis loop. The superparamagnetic character is due to the smaller Ni nanoparticles present in the core of the particles, and the paramagnetic character is due to the presence of gold as the shell. This nature of the hysteresis loop again indicates the formation of Ni core-Au shell type particles, not alloy-type particles.
People have synthesized electrically conductive gold nanowires using DNA as a template, exploiting a microwave heating method in which DNA serves as a reducing and nonspecific capping agent for the growth of nanowires. 12 They synthesized this kind of material because it has great potential to be used as interconnects of nanodevices and computational elements. We have synthesized Ni core -Au shell particles attached to DNA. Here Au is conductive and Ni is magnetic. So our material has a further advantage, as it is conductive and optical as well as magnetic. Moreover, Ni becomes very stable after a gold coating. This kind of material, with both magnetic and conductive properties, attached to the biomolecule DNA will have great potential in medical instruments and computer devices. Mixing Ni-sulfate with DNA (curve b, Figure 1) resulted in no significant decrease or red shift of the absorption peak for DNA (at ~260 nm), which indicates no aggregation of DNA strands. A slight shift and a very small increase of the absorbance value of DNA at about 260 nm is observed. But this small shift does not signify any complex formation, as there is no considerable shift of the absorption maximum for DNA at 260 nm. After the addition of hydrazine hydrate and a negligibly small amount of sodium borohydride, the solution turned black, and this solution shows no characteristic absorbance in the UV-visible spectrum, shown as curve c in Figure 1. The HAuCl4 solution, in a 1/3 molar ratio to Ni-sulfate, was added to it. After the addition of HAuCl4 the solution turned a blackish pink color, with the appearance of an additional hump at ~540 nm (Figure 1d) due to the surface plasmon resonance (SPR) mode of gold nanoparticles. After a long period of time a broadened SPR peak is observed (curve e, Figure 1), which indicates the formation of a thicker gold coating on the Ni nanoparticles.
Figure 1: UV-visible spectra for the DNA solution (a), NiSO4 added to the DNA solution (b), Ni nanoparticles attached to DNA (c), after addition of HAuCl4 (d), and after complete formation of the Ni core-Au shell-DNA composite (e). TEM images after synthesis of Ni nanoparticles attached on the DNA chain and after forming the shell of gold on top of the Ni nanoparticles are shown in Figure 2 'a' and 'b', respectively. The Ni-attached DNA chain has a smaller width of ~40 nm, but after gold shell formation the width becomes larger, ~65 nm. DNA, containing phosphate and amino groups, is a good binding agent for metals like Ni, which directs the formation of the chain-like composite structure. Figure 2: TEM images of Ni nanoparticles attached on DNA (a), and Ni core-Au shell-DNA (b). I-V measurement of the gold-coated Ni-DNA sample shows that it conducts electricity. The behavior is Ohmic with low resistance. The absence of hysteresis indicates good contacts and a continuous structure of nanoparticles attached one-to-one on the DNA. Figure 3: Magnetic hysteresis loops of Ni nanoparticles attached on DNA (a), and Ni core-Au shell-DNA (b).
J. Chen and N. C. Seeman, Nature, 1991, 350, 631.
P. W. K. Rothemund, Nature, 2006, 440, 297.
E. Winfree, F. Liu, L. A. Wenzler, and N. C. Seeman, Nature, 1998, 394, 539.
L. M. Dillenback, G. P. Goodrich, and C. D. Keating, Nano Lett., 2006, 6, 16.
G. P. Goodrich, M. R. Helfrich, J. J. Overberg, and C. D. Keating, Langmuir, 2004, 20, 10246.
C. Ross, Annu. Rev. Mater. Res., 2001, 31, 203.
C. A. Mirkin, R. L. Letsinger, R. C. Mucic, and J. J. Storhoff, Nature, 1996, 382, 607.
E. Braun, Y. Eichen, U. Sivan, and G. Ben-Yoseph, Nature, 1998, 391, 775.
J. Richter, R. Seidel, R. Kirsch, M. Mertig, W. Pompe, J. Plaschke, and H. K. Schackert, Adv. Mater., 2000, 12, 507.
R. Seidel, L. C. Ciacchi, M. Weigel, W. Pompe, and M. Mertig, J. Phys. Chem. B, 2004, 108, 10801.
C. F. Monson and A. T. Woolley, Nano Lett., 2003, 3, 359.
S. Kundu and H. Liang, Langmuir, 2008, 24, 9668.
a) K. Mallik, M. Mandal, N. Pradhan, and T. Pal, Nano Lett., 2001, 1, 319. b) M. Mandal, S. K. Ghosh, S. Kundu, K. Esumi, and T. Pal, Langmuir, 2002, 18, 7792.
[]
[ "Backpropagation for long sequences: beyond memory constraints with constant overheads", "Backpropagation for long sequences: beyond memory constraints with constant overheads" ]
[ "Navjot Kukreja [email protected] \nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n\n", "Jan Hückelheim [email protected] \nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n\n", "Gerard J Gorman [email protected] \nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n\n" ]
[ "Department of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n", "Department of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n", "Department of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nDepartment of Earth Science and Engineering\nImperial College London\nImperial College London\nImperial College London\n" ]
[]
Naive backpropagation through time has a memory footprint that grows linearly in the sequence length, due to the need to store each state of the forward propagation. This is a problem for large networks. Strategies have been developed to trade memory for added computations, which results in a sublinear growth of memory footprint or computation overhead. In this work, we present a library that uses asynchronous storing and prefetching to move data to and from slow and cheap storage. The library only stores and prefetches states as frequently as possible without delaying the computation, and uses the optimal Revolve backpropagation strategy for the computations in between. The memory footprint of the backpropagation can thus be reduced to any size (e.g. to fit into DRAM), while the computational overhead is constant in the sequence length, and only depends on the ratio between compute and transfer times on a given hardware. We show in experiments that by exploiting asynchronous data transfer, our strategy is always at least as fast, and usually faster than the previously studied "optimal" strategies.
null
[ "https://arxiv.org/pdf/1806.01117v1.pdf" ]
46,936,600
1806.01117
3969963605d088b017ec085c1912a4a10e5aa272
Backpropagation for long sequences: beyond memory constraints with constant overheads Navjot Kukreja ([email protected]), Jan Hückelheim ([email protected]), and Gerard J Gorman ([email protected]), Department of Earth Science and Engineering, Imperial College London Introduction The current trend is towards training ever deeper networks, as deeper networks have a larger capacity to learn. Since backpropagation requires the complete state of the forward propagation in reverse order, training a neural network with backpropagation requires memory that is proportional to the size of the network. Many state-of-the-art models already run out of memory on current hardware, and this trend is only expected to get worse [10]. One of the most common ways of managing the memory consumption of neural network training is by controlling the batch size [10]. However, since the batch size is also used to sample from the training data, the choice of batch size can affect the convergence rate and cannot be used to tune the model's memory consumption without side-effects. Another common mitigation strategy is to split the training over multiple computational nodes [7]. However, this incurs significant message passing overheads and costs for hardware with low-latency interconnects. This strategy can also be wasteful if the peak memory consumption is only slightly larger than that of a single compute node. A third strategy that is recently getting increased attention is checkpointing, and is briefly reviewed in the following section. Figure 1: Memory requirement of a neural network during training. In conventional backpropagation, all states need to be stored, leading to a peak memory footprint at the end of the forward computation. During the backward pass, the stored states are used and their memory subsequently freed in the reverse order. Training cannot be performed on hardware with too small memory.
In contrast, checkpointing strategies store some intermediate states and resume recomputation from there when required. With asynchronous multistage checkpointing, the data is further offloaded to a larger, slower storage system (e.g. a solid state drive) in the background while the computation is running, and prefetched before it is needed.

Checkpointing for neural networks

The idea behind checkpointing is not to store the entire state of the network through the forward propagation. Instead, the state of forward propagation is stored only at certain layers, and the number of layers that are kept at any given time can be limited to fit into the available memory. During the backpropagation phase, states that have not been stored are recomputed as needed from the nearest available state. This allows a tradeoff between memory and computation: problems can be made to fit on systems with limited memory in exchange for an increased computation time. The pressure on the memory system during a backpropagation run can be quantified using a memory ratio, i.e. the ratio between the memory available on a computer system and the expected peak memory consumption of a particular instance of backpropagation. We are only interested here in scenarios where the memory ratio is less than 1. The amount of recomputation required by a checkpointing strategy is quantified using a recompute factor, where a factor of 1 implies no recomputation. The factor grows as the memory ratio is reduced. The choice of layers at which to store checkpoints during the forward propagation directly affects the recompute factor and is called the checkpointing schedule. Checkpointing is widely used for similar purposes in adjoint-based optimisation problems, and a number of schedules have been developed that are optimal under certain assumptions.
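As an illustration of this memory/recompute trade (a sketch, not the paper's implementation; the layer function and all names are invented), one can keep the forward state only at every k-th layer and recompute the missing states from the nearest stored one during the backward pass:

```python
# Illustrative single-stage checkpointing: keep the forward state only at
# every k-th layer, recompute the rest during the backward pass.
# forward_step and all other names are invented for this sketch.

def forward_step(state, layer_idx):
    # Stand-in for one layer's forward computation.
    return state * 1.01 + layer_idx

def backprop_checkpointed(x0, n, k):
    # Forward pass: store only the states at layer indices 0, k, 2k, ...
    checkpoints = {0: x0}
    state = x0
    for i in range(n):
        state = forward_step(state, i)
        if (i + 1) % k == 0:
            checkpoints[i + 1] = state
    # Backward pass: layer i needs its *input* state; recompute it from
    # the nearest stored checkpoint below i.
    recomputations = 0
    for i in reversed(range(n)):
        base = (i // k) * k
        s = checkpoints[base]
        for j in range(base, i):
            s = forward_step(s, j)
            recomputations += 1
        # ...a real implementation would now apply layer i's backward
        # operator to the recomputed input state s.
    return len(checkpoints), recomputations
```

For n = 8 layers, k = 4 keeps 3 states instead of 9 at the cost of 12 recomputed forward steps, while k = 1 reproduces conventional store-everything backpropagation.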
If the number of layers is known a priori, states have a uniform size, each layer takes the same time to compute, and the memory is fast enough that the time to store a checkpoint is negligible, then the Revolve algorithm [4] gives the optimal schedule that minimises recomputation given a fixed amount of memory. Another schedule has been found to be optimal if the number of layers is not known initially [13]. The development of these algorithms was motivated by adjoint solvers, where these assumptions are usually valid. In contrast, the state size and computation cost of layers in neural networks is often non-uniform (e.g. different before and after a pooling layer). New checkpointing schedules have been developed specifically for machine learning applications [3], including a dynamic program that can be used to compute optimal schedules for networks of uniform and non-uniform checkpoint sizes [6].

Multistage checkpointing

When additional levels of memory are available, it is possible to leverage them to reduce the recompute factor [12]. In the context of modern computer systems, the two levels of memory could be the accelerator memory and the host memory. Even on systems where only one level of memory is usually used, the second-level memory could be a high-bandwidth disk, e.g. an SSD. In the foreseeable future, other types of memory are expected to become available, such as storage-class memory [14]. For systems with two levels of memory, [1] describes the optimal schedule that reduces the total time to solution for adjoint solvers or backpropagation, assuming that the first-level memory is fast but has limited capacity, while the second level is slow but has infinite capacity. The key idea is to increase the number of stored checkpoints, by storing the least frequently used checkpoints on the slow, large storage.
The schedule assumes blocking data transfer, that is, the computation waits while data is transferred from the fast to the slow storage level. Since transfers between first-level and second-level memory take a non-trivial amount of time, they can instead be carried out in parallel. This motivated a recent paper [11] describing the use of asynchronous multistage checkpointing for a PDE solver. In that work, the solver itself uses all available RAM on a system, and the checkpoints are thus stored directly to a hard drive. Since the overall stored data is much larger than available hard drives, another system transfers the data over the network to a tape archive while the computation is running. A similar concept was also previously applied to neural networks [10]. However, in this case every layer was transferred to the second-level memory, which slows down the forward propagation. A variation of this strategy, where a subset of states is transferred to the host memory and transferred back when required, was also implemented for Tensorflow, but without any recomputation of forward-propagated states [9].

Contributions

While the work in this paper is conceptually similar to that presented in [11], to the best of our knowledge, multistage checkpointing with recomputation of forward states has not been applied in the context of neural networks before. It has also not previously been investigated for systems other than the aforementioned hard drive/tape system. This is despite the fact that non-blocking asynchronous data transfer is possible on a variety of commonly used systems, such as GPU DRAM / CPU RAM, or from host RAM to another device using direct memory access (DMA). We therefore investigate asynchronous multistage checkpointing for neural networks on a system that consists of an Intel Xeon Phi Knights Landing (KNL), where the main computation and Level 1 memory is in fast MCDRAM, and the Level 2 storage is in the system's DRAM.
Figure 1 gives a high-level illustration of this idea. After presenting the scheme in Section 2, we present a performance model for asynchronous checkpointing that works across a variety of hardware configurations in Section 3. We also developed a prototype implementation for asynchronous multistage checkpointing in Python, shown in Section 4. In Section 5, we demonstrate the use of this scheme on two different modern hardware platforms using an LSTM-based network as a test case. We conclude in Section 6.

Asynchronous multistage checkpointing

In this section we outline the asynchronous multistage checkpointing strategy. We assume that there are two storage stages: Level 1, a fast but small memory, and Level 2, a large but slow storage. Examples of Level 1 memory include GPU DRAM or Xeon Phi MCDRAM, while an example of Level 2 storage is a solid state drive (SSD). Note that these roles depend on the overall configuration of the system. For example, RAM could either be Level 2 storage in a system that is using DRAM as Level 1, or it could be Level 1 memory in a system that is using an SSD or a hard drive for Level 2. What matters is not the absolute speed or size of the storage, but rather the relative speed and size compared to other storage in the same system. In the asynchronous multistage checkpointing strategy, the computation itself completely resides in Level 1 memory. During the forward pass, copies of the state are transferred to the Level 2 storage at regular intervals, i.e. after every I layers, where I is the checkpointing interval. The transfer to storage happens asynchronously so as not to slow down the computation of the forward propagation. All forward activations are then cleared from Level 1 memory. The backward pass will require the stored data in reverse order, at well-known points in time during the computation. For this reason, checkpoints that are required from Level 2 storage can be asynchronously transferred to Level 1 before they are needed.
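The store/prefetch mechanics just described can be sketched with Python background threads (a minimal illustration, not the paper's library; an in-memory dict stands in for the Level 2 storage, and all class and function names are invented):

```python
import threading

class TwoLevelStore:
    """Stand-in for Level 2 storage with asynchronous store/prefetch.
    A real implementation would target host RAM, a ramdisk or an SSD."""

    def __init__(self):
        self.level2 = {}      # slow, large storage (simulated by a dict)
        self.prefetched = {}  # staging area back in Level 1 memory
        self._threads = []

    def store_async(self, key, state):
        # Copy the state in the background so the forward pass continues.
        t = threading.Thread(target=self.level2.__setitem__, args=(key, state))
        t.start()
        self._threads.append(t)

    def prefetch_async(self, key):
        # Start moving a checkpoint back to Level 1 before it is needed.
        def fetch():
            self.prefetched[key] = self.level2[key]
        t = threading.Thread(target=fetch)
        t.start()
        self._threads.append(t)

    def wait(self):
        for t in self._threads:
            t.join()
        self._threads.clear()

def forward(x0, n, I, store):
    # Forward pass of n layers, storing every I-th state asynchronously.
    state = x0
    for i in range(n):
        if i % I == 0:
            store.store_async(i, state)
        state = state + 1  # stand-in for one layer's computation
    store.wait()  # in a real run, waiting overlaps with further compute
    return state
```

The backward pass would call `prefetch_async` one interval ahead, so each checkpoint is already back in Level 1 when its interval is reached.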
Since every I-th state was stored, the intermediate states need to be recomputed from the restored state. Assuming there is enough Level 1 memory available to store the entire forward propagation state for I layers, backpropagation can then proceed normally for these I layers. If there is not enough memory available, Revolve can be applied to find an optimal schedule for backpropagation through I layers within the limits of the Level 1 memory. Compared to conventional backpropagation where every state is stored, this obviously has the advantage that it can fit into limited amounts of memory. Perhaps less obviously, this strategy is guaranteed to be faster than the "optimal" Revolve checkpointing strategy. This is because Revolve (or any of the other published single-stage checkpointing strategies) trades memory for additional computations, resulting in a time overhead that increases with the number of layers. Through the use of Level 2 storage, Revolve is only used for the I states between two subsequent stores, resulting in a time overhead that is constant in the number of layers. This is illustrated in Figure 2 and explained in more detail in Section 3.

Performance Model

We analyse in this section the expected performance of asynchronous multistage checkpointing and compare it with Revolve checkpointing. Following that, we demonstrate the performance in an experiment in Section 5. On a given target computer, let the time taken to compute one layer's activations be given as T_A, and the time taken to propagate sensitivities backwards through that layer as T_B. For a network with n layers, the total time T_∞ for a combined forward/backward pass as used in training, assuming that there is no memory limit, is then obviously

    T_∞ = n · T_A + n · T_B.

If Revolve checkpointing is used, some states need to be recomputed, leading to additional computations of activations.
This is expressed in the recompute factor, which depends on the total number of layers n, as well as the number of checkpoints that simultaneously fit into memory, s. We refer to this as R(n, s). The recompute factor is defined in [5], and can be computed by the reference implementation of Revolve or by using the pyrevolve Python package that can be downloaded from https://github.com/opesci/pyrevolve/. We note that the recompute factor increases if the number of layers n is increased, and also increases if the storage space s is decreased. This is true for all known single-stage checkpointing schemes, and the precise nature of the increase (sub-linear for most schemes, logarithmic for Revolve) determines the optimality of a schedule. The total time T_revolve for the combined forward/backward pass is then

    T_revolve = n · R(n, s) · T_A + n · T_B.

For asynchronous multistage checkpointing, we are also interested in the time that it takes to transfer a state from Level 1 memory to Level 2 storage. We refer to this time as T_T. If T_T ≤ T_A, then we could asynchronously stream all data to storage while the computation is running without ever waiting for disk access. If T_T > T_A, then we can only store a subset of all states. We choose to store states in regular intervals of length I, given by I = ⌈T_T / T_A⌉. In general, there are then n/I such intervals. Storing and prefetching happen asynchronously, meaning that these operations do not affect performance in this model (albeit they have a slight effect on performance in practice, see Section 5). Within each interval, we can use Revolve with a recompute factor of R(I, s). Overall, we thus have a runtime

    T_async = (n/I) · (I · R(I, s) · T_A + I · T_B) = n · R(I, s) · T_A + n · T_B.

Since R(I, s) ≤ R(n, s) whenever the interval I is at most n, the asynchronous strategy is at least as fast as the classic Revolve strategy.
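The model above can be evaluated numerically. The exact R(n, s) comes from Revolve's dynamic program; as a stand-in, this sketch bounds it using the classical Griewank–Walther relation, which says that with s checkpoints and at most r repeated forward sweeps Revolve handles chains of length up to C(s+r, s), so the minimal such r upper-bounds the recompute factor. All numeric timings below are invented for illustration.

```python
from math import comb, ceil

def recompute_bound(n, s):
    """Upper bound on Revolve's recompute factor R(n, s): the smallest
    number of forward sweeps r with C(s + r, s) >= n (Griewank-Walther
    bound). A factor of 1 means no recomputation."""
    r = 0
    while comb(s + r, s) < n:
        r += 1
    return max(r, 1)

def t_revolve(n, s, t_a, t_b):
    # T_revolve = n * R(n, s) * T_A + n * T_B
    return n * recompute_bound(n, s) * t_a + n * t_b

def t_async(n, s, t_a, t_b, t_t):
    # Interval between stores: I = ceil(T_T / T_A); fall back to classic
    # Revolve when the whole chain is shorter than one interval.
    I = min(max(1, ceil(t_t / t_a)), n)
    return n * recompute_bound(I, s) * t_a + n * t_b
```

With made-up timings T_A = 1, T_B = 2, T_T = 8 and s = 2 checkpoints, a 1000-layer chain gives T_revolve = 46000 but T_async = 5000: the asynchronous overhead depends only on I = 8, not on n.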
In particular, the recompute factor in T_async depends only on I, not on the total sequence length n. Figure 3 shows this for a small number of interval lengths, assuming that 100 states fit into memory. Note that in the case where there are very few layers, there might not be time to save a single checkpoint to second-level memory before the entire forward pass is over. In this case the strategy falls back to classic Revolve.

Implementation

The Revolve algorithm was accompanied by a similarly named utility that can be used to compute optimal schedules for a particular checkpointing scenario. pyrevolve [8] is a Python package that uses schedules automatically derived from this utility to provide checkpointing as a feature in Python applications with minimal changes to the code. pyrevolve expects function references to a Forward Operator and a Backward Operator, along with a Checkpoint object that describes the state variables from the Forward Operator that the Backward Operator requires. Provided these three objects, pyrevolve can drive the entire forward and backward passes, automatically calling the forward or backward operator as required by the optimal schedule. The implementation of the asynchronous multistage checkpointing strategy is offered as an additional mode in pyrevolve¹. Due to the way it has been formulated, pyrevolve, and consequently the implementation of this strategy, can be used in applications ranging from PDE-constrained optimisation problems in seismic imaging and CFD to neural networks. The implementation uses the Python threading API to create background threads in which the asynchronous reads and writes happen. Python threads are known to suffer from issues relating to the Global Interpreter Lock (GIL). However, Python releases the GIL when doing IO-bound operations [2]. Hence, this implementation is expected to be asynchronous despite, if not even due to, the Python GIL.
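The calling convention described above (a driver invoking user-supplied forward/backward operators and a checkpoint object) can be mimicked by a toy driver. This is not pyrevolve's actual API; every class and method name here is invented, and for brevity the driver uses a store-everything schedule rather than an optimal one:

```python
class Checkpoint:
    """Describes the state the backward operator needs and how to
    save/restore it (stand-in for a pyrevolve-style checkpoint object)."""
    def __init__(self):
        self.state = 0
    def save(self):
        return self.state
    def load(self, data):
        self.state = data

class Driver:
    """Drives the forward and backward passes from stored function
    references, calling the operators as a schedule dictates. Here the
    schedule is trivially 'store every state'."""
    def __init__(self, fwd_op, bwd_op, checkpoint, n_steps):
        self.fwd_op, self.bwd_op = fwd_op, bwd_op
        self.cp, self.n = checkpoint, n_steps
        self.stored = []
    def apply_forward(self):
        for i in range(self.n):
            self.stored.append(self.cp.save())  # input state of step i
            self.fwd_op(i)
    def apply_reverse(self):
        for i in reversed(range(self.n)):
            self.cp.load(self.stored[i])  # restore step i's input state
            self.bwd_op(i)
```

The user only supplies the two operators and the checkpoint; the driver owns the schedule, which is what lets a library swap in Revolve or the asynchronous multistage mode without touching user code.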
As of now, we have implemented this strategy with two hardware architectures in mind. The first is compute on CPU, with DRAM as first-level memory and an SSD as second-level memory; we shall call this the CPU platform. The second is compute on an accelerator such as the Intel Xeon Phi 7210 (KNL), with the accelerator memory (the MCDRAM in the case of the KNL) acting as the first-level memory and the host memory (the DRAM in the case of the KNL) acting as the second-level memory. In principle, what we describe here for the KNL platform applies equally to a GPU architecture where the GDDR memory acts as the first level and the host memory acts as the second level. On the CPU platform, the background threads use the SSD by writing and reading the data to files using the Python filesystem API. On the KNL platform, a ramdisk mounted to host memory is used as second-level memory, though this could be improved in future implementations.

Experimental Results

The test case on which to measure the performance of this strategy and implementation was adapted from an open-source implementation of a simple vanilla LSTM². An LSTM was chosen because a simple LSTM has uniformly sized checkpoints as we go through the network. Using one of the popular frameworks like Tensorflow or PyTorch we could have implemented an LSTM in very few lines, but the multiple layers of abstraction involved would hide some very important details that are relevant for this study. For example, the framework might be calling precompiled primitives for performant calculations, choosing which implementation of a function to call based on runtime parameters. This caused spikes at certain network depths that are not relevant to the study at hand. Another issue was the transparency of memory management, since we would like to choose exactly which objects to keep in memory.
However, because the purpose of this experiment is to demonstrate the principle of asynchronous multistage checkpointing, we believe that this implementation, written with numpy as the only external library, is sufficiently representative of a full-fledged LSTM training inside any of the popular NN frameworks. The test case³ sets up a basic LSTM for text generation, including a manual implementation of RMSProp. Additional tweaks like learning-rate decay would probably help the convergence of this code in a real-life scenario. However, here we are not concerned with a complete training cycle; our interest is limited to a single forward-backward iteration and its performance characteristics as the number of LSTM recurrences is changed. Figure 4 shows the peak memory footprint for a single forward-backward pass for a network of given depth, and Figure 5 shows how the recompute factor varies with network depth. The times were measured for 5 individual runs and the minimum reported. The memory reported was measured using the maximum resident set size reported by the time utility on the bash command line. The Python interpreter was exited after each iteration to ensure that the memory is released back to the OS. Although the peak memory footprint is theoretically expected to be constant regardless of the number of recurrent layers, we observe in the plots that the memory does go up slightly, although at a rate significantly lower than standard backpropagation. This is because the implementation still requires some variables whose size depends on the depth of the network. In the case of this LSTM implementation, the list of expected outputs is the main such variable that cannot easily be made independent of the depth of the network.

Conclusions and future work

We introduced asynchronous multistage checkpointing for backpropagation in large RNNs in environments with limited memory.
The method allows backpropagation through sequences of arbitrary length for any given memory size, by recomputing some data and asynchronously transferring other data to a larger, slower storage such as host memory, RAM, or even SSDs. The runtime overhead compared to a pure inference is constant in the sequence length, as was shown in our experiment. The overhead is also at most as large as that of the optimal single-stage checkpointing strategy Revolve, as shown in a theoretical performance model. The implementation currently only supports networks that have layers of the same size throughout, i.e. uniform checkpoint size. Instead of storing every I-th state for some fixed interval I, one could instead store the next state whenever the previous data transfer has completed, thereby supporting non-uniform checkpoint sizes. Within each interval, the known algorithm for non-uniform single-stage checkpointing could be used instead of Revolve. The implementation currently supports Intel Xeon Phi processors. In future work, we plan to extend our implementation to support more platforms, such as GPUs. Finally, the current implementation assumes that the states within each interval fit into memory, and this was true for the experiments conducted in this work. If required, our package can be modified to use Revolve within each interval, for example using the pyrevolve package.

Figure 2: Timeline of events for conventional backpropagation, Revolve checkpointing, and asynchronous multistage checkpointing. The conventional backpropagation would have the shortest runtime, but exceeds the available memory. Both other strategies respect the memory limits, but result in different time overheads. Revolve alternates between forward and reverse computations in a rather complex fashion to minimise the overhead if only one level of memory is available. The asynchronous strategy stores data to Level 2 storage in regular intervals, and restores the data before it is needed in backpropagation.

Figure 3: Recompute factors, assuming that s = 100 (that is, 100 states fit into memory), for classic Revolve, and asynchronous multistage checkpointing with interval sizes I = 8, 64, 1024.

Figure 4: Comparison of peak memory footprint on KNL.

Figure 5: Comparison of recompute factors on KNL.

Footnotes:
¹ https://github.com/opesci/pyrevolve
² https://github.com/kevin-bruhwiler/Simple-Vanilla-LSTM
³ Code provided as supplementary material

Acknowledgments

This work has been funded by the Intel Parallel Computing Centre at Imperial College London. This paper benefitted greatly from conversations with Paul Kelly, Nicolas Melot, Lukas Mosser, Paul Hovland and Michel Schanen. This work was performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from the Science and Technology Facilities Council.

References

[1] Guillaume Aupy, Julien Herrmann, Paul Hovland, and Yves Robert. Optimal multistage algorithm for adjoint computation. SIAM Journal on Scientific Computing, 38(3):C232-C255, 2016.
[2] David Beazley. Understanding the Python GIL. In PyCon, 2010.
[3] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016.
[4] Andreas Griewank and Andrea Walther. Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Transactions on Mathematical Software (TOMS), 26(1):19-45, 2000.
[5] Andreas Griewank and Andrea Walther. Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Trans. Math. Softw., 26(1):19-45, March 2000. ISSN 0098-3500.
[6] Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. In Advances in Neural Information Processing Systems, pages 4125-4133, 2016.
[7] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
[8] Navjot Kukreja, Jan Hückelheim, Michael Lange, Mathias Louboutin, Andrea Walther, Simon W Funke, and Gerard Gorman. High-level python abstractions for optimal checkpointing in inversion problems. arXiv preprint arXiv:1802.02474, 2018.
[9] Chen Meng, Minmin Sun, Jun Yang, Minghui Qiu, and Yang Gu. Training deeper models by GPU memory optimization on TensorFlow. 2017.
[10] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W Keckler. vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design. In Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on, pages 1-13. IEEE, 2016.
[11] Michel Schanen, Oana Marin, Hong Zhang, and Mihai Anitescu. Asynchronous two-level checkpointing scheme for large-scale adjoints in the spectral-element solver Nek5000. Procedia Computer Science, 80:1147-1158, 2016.
[12] Philipp Stumm and Andrea Walther. Multistage approaches for optimal offline checkpointing. SIAM Journal on Scientific Computing, 31(3):1946-1967, 2009.
[13] Qiqi Wang, Parviz Moin, and Gianluca Iaccarino. Minimal repetition dynamic checkpointing algorithm for unsteady adjoint calculation. SIAM Journal on Scientific Computing, 31(4):2549-2567, 2009.
[14] Michele Weiland, Adrian Jackson, Nick Johnson, and Mark Parsons. Exploiting the performance benefits of storage class memory for HPC and HPDA workflows. Supercomputing Frontiers and Innovations, 5(1), 2018. ISSN 2313-8734. URL http://superfri.org/superfri/article/view/164.
Philippe Cara, Rudger Kieboom, Tina Vervloet
Department of Mathematics, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussel, Belgium
In this paper we study loops, neardomains and nearfields from a categorical point of view. By choosing the right kind of morphisms, we can show that the category of neardomains is equivalent to the category of sharply 2-transitive groups. The other categories are also shown to be equivalent with categories whose objects are sets of permutations with suitable extra properties.Up to now the equivalence between neardomains and sharply 2-transitive groups was only known when both categories were equipped with the obvious isomorphisms as morphisms. We thank Hubert Kiechle for this observation[6].
DOI: 10.36045/bbms/1354031553
PDF: https://arxiv.org/pdf/1207.3600v1.pdf
arXiv: 1207.3600
A categorical approach to loops, neardomains and nearfields

16 Jul 2012

Introduction

Loops and nearfields are structures in algebra which generalize groups and fields, respectively. The first examples of finite proper nearfields were constructed by L.E. Dickson in 1905. Thirty years later the finite nearfields were completely classified by H. Zassenhaus. In 1965 H. Karzel introduced neardomains (which are a weakening of nearfields) in such a way that there is a one-to-one correspondence with sharply 2-transitive groups (see [3]). At that time morphisms were not considered. The still unsolved problem is whether there exist proper neardomains, i.e. neardomains which are not nearfields. The link between loops and regular (i.e. sharply 1-transitive) permutation sets is of a similar but simpler kind. The present paper describes these links in a uniform way. By considering the right kind of morphisms we show that loops, resp. neardomains, are categories equivalent to the categories of regular permutation sets, resp. sharply 2-transitive groups.
The latter equivalence nicely restricts to an equivalence between the category of nearfields and the category of sharply 2-transitive groups with the property that the translations form a subgroup. In [3] the correspondence between sharply 2-transitive groups and neardomains is described. We define morphisms which turn this correspondence into an equivalence of categories. Kiechle [6] was also aware of an equivalence of categories, but with a more restricted class of morphisms, namely isomorphisms.

2 Loops and regular permutation sets

2.1 Loops

A loop is a set L, together with a binary operation (k, l) → kl with identity, satisfying the left and right loop property. This means that for every a, b ∈ L there exist unique elements x, y ∈ L such that ax = b and ya = b. A morphism of loops (L, .) → (L′, *) is a map f : L → L′ preserving the operations. This means that f(a.b) = f(a) * f(b) for all a, b ∈ L. It follows that f maps the identity of L onto the identity of L′. We denote by Loop the category of all loops together with all morphisms of loops.

2.2 Regular permutation sets

Let Ω be a set and let Sym(Ω) denote the set of all permutations of Ω. A regular permutation set (r.p.s.) on Ω is a subset M of Sym(Ω) such that the identity permutation 1_Ω is in M and M acts regularly on Ω, i.e. ∀α, β ∈ Ω : ∃! m ∈ M : m(α) = β. We construct a category rps whose objects are triples (M, Ω, ω), where M is an r.p.s. on Ω and ω ∈ Ω is a base point. A morphism of r.p.s. (M, Ω, ω) → (N, Σ, σ) is a pair (f, Φ) such that f : M → N and Φ : Ω → Σ are maps satisfying Φ(ω) = σ and ∀m ∈ M, ∀α ∈ Ω : Φ(m(α)) = (f(m))(Φ(α)). The latter property can be summarized by the following commutative diagram, in which the horizontal maps are the (left) actions of M (resp. N) on the set Ω (resp. Σ):

    M × Ω ───→ Ω
      │ f×Φ      │ Φ
      ↓          ↓
    N × Σ ───→ Σ

Composition of morphisms (f, Φ) : (M, Ω, ω) → (N, Σ, σ) and (g, Ψ) : (N, Σ, σ) → (P, Γ, τ) is defined by (g ∘ f, Ψ ∘ Φ). The identity 1_(M,Ω,ω) is defined as (1_M, 1_Ω).
One easily verifies that rps is a category. Notice that, by regularity, a morphism (f, Φ) : (M, Ω, ω) → (N, Σ, σ) satisfies f(1_Ω) = 1_Σ.

2.3 Equivalence of the categories Loop and rps

The one-to-one correspondence between loops and regular permutation sets is folklore (see for instance [2]). However we think it is useful to describe an explicit categorical equivalence. Let (M, Ω, ω) be an object of rps. By regularity, the map µ : M → Ω : m → m(ω) is a bijection such that µ(1_Ω) = ω. Now define an operation

    ⊗_ω : M × M → M, (m, n) → m ⊗_ω n := µ⁻¹((m ∘ n)(ω)).

Notice that m ⊗_ω n is equivalently defined by (m ⊗_ω n)(ω) = (m ∘ n)(ω) (compare [2, p. 618]). It is easy to check that

PROPERTY 2.1. The pair (M, ⊗_ω) is a loop with identity 1_Ω.

We can also construct a loop structure on the set Ω. Let (M, Ω, ω) be an object of rps. We still have the bijection µ : M → Ω. We define the operation

    ·_ω : Ω × Ω → Ω, (α, β) → α ·_ω β := (µ⁻¹(α) ⊗_ω µ⁻¹(β))(ω) = (µ⁻¹(α) ∘ µ⁻¹(β))(ω).

One easily verifies the following

PROPERTY 2.2. The pair (Ω, ·_ω) is a loop with identity ω, and µ : (M, ⊗_ω) → (Ω, ·_ω) is a loop isomorphism.

The following property gives a useful characterization of morphisms of regular permutation sets.

PROPERTY 2.3. Let (M, Ω, ω) and (N, Σ, σ) be objects of rps, and let f : M → N and Φ : Ω → Σ be maps with Φ(ω) = σ. Then (f, Φ) is a morphism of r.p.s. if and only if

1. ∀m ∈ M : f(m)(σ) = Φ(m(ω));
2. f : (M, ⊗_ω) → (N, ⊗_σ) is a morphism of loops.

Proof. Let (f, Φ) : (M, Ω, ω) → (N, Σ, σ) be a morphism; then (1) follows immediately, since ∀m ∈ M : f(m)(σ) = f(m)(Φ(ω)) = Φ(m(ω)). In order to prove (2) it suffices, by the regular action of N on Σ, to establish that ∀m₁, m₂ ∈ M : (f(m₁ ⊗_ω m₂))(σ) = (f(m₁) ⊗_σ f(m₂))(σ). The left hand side equals Φ((m₁ ⊗_ω m₂)(ω)) = Φ((m₁ ∘ m₂)(ω)) = Φ(m₁(m₂(ω))) = (f(m₁))(Φ(m₂(ω))) = (f(m₁))((f(m₂))(Φ(ω))) = (f(m₁) ∘ f(m₂))(σ) = (f(m₁) ⊗_σ f(m₂))(σ). Conversely, (1) implies that f : M → N and Φ : Ω → Σ with Φ(ω) = σ satisfy f(m)(Φ(α)) = Φ(m(α)) for all m ∈ M and for α = ω. We have to show that this condition holds for all α ∈ Ω.
For any such α there exists a unique m ′ ∈ M such that m ′ (ω) = α (by the regular action of M on Ω). Then it follows that for all m ∈ M and all α ∈ Ω we have f (m)(Φ(α)) = f (m)(Φ(m ′ (ω))) (1) = f (m)(f (m ′ )(σ)) = (f (m) • f (m ′ ))(σ) = (f (m) ⊗ σ f (m ′ ))(σ) (2) = (f (m ⊗ ω m ′ ))(σ) (1) = Φ((m ⊗ ω m ′ )(ω)) = Φ((m • m ′ )(ω)) = Φ(m(α)).

COROLLARY 2.4. Let (f, Φ) ∈ rps((M, Ω, ω), (N, Σ, σ)). Then Φ : (Ω, · ω ) → (Σ, · σ ) is a loop homomorphism.

The proof easily follows from the definition of the operations · ω and · σ and Properties 2.2 and 2.3. Now let (Ω, ·) be a loop. For α ∈ Ω we define λ α : Ω → Ω : γ → α · γ, and we write L = {λ α | α ∈ Ω} for the set of left translations of Ω.

Proof of Property 2.5. Since (Ω, ·) is a (left) loop, L is a subset of Sym(Ω). Moreover L acts regularly on Ω by the (right) loop property of Ω. Also 1 Ω = λ ω ∈ L. Hence (L, Ω, ω) is a r.p.s. We now have a correspondence between the objects of rps and Loop which can be extended to an equivalence of categories.

THEOREM 2.6. The functor F : rps −→ Loop sending an object (M, Ω, ω) to (Ω, · ω ) (and (N, Σ, σ) to (Σ, · σ )) and a morphism (f, Φ) to Φ is an equivalence of categories.

Proof. The functorial properties of F are easily checked. We use [1, prop. 3.4.3(4)] to show that F is an equivalence of categories. F is faithful when for every pair of objects (M, Ω, ω), (N, Σ, σ) of rps the induced map F (M,Ω,ω),(N,Σ,σ) : rps((M, Ω, ω), (N, Σ, σ)) → Loop((Ω, · ω ), (Σ, · σ )) is injective. Let (f, Φ), (f ′ , Φ ′ ) ∈ rps((M, Ω, ω), (N, Σ, σ)) be such that F (f, Φ) = F (f ′ , Φ ′ ). Then Φ = Φ ′ follows immediately. Hence, for all (m, α) ∈ M × Ω, we have that (f (m))(Φ(α)) = Φ(m(α)) = Φ ′ (m(α)) = (f ′ (m))(Φ ′ (α)). In particular, for α = ω, it follows that ν(f (m)) = (f (m))(σ) = (f (m))(Φ(ω)) = (f ′ (m))(Φ ′ (ω)) = (f ′ (m))(σ) = ν(f ′ (m)) (where ν : N → Σ : n → n(σ) is the bijection analogous to µ). Hence f (m) = f ′ (m) for all m ∈ M and thus f = f ′ . F is full when all induced maps F (M,Ω,ω),(N,Σ,σ) (as above) are surjective.
Let Φ ∈ Loop((Ω, · ω ), (Σ, · σ )), then Φ(ω) = σ (since ω and σ are identities in the respective loops). Define f Φ : M → N : m → (ν −1 • Φ • µ)(m) (with µ and ν as above). One uses Property 2.3 and the regularity of the action of N on Σ to show that (f Φ , Φ) ∈ rps((M, Ω, ω), (N, Σ, σ)). Clearly F (f Φ , Φ) = Φ. We now take a loop (Ω, ·) with identity ω with its left translations as defined above. We then know (Prop. 2.5) that (L, Ω, ω) is a r.p.s. For α, β ∈ Ω we have that (λ α ⊗ ω λ β )(ω) = (λ α • λ β )(ω) = α · (β · ω) = α · β = (α · β) · ω = λ α·β (ω). Again by regularity, it follows that λ α ⊗ ω λ β = λ α·β . Hence F (L, Ω, ω) = (Ω, · ω ) where α · ω β = (µ −1 (α) • µ −1 (β))(ω) = (λ α • λ β )(ω) = α · β, which shows that (Ω, ·) = F (L, Ω, ω). So F turns out to be even strictly surjective on objects.

3 Neardomains, nearfields and sharply 2-transitive groups

3.1 Neardomains and nearfields

A triple (F, +, .) is said to be a neardomain if
1. (F, +) is a loop with neutral element 0;
2. ∀a, b ∈ F : a + b = 0 ⇒ b + a = 0;
3. (F \ {0}, .) is a group (with neutral element 1);
4. ∀a ∈ F : 0.a = 0;
5. ∀a, b, c ∈ F : a.(b + c) = a.b + a.c;
6. ∀a, b ∈ F : ∃d a,b ∈ F \ {0} : (∀x ∈ F : a + (b + x) = (a + b) + d a,b .x).
Notice that (F, +, .) is a nearfield if and only if all d a,b are 1. In that case (F, +) is a group. Also notice that 1 and 5 imply that ∀a ∈ F : a.0 = 0.

THEOREM 3.1 (see [3], [4] or [5]). A finite neardomain is a nearfield.

It is still an open problem whether there exists a (necessarily infinite) neardomain which is not a nearfield. We define the category n-Dom of neardomains where objects are neardomains and morphisms are maps preserving both operations.

Proof of Property 3.2. Let f : (F, +, .) −→ (F ′ , + ′ , . ′ ) be a morphism of neardomains. Suppose f (x) = f (y) for some x, y ∈ F . Since (F, +, .) is a neardomain we have by conditions 1, 2 a unique additive inverse −y of y. Then, f (x + (−y)) = f (x) + ′ f (−y) = f (y) + ′ f (−y) = f (y + (−y)) = f (0) = 0 ′ .
If x + (−y) = 0 this element has an inverse z in (F \ {0}, .). Hence we get 1 ′ = f (1) = f ((x + (−y)).z) = f (x + (−y)). ′ f (z) = 0 ′ . ′ f (z) = 0 ′ , a contradiction.T 2 (F ) = {τ a,b : F → F : x → a + bx | a ∈ F, b ∈ F \ {0}} Then (T 2 (F ), •) is a group whose action on F is sharply 2-transitive, i.e. for any two ordered pairs (α 1 , α 2 ), (β 1 , β 2 ) of points of F with α 1 = α 2 , β 1 = β 2 there exists a unique element τ a,b ∈ T 2 (F ) such that τ a,b (α 1 ) = β 1 and τ a,b (α 2 ) = β 2 . Proof. See [3, (5.1)], [4, (6.1)] or [5, (7.8)]. Sharply 2-transitive groups and involutions Let G be a sharply 2-transitive permutation group on a set Ω with #Ω ≥ 2. We denote by J the set of involutions in G, i.e. J = {g ∈ G | g 2 = 1 Ω = g} One can quickly see that J is never empty. Indeed, as #Ω ≥ 2, we can take α = β in Ω and find a (unique) g ∈ G with g(α) = β and g(β) = α. Such g must have order 2 (since g 2 fixes both α and β, implying g 2 = 1 Ω , by sharp 2-transitivity). 2. all j ∈ J are fixpoint-free. In the latter case we say that G has characteristic 2 and write char G = 2. In the first case we put char G = 2. Proof. For a detailed proof, we refer to [4, p.12], where the case char G = 2, resp. char G = 2 is referred to as G of type 0, resp. of type 1. (The type refers to the number of fixpoints of an involution in G.) The main idea behind the proof is that sharp 2-transitivity implies that a nontrivial element of G cannot have 2 (or more) fixed points. Moreover all elements of J are conjugate in G, hence they have the same number of fixpoints. The reason for this notation and for using the word characteristic will be clarified when we establish the correspondence between sharply 2-transitive groups and neardomains (see Property 3.6). In the case char G = 2 we write ν for the unique involution fixing an arbitrarily chosen base point ω 0 ∈ Ω. The following subset of G plays an important role. 
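Theorem 3.3 can be verified by brute force for a small nearfield. The sketch below (our own illustrative choice: F is the field GF(7), which is in particular a neardomain with all d a,b = 1) enumerates T 2 (F ) and confirms that the action is sharply 2-transitive.

```python
# T2(F) = {tau_{a,b} : x -> a + b*x | a in F, b in F\{0}} over F = GF(7).
p = 7
F = list(range(p))
T2 = [(a, b) for a in F for b in F if b != 0]

def tau(ab, x):
    a, b = ab
    return (a + b * x) % p

# sharp 2-transitivity: every ordered pair (a1 != a2) is carried to every
# ordered pair (b1 != b2) by exactly one element of T2(F)
for a1 in F:
    for a2 in F:
        if a1 == a2:
            continue
        for b1 in F:
            for b2 in F:
                if b1 == b2:
                    continue
                hits = [t for t in T2
                        if tau(t, a1) == b1 and tau(t, a2) == b2]
                assert len(hits) == 1
```

The unique element carrying (0, 1) to (β 1 , β 2 ) is τ with a = β 1 and b = β 2 − β 1 , which is how sharp 2-transitivity encodes the two field operations.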
We define A ⊆ G as follows:
A = J • ν if char G ≠ 2, and A = J ∪ {1 Ω } if char G = 2. (1)
The category of sharply 2-transitive groups will be denoted by s2t-Gp. Its objects are quadruples (G, Ω, ω 0 , ω 1 ) where G is a permutation group which operates sharply 2-transitively on the set Ω (with #Ω ≥ 2), with two different base points ω 0 and ω 1 of Ω. Morphisms will be defined after Property 3.6. Let (G, Ω, ω 0 , ω 1 ) be an object in s2t-Gp. On Ω we define an addition and a multiplication as follows. For α, β ∈ Ω we define α + 0 β to be a(β), where a ∈ A is unique such that a(ω 0 ) = α. Since the stabilizer G ω 0 is regular on Ω \ {ω 0 } we can also define α · 1 β as g(β) where g ∈ G ω 0 is unique such that g(ω 1 ) = α. We also put α · 1 β = ω 0 when α = ω 0 or β = ω 0 . We now define the morphisms in s2t-Gp to be pairs of maps (f, Φ) : (G, Ω, ω 0 , ω 1 ) → (H, Σ, σ 0 , σ 1 ) where either both G and H have characteristic 2 or both have characteristic different from 2 and where f : G → H is a group homomorphism, Φ : Ω → Σ an injective map with Φ(ω 0 ) = σ 0 and Φ(ω 1 ) = σ 1 , and such that the square with horizontal maps λ : G × Ω → Ω and µ : H × Σ → Σ and vertical maps f × Φ and Φ commutes. (2) In this diagram λ and µ denote the evaluation maps defined by λ(g, ω) = g(ω) and µ(h, σ) = h(σ). In the case either char G = 2 and char H ≠ 2 or char G ≠ 2 and char H = 2, we put s2t-Gp((G, Ω, ω 0 , ω 1 ), (H, Σ, σ 0 , σ 1 )) = ∅ (see the second remark just below).

Remarks. 1. The injectivity of Φ implies the injectivity of f since 1 Σ = f (g) for some g ∈ G implies that ∀ω ∈ Ω we have Φ(g(ω)) = f (g)(Φ(ω)) = Φ(ω). Hence ∀ω ∈ Ω : g(ω) = ω, proving g = 1 Ω .

2. The reason for not allowing any morphisms in the case where G and H have different characteristics is the following. If (f, Φ) is a morphism from (G, Ω, ω 0 , ω 1 ) to (H, Σ, σ 0 , σ 1 ) with either char G = 2 and char H ≠ 2 or char G ≠ 2 and char H = 2, the map Φ between the associated neardomains (Ω, + 0 , · 1 ) and (Σ, + 0 ′ , · 1 ′ ) cannot be a morphism of neardomains.
Indeed, the first case char G = 2 ≠ char H implies (by Property 3.6.2.) that the multiplicative identities 1 of (Ω, + 0 , · 1 ) and 1 ′ of (Σ, + 0 ′ , · 1 ′ ) satisfy 1 + 1 = 0 and 1 ′ + ′ 1 ′ ≠ 0 ′ , which conflicts with Φ : (Ω, + 0 , · 1 ) → (Σ, + 0 ′ , · 1 ′ ) being a morphism of neardomains, as can be seen by 0 ′ ≠ 1 ′ + ′ 1 ′ = Φ(1) + ′ Φ(1) = Φ(1 + 1) = Φ(0) = 0 ′ . Similarly the second case char G ≠ 2 = char H leads to the contradiction 0 ′ = 1 ′ + ′ 1 ′ = Φ(1) + ′ Φ(1) = Φ(1 + 1) ≠ Φ(0) = 0 ′ , where we used the injectivity of Φ. Our elimination of "bad" morphisms in s2t-Gp enables us to show (in Property 3.9) that the corresponding Φ is a morphism of neardomains.

LEMMA 3.8. For a morphism (f, Φ) : (G, Ω, ω 0 , ω 1 ) → (H, Σ, σ 0 , σ 1 ) the following hold: 1. f (J) ⊆ K (where K denotes the set of involutions in H); 2. f (A) ⊆ B, where A ⊆ G is defined in (1) above Property 3.5 and B ⊆ H is defined by B = K • f (ν) if char H ≠ 2, and B = K ∪ {1 Σ } if char H = 2 (f (ν) is the unique element of K fixing the base point σ 0 = Φ(ω 0 )).

Proof. 1. Let g ∈ J, then f (g) satisfies (f (g)) 2 = f (g 2 ) = f (1 Ω ) = 1 Σ ≠ f (g), since f (1 Ω ) = f (g) would imply (using the injectivity of f ) that g = 1 Ω , contradicting the fact that g ∈ J. Hence f (J) ⊆ K. 2. By the previous remark 2., the existence of the morphism (f, Φ) implies that either char G = char H ≠ 2 or char G = char H = 2. In the first case it follows that f (A) = f (J • ν) = f (J) • f (ν) ⊆ K • f (ν) = B. In the second case we have f (A) = f (J ∪ {1 Ω }) = f (J) ∪ {f (1 Ω )} ⊆ K ∪ {1 Σ } = B.

Proof of Property 3.9. Consider the diagram (2) above involving the morphism (f, Φ) from (G, Ω, ω 0 , ω 1 ) to (H, Σ, σ 0 , σ 1 ). For α, β ∈ Ω, α + 0 β = a(β) ∈ Ω where a ∈ A ⊆ G is unique such that a(ω 0 ) = α. Now Φ(α + 0 β) = Φ(a(β)) = Φ(λ(a, β)) = (µ • (f × Φ))(a, β) = µ(f (a), Φ(β)) = f (a)(Φ(β)). On the other hand, Φ(α) + 0 ′ Φ(β) = b(Φ(β)) where b ∈ B ⊆ H is unique such that b(σ 0 ) = Φ(α). Since f (A) ⊆ B we also have f (a) ∈ B and moreover (f (a))(σ 0 ) = µ(f (a), Φ(ω 0 )) = Φ(λ(a, ω 0 )) = Φ(a(ω 0 )) = Φ(α). Since B acts regularly on Σ it follows that f (a) = b.
This implies that Φ(α + 0 β) = f (a)(Φ(β)) = b(Φ(β)) = Φ(α) + 0 ′ Φ(β). We recall that, for α, β ∈ Ω, α · 1 β = g(β) when α ∈ Ω \ {ω 0 } (with g ∈ G ω 0 unique such that g(ω 1 ) = α), and α · 1 β = ω 0 when α = ω 0 or β = ω 0 . When α = ω 0 we have immediately Φ(α) · 1 ′ Φ(β) = σ 0 · 1 ′ Φ(β) = σ 0 = Φ(ω 0 ) = Φ(ω 0 · 1 β). When α ≠ ω 0 , we have Φ(α · 1 β) = Φ(g(β)) = Φ(λ(g, β)) = (µ • (f × Φ))(g, β) = µ(f (g), Φ(β)) (with g ∈ G ω 0 unique such that g(ω 1 ) = α). On the other hand, Φ(α) · 1 ′ Φ(β) = h(Φ(β)) with h ∈ H σ 0 unique such that h(σ 1 ) = Φ(α). Note that, by the injectivity of Φ we have Φ(α) ≠ Φ(ω 0 ) = σ 0 . We must have h = f (g) since f (g)(σ 1 ) = µ(f (g), Φ(ω 1 )) = (µ • (f × Φ))(g, ω 1 ) = (Φ • λ)(g, ω 1 ) = Φ(g(ω 1 )) = Φ(α) = h(σ 1 ) and f (g), h ∈ H σ 0 . This implies Φ(α · 1 β) = Φ(g(β)) = Φ(λ(g, β)) = (µ • (f × Φ))(g, β) = µ(f (g), Φ(β)) = f (g)(Φ(β)) = h(Φ(β)) = Φ(α) · 1 ′ Φ(β). Hence Φ is a neardomain homomorphism. For any neardomain (F, +, ·) we can construct the object (T 2 (F ), F, 0, 1) in s2t-Gp. It is clear that the stabilizer of 0 in T 2 (F ) consists of the elements τ 0,δ with δ ∈ F \ {0}.

LEMMA 3.10. When G = T 2 (F ) we have A = {τ γ,1 | γ ∈ F }.

Proof. See [4, (6.5)].

4 Equivalence of the categories s2t-Gp and n-Dom

THEOREM 4.1. The functor K : s2t-Gp −→ n-Dom sending an object (G, Ω, ω 0 , ω 1 ) to (Ω, + 0 , · 1 ) (and (H, Σ, σ 0 , σ 1 ) to (Σ, + 0 , · 1 )) and a morphism (f, Φ) to Φ is an equivalence of categories.

A morphism Φ of neardomains is sent to L(Φ) = (f Φ , Φ) : (T 2 (F ), F, 0, 1) → (T 2 (F ′ ), F ′ , 0 ′ , 1 ′ ) where f Φ : (T 2 (F ), •) → (T 2 (F ′ ), •) maps τ a,b to τ Φ(a),Φ(b) . Since Φ(1) = 1 ′ it immediately follows, using Lemma 3.10, that f Φ (A) = {f Φ (τ a,1 ) | a ∈ F } = {τ Φ(a),1 ′ | a ∈ F } ⊆ {τ a ′ ,1 ′ | a ′ ∈ F ′ } = A ′ .
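Lemma 3.10 together with the description of the stabilizer of 0 gives explicit formulas for the operations recovered from T 2 (F ): α + 0 β = τ α,1 (β), and α · 1 β = τ 0,α (β) for α ≠ 0. The quick check below (over GF(7), our illustrative choice) confirms that these reproduce the original + and · , in line with K • L = 1 n-Dom :

```python
# Recover the neardomain operations of F = GF(7) from the action of T2(F):
# addition via the unique element of A = {tau_{gamma,1}} sending 0 to alpha,
# multiplication via the unique stabilizer element tau_{0,alpha} sending
# 1 to alpha.
p = 7

def tau(a, b, x):
    return (a + b * x) % p

for alpha in range(p):
    for beta in range(p):
        # alpha +_0 beta = tau_{alpha,1}(beta) equals alpha + beta in GF(7)
        assert tau(alpha, 1, beta) == (alpha + beta) % p
        if alpha != 0:
            # alpha *_1 beta = tau_{0,alpha}(beta) equals alpha * beta
            assert tau(0, alpha, beta) == (alpha * beta) % p
```

This is the finite shadow of the general argument: sharp 2-transitivity pins down the unique group element behind each operation.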
Moreover f Φ is a group homomorphism since, on the one hand, for a, k ∈ F and b, l ∈ F \ {0} we have f Φ (τ a,b • τ k,l ) = f Φ (τ a+bk,d a,bk bl ) = τ Φ(a+bk),Φ(d a,bk bl) = τ Φ(a)+Φ(b)Φ(k),Φ(d a,bk )Φ(b)Φ(l) while, on the other hand, f Φ (τ a,b )•f Φ (τ k,l ) = τ Φ(a),Φ(b) •τ Φ(k),Φ(l) = τ Φ(a)+Φ(b)Φ(k),d ′ Φ(a),Φ(b)Φ(k) Φ(b)Φ(l) . Finally the missing link Φ(d a,bk ) = d ′ Φ(a),Φ(b)Φ(k) is provided by applying Φ to the identity a + (bk + x) = (a + bk) + d a,bk x, which follows from axiom 6 for F , and expanding Φ(a) + (Φ(b)Φ(k) + Φ(x)), again using rule 6 for F ′ . Then it follows that L(Φ) is a morphism in s2t-Gp by easily verifying that for all a ∈ F , b ∈ F \ {0}, ω ∈ F we have Φ(τ a,b (ω)) = (f Φ (τ a,b ))(Φ(ω)). The functoriality of L, i.e. L(1 F ) = 1 F for every neardomain F and L(Φ ′ • Φ) = L(Φ ′ ) • L(Φ) for every composable pair Φ : F → F ′ , Φ ′ : F ′ → F ′′ of morphisms of neardomains, is easily verified. In order to prove K • L = 1 n-Dom it suffices to check that + 0 , · 1 (resp. + ′ 0 , · ′ 1 ) coincide with +, · (resp. + ′ , · ′ ). We first look at the multiplication. Take any α, β in F . By definition α · 1 β equals τ 0,α (β) since T 2 (F ) 0 = {τ 0,δ | δ ∈ F \ {0}} and τ 0,δ (1) = α ⇔ δ = α. Hence we get α · 1 β = α · β. For the addition we consider α, β ∈ F . By Lemma 3.10, the unique element in A mapping 0 to α must be equal to τ α,1 . Therefore we get α + 0 β = τ α,1 (β) = α + β. Secondly, we have to find a natural isomorphism γ : L • K ⇒ 1 s2t-Gp . We define each component γ (G,Ω,ω 0 ,ω 1 ) : (L • K)(G, Ω, ω 0 , ω 1 ) = L(Ω, + 0 , · 1 ) = (T 2 (Ω), Ω, ω 0 , ω 1 ) → (G, Ω, ω 0 , ω 1 ) as follows. For each τ α,β ∈ T 2 (Ω) there exists, by the sharp 2-transitivity of G on (Ω, + 0 , · 1 ) a unique element g ∈ G, denoted by g α,β such that g α,β (ω 0 ) = α = τ α,β (ω 0 ) and g α,β (ω 1 ) = α + 0 β · 1 ω 1 = τ α,β (ω 1 ). 
This correspondence defines a bijective map k : T 2 (Ω) → G : τ α,β → g α,β and one verifies easily that k : (T 2 (Ω), •) → (G, •) is a group homomorphism (using also the sharp 2-transitivity of T 2 (Ω) on Ω). Finally we define γ (G,Ω,ω 0 ,ω 1 ) : (T 2 (Ω), Ω, ω 0 , ω 1 ) → (G, Ω, ω 0 , ω 1 ) by γ (G,Ω,ω 0 ,ω 1 ) (τ α,β , ω) = (k(τ α,β ), ω) = (g α,β , ω), which is an isomorphism in s2t-Gp. The naturality of γ amounts to the commutativity of each square (T 2 (Ω), Ω, ω 0 , ω 1 ) (f Φ ,Φ) γ (G,Ω,ω 0 ,ω 1 ) / / (G, Ω, ω 0 , ω 1 ) (f,Φ) (T 2 (Σ), Σ, σ 0 , σ 1 ) γ (H,Σ,σ 0 ,σ 1 ) / / (H, Σ, σ 0 , σ 1 ) with f : G → H and Φ : Ω → Σ as in 3.2 and f Φ : T 2 (Ω) → T 2 (Σ) : τ α,β → τ Φ(α),Φ(β) . On the one hand we have, for each (τ α,β , ω) ∈ T 2 (Ω) × Ω, ((f, Φ) • γ (G,Ω,ω 0 ,ω 1 ) )(τ α,β , ω) = (f, Φ)(k(τ α,β ), ω) = (f, Φ)(g α,β , ω) = (f (g α,β ), Φ(ω)) ∈ H × Σ. On the other hand, (γ (H,Σ,σ 0 , σ 1 ) • (f Φ , Φ))(τ α,β , ω) = γ (H,Σ,σ 0 ,σ 1 ) (f Φ (τ α,β ), Φ(ω)) = γ (H,Σ,σ 0 ,σ 1 ) (τ Φ(α),Φ(β) , Φ(ω)) = (g Φ(α),Φ(β) , Φ(ω)) ∈ H × Σ. Finally f (g α,β ) = g Φ(α),Φ(β) since (f (g α,β ))(σ 0 ) = (f (g α,β ))(Φ(ω 0 )) = Φ(α) = g Φ(α),Φ(β) (σ 0 ) and, similarly, (f (g α,β ))( σ 1 ) = (µ • (f × Φ))(g α,β , ω 1 ) = Φ(g α,β (ω 1 )) = Φ(α + 0 β · 1 ω 1 ) = Φ(α + 0 β) = Φ(α) + 0 ′ Φ(β) = Φ(α) + 0 ′ Φ(β) · 1 ′ σ 1 = τ Φ(α),Φ(β) (σ 1 ) = g Φ(α),Φ(β) (σ 1 ) . Hence, again by sharp 2-transitivity, f (g α,β ) = g Φ(α),Φ(β) . For each τ α,β ∈ T 2 (Ω) there exists, by sharp 2-transitivity of G on Ω a unique g α,β ∈ G such that g α,β (ω 0 ) = τ α,β (0) = α, and g α,β (ω 1 ) = τ α,β (1) = α + 0 β. For each morphism (f, Φ) : (G, Ω, ω 0 , ω 1 ) → (H, Σ, σ 0 , σ 1 ) in s2t-Gp we have to check the naturality condition (f, Φ) • γ (G,Ω,ω 0 ,ω 1 ) = γ (H,Σ,σ 0 ,σ 1 ) • (f Φ , Φ) ( * ) where f Φ : (T 2 (Ω), •) → (T 2 (Σ), •) maps τ α,β : Ω → Ω to τ Φ(α),Φ(β) : Σ → Σ. The left hand side of ( * ) sends τ α,β ∈ T 2 (Ω) to (f, Φ)(g α,β ) = f (g α,β ) ∈ H. 
The right hand side maps τ α,β to γ (H,Σ,σ 0 ,σ 1 ) (f Φ (τ α,β )) = γ (H,Σ,σ 0 ,σ 1 ) (τ Φ(α),Φ(β) ) = h Φ(α),Φ(β) ∈ H. Clearly f (g α,β ) = h Φ(α),Φ(β) since both elements of the sharply 2-transitive group H on Σ send σ 0 to Φ(α) and σ 1 to Φ(β). Hence γ : L • K ⇒ 1 s2t-Gp is a natural transformation. Since a full and faithful functor reflects isomorphisms we immediately get Property 4.2.

THEOREM 4.3. Let s2t-Gp A be the full subcategory of s2t-Gp on objects (G, Ω, ω 0 , ω 1 ) in which the subset A (as defined in (1)) is a subgroup of G. Then the functor K restricts (and corestricts) to an equivalence K A : s2t-Gp A −→ n-Fld, where n-Fld denotes the full subcategory of n-Dom with nearfields as objects.

Proof. In Theorem (7.1) of [3] it is shown that, for a sharply 2-transitive group on a set Ω, the associated neardomain is a nearfield if and only if J 2 = {gh | g, h ∈ J} (with J the set of involutions in G) is a subgroup of G. In connection with Theorem (3.7) of [3] it follows that J 2 is a subgroup of G if and only if A is a subgroup of G. Thus K A : s2t-Gp A −→ n-Fld is a functor into the category of nearfields. On the other hand, the functor L : n-Dom → s2t-Gp restricts (and corestricts) to a functor L A : n-Fld → s2t-Gp A . It suffices to notice that when (F, +, .) is a nearfield, (F, +) and (F \ {0}, .) are groups. Hence (A, •) = ({τ γ,1 | γ ∈ F }, •) is a group since, for each α, β, x ∈ F we have (τ α,1 • τ β,1 )(x) = τ α,1 (β + x) = α + (β + x) = (α + β) + x = τ α+β,1 (x), which shows that τ α,1 • τ β,1 = τ α+β,1 . Similarly τ −1 α,1 = τ −α,1 . Finally one verifies that K A • L A = 1 n-Fld (as in the proof of Theorem 4.1) and that γ A = (γ (G,Ω,ω 0 ,ω 1 ) ) (G,Ω,ω 0 ,ω 1 ) : L A • K A ⇒ 1 s2t-Gp A is a natural isomorphism (as in the proof of Theorem 4.1).

Acknowledgement. We thank the referee for helpful suggestions.

PROPERTY 2.3. Let (M, Ω, ω), (N, Σ, σ) be objects of rps, f : M → N and Φ : Ω → Σ maps with Φ(ω) = σ. Then (f, Φ) is a morphism (M, Ω, ω) → (N, Σ, σ) if and only if the following conditions are both satisfied.

PROPERTY 2.5.
For a loop (Ω, ·) with identity ω we write L = {λ α | α ∈ Ω}, the set of left translations of Ω. The triple (L, Ω, ω) is a r.p.s.

PROPERTY 3.2. Neardomain morphisms are injective.

THEOREM 3.3. Let (F, +, .) be a neardomain and let T 2 (F ) be as displayed above.

PROPERTY 3.4. J satisfies exactly one of the following properties: 1. every j ∈ J has a unique fixpoint; 2. all j ∈ J are fixpoint-free.

PROPERTY 3.5. The triple (A, Ω, ω 0 ) is a regular permutation set on Ω.

PROPERTY 3.6. 1. The triple F = (Ω, + 0 , · 1 ) is a neardomain. 2. char G = 2 ⇔ char F = 2, i.e. 1 + 1 = 0 in F (1 denotes the multiplicative identity of the neardomain F ).

LEMMA 3.8. For a morphism (f, Φ) : (G, Ω, ω 0 , ω 1 ) → (H, Σ, σ 0 , σ 1 ) the following hold. 1. f (J) ⊆ K (where K denotes the set of involutions in H); 2. f (A) ⊆ B (with B as defined above).

PROPERTY 3.9. For a morphism (f, Φ) : (G, Ω, ω 0 , ω 1 ) → (H, Σ, σ 0 , σ 1 ) the map Φ : (Ω, + 0 , · 1 ) → (Σ, + 0 ′ , · 1 ′ ) is a morphism of neardomains.

THEOREM 4.1. The functor K displayed above is an equivalence of categories. Here f : G → H, Φ : Ω → Σ with Φ(ω 0 ) = σ 0 and Φ(ω 1 ) = σ 1 are defined as in section 3.2. The neardomain operations + 0 : Ω × Ω → Ω and · 1 : Ω \ {ω 0 } × Ω \ {ω 0 } → Ω \ {ω 0 } are defined as in the previous section.

Proof. The shortest proof should consist in proving that K is full and faithful and essentially surjective on objects ([1, prop. 3.4.3 (4)]). However we prefer to show that there exists a functor L : n-Dom → s2t-Gp, which is of interest in its own right, and two natural isomorphisms 1 n-Dom ≅ K • L and L • K ≅ 1 s2t-Gp ([1, prop. 3.4.3 (3)]). The proof of functoriality of K is left as an exercise for the reader. The definition of L : n-Dom → s2t-Gp is as follows. A neardomain (F, +, .) is sent to L(F ) = (T 2 (F ), F, 0, 1) where (T 2 (F ), •), as defined in Theorem 3.3, is a sharply 2-transitive group acting on F and 0, 1 are the identities of the loop (F, +) and the group (F \ {0}, .) respectively. A morphism Φ : (F, +, .) → (F ′ , + ′ , . ′ ) of neardomains is sent to L(Φ) as displayed above.

PROPERTY 4.2.
Let (G, F ) and (G ′ , F ′ ) be sharply 2-transitive permutation groups. Then (G, F ) and (G ′ , F ′ ) are isomorphic as permutation groups if and only if the associated neardomains (F, +, .) and (F ′ , + ′ , . ′ ) are isomorphic in n-Dom. (For a noncategorical proof, see [4, (6.3)].) The equivalence obtained in Theorem 4.1 can be restricted to interesting subcategories of s2t-Gp and n-Dom respectively, which sheds new light on the possible difference between neardomains and nearfields.

THEOREM 4.3. Let s2t-Gp A be the full subcategory of s2t-Gp on objects (G, Ω, ω 0 , ω 1 ) in which the subset A (as defined in (1)) is a subgroup of G. Then the functor K restricts (and corestricts) to an equivalence K A : s2t-Gp A −→ n-Fld.

References

[1] F. Borceux. Handbook of Categorical Algebra 1. Cambridge University Press, 1994.
[2] R. Capodaglio. Regular permutation sets and loops. Boll. U.M.I. (8) 6-B (2003), 617-628.
[3] H. Karzel. Zusammenhänge zwischen Fastbereichen, scharf zweifach transitiven Permutationsgruppen und 2-Strukturen mit Rechteckaxiom. Abh. Math. Sem. Univ. Hamburg 32 (1968), 191-206.
[4] W. Kerby. On infinite sharply multiply transitive groups. Vandenhoeck & Ruprecht, Göttingen, 1974. Hamburger Mathematische Einzelschriften, Neue Folge, Heft 6.
[5] H. Kiechle. Theory of K-loops. Lecture Notes in Mathematics, vol. 1778. Springer-Verlag, Berlin, 2002.
[6] H. Kiechle. Private communication by e-mail to R. Kieboom, 19-8-2010.
[7] T. Vervloet. A categorical approach to loops, neardomains and nearfields. Master's thesis (written in Dutch), Vrije Universiteit Brussel, 2009.
[]
[ "Renormalizable Quantum Gravity in Low Energy without Violating Unitarity", "Renormalizable Quantum Gravity in Low Energy without Violating Unitarity" ]
[ "Hajime Isimori \nGraduate School of Science and Technology\nNiigata University\n950-2181NiigataJapan\n" ]
[ "Graduate School of Science and Technology\nNiigata University\n950-2181NiigataJapan" ]
[]
We introduce new techniques that can preserve unitarity of the system including ghost particles. Negative norms of the particles can be involved in zero-norm states by constraints of the physical space. These are useful to apply the higher-derivative propagator for quantum gravity to suppress divergences of vacuum energy and graviton mass correction. The quantum effects are mainly depending on the ghost mass scale. As the scale can be chosen in any order, the observed cosmological constant is realized. Further, applying ghost partners for the standard model particles, quantum gravity with matter fields becomes renormalizable with power counting arguments.
null
[ "https://arxiv.org/pdf/1010.5122v1.pdf" ]
115,861,868
1010.5122
bb7b31226c7acb8962fa154bd17797ca4cd2fb0e
Renormalizable Quantum Gravity in Low Energy without Violating Unitarity

25 Oct 2010

Hajime Isimori
Graduate School of Science and Technology, Niigata University, 950-2181 Niigata, Japan

We introduce new techniques that can preserve unitarity of the system including ghost particles. Negative norms of the particles can be involved in zero-norm states by constraints of the physical space. These are useful to apply the higher-derivative propagator for quantum gravity to suppress divergences of vacuum energy and graviton mass correction. The quantum effects are mainly depending on the ghost mass scale. As the scale can be chosen in any order, the observed cosmological constant is realized. Further, applying ghost partners for the standard model particles, quantum gravity with matter fields becomes renormalizable with power counting arguments.

1 Introduction

There is a long way to go to unify general relativity and quantum mechanics in the framework of quantum gravity [1,2,3]. The two leading candidates for quantum gravity are string theory and loop quantum gravity. Unfortunately, since every candidate theory of quantum gravity has some deep problem [4], no theory is complete yet. It is usually thought that something new must happen at the Planck scale to make a consistent theory of quantum gravity. Nevertheless, that scale is too high to reach with current experiments, so there is no experimental hint supporting the statement. Meanwhile, in a cutoff scheme, the observed cosmological constant suggests that the cutoff scale of the vacuum energy should be around the neutrino mass, i.e. the micrometer scale. To accommodate this observation, a new method with such a small energy scale seems necessary. A perturbative approach to quantum gravity is appropriate for low energy [5,6,7]. Taking the flat-space background, one can quantize the weak gravitational field.
In this method, bad divergences appear in many Feynman diagrams because the coupling constant has negative mass dimension. For instance, the quantum correction to the graviton mass squared has a quartic divergence. If the cutoff scale is the Planck mass, the graviton mass becomes comparable to the Planck mass. This is similar to the Higgs mass problem, though that one is renormalizable and less problematic for physics. The problem of the graviton mass is more serious because observations of galaxies and clusters ensure a very long-range force, yet there is no natural way to protect the mass from quantum corrections. The quartic divergence is the worst one in perturbative quantum gravity at one loop. It is the same as the divergence of the vacuum energy. Hence, if some method can solve the cosmological constant problem, the problem of perturbative quantum gravity may also be solved. The simplest way is to take into account a ghost particle [8]. Owing to the opposite sign of its commutation relation, divergences induced by a normal particle (meaning one with positive norm) can be canceled. A difficult problem related to the ghost is the violation of unitarity, which indicates the violation of probability conservation due to negative norms. Although there are several discussions and possible solutions, e.g. [9,10], the problem still remains in the general understanding. This paper suggests a new solution to avoid unitarity violation by selecting a proper physical state relating normal and ghost particles. In particular, their mixing state has zero norm, which plays a key role in keeping the norm positive semi-definite. All the conditions of unitarity can be satisfied by the constraints defining the physical states. We propose the existence of a ghost partner for the graviton. To be consistent with large-scale gravity, the ghost should have a mass. In order to realize the higher-derivative propagator, the usual graviton ought to be massive as well, because there is a mismatch between the propagators of massless and massive gravitons.
A mass term for the graviton induces negative norms in general, so we choose the Pauli-Fierz mass term. Taking this term for both gravitons, all divergences are improved and the mass squared correction of the graviton behaves better. However, to achieve a consistent theory without fine-tuning, additional gravitons and their partners are necessary to increase the power of the propagator. They are assumed to have the same quantum numbers as the original graviton but different masses. If some conditions are satisfied, the theory can be super-renormalizable or completely finite. This can be seen as a modified version of the Lee-Wick model [12]. When we take a sufficient number of gravitons and impose some conditions, quantum effects including the vacuum energy are sufficiently suppressed. The sizes of the effects depend mainly on the masses of the gravitons instead of the cutoff scale. As unitarity violation can be avoided, the scales of the masses need not be very high, such as the Planck mass. Hence we obtain the correct value of the cosmological constant by taking the graviton masses around the neutrino mass scale. This implies that gravity is modified at the micrometer scale. For a Yukawa-type modification, the inverse square law of gravity is confirmed on scales longer than 56µm at 95% precision [13], and a more stringent bound of about 20µm is recently given by [14]. It is not easy to be consistent with these experiments. On the other hand, the loop calculation yields a small graviton mass which is not negligible even in the case giving the correct vacuum energy. Writing the cosmological constant λ and the gravitational constant G, the lightest graviton mass becomes about √ Gλ, which is of order 10 −10 pc −1 . This is also close to the experimental bound on the lightest graviton mass [15]. Thus, a theory of quantum gravity without fine-tuning makes modifications at short and long distances simultaneously, and both are near current experimental bounds.
In the sense of a possible experimental test, quantum gravity with multiple gravitons at low energy is interesting. The paper is organized as follows. In section 2, we explore the model of multi-gravitons with Pauli-Fierz mass terms; afterward, the gravitational wave for each graviton is canonically quantized. Section 3 introduces the interaction with fermions using the connection and covariant derivative to calculate the gravitational potential. Section 4 explains how to preserve unitarity in the presence of ghost particles of negative norm by selecting the physical states and wave packets. In section 5, the vacuum energy is calculated with the higher-derivative propagator, mentioning the modification of gravity at short distance to keep it viable with experiments of gravity and the cosmological constant. In section 6, the loop correction to the lightest graviton mass is calculated from one contribution of a Feynman diagram. At last, we have a short conclusion in section 7. In our notation, we use η µν = diag(−1, 1, 1, 1) and natural units c = ℏ = 1.

2 Quantization of gravitons and ghost gravitons

Throughout this paper, we assume the gravitational wave is quantized [16], which can be done in the same way as for other fundamental fields, especially similarly to the photon. Though this is not a standard method to quantize gravity, the scheme is convenient for applying the higher-derivative propagator and for examining the unitarity of the system. Some problems may arise since the quantization violates Lorentz invariance and leads to unusual interaction terms for the momentum operator. Further, we need to cancel out many terms inside the Lagrangian to quantize the gravitons. It looks very schematic but can be a powerful method. As an effective field theory, the system will be valid.

2.1 Lagrangian

Let us start with the Einstein-Hilbert action
L = (1/16πG) R + L mat , (1)
which leads to the Einstein equation G µν = 8πGT µν .
In the Minkowski space background, the metric can be expanded as
g µν = η µν + κh µν , g µν = η µν − κh µν + κ 2 h µ ρ h ρν − · · · . (2)
Similarly, √ −g can be written as
√ −g = 1 + (1/2)κh − (1/4)κ 2 h µ ν h ν µ + (1/8)κ 2 hh + · · · , (3)
where h = η µν h µν . To make the higher-derivative propagator, we explore a multi-graviton model. The gravitons are denoted by h (n) µν . All the gravitons are assumed to be massive with Pauli-Fierz mass terms
L mas = − (κ 2 /32πG) √ −g Σ N−1 n=0 (−1) n m 2 n (h (n) µν h (n)µν − h (n) h (n) ). (4)
Mass terms of ghost gravitons take opposite signs, and relating to those minus signs, ghost kinetic terms are modified by
L gho = − (κ 2 /16πG) √ −g Σ N/2−1 n=0 (h (2n+1)µν □h (2n+1) µν − 2h (2n+1) µν ∂ µ ∂ σ h (2n+1)σν + 2h (2n+1) µν ∂ µ ∂ ν h (2n+1) − h (2n+1) □h (2n+1) ), (5)
where we ignore total derivatives. Still, it is not enough to make the canonical quantization work for each field h (n) µν . To retain only the necessary terms, the following terms are taken for the cancellation:
L kin = − (1/√ −g)(κ 2 /32πG) Σ m,n,m≠n (h (m)µν □h (n) µν − 2h (m)µν ∂ µ ∂ σ h (n)σν + 2h (m)µν ∂ µ ∂ ν h (n) − h (m) □h (n) ). (6)
The energy momentum tensor is given by T µν = (2/√ −g)(∂ ρ [∂(L mat √ −g)/∂(∂ ρ g µν )] − ∂(L mat √ −g)/∂g µν ), then
(−1) n 8πGT µν = (1/2)κ(∂ ρ ∂ µ h (n) νρ + ∂ ρ ∂ ν h (n) µρ − □h (n) µν − ∂ µ ∂ ν h (n) ) − (1/2)κ(∂ ρ ∂ σ h (n)ρσ − □h (n) )η µν + (1/2)κm 2 n (h (n) µν − h (n) η µν ). (7)
Multiplying by ∂ µ on both sides and applying ∂ µ T µν = 0 as the first order approximation, we get ∂ µ h (n) µν − ∂ ν h (n) = 0. Plugging them into the equations above, we get
(□ − m 2 n )h (n) µν = −(−1) n (16πG/κ)(T µν + (1/3)(∂ µ ∂ ν /m 2 n − η µν )T ). (8)
They can be related to the equations of motion of a higher-derivative theory. Considering N = 2, they yield
(□ − m 2 0 )(□ − m 2 1 )h µν = (m 2 1 − m 2 0 )(16πG/κ)(T µν + (1/3)(∂ µ ∂ ν /m 2 0 − η µν )T ). (9)

2.2 Quantization

Hereafter, we choose the transverse-traceless gauge (TT-gauge) for all gravitational fields.
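The factorized operator in eq. (9) corresponds, in momentum space, to a Lee-Wick-type partial-fraction identity: the normal/ghost pair combines into a single propagator falling like 1/p 4 instead of 1/p 2 . The numerical sketch below (the mass values are arbitrary placeholders of ours) checks the identity and the improved ultraviolet behavior.

```python
# Partial fractions behind the higher-derivative propagator:
#   1/((p^2+m0^2)(p^2+m1^2))
#     = [1/(p^2+m0^2) - 1/(p^2+m1^2)] / (m1^2 - m0^2)
m0sq, m1sq = 1.0, 4.0   # placeholder masses squared

def pair(psq):
    # normal graviton minus ghost graviton, with the 1/(m1^2-m0^2) weight
    return (1.0 / (psq + m0sq) - 1.0 / (psq + m1sq)) / (m1sq - m0sq)

def hd(psq):
    # the combined higher-derivative propagator
    return 1.0 / ((psq + m0sq) * (psq + m1sq))

for psq in (0.1, 1.0, 10.0, 1e3, 1e6):
    assert abs(pair(psq) - hd(psq)) < 1e-8 * hd(psq)

# improved UV behaviour: p^4 times the propagator tends to a constant
assert abs(1e12 * hd(1e6) - 1.0) < 1e-2
```

The relative minus sign between the two terms is exactly the ghost sign of eq. (11) below; it is what softens the ultraviolet behavior of loop integrals.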
If the gravitational wave of each field can be quantized, we can write h (n) µν (x) = d 3 p (2π) 3 1 2E (n) p λ e (λ) µν (a (λ,n) p e (−1) n ip·x + a (λ,n) † p e −(−1) n ip·x ),(10) where E (n) p = p 2 + m 2 n . Creation and annihilation operators obey a (λ,n) p |0 = 0, [a (λ,n) p , a (λ ′ ,n ′ ) p ′ † ] = (−1) n δ nn ′ δ λλ ′ δ 3 (p − p ′ ).(11) Polarization tensors have the following properties [21]: e (λ) µν = e (λ) νµ , e (λ)µ µ = 0, p µ e (λ)µν = 0, e (λ) µν e (λ ′ )µν = δ λλ ′ .(12) In the TT-gauge, the Einstein tensor at second order of the expansion can be calculated as G (2) µν = −κ 2 ( 1 2 h ρσ ∂ µ ∂ ν h ρσ + h µρ h ρ ν + 1 4 h ρσ h ρσ η µν ).(13) Note that the last two terms are automatically canceled if the graviton is massless. To implement the canonical commutation relations for massive gravitons, they are modified by taking L can = κ 3 16πG √ −g ( m,n,m =n 1 2 h µν h (m) ρσ ∂ µ ∂ ν h (n)ρσ + 1 3 h µν h νρ h ρ µ + 1 4 hh µν h µν ). (14) As a convention, we set κ = √ 16πG. The energy momentum tensor now becomes T µν = n (∂ µ h (n) ρσ ∂ ν h (n)ρσ ). For the quantization of the gravitational wave, the Hamiltonian is given by H = d 3 x T 00 , then H = d 3 p (2π) 3 λ,n E (n) p a (λ,n) † p a (λ,n) p ,(15) where we omit the zero-point energy. This is a familiar form, and the commutation relations are as usual except for the minus signs in Eq. (11); writing P µ = (H, P ) with P = d 3 x T 0i , the identities e −iP ·x a (λ,n) p e iP ·x = a (λ,n) p e (−1) n ip·x provide the consistent spacetime dependence of h (n) µν (x). In addition, the energies of ghost particles are positive [17]. Here, let us note some aspects of this quantization scheme. First, the interaction part of the Hamiltonian violates Lorentz and CPT symmetries. In particular, matter-gravity couplings are important for seeing the effects of the violations. Various scenarios have been sought to observe Lorentz violation [18]. Other perturbative methods of quantum gravity, such as the Lagrange formalism [5] and the ADM action [19], rarely violate Lorentz symmetry, so this is a characteristic feature.
Secondly, interaction terms appear for the momentum operator, since G 0i and T 0i of the matter sector in a curved spacetime are not zero in general. In addition to the time evolution, the configuration of the wave packet gains space evolution through the interaction via gravitons. This is not discussed in the literature, and we do not know how it affects particle physics. The effect may be observable, as the size of the wave packet is related to the probability density. At any rate, the gravitational interaction is very weak, so it will not lead to problematic results. Another important thing to discuss is how to derive the mass, ghost kinetic, and cancellation terms. A more fundamental theory is needed to produce them. In a theory including higher curvature terms, they will appear naturally and the quantization can be performed without adding terms by hand. However, higher curvature terms make stronger divergences, and we cannot avoid fine-tuning to cancel them. To derive the system discussed here, one will need a more powerful theory. Propagator To calculate the vacuum energy and the graviton mass correction, let us consider concrete forms of the propagators for small N. In curved spacetime, the vacuum is not static or empty in general. We can define the vacuum to be empty at t = 0 by a (λ,n) p |Ω(0) = 0, but at a later time the background metric mixes the positive and negative frequency components [16]. However, calculations of quantum effects with the empty vacuum, written |0 , will be sufficient, because we only attempt order estimations in later sections. The propagator of the empty vacuum can be calculated as 0|T (h (n) µν (x)h (n) ρσ (y))|0 = (−1) n d 4 p (2π) 4 −iP µνρσ e (−1) n ip·(x−y) p 2 + m 2 n − (−1) n iǫ ,(16) where P µνρσ = 1 2 (η µσ η νρ + η µρ η νσ − 2 3 η µν η σρ ). From now on, we do not explicitly write ǫ.
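The alternating-sign sum over the tower of Pauli-Fierz propagators in Eq. (16) telescopes into a single higher-derivative propagator. A quick symbolic check of the scalar part (a sketch with sympy, not code from the paper) confirms the N = 2 combination and the faster 1/p^6 falloff of the N = 4 combination under the super-renormalizable mass condition:

```python
# Symbolic check that the alternating-sign sum of simple propagators
# telescopes into a higher-derivative propagator (scalar part only).
import sympy as sp

p2, m0, m1, m2 = sp.symbols('p2 m0 m1 m2', positive=True)

# N = 2: normal graviton minus ghost graviton.
combo2 = sp.together(1/(p2 + m0**2) - 1/(p2 + m1**2))
assert sp.simplify(combo2 - (m1**2 - m0**2)/((p2 + m0**2)*(p2 + m1**2))) == 0

# N = 4 with the super-renormalizable condition m3^2 = m0^2 - m1^2 + m2^2.
m3sq = m0**2 - m1**2 + m2**2
combo4 = sp.cancel(1/(p2 + m0**2) - 1/(p2 + m1**2)
                   + 1/(p2 + m2**2) - 1/(p2 + m3sq))
num, den = sp.fraction(combo4)
# Numerator linear in p^2, denominator quartic: falloff ~ 1/p^6.
assert sp.Poly(num, p2).degree() == 1
assert sp.Poly(den, p2).degree() == 4
print(sp.factor(num))
```

The factored numerator is proportional to (m0^2 + m2^2 + 2p^2), matching the structure of the N = 4 Feynman rule quoted below it.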
As an example, the Feynman rule of the propagator for N = 2 is D µνρσ = −i(m 2 1 − m 2 0 ) (p 2 + m 2 0 )(p 2 + m 2 1 ) P µνρσ .(17) This can make the theory renormalizable, and the number of counter terms will be finite. Since the purpose of this paper is to avoid fine-tuning, N = 2 is not enough. To make a super-renormalizable model, we use N = 4, assuming m 2 3 = m 2 0 − m 2 1 + m 2 2 ; then D µνρσ = −i(m 2 1 − m 2 0 )(m 2 1 − m 2 2 )(m 2 0 + m 2 2 + 2p 2 ) (p 2 + m 2 0 )(p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 ) P µνρσ .(18) In the same way, we can get the super-renormalizable model for any even N with the condition m 2 N −1 = N −2 n=0 (−1) n m 2 n . Adding one more condition for N ≥ 6, the theory becomes finite. For instance, when N = 6, the conditions for the finite-field theory are m 2 0 − m 2 1 + m 2 2 − m 2 3 + m 2 4 − m 2 5 = 0 and m 2 0 (m 2 2 + m 2 4 ) + m 2 2 m 2 4 − m 2 1 (m 2 3 + m 2 5 ) − m 2 3 m 2 5 = 0. In this case, the propagator is D µνρσ = −i(f 1 + f 2 p 2 + f 3 p 4 ) (p 2 + m 2 0 )(p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 )(p 2 + m 2 4 )(p 2 + m 2 5 ) P µνρσ ,(19) where f 1 , f 2 , f 3 denote functions of the masses. In the approximation m 1 ≈ m 3 and m 0 ≈ 0, it can be written as −i(m 2 2 − m 2 1 ) 2 (p 2 + m 2 1 )(3(m 2 2 − 2m 2 1 )p 2 + 2m 4 2 − 3m 2 1 m 2 2 ) (m 2 2 − 2m 2 1 ) 2 (p 2 + m 2 0 )(p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 )(p 2 + m 2 4 )(p 2 + m 2 5 ) P µνρσ ,(20) where we used m 2 > m 1 . With this propagator, all loop calculations become finite, and we do not need renormalization. Fermion interaction The extension of the fermion field to curved spacetime is formulated via the vielbein and covariant derivative. The vielbein transforms the generic metric tensor to the Minkowski metric, i.e. g µν = e µ a e ν b η ab . The connection for the orthonormal frame is written ω ab µ . Using this connection, the covariant derivative acting on a spinor is given by D µ ψ = ∂ µ ψ + i 4 ω ab µ σ ab ψ, where σ ab = − 1 2 [γ a , γ b ].
We choose the notation of gamma matrices such that {γ a , γ b } = −2η ab . As a general framework, we assume the torsion does not vanish. Extending the theory based on Cartan geometry, torsion and curvature can be implemented naturally. The Lagrangian of the fermion field in curved spacetime becomes L D = i 2 (−(D µ ψ)γ a e a µ ψ +ψγ a e a µ D µ ψ) − mψψ.(21) The connection can be written explicitly as ω µab = 1 2 e a ν (∂ µ e bν − ∂ ν e bµ ) − 1 2 e b ν (∂ µ e aν − ∂ ν e aµ ) + 1 2 e a ρ e b σ (∂ σ e cρ − ∂ ρ e cσ )e µ c .(22) Since the energy momentum tensor is obtained from T µν = e aν T a µ = − eaν √ −g ∂(L D √ −g) ∂ea µ [22], we have T µν = i 2 (D ν ψ)γ a e a µ ψ − i 2ψ γ a e a µ D ν ψ.(23) This is asymmetric if there is torsion. Around the Minkowski space background, the vielbein can be expanded as e a µ = δ a µ − κ 2 h a µ + · · · and e aµ = η aµ + κ 2 h aµ + · · · . Then the first order of the Hamiltonian density is H int = iκ 4 (∂ 0ψ )γ a h a 0 ψ − iκ 4ψ γ a h a 0 ∂ 0 ψ + κ 16 (∂ a h 0 b − ∂ b h 0 a )ψ(γ 0 σ ab + σ ab † γ 0 )ψ. (24) It yields the Feynman rule of the fermion-fermion-graviton vertex = − iκ 8 (E + E ′ )(γ µ η ν0 + γ ν η µ0 ) − κ 4i ǫ ijk (η µi η ν0 + η νi η µ0 )(p j − p ′j )Σ k ,(25) where Σ k = σ k ⊗ 1 2×2 . Newtonian gravity In the non-relativistic limit, the (0, 0) component of the fermion-fermion-graviton vertex is dominant, so ≈ iκ 2 M 1 M 2 P 0000 u † (p ′ 2 )u(p 2 )u † (p ′ 1 )u(p 1 ) 4 N −1 n=0 (−1) n q 2 + m 2 n ,(26) where M 1 and M 2 are the masses of the fermions. The calculation of the gravitational potential is then straightforward: V (r) = − κ 2 M 1 M 2 12πr N −1 n=0 (−1) n e −m n r .(27) Since m 0 is quite light while the others are much heavier, it can be approximated by V (r) ≈ − κ 2 M 1 M 2 12πr , which is 4/3 times larger than Newtonian gravity when G is the observed gravitational constant. This is the famous problem known as the vDVZ discontinuity. To see the problem in our framework, let us consider the Einstein equation (8).
It is sufficient to deal only with h (0) µν : ( − m 2 0 )h (0) µν = − 16πG κ (T µν − 1 3 T η µν ),(28) where we neglect the gauge-dependent term ∂ µ ∂ ν /m 2 0 . Considering a point source with T µν = Mη µ0 η ν0 δ 3 (x), the general solution is h (0) µν = 16πG κ d 4 p (2π) 4 2πMe ip·x δ(p 0 ) p 2 + m 2 0 (η µ0 η ν0 + 1 3 η µν ).(29) Then h (0) 00 = 16πG κ 2Me −m 0 r 12πr , h (0) ij = 16πG κ Me −m 0 r 12πr δ ij .(30) For the light bending, the angle of deflection can be estimated as α ≈ 4GM R ,(31) where R is the closest approach to the source. This is the same result as in general relativity, so we cannot change G to fit the gravitational potential (27). Commonly, it is considered that the nonlinear effect of quantum corrections can recover the smooth connection of massless and massive gravitational theories. The leading contribution (26) is reliable only when the distance is longer than the Vainshtein radius, which is defined by (m −4 0 R S ) 1/5 , where R S is the Schwarzschild radius [24]. This becomes quite long if the mass is extremely small; e.g., taking 1/m 0 to be the Hubble length, the Vainshtein radius of the Sun becomes about 100 kpc, which is longer than the size of the Milky Way. Then nonlinearity is important in the region of the solar system. Summing up all the quantum corrections, the result may correspond to the case of the massless graviton (see e.g. [25]). However, this scenario applies when the corrections from loop diagrams are not small compared to the leading order. If the usual propagator is used and the cutoff scale is the Planck mass, the loop corrections are comparable. In our case, ghost gravitons suppress the corrections, so the nonlinear effect is not important. That means the problem is serious, and the theory is inconsistent with experiments unless we can find another remedy. As noted in the previous section, there may be a more fundamental theory that leads to all the additional terms of the Lagrangian.
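The quoted 100 kpc figure for the Sun can be reproduced from the definition r_V = (m_0^{-4} R_S)^{1/5}; the sketch below assumes standard values for the Hubble constant and the solar Schwarzschild radius (not given in the paper):

```python
# Numeric check of the Vainshtein-radius estimate r_V = (R_S / m0^4)^(1/5)
# for the Sun, with 1/m0 set to the Hubble length. Constants are standard
# reference values assumed for this sketch.
c = 2.998e8            # speed of light, m/s
H0 = 70e3 / 3.086e22   # Hubble constant (70 km/s/Mpc) in 1/s
inv_m0 = c / H0        # graviton Compton length 1/m0 in metres
R_S = 2.95e3           # Schwarzschild radius of the Sun, m

r_V = (inv_m0**4 * R_S) ** 0.2    # Vainshtein radius in metres
r_V_kpc = r_V / 3.086e19          # metres per kiloparsec
print(f"r_V = {r_V_kpc:.0f} kpc")
```

The result is of order 100 kpc, consistent with the estimate in the text.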
If the fundamental theory does not involve the problem of the vDVZ discontinuity, the theory will be equivalent to the model of the massless graviton. For instance, in higher-derivative gravity without Pauli-Fierz mass, the discontinuity does not exist [23]. By modifying the kinetic terms or the strengths of the interaction terms, we can avoid the problem. A simple example is to take L vDVZ = − 1 8 √ −g (h (0)µν h (0) µν − 2h (0) µν ∂ µ ∂ σ h (0)σν + 2h (0) µν ∂ µ ∂ ν h (0) − h (0) h (0) ) + 1 8 √ −g m 2 0 (h (0) µν h (0)µν − h (0) h (0) ).(32) This changes the equations of motion so that the angle becomes α ≈ 16GM 3R . With this modification, the gravitational force and light bending are consistent with experiments when G = 3 4 G N , where G N is the measured gravitational constant. Ghost and unitarity Ghost particles generally violate unitarity because of negative norms. There are three conditions to preserve unitarity [26]: (i) the S-matrix of the whole state space V is unitary; (ii) the physical space V phys , a subset of V, is invariant under the S-matrix, namely SV phys = S † V phys = V phys ; (iii) the physical space has positive semi-definite metric, i.e. for any |phys ∈ V phys we must have phys|phys ≥ 0. Even when ghost particles are added, all of these can be satisfied by assuming mixed states of normal and ghost particles as the fundamental set. Additionally, specific wave packets are assumed to satisfy unitarity. With renormalizable condition We first consider N = 2, so the metric is h µν = h (0) µν + h (1) µν . In analogy with the longitudinal polarization of the photon, we will try to cancel the negative norms by constraining the system. Suppose a Gupta-Bleuler-type quantization which gives the constraint (a (λ,0) p + a (λ,1) p )|phys = 0. In this constraint, the ghost graviton always appears together with the normal graviton, so the negative norm will not appear. This is the key to satisfying the third condition of unitarity.
For the case of a gauge group, a general proof of unitarity is given by [26], and it is partly applicable to the case here. The constraint for the physical state will differ from the one for the photon, due to the different masses and wave functions. To be more precise, we define |h (n) µν = 1 √ 5 d 3 p (2π) 3 λ e (λ) µν a (λ,n) † p Φ (n) p |0 ,(33) where Φ (n) p are wave packets. The constraint above requires Φ (0) p − Φ (1) p = 0, so the wave packets of normal and ghost particles are the same. The constraint is rather simple and seems reasonable if the cancellation works between them; nonetheless, the problem is its dependence on the S-matrix. As it cannot satisfy the conditions of unitarity, the constraint needs to be modified. Alternatively, we take (h (0,+) µν + h (1,+) µν )|phys = 0, where h (n,+) µν = d 3 p (2π) 3 1 2E (n) p λ e (λ) µν a (λ,n) p .(34) The new constraint realizes independence from the S-matrix, since any interaction appears with h (0) µν + h (1) µν . If |h µν is a physical state, the wave packets are related by d 3 p(Φ 0 p / E (0) p −Φ 1 p / E (1) p ) = 0. When the wave packets are Gaussian, Φ (n) p = e −p 2 /2ω 2 n ( √ πωn) 3/2 , the relation becomes ω 0 U[ 1 4 , − 1 4 , m 2 0 2ω 2 0 ] − ω 1 U[ 1 4 , − 1 4 , m 2 1 2ω 2 1 ] = 0,(35) where U[a, b, z] is the confluent hypergeometric function of the second kind. Assuming m 0 /ω 0 ≪ 1 and m 1 /ω 1 ≪ 1, we get ω 1 ≈ 4 √ 2Γ 2 [ 5 4 ]ω 2 0 πm 1 .(36) In this way, we can get S|h µν = 0 by the constraint on the wave packets. Usually, time-independence is supposed to be a necessary condition for the conservation of unitarity. In fact, if there is a time evolution of |h µν , a new state |h ′ µν = |h (0) µν − |h (1) µν appears, and the state of the gravitational field turns out to be a mixture of |h µν and |h ′ µν . Similar to neutrino oscillation, this phenomenon occurs because normal and ghost gravitons have different masses. Although both states have zero norm, their product h µν |h ′µν is negative. Then the appearance of either |h µν or |h ′ µν is problematic.
Once one exists, the other is derived by the oscillation. However, it does not matter, since the probability to produce |h µν is zero, while |h ′ µν cannot satisfy the constraint of the physical state. Therefore the problem can be avoided if we add the constraint that |h µν does not exist in the initial condition. A proof of unitarity conservation will follow by subtracting zero-norm states from the entire physical space if they do not affect physics. After this, only positive definite metrics remain. Nonetheless, a confusion may occur, since the energy of the state |h µν is not zero. The energy is calculated by h µν |H|h µν = 1 2   e m 2 0 /2ω 2 0 m 2 0 K 1 [ m 2 0 2ω 2 0 ] √ πω 0 + e m 2 1 /2ω 2 1 m 2 1 K 1 [ m 2 1 2ω 2 1 ] √ πω 1   ,(37) where K is the modified Bessel function of the second kind. This energy is related to when the constraint of the physical state begins. Below that energy scale, there is no problem, since |h (0) µν indicates the ordinary gravitational wave, which is indirectly observed. Therefore, it is necessary to assume that the physical state of the graviton is first |h (0) µν and is later replaced by |h µν when the energy is beyond the scale at which |h µν can appear in the real world. The exact value of the lowest energy should be calculated by varying ω 0 and ω 1 with the constraint of Eq. (35). A subtlety remains, since the energy changes with the normalization factor, which cannot be determined due to the zero norm. In other words, the zero-norm state affects the observation indirectly, which is completely different from the case of the longitudinal photon. For now, as the determination of the factor seems not to be a crucial issue, we just postulate that the normalizations of zero norms are the same as those of positive definite metrics; then there is no ambiguity. The phenomenon of the change of the state from the gravitational wave to the zero-norm state makes an important prediction.
Since the S-matrix with zero-norm states becomes trivial, all probabilities including these states are zero. Therefore, the processes that would increase the energy of the gravitational wave beyond about m 1 /2, where the state is changed to |h µν , are canceled out. That implies the energy of the gravitational wave has an upper bound; concretely, there is no way to increase the energy beyond the bound. This prediction is not so significant for gravitons, but if we apply the same scenario to other elementary particles, like the electron and neutrino, it is noteworthy. As discussed in section 6, particles of the standard model may have ghost partners, so the prediction of an upper energy bound would be interesting. With super-renormalizable condition We use the same techniques for N > 2, with the conditions which lead to higher powers in the propagator. For instance, when N = 4, the super-renormalization condition is m 2 3 = m 2 0 − m 2 1 + m 2 2 . In this case, the mass m 2 should be heavier than m 3 , as m 0 < m 1 . Then, we cannot use the cancellation of the negative norm of |h (2) µν by constraining to the zero norm 1 √ 2 (|h (2) µν + |h (3) µν ). Instead, we require two steps for the physical state: the first is the same as for N = 2, and the second uses the constraint n h (n,+) µν |phys = 0. To suppress the ghost state |h (3) µν , a condition for a mass appears; precisely, m 3 should be larger than the lowest energy, about (m 1 + m 2 + m 3 )/4, of |h µν . It can be calculated by searching for the minimum of h µν |H|h µν = 1 4 3 n=0 e m 2 n /2ω 2 n m 2 n K 1 [ m 2 n 2ω 2 n ] √ πω n ,(38) varying the parameters ω n and taking the condition n (−1) n ω n U[ 1 4 , − 1 4 , m 2 n 2ω 2 n ] = 0. When m 2 and m 3 are of similar value, ω 2 and ω 3 are approximately equal. Then the constraints for the masses can be approximated as 0 < m 1 < m 2 / √ 2 and m 2 < m 3 . Let us discuss a more general context. The super-renormalization condition for general N is m 2 N −1 = N −2 n=0 (−1) n m 2 n . Ghost masses are assumed to be larger as the number increases, i.e. m 2n−1 < m 2n+1 for any n.
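Each term of the mass condition above is proportional to ω n U[1/4, −1/4, m 2 n /2ω 2 n ] because the Gaussian-packet integral over 1/√E p reduces to the confluent hypergeometric function U. A numerical sketch with mpmath (the proportionality constant Γ(3/2) 2^{1/4} is derived here for the check, not stated in the text):

```python
# Numerical verification that the Gaussian wave-packet integral
#   \int_0^inf p^2 e^{-p^2/2w^2} (p^2+m^2)^{-1/4} dp
# equals Gamma(3/2) * 2^(1/4) * w^(5/2) * U(1/4, -1/4, m^2/2w^2),
# so each constraint term is proportional to w * U(1/4, -1/4, m^2/2w^2).
from mpmath import mp, quad, hyperu, gamma, exp, inf, mpf

mp.dps = 30

def packet_integral(w, m):
    w, m = mpf(w), mpf(m)
    return quad(lambda p: p**2 * exp(-p**2 / (2*w**2))
                * (p**2 + m**2)**mpf('-0.25'), [0, inf])

def via_hyperu(w, m):
    w, m = mpf(w), mpf(m)
    z = m**2 / (2*w**2)
    return (gamma(mpf(3)/2) * mpf(2)**mpf('0.25') * w**mpf('2.5')
            * hyperu(mpf('0.25'), mpf('-0.25'), z))

for w, m in [(1, 0.5), (2, 3)]:
    a, b = packet_integral(w, m), via_hyperu(w, m)
    assert abs(a - b) / b < mpf('1e-12')
```

The identity follows from the integral representation of U together with the Kummer transformation U(3/2, 9/4, z) = z^{-5/4} U(1/4, -1/4, z).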
Defining |h [2N ] µν = 1 √ 2N 2N −1 n=0 |h (n) µν , we require that the states change in a stepwise fashion from |h (0) µν to |h [2] µν , · · · , |h [2N ] µν , depending on the energy scale where each state can appear. A constraint for the physical state |h [N ′ ] µν is given by N ′ n=0 h (n,+) µν |phys = 0. All ghost masses should satisfy m 2N +1 > 1 2N +2 2N +1 n=0 m n , to cancel each negative norm by involving it in zero-norm states. Except for |h (0) µν , the norms are zero and unaffected by the S-matrix, so nothing appears in the physical world. For each individual ghost state, there is at least one condition for it to be involved in zero-norm states; on the other hand, normal gravitons do not need to be part of such states. For instance, the mass of h (4) µν in N = 6 can be lighter than the lowest energy of |h µν . There is no other constraint for h (4) µν , so |h (4) µν can appear in the final state. The behavior of this particle is uncommon, because the available momentum is limited; namely, the range is from m 4 to 1 6 n m n . Depending on the parameter region, the motion of h (4) µν may be restricted to the non-relativistic regime; further, as it is neutral under any gauge interaction, its lifetime will be long enough, so it is a new candidate for dark matter. When N ≥ 6, all quantum effects become finite under the two conditions given in the propagator section; even in this case, the above scheme can be applied to preserve unitarity with the constraints n h (n,+) µν |phys = 0. The allowed region of mass parameters will be more constrained, but it is possible to obtain the physical state. This will be useful to make quantum effects significantly suppressed. Vacuum energy The vacuum energy diverges in the standard calculation of quantum field theory, and it induces the cosmological constant problem. Using ghost particles, the propagator gains a higher power, so the divergence becomes weaker. Using the cutoff scale Λ, the vacuum energy without ghosts is of order Λ 4 , and it is improved by ghost particles with the super-renormalization condition to a value of order the fourth power of the ghost masses. Experimentally, the cosmological constant is observed to be about 29meV 4 . As we can insert any value for the ghost mass, it is not difficult to reach the observed value. However, the additional gravitons force a change of the gravitational potential to G N M 1 M 2 r (1 + n (−1) n e −m n r ). The strongest bound on Newtonian gravity for the type G N M 1 M 2 r (1 + e −m y r ) is roughly m y > 1/20µm [14].
Then, we cannot use the mass scale derived from the cosmological constant, λ 1/4 ≈ 1/85µm, despite it being the natural choice to be consistent with the observation. Actually, it is not an easy task to find an allowed parameter region satisfying all the constraints of experiments and unitarity. We abandon a complete analysis but show an example of a consistent solution. The vacuum energy can be calculated by T µν vac = 1 8πG 0|G µν |0 .(39) To leading order, it becomes T µν vac = lim x−y→0 n 0|T (∂ µ h (n) ρσ (x)∂ ν h (n)ρσ (y))|0 .(40) This can be rewritten as T µν vac = lim x−y→0 n d 4 p (2π) 4 −i(−1) n p µ p ν p 2 + m 2 n − (−1) n iǫ e ip·(x−y) .(41) When N = 2 and a momentum cutoff is taken, it integrates to T 00 vac = 8(m 2 1 − m 2 0 )Λ 2 128π 3 , T ii vac = − 8(m 2 1 − m 2 0 )Λ 2 384π 3 .(42) Considering Λ to be the Planck scale, m 2 1 − m 2 0 should be O(10 −38 )GeV. Apparently, m 1 cannot be small enough to realize large-scale gravity. On the other hand, if we consider N = 4 with the condition of super-renormalization, the energy-momentum tensor becomes T µν vac = d 4 p (2π) 4 −i(m 2 1 − m 2 0 )(m 2 1 − m 2 2 )(m 2 0 + m 2 2 + 2p 2 )p µ p ν (p 2 + m 2 0 )(p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 ) .(43) The leading contribution is proportional to the logarithm of Λ: T µν vac = m 2 1 (m 2 2 − m 2 1 ) 16π 3 ln[ Λ m 2 ]η µν + · · · .(44) It has the wrong sign, and the vacuum energy becomes negative. This is because the contribution from the ghost particles is larger than that of the normal gravitons. Although N = 4 is the smallest number that is super-renormalizable, it cannot yield a consistent result. The simplest predictive model is N = 6 with the super-renormalization condition.
To get a rough estimate, we take m 0 ≈ 0 and m 2 ≈ m 4 ; then the energy-momentum tensor in vacuum becomes T µν vac ≈ −i d 4 p (2π) 4 −2((m 2 2 − m 2 5 ) 2 − m 2 1 m 2 3 )p 4 + 3m 2 1 m 2 3 m 2 5 (p 2 + m 2 2 ) (p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 )(p 2 + m 2 5 ) η µν .(45) The leading contribution is T µν vac ≈ −4π 2 (2π) 4 ((m 2 1 + m 2 3 − m 2 2 ) 2 − m 2 1 m 2 3 ) ln[ Λ m 2 ]η µν + · · · .(46) Compared with the Yukawa-type modification, the effective mass is approximately 1/14-1/10 µm −1 , as shown in Fig. 1 (right). Increasing the tuning level, larger mass sets appear, although they are less interesting, since the problem of the lightest graviton mass arises, as discussed in the next section. The cosmological constant problem is much improved compared to the fine-tuning solution without ghost particles. However, if future experiments verify the inverse square law at shorter range, we will need to increase the tuning level depending on the range. Strictly speaking, as the 85µm scale of the natural prediction is already rejected, it does not solve the cosmological constant problem at all. Using larger N, we can impose the finite-field conditions exactly, but similar tuning will be necessary. It may be important to consider an additional mechanism or symmetry to reduce the amount of vacuum energy. Graviton mass There are many models that predict a modification of gravity at short range by using extra dimensions. Our model with ghost particles can be distinguished from them, since it predicts modified Newtonian gravity at the cosmological scale via the lightest graviton mass. From the loop calculations, the parameters of the graviton masses are related to the lightest mass. The estimation of the lightest mass is also important, as it is near the experimental bound. Conversely, if the mass of the lightest graviton can be measured by observation, the parameter set of the other graviton masses can be restricted.
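The short-distance behaviour of the predicted potential can be illustrated with the example mass set quoted with Fig. 1, (m 1 , . . . , m 5 ) ≈ (21.3, 37.1, 21.4, 30.5, 37.3) meV; the conversion uses the standard value ħc ≈ 197.327 eV nm, assumed here. The alternating sum of Yukawa factors tends to 1 (Newtonian) at large separation and to 0 at short separation:

```python
# Relative strength of the modified gravity, sum_n (-1)^n e^{-m_n r},
# for the example mass set of Fig. 1. hbar*c = 0.197327 eV*um is a
# standard constant assumed here, not a value from the paper.
import math

hbar_c_eV_um = 0.197327                           # eV * micrometre
masses_meV = [0.0, 21.3, 37.1, 21.4, 30.5, 37.3]  # m0 ... m5
inv_um = [m * 1e-3 / hbar_c_eV_um for m in masses_meV]  # masses in 1/um

def strength(r_um):
    """Ratio of the modified potential to the Newtonian one at r (um)."""
    return sum((-1)**n * math.exp(-k * r_um) for n, k in enumerate(inv_um))

print(strength(0.01), strength(1.0), strength(1000.0))
```

At millimetre scales and beyond the potential is Newtonian, while below roughly 10 micrometres the alternating exponentials nearly cancel and gravity is strongly suppressed.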
After finding modified gravity in either the short or the long range, the model would make stronger predictions. The most important effect in the loop calculation of the graviton mass correction comes from the four-graviton vertex. The Feynman rule of the vertex is given by H (4) int = − 1 2 κ 4 ∂ µ (h µν h νρ h ρσ )∂ 0 h σ0 + · · · .(48) The Feynman rule of the first term is estimated as = 1 4! (− 1 2 κ 2 (p 2 + p 3 ) µ 1 (p 4 ) 0 η ν 1 µ 2 η ν 2 µ 3 η ν 3 µ 4 η ν 4 0 + 1 ↔ 2 ↔ 3 ↔ 4). (49) Using this rule, the graviton mass correction becomes ∆m 2 ∼ − i 2 κ 2 d 4 p (2π) 4 p 2 0 N n=0 (−1) n p 2 + m 2 n .(50) The estimation of all the diagrams is difficult, and we do not attempt it here. In quantum gravity without ghosts, the correction becomes ∆m 2 ∼ Λ 2 , which needs strong fine-tuning. When N = 2, it is ∆m 2 ∼ κ 2 Λ 2 m 2 1 . This case also needs fine-tuning to make the graviton mass light enough to be consistent with the solar system. To maintain gravity on scales larger than 1pc, ∆m should be less than 10 −20 meV. If a model is super-renormalizable, we can get a realistic graviton mass. The correction for N = 4 is ∆m 2 ∼ − 1 2 κ 2 d 4 p (2π) 4 −2im 2 1 (m 2 1 − m 2 2 )p 2 p 2 0 (p 2 + m 2 0 )(p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 ) .(51) Then the propagator becomes D µνρσ ∼ −iP µνρσ m 2 1 (m 2 1 − m 2 2 )(m 2 2 + p 2 ) p 2 (p 2 + m 2 1 )(p 2 + m 2 2 )(p 2 + m 2 3 ) + m 2 1 (m 2 1 − m 2 2 )(m 2 2 + p 2 )∆m 2 .(52) Since ∆m 2 ∼ 1 8π 2 κ 2 m 4 1 ln[Λ 2 /m 2 1 ], it is easy to get ∆m < 1pc −1 when the ghost masses are chosen to fit the cosmological constant. Inserting m 1 = λ 1/4 and taking Λ as the Planck mass, it is of order 4 × 10 −31 meV, or 6 × 10 −11 pc −1 . The calculation for the case N = 6 is also straightforward. In the previous section, one possible solution was given as an example. Using that parameter set, we found ∆m ∼ 1.4 × 10 −28 meV, or 2.2 × 10 −8 pc −1 . Since the case of the higher-derivative theory is independent of the DGP (Dvali-Gabadadze-Porrati) model, the graviton mass limit is λ g ≥ 100Mpc [15].
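The conversions between meV and inverse parsecs quoted here (and the 85 µm scale quoted earlier) follow from ħc and the parsec; a quick numeric check with standard constant values (assumed, not taken from the paper):

```python
# Unit conversions behind the quoted numbers: a mass E corresponds to an
# inverse length E/(hbar*c). hbar*c and the parsec are standard constants.
hbar_c = 1.97327e-7    # eV * m
parsec = 3.0857e16     # m

def meV_to_inv_pc(m_meV):
    """Mass in meV expressed as an inverse length in 1/pc."""
    return m_meV * 1e-3 / hbar_c * parsec

# Delta m ~ 4e-31 meV  <->  ~6e-11 pc^-1   (N = 4 estimate)
# Delta m ~ 1.4e-28 meV <-> ~2.2e-8 pc^-1  (N = 6 estimate)
print(meV_to_inv_pc(4e-31), meV_to_inv_pc(1.4e-28))

# lambda^(1/4) for lambda ~ 29 meV^4 corresponds to a range of about 85 um.
lam_quarter_meV = 29 ** 0.25                          # ~2.32 meV
length_um = hbar_c / (lam_quarter_meV * 1e-3) * 1e6   # ~85 micrometres
print(length_um)
```

Both quoted pairs of numbers are mutually consistent under this conversion.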
Then the solution of the model is marginal with respect to the experiments. Taking a parameter set which has larger graviton masses, it would be inconsistent with observations of cosmological structure. Hence, ghost masses are favored to be in the range of 10-100meV to suppress the lightest graviton mass. The above calculations reflect just one contribution to the quantum correction; actually, there are thousands of contributions from similar terms and diagrams. It is inferred that, summing up the other contributions, the lightest mass exceeds the experimental bound; then we will need some tuning with the bare mass m 0 . The model does not make strong predictions for the modified scales of gravity but implies favored ranges for the modification. Lastly, let us comment on the divergences induced by interactions with the matter sector. They are usually quartic divergences and non-renormalizable. Infinite counter terms are needed to cancel all the divergences, even when higher curvature terms are added. One possible solution is to consider supergravity [27], and it is known that supergravity can reproduce finite results for up to three-loop diagrams. But at general loop order, no one knows whether the result can be finite. A different approach is to consider a higher-derivative propagator for all the matter fields; then the theory can be made renormalizable by power counting arguments. Following the same procedure as in section 4, unitarity can be preserved when ghost particles are introduced for every other particle. A critical difference is that the particles of the standard model have been investigated accurately at high energy, so the ghost masses must be very large. For example, cosmic ray experiments find that the neutrino can have energy 10 12 GeV; then the mass of the ghost partner of the neutrino must be larger than this value. This mass scale is too high to render the loop correction of the graviton mass small enough.
Although the method is more powerful than supergravity, it is still impossible to derive a consistent mass correction for the lightest graviton without fine-tuning. There is also a problem when the vacuum energy is calculated for the particles of the standard model: if the ghost masses substantially exceed the energy scale of milli-electron volts, the cosmological constant cannot be explained. Thus, a more powerful method is needed that can suppress the vacuum energy and the graviton mass correction when the matter sector is added. For instance, by combining higher-derivative theory and supergravity, some strong prediction may be found. In this vein, if there is a strong tool, one can predict modified gravity via the quantum effects. Conclusion We have invented new techniques that can preserve unitarity for a system including ghost partners. They are so powerful that quantum gravity with matter fields can be renormalizable if ghost partners are introduced for every quantum field. A prediction of modified gravity in both short and long ranges is presented for the model N = 6 with the super-renormalization condition, assuming the contribution of pure gravity is dominant and the tuning level is weak. It can evade current experiments covering the inverse square law of Newtonian gravity. If future experiments can find modified gravity at short range, it can be a clue for ghost partners as well as for extra dimensions. A difference from the models with extra dimensions is that it predicts a modification of gravity on the scale of galaxy groups and clusters. The scheme with ghost partners is indeed powerful, but so far it has not been widely used, as unitarity violation is crucial for physics. We have resolved a main difficulty of the problem, and it can be a good approach to the hierarchy problem. By choosing the proper physical state and setting the relation between wave packets, a positive semi-definite norm and independence from the S-matrix are realized.
As we did not provide a strict proof of unitarity, there might be some faults. In particular, it is curious that zero-norm states indirectly affect physics by giving an upper bound on the energy. This is not seen in other quantum theories and may possibly meet a problem. All the things we have done in this paper are weak statements and predictions of possible new physics toward quantum gravity at low energy.
In order to get T 00 ≈ 29meV 4 , masses are estimated around 3meV if they have the same order without parameter tuning. On the other hand, if we tune the parameters e.g. m 1 ≈ m 3 ≈ m 2 / √ 3, masses can be in any order depending on the level of tuning. Morem2 meV m3 meV m4 meV m5 meV Left figure shows the parameters sets which satisfy the cosmological constant and the constraints of unitarity. The solid line indicates the mass of m 1 . Large masses can appear when the conditions of finite-field theory are approximately satisfied since they suppress the leading order proportional to the logarithm of the cutoff scale which is taken as the Planck mass. In right figure, we describe the modified gravity at short distance using one parameter set with relatively large masses obtained from the left figure. It is compared with the Yukawa-type potential.concretely, giving masses around the condition of finite-field theory, the magnitude of the leading contribution can be suppressed. For the general case (45), we performed numerical analysis to find that all masses can exceed 20meV andFig. 1(left) exhibits some parameter sets. Taking a set of masses, we predict the modified Newtonian gravity byV (r) ≈ − G N M 1 M 2 r (e m0 r − e m 1 r + e m 2 r − e m 3 r + e m 4 r − e m 5 r ). (47) One interesting example is (m 1 , m 2 , m 3 , m 4 , m 5 ) ≈ (21.3, 37.1, 21.4, 30.5, 37.3)meV. . S Carlip, Rept. Prog. Phys. 64885S. Carlip, Rept. Prog. Phys. 64, 885 (2001). . A Ashtekar, Curr. Sci. 882064A. Ashtekar, Curr. Sci. 88, 2064 (2005). . C Kiefer, Annalen Phys. 15129C. Kiefer, Annalen Phys. 15, 129 (2005). . C Rovelli, arXiv:gr-qc/9803024C. Rovelli, arXiv:gr-qc/9803024. . B S Dewitt, Phys. Rev. 1621195B. S. DeWitt, Phys. Rev. 162, 1195 (1967). M J Veltman, gQuantum theory of Gravitation,h In Les Houches 1975, Proceedings, Methods In Field Theory. AmsterdamM. J. 
[]
[ "Experimental evidence of auxetic features in seismic metamaterials: Ellipticity of seismic Rayleigh waves for subsurface architectured ground with holes", "Experimental evidence of auxetic features in seismic metamaterials: Ellipticity of seismic Rayleigh waves for subsurface architectured ground with holes" ]
[ "Stéphane Brûlé [email protected] \nMénard, ChaponostFrance\n\nRoute du Dôme\n69630Chaponost\n", "Stefan Enoch \nInstitut Fresnel\nAix Marseille Univ\nCNRS\nCentrale Marseille\nMarseilleFrance\n", "Sébastien Guenneau \nInstitut Fresnel\nAix Marseille Univ\nCNRS\nCentrale Marseille\nMarseilleFrance\n", "\nAvenue Escadrille Normandie Niemen\n13013Marseille\n" ]
[ "Ménard, ChaponostFrance", "Route du Dôme\n69630Chaponost", "Institut Fresnel\nAix Marseille Univ\nCNRS\nCentrale Marseille\nMarseilleFrance", "Institut Fresnel\nAix Marseille Univ\nCNRS\nCentrale Marseille\nMarseilleFrance", "Avenue Escadrille Normandie Niemen\n13013Marseille" ]
[]
Structured soils with regular meshes of metric-size holes implemented in the first ten meters of the ground have been theoretically and experimentally tested under seismic disturbance over the last decade. Structured soils with rigid inclusions embedded in a substratum have also been developed recently. The influence of these inclusions in the ground can be characterized in different ways: redistribution of energy within the network with focusing effects for seismic metamaterials, wave reflection, frequency filtering, reduction of the amplitude of seismic signal energy, etc. Here we first provide a time-domain analysis of the flat lens effect in conjunction with some form of external cloaking of Rayleigh waves, and then we experimentally show the effect of a finite mesh of cylindrical holes on the ellipticity of the surface Rayleigh waves at the level of the Earth's surface. Orbital diagrams in the time domain are drawn for the surface particle velocity in the vertical (x, z) and horizontal (x, y) planes. These results enable us to observe that the mesh of holes locally creates a tilt of the axes of the ellipse and changes the direction of particle movement. Interestingly, changes of Rayleigh wave ellipticity can be interpreted as changes of an effective Poisson ratio. However, the proximity of the source is also important to explain the shape of the ellipses. We analyze these observations in terms of wave mode conversions inside the mesh, and we propose to broaden the discussion on the complexity of seismic wave phenomena in structured soils such as soil foundations and on the coupling effects specific to the soil-structure interaction.
null
[ "https://arxiv.org/pdf/1809.05841v1.pdf" ]
73,534,465
1809.05841
a55585d8468c397c953cb92e0f59b97aced9df71
Experimental evidence of auxetic features in seismic metamaterials: Ellipticity of seismic Rayleigh waves for subsurface architectured ground with holes

Stéphane Brûlé ([email protected]), Ménard, Route du Dôme, 69630 Chaponost, France
Stefan Enoch, Institut Fresnel, Aix Marseille Univ, CNRS, Centrale Marseille, Avenue Escadrille Normandie Niemen, 13013 Marseille, France
Sébastien Guenneau, Institut Fresnel, Aix Marseille Univ, CNRS, Centrale Marseille, Avenue Escadrille Normandie Niemen, 13013 Marseille, France

Keywords: Rayleigh wave ellipticity; mode conversion; seismic metamaterials; soil-structure interaction; Poisson ratio

Structured soils with regular meshes of metric-size holes implemented in the first ten meters of the ground have been theoretically and experimentally tested under seismic disturbance over the last decade. Structured soils with rigid inclusions embedded in a substratum have also been developed recently. The influence of these inclusions in the ground can be characterized in different ways: redistribution of energy within the network with focusing effects for seismic metamaterials, wave reflection, frequency filtering, reduction of the amplitude of seismic signal energy, etc. Here we first provide a time-domain analysis of the flat lens effect in conjunction with some form of external cloaking of Rayleigh waves, and then we experimentally show the effect of a finite mesh of cylindrical holes on the ellipticity of the surface Rayleigh waves at the level of the Earth's surface. Orbital diagrams in the time domain are drawn for the surface particle velocity in the vertical (x, z) and horizontal (x, y) planes. These results enable us to observe that the mesh of holes locally creates a tilt of the axes of the ellipse and changes the direction of particle movement.
Interestingly, changes of Rayleigh wave ellipticity can be interpreted as changes of an effective Poisson ratio. However, the proximity of the source is also important to explain the shape of the ellipses. We analyze these observations in terms of wave mode conversions inside the mesh, and we propose to broaden the discussion on the complexity of seismic wave phenomena in structured soils such as soil foundations and on the coupling effects specific to the soil-structure interaction.

I. Introduction

Following two full-scale experiments on the control of surface waves conducted in France in 2012 [1,2], structured soils made of cylindrical voids or soft/rigid inclusions [3-7] have been coined seismic metamaterials [8]. One of the full-scale experiments was performed near Lyon, with metric cylindrical holes that allowed the identification of various effects, such as a Bragg effect and a redistribution of energy inside the grid, which can be interpreted as the consequence of an effective negative refraction index [8]. The pattern of the grid of holes was as follows: a grid 20 m in width and 40 m in length, made of 23 holes (2 m in diameter, 5 m in depth, triangular grid spacing 7.07 × 7.07 m). This flat lens effect for surface seismic waves in [1] is reminiscent of what Veselago [9] and Pendry [10] envisioned for light in electromagnetism. Bearing in mind the different ways of characterizing the effects of a structured ground in civil engineering, we decided to go a step further in the analysis of the physical phenomena reported in [1] by showing their complexity. This opens new avenues for discussion and interpretation. We consider that we are at the beginning of a period that will demonstrate the significant role of buried structures (local geology, high density of foundations, etc.) on the free-surface dynamic response.
To illustrate the interaction of seismic waves with structured soils, researchers have to bring in specific theoretical and original experimental approaches, in particular because of the complexity of wave propagation in the real Earth's surface layers [1,2]. Except for a few case studies with full numerical modeling, a complete study of both the structure and its deep foundations (piles, shear walls) is rather complex, and usually the problem is split into two canonical sub-problems: kinematic and inertial interactions [11]. Kinematic interaction results from the presence of stiff foundation elements on or in soil, which causes motions at the foundation to depart from free-field motions. Inertial interaction refers to displacements and rotations at the foundation level of a structure that result from inertia-driven forces such as base shear and moment. New developments during the last decade concern the interaction of structured soil with the seismic signal propagating in the superficial Earth layers [12], and are thus linked to an active action on the kinematic effect described above. In this article we propose new arguments, based on experimental facts, concerning the polarization change of Rayleigh surface waves. We show that the holes behave like resonators that can lead to a change of propagation mode inside the grid. To do this, we propose to move away from the conventional approach based on the distribution of seismic energy in the grid, and to focus on the advantages brought, for the design of buildings, by the simple polarization change of the surface waves. Indeed, buildings are especially sensitive to the horizontal displacement of the soil at the free surface [11].

II. Ellipticity of Rayleigh waves

Surface waves are produced by body waves in media with a free surface and propagate parallel to the surface.
For an epicenter remote from the zone of interest, the body waves emitted by the seismic source (shear and compression waves) are assimilated to plane waves because of their large radius of curvature. In essence, the amplitude of surface waves decreases quickly with depth. For our purpose, it is enough to consider an elastic, homogeneous half-space made of soil (Supplemental Figure 1). The equations governing the displacement of a soil particle are described in §1 of the Supplemental Material. Rayleigh waves are theoretically polarized in the vertical plane (x, z), and the particle motion describes an ellipse with a vertical major axis and retrograde motion, opposite to the direction of wave propagation (Supplemental Figure 2). There is a depth z = −0.19Λ (where Λ = 2π/k is the wavelength, inversely proportional to the wavenumber k) at which the horizontal component u_x vanishes, whereas the vertical component u_z is never null. At this depth, the amplitude of u_x changes sign, and for greater depths the particle motion is prograde. With increasing depth, the amplitudes of u_x and u_z decrease exponentially, with u_z always larger than u_x (Supplemental Figure 3). We conducted a parametric study of the Poisson's ratio for the ellipse of motion at the free surface (z = 0). The size of the ellipse grows with the Poisson's ratio, and we notice that the dimensions of the ellipse increase very quickly when ν tends towards 0.5 (FIG. 1). The geometric transformation is not just a homothety: as the Poisson's ratio ν takes values between 0.01 and 0.499, the ratio of maximum horizontal to maximum vertical displacement varies between 0.54 and 0.77. It is worth noticing that controlling the Poisson's ratio opens an interesting avenue to cloaking, in a way analogous to what has been proposed with mechanical metamaterials at the micrometer scale [13].
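The dependence of the surface ellipticity on the Poisson's ratio can be checked numerically. The sketch below is an illustration, not the authors' code: it solves the classical Rayleigh characteristic equation for a homogeneous half-space by bisection and evaluates the horizontal-to-vertical displacement ratio χ at the free surface, using the standard closed form χ = (2 − ξ)/(2√(1 − γξ)) with ξ = (V_R/V_S)² and γ = (V_S/V_P)².

```python
import math

def rayleigh_ellipticity(nu):
    """Surface horizontal/vertical displacement ratio of a Rayleigh wave
    on a homogeneous elastic half-space with Poisson's ratio nu."""
    gamma = (1.0 - 2.0 * nu) / (2.0 * (1.0 - nu))  # (Vs/Vp)^2
    # Rayleigh characteristic equation R(xi) = 0, xi = (Vr/Vs)^2, root in (0, 1):
    # R(0) = -16(1 - gamma) < 0 and R(1) = 1 > 0, so bisection converges.
    R = lambda xi: xi**3 - 8.0 * xi**2 + 8.0 * xi * (3.0 - 2.0 * gamma) - 16.0 * (1.0 - gamma)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if R(mid) < 0.0 else (lo, mid)
    xi = 0.5 * (lo + hi)
    chi = (2.0 - xi) / (2.0 * math.sqrt(1.0 - gamma * xi))
    return math.sqrt(xi), chi  # (Vr/Vs, horizontal/vertical ratio)

for nu in (0.01, 0.25, 0.499):
    vr_vs, chi = rayleigh_ellipticity(nu)
    print(f"nu = {nu:5.3f}  Vr/Vs = {vr_vs:.4f}  chi = {chi:.3f}")
```

For ν = 0.25 this gives the textbook values V_R/V_S ≈ 0.919 and χ ≈ 0.68, and over ν = 0.01-0.499 the ratio decreases monotonically, spanning roughly the interval quoted above.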
Let us stress again that the above discussion is based upon a simplified soil model; most real Earth-motion records are more complex, due to the deviation of real soils from the ideal elastic half-space (fine horizontal strata, increase of density with depth, material damping describing all anelastic effects, etc.). Lamb (1904) [14] extended Rayleigh's results, which were limited to the free propagation of waves, by finding the complete response of a homogeneous half-space to both a line and a point force [15,16]. The rigorous solution for the ground-surface displacement is expressed as a sum of branch line integrals and the residues of the Rayleigh poles. The former corresponds to the contribution of body waves and the latter to that of Rayleigh waves. The evaluation of the residues of the Rayleigh poles is comparatively simple, while the evaluation of the branch line integrals is rather complex. Therefore, Harkrider (1964) proposed a « normal mode solution » in which the displacement is expressed as a sum over Rayleigh poles only, neglecting the branch line integrals [17]. For a vertically oscillating source on the surface of an elastic, homogeneous and isotropic half-space, Miller and Pursey (1955) showed that two thirds of the total energy is converted into Rayleigh waves, while the remaining part goes to body waves [19]. Moreover, along the surface, surface waves are attenuated in proportion to the square root of the distance, versus the square of the distance for body waves (Ewing et al., 1957) [18]. Thus, Rayleigh waves become an order of magnitude greater than body waves when the distance from the source exceeds three times the wavelength [20]. Experimentally, we observed this phenomenon during the 2012 full-scale test [1]. In Figure 4 of the Supplemental Material, we have schematically represented the different types of waves produced by a punctual vertical impact at the free surface of the Earth.
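The geometric-attenuation argument above can be made concrete with a toy comparison, an illustration under the stated 1/√r and 1/r² scalings rather than a simulation: normalizing both wave types to the same amplitude at a reference offset r₀ (the choice of r₀ below is an assumption), the Rayleigh/body amplitude ratio grows as (r/r₀)^{3/2}.

```python
# Toy geometric-spreading comparison along the surface:
# Rayleigh amplitude ~ r^(-1/2), body-wave amplitude ~ r^(-2).
def amplitude_ratio(r, r0):
    """Rayleigh/body amplitude ratio at offset r, both normalized to 1 at r0."""
    rayleigh = (r0 / r) ** 0.5
    body = (r0 / r) ** 2
    return rayleigh / body  # = (r / r0) ** 1.5

wavelength = 31.8           # m, illustrative Rayleigh wavelength (cf. FIG. 1)
r0 = 0.5 * wavelength       # hypothetical reference offset
for n in (1, 2, 3, 5):
    r = n * wavelength
    print(f"r = {n} wavelengths: Rayleigh/body amplitude ratio ~ {amplitude_ratio(r, r0):.1f}")
```

With this (assumed) half-wavelength reference offset, the ratio passes one order of magnitude near three wavelengths, consistent with the rule of thumb quoted in the text.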
III. Vertical point force

This is typically the case of dynamic compaction, a ground densification technique [11] based on the fall of a mass of several tons. The radiation diagram for pressure waves shows a slight inclination with respect to the horizontal plane (x, y); we therefore assume that the horizontal component of the motion is very strong at distances close to the impact, which in our case can mean several tens of meters. The full Green's function describing the displacement field due to the Rayleigh and bulk waves radiated by a force source located at (x = 0, z = z_0) and satisfying the momentum equation is developed in §2 of the Supplemental Material.

IV. Description of field test

The experimental grid of the test, held in 2012 near Lyon in France, is made of 23 holes distributed along five discontinuous lines of self-stable boreholes 2 m in diameter (FIG. 2). The depth of the boreholes is 5 m and the grid spacing is 7 m. The P-wave velocity was estimated between 600 and 650 m/s for the earth material near the surface. The artificial source consists of the fall of a 17-ton steel pounder from a height of about 12 m, generating clear transient vibration pulses. In this case the depth of the source is about z_0 = 3 m (see Supplemental Equations), in a crater made after 6 successive impacts. The typical waveform of the source in the time domain looks like a second-order Ricker wavelet. In this letter, we present data recorded for source 1, located 30 m from the long axis of the grid. As suggested by Aki and Richards [15], this is a "near seismic field" because all the sensors are located within three wavelengths of the source, which is decisive for the interpretation of the experimental results. The signal is characterized by a mean frequency of 8.15 Hz, so the grid is mainly "sub-wavelength". At 5 m from the impact, the peak ground acceleration is around 0.9g (where g = 9.81 m·s⁻² is the gravity of Earth), which is significant but necessary to compensate for the strong attenuation with distance in soils (see §3 of the Supplemental Material for more details on the sensors). On each seismogram we identify, in the time domain, the signature of Rayleigh waves within the global duration of the main signal (0.375 s [1]). After a few iterations on orbit graphics, this corresponds to several unambiguous criteria: a transient signal which is more energetic, a duration of 0.1 to 0.15 s, an ellipsoidal motion identified in at least two of the three planes of space, and the predominance of the motion in the (x, z) plane. In the frequency domain, we basically considered the grid of holes as a filter, without any consideration of the initial soil properties (wave velocity, pattern of the grid, etc.). We simply calculated the magnitude in dB of three transfer functions, one per component x, y and z, defined as the spectral ratios of the ground particle velocity for a couple of sensors (A, B) (§3 and Figure 5).

V. Experimental results with an emphasis on elliptical polarization

In the time domain we present the results of the preliminary land streamer test (yellow line in Figure 5 of the Supplemental Material), and we have selected three seismograms for extraction of the Rayleigh waves: B1, outside the grid of holes, C1 at the edge of the grid, and F1 inside the grid (FIG. 2). In Figure 6 (Supplemental Material) we present the different components of the recorded particle velocity.

FIG. 2. Plan view of the field-site layout [1]. Green disks are cylindrical holes, red disks are different source locations (S1 is tested here) and squares are the seismic sensors. Top right, the mesh of holes.

We select a set of two doublets: a sensor located between the source and the grid compared to a sensor inside the grid, and the latter compared to a sensor located behind the mesh [22].
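The component-wise transfer functions described above are simply spectral ratios expressed in decibels. A minimal sketch, with synthetic signals standing in for the field records and the usual 20·log₁₀ magnitude convention assumed for amplitude ratios:

```python
import numpy as np

def transfer_function_db(sig_a, sig_b, dt):
    """Magnitude in dB of the transfer function between two sensors:
    20*log10(|B(f)| / |A(f)|), with A upstream and B downstream of the grid."""
    freqs = np.fft.rfftfreq(len(sig_a), d=dt)
    spec_a = np.abs(np.fft.rfft(sig_a))
    spec_b = np.abs(np.fft.rfft(sig_b))
    eps = 1e-30  # guard against empty spectral bins
    return freqs, 20.0 * np.log10((spec_b + eps) / (spec_a + eps))

# Synthetic example: the "downstream" record is the upstream one halved,
# so the transfer function should sit at 20*log10(0.5) ~ -6 dB at every bin.
rng = np.random.default_rng(0)
dt = 0.002                       # 500 Hz sampling rate, hypothetical
upstream = rng.standard_normal(2048)
downstream = 0.5 * upstream
freqs, H = transfer_function_db(upstream, downstream, dt)
print(f"mean level: {H.mean():.2f} dB")
```

In the paper the same quantity is plotted for the initial soil and for the holey ground, and a −3 dB offset of the reference curve is used as the efficiency threshold.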
Thanks to a preliminary land streamer test with seismic sensors located along a single line (at coordinates (20, 0), (−10, 0) and (−30, 0) m), carried out before digging the holes, we can compare the transfer functions before and after the structuration of the soil (see Supplemental Material, Figure 5). In Supplementary Figure 6(A) we can observe, for the sensors at 10 and 20 m, that the orbits are subhorizontal, i.e. vibrating mainly in the plane (x, y) with an elongation along x. The orbits are gradually verticalized with distance (50 and 70 m). Compared with the theoretical case of vertical ellipses for Rayleigh waves, this suggests that this characteristic regime of particle motion at the surface only takes place beyond a distance of one P-wavelength. From Supplementary Figure 6 we obtain identifiable ellipses of contrasted geometries, and one should notice that the directions of the main axes of the ellipsoid, conventionally described in the bottom part of FIG. 3, and the axis ratios can change drastically. We therefore adopt a plan view to represent the orientation and the geometry of the intersection of the ellipsoid with the horizontal (x, y) plane (FIG. 3, top). In FIG. 3, the ellipses, which represent the intersections of the ellipsoids with the horizontal plane, provide three main pieces of information: vertical orientation, particle-velocity magnitude, and ellipse azimuth in the horizontal plane. A fourth piece of information is the prograde or retrograde character of the motion (Figure 6 of the Supplemental Material), but we observe that it is quite unstable, so we do not develop this information further. At first sight, there is a great heterogeneity of ellipses, both in geometry and in amplitude. We recall that we expect ellipses with their major axis oriented vertically, but here we observe a 90° rotation of this axis for the majority of the ellipses located between the grid and the source, and also inside the grid.
That is, the essential part of the vibration is horizontal at the free surface, and the dark red coloration of the ellipses illustrates this information, according to the graphic convention of FIG. 3. Behind the mesh of holes and below it, i.e. in the left and bottom parts of FIG. 3, the geometry of the ellipses in the vertical plane is either close to that of a circle (black color) or shows a verticalization of the main axis (blue color). In terms of amplitudes, the ellipses are much more energetic inside the grid, and between the grid and the source, than elsewhere. We know that real soils strongly attenuate the signal energy with offset [1], but here we notice a concentration of energy inside the grid. Two "shadow zones" are identified: at the top left of the figure and in the bottom right quarter. We are unable to identify azimuth families as clearly as for the other parameters. However, the overall motion of the ellipses suggests a double rotation of the horizontal axes of the ellipses in the (x, y) plane, at both y-interfaces of the grid of holes. Additional information can be obtained from the preliminary seismic test, because the ellipses for the sensors at 10 and 20 m are mainly oriented in the x-direction and the y-component is very tiny. The test with holes causes a tilting of the ellipse axes and involves a second horizontal component in the area before the grid. Spectral analysis (FIG. 4) brings the following elements. We present the results for three sensors (see §IV). The first transfer function represents information relating to the signal entering the grid of holes from the right (FIG. 4) and the second, that of the signal going out of the device on the left. We have calculated the magnitude in dB of the transfer function (Eq. 1) for the initial soil (solid blue line) and for the structured soil with holes (solid red line). We have also drawn the magnitude of the original-soil transfer function minus 3 dB (gray dotted line) to illustrate the efficiency of the holey ground.
We consider the results acquired with the land streamer (solid blue line) as the reference curve to compare with the others; it is the spectral signature of the soil with its initial peculiarities. The other aspects of the energy distribution in the structure (effective negative index and flat lensing, seismic metamaterials, dynamic anisotropy) have already been discussed [1]. For the input-side transfer function, in the left column of FIG. 4, the results show a significant change in the shape of the magnitude over the 1-9 Hz frequency range for the three components of the sensor. In the horizontal (x, y) plane, amplification is observed overall from 1 to 4 Hz, and de-amplification from 4 to 7 Hz. For the z-component, amplification is observed from 2 to 7.5 Hz, and de-amplification from 1 to 2 Hz and from 8 to 9 Hz. For the output-side transfer function, in the right column of FIG. 4, one can notice a de-amplification over the broad bands from 1 to 5 Hz and from 1 to 9 Hz for the horizontal components x and y of the sensors, respectively. For the z-component, de-amplification is observed from 0.5 to 3.2 Hz. Roughly speaking, the signal is much more attenuated horizontally, and over a broader range of frequencies, at the output than at the input of the mesh of holes.

VI. Discussion on transfer function and velocity contrast

On the basis of the data acquired on the shape of the ellipsoidal movement of the surface waves and of the spectral study showing the amplification of the x, y and z components as a function of frequency, we suggest correlations and interpretations. Thanks to the geological features, we make the assumption of a half-space as soil model. We have identified the signature of the surface waves on the seismograms resulting from a vertical impact source, so the theoretical ellipses can be compared to the recorded data. The shape and distribution of the ellipses show that the grid of holes does not only modify the energy distribution, which was already known from previous studies [1], but also changes the polarization of the seismic waves.

FIG. 4. Transfer functions for the x, y and z components of the sensors, for the signal entering the grid (left column) and leaving it (right column) [22]. The light blue bands on the graphs show the frequency ranges for which a de-amplification is measurable after digging the holes.

Indeed, we observe a clear amplification of the horizontal component of the particle velocity inside the grid, with a tilting of most of the ellipses: theoretically vertical, the major axis has turned by π/2, for the preliminary test but also for the ground with holes. The land streamer shows that below one wavelength at the mean frequency, we are still in the "near field" (vicinity of the source), with an unstabilized regime of particle motion. For the ground with holes, this could be correlated with the horizontal amplification shown by the x- and y-component input transfer functions for frequencies below 4 Hz. We give two explanations for this observation. First, the right border of the grid of holes acts like a seismic reflector, by analogy with optics, and changes the wave polarization. This interpretation could also explain the thickening, in the horizontal plane, of the ellipses located between the source and the grid, as the combination of two wave fields, an incident and a reflected one. Various authors [23,24,25] describe the influence of a horizontal layer of finite thickness over a half-space, and of the velocity contrast between them, on the ellipticity of Rayleigh waves: there is a frequency dependence of the wave motion (prograde or retrograde) and hence of the ellipse shape. In our case, we could imagine such a situation with the existence of two layers (initial soil and structured soil), but with a vertical interface. For the second explanation, we propose that the local resonance of the holes causes an energy conversion through horizontal vibration modes of the holes in the (x, y) plane. The vertical amplification is non-null inside the mesh; we suggest that the cause is the vertical vibration mode of the holes. Another way to interpret these data in the grid, and in the area between the grid and the source, is to invoke a Love mode conversion.
In the area located to the left of the grid of holes, the major axis of the ellipses is mainly verticalized. This can be interpreted as the crossing of a second reflector (the left border of the mesh) and hence a change of polarization, or as the consequence of a strong energy dissipation in the horizontal plane inside the grid itself. However, regardless of the interpretation of the physical phenomena, there is a clear attenuation of the horizontal component of the motion at the exit of the grid. We believe this is an essential result for the idea of protection against vibrations.

VII. Perspective on seismic metamaterials with dynamic effective elasticity tensors

The full-scale experiment on a subwavelength device near the city of Lyon has already provided valuable information on the distribution of seismic energy [1]. In the present article, we show the complexity of the wave propagation in a structured soil by analyzing the surface particle velocity in 3D. The recorded data suggest that various physical phenomena take place, including changes in polarization and wave mode conversions. For civil engineering purposes, a strong attenuation of the horizontal component of the motion is observed behind the grid, on the opposite side from the seismic source. Otherwise, we suspect that the holes in the ground act as Helmholtz resonators with a large aperture [26], especially for a vertical stress. According to the geometry of the real cavities in this experiment, the theoretical Helmholtz frequency of a single hole is of the order of 10 Hz. This article also raises the concept of media with effective properties. Indeed, we could interpret the change of ellipse shape as the consequence of an effective Poisson's ratio (FIG. 5 and FIG. 3; more details in §5 of the Supplemental Material), and this opens interesting avenues in the design of seismic metamaterials with a tunable Poisson's ratio, similarly to what has been experimentally achieved with pentamode metamaterials at the micrometer scale [13], with potential applications far beyond those of electromagnetic metamaterials [27]. Indeed, large-scale mechanical metamaterials with auxetic properties could find applications in earthquake protection [28]. Finally, we point out that our experimental data can be interpreted in terms of cloaking of Rayleigh waves via anomalous resonances [29] induced by the holes. Indeed, considering as in [1] that the soil periodically structured with boreholes behaves like an effective medium with strong dynamic anisotropic features (such as hyperbolic media), one can invoke space-folding arguments to reinterpret the data in FIG. 4 as some form of cloaking of the source. This concept, which is still in its infancy and thus beyond the scope of the present article, could find applications in seismology similar to those proposed in [30] in electromagnetics.

FIG. 5. Horizontal-to-vertical displacement ratio χ at the free surface versus Poisson's ratio, and cartography with outlines drawn around the sensors for which the geometry of the ellipses can be interpreted as a negative Poisson's ratio.

It is interesting to note that the conversion of Rayleigh waves recently studied in small-scale experiments [31,32] could well be revisited to unveil radical changes of Rayleigh wave ellipticity in forests of trees [33], in light of our findings. We believe the future of seismic metamaterials is bright, and we hope that the extreme dynamic effective features reported here for soils structured with holes will lead to further analogies with composite structures such as those introduced by Milton and Cherkaev more than twenty years ago [34]. Structured soils could then potentially match any prescribed elasticity tensor in a dynamic regime, and this would be a natural path to a seismic cloak.
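The order-of-magnitude Helmholtz estimate quoted above can be reproduced with the textbook resonator formula f = (c/2π)·√(S/(V·L_eff)). The numbers below are illustrative assumptions (air-filled cavity, flanged-opening end correction L_eff ≈ 1.7a), not parameters given in the paper:

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, effective_neck_length):
    """Classical Helmholtz resonance f = (c / 2*pi) * sqrt(S / (V * L_eff))."""
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * effective_neck_length))

# One borehole of the Lyon grid: 2 m diameter, 5 m deep.
a = 1.0                   # hole radius (m)
depth = 5.0               # hole depth (m)
S = math.pi * a ** 2      # opening area (m^2)
V = S * depth             # cavity volume (m^3)
L_eff = 1.7 * a           # flanged end correction (assumption)
c_air = 343.0             # sound speed in the air column (m/s, assumption)
f = helmholtz_frequency(c_air, S, V, L_eff)
print(f"estimated Helmholtz frequency: {f:.1f} Hz")
```

Even with these crude assumptions the estimate lands in the tens of hertz, i.e. the "order of 10 Hz" quoted in the text and comparable to the 8.15 Hz mean frequency of the source.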
Structured soils could then potentially match any prescribed elasticity tensor in a dynamic regime, and this would be a natural path to a seismic cloak. FIG. 1 . 1(A) Example of a diagram of particle's motion on the vertical plane for Rayleigh waves at different depths. Here υ = 0.25, = 10 , Λ ! = 31.8 . (B) Parametric dimensionless study of ellipse pattern versus Poisson's ratio value. ℬ the Fourier transforms of signal recorded with sensor A and B. As shown in FIG. 4, we calculated ! ! and ! ! , the seismic signal passing through the grid from the right to the left of the array of holes. For a given sensor and for the same time sampling, data are normalized by the maximum of ! ( ), ! ( ) or ! ( ). Results presented are the projection of the particle's velocity on three planes: ( , ), ( , ) and ( , ). Supplementary Figure 6(A) shows the ellipses of the landstreamer sensors located at 10, 20, 50 and 70 m from the source and Supplementary Figure 6(B) presents those of sensors B1, C1 and F1. At the impact point, the signal emitted is less than 0.4 s and we have identified for all ellipses, 1 to 2 cycles included in a duration of 0.1 to 0.15 s. FIG. 2. Plan view of field-site layout FIG. 3 . 3Top view of ellipses with graphic conventions for ellipsoid and ellipses at the bottom part of the figure. Source is located at ( = −40, = 0). The dotted-line purple rectangle represents the part of the grid of holes coinciding with sensors location. In this representation, we can observe the azimuth of the major axis of the ellipsoid intersecting the horizontal plane. Bottom left, the three axes of the ellipsoid: and are respectively the largest axis in ( , ) and the smallest one in the ( , ) plane; is the intermediate one in( , ). Bottom center, the pattern of the ellipses in the ( , ) plane ( = 0) is shown as ( / ). In blue, the ellipses are stretched upwards and in dark red, in the horizontal plane. 
[Caption of FIG. 3, continued: Bottom right, the thickness of the line also indicates the ellipses' proportions in the horizontal plane.] Acknowledgements. SG is thankful for a visiting position in the Mathematics Department at Imperial College London supported by EPSRC program grant EP/L024926/1.
[]
[ "Thermodynamic signatures of edge states in topological insulators", "Thermodynamic signatures of edge states in topological insulators" ]
[ "A Quelle \nInstitute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands\n", "E Cobanera \nInstitute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands\n", "C Morais Smith \nInstitute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands\n" ]
[ "Institute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands", "Institute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands", "Institute for Theoretical Physics\nCenter for Extreme Matter and Emergent Phenomena\nUtrecht University\nLeuvenlaan 4, 3584 CE Utrecht, The Netherlands" ]
[]
Topological insulators are states of matter distinguished by the presence of symmetry protected metallic boundary states. These edge modes have been characterised in terms of transport and spectroscopic measurements, but a thermodynamic description has been lacking. The challenge arises because in conventional thermodynamics the potentials are required to scale linearly with extensive variables like volume, which does not allow for a general treatment of boundary effects. In this paper, we overcome this challenge with Hill thermodynamics. In this extension of the thermodynamic formalism, the grand potential is split into an extensive, conventional contribution, and the subdivision potential, which is the central construct of Hill's theory. For topologically non-trivial electronic matter, the subdivision potential captures measurable contributions to the density of states and the heat capacity: it is the thermodynamic manifestation of the topological edge structure. Furthermore, the subdivision potential reveals phase transitions of the edge even when they are not manifested in the bulk, thus opening a variety of new possibilities for investigating, manipulating, and characterizing topological quantum matter solely in terms of equilibrium boundary physics.
10.1103/physrevb.94.075133
[ "https://arxiv.org/pdf/1601.03745v2.pdf" ]
119,116,334
1601.03745
4923f9c030ffc559aa6fb63b5bea611835fdc1df
Thermodynamic signatures of edge states in topological insulators. A. Quelle, E. Cobanera, C. Morais Smith. Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands. (Dated: August 4, 2016) Topological insulators are states of matter distinguished by the presence of symmetry protected metallic boundary states. These edge modes have been characterised in terms of transport and spectroscopic measurements, but a thermodynamic description has been lacking. The challenge arises because in conventional thermodynamics the potentials are required to scale linearly with extensive variables like volume, which does not allow for a general treatment of boundary effects. In this paper, we overcome this challenge with Hill thermodynamics. In this extension of the thermodynamic formalism, the grand potential is split into an extensive, conventional contribution, and the subdivision potential, which is the central construct of Hill's theory. For topologically non-trivial electronic matter, the subdivision potential captures measurable contributions to the density of states and the heat capacity: it is the thermodynamic manifestation of the topological edge structure.
Furthermore, the subdivision potential reveals phase transitions of the edge even when they are not manifested in the bulk, thus opening a variety of new possibilities for investigating, manipulating, and characterizing topological quantum matter solely in terms of equilibrium boundary physics. I. INTRODUCTION Topological insulators (TI's) are phases of electronic matter protected by time-reversal symmetry [1][2][3][4][5]. Here, the topological part pertains to the presence of time-reversal conjugate pairs of boundary states that are robust, that is, stable against perturbations that do not break the protecting symmetry. In TI's, the transition between a topologically trivial and a non-trivial phase is usually described in terms of band inversion, where the gap between two energy bands closes and creates an avoided crossing hosting the protected boundary states. This phenomenon is seen in HgTe quantum wells, for example, which undergo a topological phase transition as a function of quantum well thickness [6]. These edge modes have been detected through transport measurements [7]. In the three dimensional case, the boundary modes are also conveniently described spectroscopically using ARPES [8]. However, a thermodynamic description of topological boundary states is missing. A problem one encounters is that band topology is determined for infinite, translationally invariant systems in terms of Bloch Hamiltonians [9,10], whereas the edge states of the system are only found in finite systems with boundaries. The bulk-boundary correspondence describes this connection between band topology and edge states. If one tries to apply the thermodynamic formalism to topological phases of matter, one immediately discovers that there is no thermodynamic bulk-boundary correspondence. 
The thermodynamic potentials only depend on the energy levels of the Hamiltonian and the density of states, while the topology of the bands depends on the eigenstates: two models can have the same spectrum in the bulk, but different topological characteristics. Since in conventional thermodynamics the energy is additive with respect to the extensive variables like entropy S, volume V , and particle number N , the thermodynamic contribution of the edge states is lost in the conventional thermodynamic limit. In this work, we show that the solution to this conundrum lies in Hill's refinement of conventional thermodynamics [11]. As computed from statistical mechanics, the thermodynamic potentials (entropy or any of the various free energies) do not typically scale linearly with the extensive variables of the system due to finite size and boundary effects. In order to account for this feature within the thermodynamic framework, Hill collects the deviations from linear scaling in a new state function, the subdivision potential. To show how this works, we first give the necessary details from Hill's thermodynamics in Sec. II. We then discuss in Sec. III how this relates to the more traditional method of treating boundary effects due to Gibbs. Finally, in Sec. IV we apply the developed theory to the Bernevig-Hughes-Zhang (BHZ) model for HgTe quantum wells, to show how this works in practice. Our main results are presented in Sec. V: We show that the thermodynamic density of states and the specific heat signatures of the topological edge states can be experimentally detected. Moreover, we find that the topological phase transition is accompanied by a thermodynamic phase transition on the boundary of the system that has no counterpart in conventional thermodynamics. Our conclusions are summarised in Sec. VI. II. HILL THERMODYNAMICS Let us consider a finite system of size V , in contact with an environment at temperature T and electronic chemical potential µ. 
We take the extensive variable V to be the number of sites in the lattice associated to a tight-binding model of a TI. Formally, V must be a fluctuating parameter, which can be achieved by considering a reservoir of ions capable of becoming attached to the lattice. Then, conjugate to V there is a variable ν characterizing the thermodynamic response of the band structure to an increase in the number of lattice sites [12]. For a system with these independent state variables, one uses the grand potential Φ. In Hill's thermodynamics, it still obeys the conventional relation dΦ = −S dT − ν dV − N dµ, (1) as in ordinary thermodynamics. Hence, the connection with the microscopic behaviour is made through the statistical-mechanical partition function, Φ = −k_B T ln {Tr exp [−(H − µN)/k_B T]}, (2) with k_B denoting the Boltzmann constant and H the Hamiltonian of the TI. However, the differential equation for Φ does not integrate to −νV by way of Euler's theorem, because non-linear scaling with the extensive variable V is allowed within Hill's framework. The new thermodynamic variable ν̂ = −Φ/V determines the subdivision potential X as X = −(ν̂ − ν)V. (3) Hill thermodynamics was originally developed for small systems, where highly non-linear thermodynamic potentials are computed by considering an independent ensemble of such systems [11]. The approach also works if the individual small systems are not independent, since it allows for a systematic computation of potentials from statistical mechanics [13][14][15]. Interestingly, Hill thermodynamics is just as necessary for systems as large as gravitationally bound systems [16], where conventional thermodynamics fails to apply because the gravitational force is long-ranged and universally attractive.
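For a quadratic (tight-binding) Hamiltonian, the trace in Eq. (2) factorises over the single-particle levels, so the grand potential reduces to the standard free-fermion sum. The sketch below evaluates it for an illustrative nearest-neighbour chain (the model and all variable names are for illustration only, not the BHZ computation of the paper):

```python
import numpy as np

def grand_potential(eigvals, mu, T, kB=1.0):
    """Evaluation of Eq. (2) for a quadratic Hamiltonian: the trace
    factorises over single-particle levels eps_i, giving
    Phi = -kB*T * sum_i log(1 + exp(-(eps_i - mu)/(kB*T)))."""
    x = -(np.asarray(eigvals) - mu) / (kB * T)
    # np.logaddexp(0, x) = log(1 + e^x), numerically stable for large |x|
    return -kB * T * np.sum(np.logaddexp(0.0, x))

# illustrative single-particle Hamiltonian: an open nearest-neighbour chain
L = 50
H = -np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
eps = np.linalg.eigvalsh(H)

phi_low_T = grand_potential(eps, mu=0.0, T=1e-4)
# as T -> 0 at mu = 0, Phi approaches the filled-sea energy sum of eps < 0
```

Since ∂Φ/∂T = −S < 0, the same routine also shows Φ decreasing with temperature, consistent with Eq. (1).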
What makes the subdivision potential important for TI's is the strong dependence of the spectrum on the boundary conditions, due to the bulkboundary correspondence. Since the boundary is small compared to the bulk, systems with a strong behavioural dependence on boundary conditions are in a sense small themselves, and hence Hill's thermodynamics is the natural framework to describe them. III. GIBBS AND HILL THERMODYNAMICS A recurring theme in this work, is that Hill thermodynamics describes features in topological insulators that cannot be described by effective boundary theories. In order to shed further light on this statement, we now give a detailed description of the Hill formalism. We then contrast this approach to the traditional Gibbs method of effective boundary theories. Finally, we indicate where the Gibbs method breaks down in the case of topological insulators, and why. The basic physical assumption in thermodynamics is the thermodynamic identity dE = T dS − pdV + µdN,(4) where T, p, µ denote, respectively, temperature, pressure and chemical potential. It relates the average energy, which completely determines the system, to the thermodynamic variables. Conventional thermodynamics assumes the energy to be extensive; however, it is unable to describe the non-local behaviour of topological phase transitions. If one relaxes extensiveness, Eq. (4) is no longer straightforwardly integrated. Nevertheless, Hill realised that this problem may be overcome in a very astute manner: consider a macroscopic number N of independent copies of the system, and allow this number of copies to vary. The thermodynamic identity for the total system then reads dE t = T dS t − pN dV + µdN t −pV dN ,(5) where the subscript t stands for the total system, V is the volume of an individual subsystem, and −pV is a formal thermodynamic response of the system to changes in N . It is important to note thatp might well depend on V itself, but the leading term inpV should be linear in V . 
Now, we consider the total system at fixed T, µ, V, and since E_t must be linear in 𝒩, using Euler's theorem we can integrate Eq. (5) to get E_t = T S_t − p̄V 𝒩 + µN_t. (6) The energy of an individual system may be obtained by dividing Eq. (6) by 𝒩. Using that S = S_t/𝒩 and N = N_t/𝒩, this gives E = T S − p̄V + µN. This result holds for any V, T, µ system, and we have actually integrated Eq. (4) for this case. The non-extensive behaviour is naturally incorporated, since p̄ can depend on V. The deviations of Hill's thermodynamics from the conventional formalism can be naturally separated by writing E = T S − pV + µN + X. (7) Here, X = (p − p̄)V defines Hill's subdivision potential; it is an extra degree of freedom that characterises the non-extensiveness of the system. By inserting the ansatz Φ(µ, T, W L)/L = φ_0(µ, T) + φ(µ, T) W (8) into the Hill formalism, as in the main text, we are clearly separating the bulk from the boundary effects in the free energy. Gibbs also developed a method to describe boundary effects in thermodynamics, which was originally used to describe classical fluids [17], and will also be used in this work. In 1878, Gibbs described a thermodynamic approach to surface tension relying on the hypothesis that bulk and boundary can be treated as independent systems in some approximate sense. In Gibbs' approach, the free energy of the fluid system acquires a term proportional to a suitable power (e.g. 2/3) of the volume. The phenomenological success of Gibbs' approach is remarkable, since there is no sharp surface associated to any actual microscopic fluid system. In Gibbs' approach, based on conventional thermodynamics, the area of the boundary has to be an extensive thermodynamic variable in itself. This implies that one treats the surface of the system as a separate thermodynamic system with its own energetics, independent from the bulk. Consider, for example, a bulk system B with a boundary b.
The bulk system will have energy U_B = S_B T_B − νV + N_B µ_B, and the boundary has its own energy U_b = S_b T_b − γA + N_b µ_b. Applying the equations of thermodynamic equilibrium, T_B = T_b = T and µ_B = µ_b = µ, the total energy U reads U = T(S_B + S_b) − νV − γA + µ(N_B + N_b). Considering the grand potential Φ_i = U_i − S_i T_i − µ_i N_i for both subsystems, and defining the total grand potential Φ = Φ_B + Φ_b, we obtain Φ = −νV − γA. (9) Because the bulk and the boundary are separate systems, with their own grand potential, the thermodynamic identity reads dΦ = −S dT − N dµ − ν dV − γ dA. (10) This is crucially different from the Hill approach: although Hill also uses Eq. (9), the thermodynamic identity is still given by Eq. (4), which does not contain the dA term, so that the bulk and boundary are thermodynamically connected in a natural way. In this case, the γA term and possible additional terms define the subdivision potential X. It is also possible to consider the bulk and boundary as a connected system in the Gibbs approach. In that case, one writes A(V, µ, T), so that dA = (∂A/∂T) dT + (∂A/∂µ) dµ + (∂A/∂V) dV. By using this identity, Eq. (10) becomes formally equivalent to the standard thermodynamic identity. However, the starting point is fundamentally different, and writing A(V, µ, T) is tantamount to distilling a boundary theory from the total Hamiltonian. It does mean that if a sensible boundary theory can be written down, the Gibbs and Hill approaches will yield the same results. This coincidence is a powerful tool in analysing topological boundary effects. Although the subdivision potential X, and hence the boundary behaviour, can be obtained from Φ by looking at the different volume scalings, there is no clear way to separate finite-size effects from topological behaviour.
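The volume scalings themselves are directly accessible at the level of Φ: by the ansatz of Eq. (8), a linear fit of Φ/L in the width W isolates the boundary term from the bulk term. A minimal sketch with synthetic numbers (the values φ = −0.7 and φ0 = 0.3 are illustrative, not taken from the paper):

```python
import numpy as np

# synthetic Phi(W)/L data obeying the ansatz of Eq. (8);
# phi = -0.7 and phi0 = 0.3 are illustrative values
widths = np.array([20.0, 30.0, 40.0, 60.0, 80.0])
phi_per_length = 0.3 + (-0.7) * widths

# a linear fit in W separates the boundary term phi0 from the bulk term phi
phi_fit, phi0_fit = np.polyfit(widths, phi_per_length, 1)

nu = -phi_fit                       # bulk response, nu = -phi
nu_hat = -phi_per_length / widths   # nu_hat = -Phi/V = -phi - phi0/W
# nu_hat approaches nu as W grows; the W-dependent difference phi0/W
# is the subdivision potential per unit area
```

The same fit applied to numerically computed Φ(W)/L, on a grid of µ and T values, is the extraction procedure described in Sec. IV.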
However, there is a natural way to define a boundary theory for non-trivial topological insulators: diagonalise the Hamiltonian, and single out boundary states based on the localisation of the eigenfunctions. We will find that this effective theory is a powerful tool in interpreting our results when applicable. However, it is Hill's thermodynamics that allows one to find the regime where this is so. The above procedure for writing down an effective theory works precisely if the linear regime of Eq. (13) has set in. As we discuss in the next section, the minimum width W for which this holds depends on the gap size. This occurs because the edge states merge into the bulk at the phase transition, and an effective theory can only be expected to hold if the system is larger than the decay length of the edge states. Alternatively put, the low-energy theory becomes conformally invariant at the topological phase transition, so an ansatz of the form of Eq. (13) cannot be correct there. However, for systems larger than a floating cutoff width W_0 depending on the gap size, we get Eq. (13) as a generalised thermodynamic limit. This thermodynamic limit is equivalent to a Gibbs effective theory; hence, such an effective Gibbs theory can describe the low-temperature thermodynamic responses as long as the gap remains large enough. However, if one wants to describe the phase transition correctly, one needs to keep the system size above the floating cutoff. This is done naturally in Hill thermodynamics by looking at the scaling, but not with an effective theory, since it is not clear precisely when an edge state has merged into the bulk. IV. APPLICATION TO BERNEVIG-HUGHES-ZHANG MODEL We will now apply the developed formalism to the paradigmatic Bernevig-Hughes-Zhang (BHZ) model of HgTe/CdTe quantum wells [6].
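The localisation criterion described above (diagonalise, then tag eigenstates by their probability weight near the boundary) can be sketched on a toy model. The snippet uses a one-dimensional SSH chain rather than the BHZ ribbon, purely to keep the example small; the tagging logic carries over unchanged. All names and thresholds are illustrative:

```python
import numpy as np

def ssh_chain(n_cells, v, w):
    """Open SSH chain: intracell hopping v, intercell hopping w.
    For v < w the chain is in its topological dimerisation and hosts
    two exponentially small in-gap modes bound to the ends."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w
    return H

def edge_weight(psi, n_edge=4):
    """Probability weight of an eigenstate in the outermost sites."""
    p = np.abs(psi) ** 2
    return p[:n_edge].sum() + p[-n_edge:].sum()

E, V = np.linalg.eigh(ssh_chain(20, v=0.5, w=1.0))
in_gap = np.abs(E) < 0.1          # bulk gap is |w - v| = 0.5
# the in-gap states are strongly localised at the chain ends
weights = [edge_weight(V[:, i]) for i in np.flatnonzero(in_gap)]
```

Near a phase transition the edge weight drifts continuously towards the bulk, which is exactly why, as argued above, the scaling of Φ rather than a localisation cutoff is the robust bookkeeping device.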
The Hamiltonian on the infinite plane will decompose as H = ⊕_BZ H_k, (11) where BZ stands for Brillouin zone, and the Bloch Hamiltonian reads H_k = −A sin(k_x) σ_x − A sin(k_y) σ_y − {M + 2B[2 − cos(k_x) − cos(k_y)]} σ_z. (12) Here, k_i denotes momentum in the i direction, σ_i are 2 × 2 Pauli matrices, with i = x, y, z, and A, B and M are parameters depending on the thickness of the quantum well. For M < 0, the system is in a topologically non-trivial phase, whereas for M > 0 the gap is trivial. We consider a ribbon with a finite width W and a length L = 600 ≈ ∞ (so that V = LW), and impose the corresponding boundary conditions on Eq. (11). Then, we calculate the grand potential Φ/L numerically according to Eq. (2). The subdivision potential is extracted from the ansatz Φ(µ, T, W L)/L = φ_0(µ, T) + φ(µ, T) W. (13) Here, φ_0 = X/L is essentially the subdivision potential of the BHZ model. A linear fit is then performed in W, to obtain φ_0 and φ for the given values of µ and T. One can obtain φ and φ_0 as a function of µ and T by evaluating them on a grid, and interpolating. We let one single parameter vary, while keeping all others constant, and we use one-dimensional Hermite interpolation. Our results indicate that the above ansatz correctly describes the relevant features of the model for large W, but the interested reader is referred to the supplementary material for a detailed error analysis of the fitting procedure. The parameters A = B = 1 are fixed for numerical convenience and clarity in the results. From Eq. (13), it is readily derived that ν = −φ and ν̂ = −φ − φ_0/W, which indeed become equal as W → ∞, as expected for large systems. However, we will demonstrate that φ_0 cannot be neglected for topologically non-trivial systems, showing that Eq. (13) defines a generalised thermodynamic limit, appropriate for topological insulators with boundary. V. RESULTS Using the thermodynamic identity, various thermodynamic responses can be calculated, and due to Eq. (13), these naturally split into a boundary and a bulk contribution. In Fig. 1(a) and Fig. 1(c) (Fig. 1(b) and Fig. 1(d)), we plot the T = 0 (µ = 0) density of states (heat capacity at constant volume) for the bulk (B), D_B := −∂²φ/∂µ² (C_v^B := −T ∂²φ/∂T²), and for the boundary (b), D_b := −∂²φ_0/∂µ² (C_v^b := −T ∂²φ_0/∂T²), in blue and in red, respectively. In Fig. 1(a) (Fig. 1(b)), the system is in the trivial phase M = 1, whereas in Fig. 1(c) (Fig. 1(d)), the system is in the topological phase M = −1. Our data show clearly that D_B vanishes in the energy gap |µ| < 1 both in the topological and in the trivial phase. However, due to the presence of edges, there is also a non-vanishing contribution D_b. To interpret this contribution, it is important to note that φ_0 contains not just the topological edge states, but also other finite-size effects, as evidenced by the finite value of φ_0 even if M > 0. Outside the gap, non-topological finite-size effects are dominated by the discreteness of the spectrum, and D_b is essentially just noise. The noise is reduced in magnitude by non-zero temperatures, which smooth out the discreteness of the spectrum, or by fitting Eq. (13) to a larger range of W values. On the other hand, the energy spectrum of the edge states is much less dependent on the width W, and not subject to noise. Inside the gap, D_B = 0 and the scaling of the density of states with W vanishes, making D_b the only term for all system sizes. In this case, it can be interpreted as the density of topological edge states, since finite-size effects only show up in combination with bulk behaviour. Indeed, the Dirac states at the edge disperse with E = Ak + O(k³) [4], yielding a DOS of 1/(Aπ) at µ = 0 according to the Gibbs approach; the line 1/π (obtained by putting A = 1) has been added in yellow in Fig. 1(c).
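The ribbon calculation behind these results can be sketched at the level of the spectrum. Below, the Bloch Hamiltonian of Eq. (12) is Fourier-transformed back along y to give a ribbon of width W with open boundaries; the decomposition into onsite and hopping blocks is my own rewriting of Eq. (12), and all names are illustrative. In the topological phase the k_x = 0 spectrum contains a pair of exponentially split midgap edge modes, while in the trivial phase the gap is empty:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bhz_ribbon(kx, W, M, A=1.0, B=1.0):
    """BHZ ribbon Hamiltonian: Eq. (12) with open boundaries across the
    width W; kx remains a good quantum number along the ribbon."""
    # onsite block: the ky-independent part of Eq. (12)
    onsite = -A * np.sin(kx) * sx - (M + 4 * B - 2 * B * np.cos(kx)) * sz
    # hopping block T between neighbouring rows, chosen so that
    # T e^{i ky} + T^dag e^{-i ky} = -A sin(ky) sy + 2B cos(ky) sz
    hop = 0.5j * A * sy + B * sz
    upper = np.kron(np.diag(np.ones(W - 1), 1), hop)
    return np.kron(np.eye(W), onsite) + upper + upper.conj().T

E_topo = np.linalg.eigvalsh(bhz_ribbon(0.0, W=30, M=-1.0))  # topological
E_triv = np.linalg.eigvalsh(bhz_ribbon(0.0, W=30, M=+1.0))  # trivial
```

Feeding such spectra, over a grid of k_x values, into the free-fermion expression for Φ and then fitting Φ/L in W as in Eq. (13) is a direct (if slow) way to reproduce φ and φ_0.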
To first order in T, the heat capacity behaves as C_v = (π/3) D k_B T, (14) since a Dirac fermion has conformal charge 1 [18]. Hence, to linear order the bulk C_v^B necessarily vanishes at low temperatures for µ = 0. However, while we expect the boundary C_v^b to vanish at low temperatures in the trivial phase at M = 1 (Fig. 1(b)), for the topological phase with M = −1 the specific heat amounts to C_v^b = π k_B T/3, which can be obtained by simply substituting D(0) = 1/π into Eq. (14). Indeed, in Fig. 1(d), we observe a linear scaling of C_v^b at low temperatures. In the inset, the yellow line with slope π/3 has been added to confirm that the low-temperature heat capacity derives from the edge states. In both of these cases, the φ_0 term in Eq. (13) could not be dropped because a derivative of φ vanished, and hence the contribution from φ_0 became dominant. This shows that for edge effects to become irrelevant, it is not only required that ν̂ → ν, but that this also holds for all derivatives. There is another situation where the derivatives of ν̂ fail to converge, which is if a phase transition occurs at the boundary. In Fig. 2, we depict the behaviour of the thermodynamic potential in terms of the parameter M that controls the topological phase transitions in the BHZ model. In Fig. 2(a), ∂²φ/∂M² is shown, which exhibits a slightly smoothed kink at M = 0. This smoothing is precisely as large as the sample spacing in M from which we interpolated to obtain the graph, indicating that it is a numerical artefact. The inlay shows ∂³φ/∂M³, which is discontinuous at M = 0. Calculating the same graph for the system on an infinite plane yields the same behaviour, except that the kink is sharper. Therefore, the closing of the band gap is detected by the bulk free energy and characterised as a third-order phase transition. It is reminiscent of a lambda phase transition, and the divergence occurs in the non-topological regime. In contrast, Fig.
2(b) shows that ∂φ_0/∂M exhibits a kink. The inlay shows that the transition is again of lambda type, with the divergence occurring in the non-topological regime, but this time the order is different: the edge undergoes a second-order phase transition. This phase transition cannot be described by an effective boundary theory through the Gibbs method, as can be seen because the divergence is on the trivial side, where there are no edge states. This occurs because the phase transition is driven by the merging of the topological edge states into the bulk. The precise moment when an edge state ceases to be an edge state cannot be determined from its localisation. However, any contribution to the free energy coming from topological states should scale with the edge length of the system. As such, even if we do not precisely know which states are edge states, φ_0 naturally captures how many there are in total. Therefore, it is not the presence of the edge states that determines the boundary scaling, but the boundary scaling that determines the edge states. The absence of a sensible boundary theory might cause doubt whether φ_0 truly detects a thermodynamic phase transition related to the appearance of topological edge states, rather than an echo of the bulk phase transition. This issue can be clarified by adding an on-site superconducting pairing to the BHZ model. The same effect and results would be obtained from induced superconductivity on the edge [19]. The Bloch Hamiltonian for this system is H_{∆,k} := (H_k, ∆; ∆, −H_k*), (15) i.e. the 2 × 2 block matrix with diagonal blocks H_k and −H_k* and off-diagonal blocks ∆, where H_k is the Hamiltonian from Eq. (12) and ∆ is the superconducting pairing parameter. Adding superconducting pairing has added particle-hole symmetry to the system; the bulk stays gapped, but now the gapless edge states acquire a mass ∆ [20]. [Caption of Fig. 1, continued: As expected, it vanishes to linear order as T → 0. In red, C_v^b(T) for the same parameters. Since the system is not in a topological phase, C_v^b also vanishes to linear order as T → 0. (c) The same as in (a), except that the blue curve has been shifted upwards by 1, and the system is in the topological phase M = −1. Furthermore, the line 1/π has been added in yellow to emphasise that the edge has a non-vanishing DOS in the gap. (d) The same as in (b), but in the topological phase M = −1. One sees that C_v^b is now linear at the edge for low T. In the inset, the straight line with slope π/3 has been added to emphasise the linear scaling of C_v^b with temperature.] Since the gap does not close, φ is a smooth function of ∆, and no phase transition occurs in the bulk. However, the subdivision potential φ_0 detects the opening of the mass gap for the Dirac electrons at the edge as a continuous boundary phase transition. This can be seen in Fig. 3, where ∂²φ_0/∂∆² is shown in red. A clear divergence is present in ∂²φ_0/∂∆² at ∆ = 0, where the particle-hole symmetry is broken. Because adding a Cooper pairing does not merge edge states into the bulk, but only gives them a mass, it is possible to describe a phase transition in ∆ using an effective boundary theory in the manner of Gibbs, and thereby confirm our interpretation. This theory reads H_{e,k} := k σ_x + ∆ σ_z, (16) with a momentum cutoff |k| < 1 (since A = B = −M = 1, the edge states exist only for this range of k values). The free energy per unit length of H_{e,k} is φ_e = ∆² ln[(1 + √(1 + ∆²))/∆] + √(1 + ∆²). (17) The corresponding value of ∂²φ_e/∂∆² is shown in Fig. 3 in yellow. The divergence at ∆ = 0 indicates a continuous phase transition due to the appearance of gapless edge modes in Eq. (16). The behaviour of the subdivision potential φ_0 is compatible with that of φ_e, indicating that φ_0 truly detects topological edge behaviour, since it gives the same qualitative results as an effective boundary theory. [Caption of Fig. 3: In red, ∂²φ_0/∂∆² is shown as a function of ∆.]
The smoothing of an infinite peak is visible, which indicates the presence of a third-order phase transition at the boundary of the system. In yellow, ∂ 2 φe/∂∆ 2 + 6 is shown, using Eq. (17). The agreement between the two curves indicates that the phase transition at the edge of the model comes from the opening of a gap for the edge states. VI. CONCLUSIONS Our general scheme for extracting thermodynamic signatures of the bulk-boundary correspondence, illustrated here for the paradigmatic BHZ model of the quantum-spin Hall effect in HgTe/CdTe quantum wells, comprises a novel tool kit for detecting these elusive states of matter in terms of equilibrium measurements. The scheme is based on the idea that for topologically non-trivial systems, a non-conventional thermodynamic limit exists: even though the system becomes infinitely large, the boundary is always present. The boundary is taken into account as in Eq. (13), in terms of the subdivision potential at the heart of Hill thermodynamics. The subdivision potential captures thermodynamic signatures unique to topological insulators, including a DOS in the bandgap observable in transport measurements [7], and a contribution to the electronic heat capacity, linear for the BHZ model. Most notably, this contribution might be relevant for identifying topological phases in ultracold-atom systems, where transport experiments are challenging and the absence of phonons makes the measurement of fermionic heat capacity easier. Although the edge contributions to the density of states and the specific heat can be captured by an effective model that artificially separates the bulk and the boundary contributions, the same does not hold when describing generic topological phase transitions. If a system undergoes a phase transition within the same symmetry class, the bandgap necessarily closes and a bulk phase transition takes place. 
The appearance/disappearance of topologically protected edge states gives rise to an accompanying boundary phase transition, which can only be classified by the subdivision potential, and does not need to be of the same order as the one in the bulk. Furthermore, one also observes phase transitions between different symmetry classes, which occur without a closing of the bulk gap. In this case, the topological phase transition occurs solely at the boundary. The fact that φ 0 detects a phase transition even if the bulk gap does not close shows that the subdivision potential describes the edge behaviour of the system. This makes it a quantity of prime interest for investigating topological phase transitions, and allows for a classification of their order within the well known Ehrenfest scheme. These results open a new set of possibilities for experimentally detecting topological order by delicate but standard thermodynamic methods, and provide a deeper understanding of the effect of edges in the abstruse field of topological insulators. Ministry of Education, Culture and Science (OCW). APPENDIX: FITTING ANALYSIS Key to our work is the assumption that the grand potential Φ has the asymptotic form given in Eq. (13). Since numerical calculations are necessarily done for finite samples, the question rises for which values of W deviations from the asymptotic behaviour in Eq. (13) become negligible. Here, we provide a detailed description of the way these deviations depend on the various parameters in the system. The deviation, as a function of W , depends on µ and T , and also on the gap size. Throughout this appendix, as well as in the main text, A = B = 1. We will vary the parameters µ, T , and M , and study the relative error (Φ W − Φ)/Φ, where Φ W is given by Eq. (2) for sample width W , and Φ is the corresponding value after fitting to Eq. (13). The range of W along which we fitted to obtain Φ will be mentioned for each specific case. In Fig. 
4, the relative error has been plotted for the trivial phase M = 1, at T = 0, for small values of W , although the fitting was done for 40 ≤ W ≤ 100, where the linear behaviour in W has set in. In this way, one observes the small W deviations from Eq. 13 without including them in the fit. The results are shown in Fig. 4(a) and (b) for µ = 0, i.e. in the gap. Similarly, the results are shown in Fig. 4(c) and (d) for µ = −4/3, i.e. in one of the bulk energy bands. We see that for very small system width, there is a large deviation from the linear relation in Eq. (13), but the error quickly decreases. For µ in the gap, the error quickly becomes negligible, while for µ in one of the energy bands, the error is much larger , but also shows a clear structure (notice the unequal scales 10 −16 in Fig. 4(b) and 10 −7 in Fig. 4(d)). A likely cause of this error is the discreteness of the spectrum, which causes Φ W to deviate from Φ as the system width is varied. This deviation occurs because varying the system width causes individual energies jump from above to below the chemical potential and vice versa. In the gap there are no states near the chemical potential, so this effect is absent, significantly reducing the error. In Fig. 5, the relative error is plotted for µ = −4/3 at T = 0 and M = 1 for a fit along 500 ≤ W ≤ 550. The relative error has decreased by two orders of magnitude compared to Fig. 4(d), which means that the absolute error has decreased by one order of magnitude. This implies that the energy spectrum becomes less dependent on W as W increases. Furthermore, the behaviour of the relative error is qualitatively similar if one takes M = −1, which puts the system in the topological phase. This shows that the energy spectrum of the edge states is highly independent of W even at small width, as can be expected from their strong localisation on the edge. 
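The fitting of the asymptotic form in Eq. (13) described in this appendix can be sketched as follows. The values of φ, φ₀ and the exponentially decaying finite-size correction below are invented stand-ins for the computed Φ_W, not numbers from the paper:

```python
import numpy as np

phi, phi0 = -0.8, 0.3                       # "bulk" and "boundary" coefficients
W = np.arange(40, 101)                      # fitting window, as in the text
# Synthetic grand potential: the asymptotic form Phi = W*phi + phi0 plus an
# exponentially decaying finite-size correction standing in for the small-W
# deviations discussed above.
Phi_W = phi * W + phi0 + 0.05 * np.exp(-W / 5.0)

slope, intercept = np.polyfit(W, Phi_W, 1)  # linear fit of Eq. (13)
print(slope, intercept)                     # recovers phi and phi0
```

Once W is large enough that the decaying correction is negligible over the fitting window, the slope and intercept of the linear fit recover the bulk potential φ and the subdivision potential φ₀, which is exactly the separation used throughout this appendix.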
When the gap becomes much smaller, the conclusions above still hold, but the width W for which the error becomes negligible is larger. In Fig. 6(a), the relative error has been plotted for µ = T = 0 at M = 0, for a fit along the values 90 ≤ W ≤ 100. The errors show that the linear regime from Eq. (13) has not set in yet. In contrast, Fig. 6(b) shows a fit along 1000 ≤ W ≤ 1010, for the same parameters, and here a linear scaling clearly holds. Finally, in Fig. 7, the relative error is shown for M = −1, µ = −2 for a fitting interval of 50 ≤ W ≤ 100. In Fig. 7(a), where T = 1/100, the relative error is of the same order of magnitude as in Fig. 4(d), indicating that the temperature is not yet high enough to suppress the fluctuations. In Fig. 7(b), where T = 1/10, the relative error has decreased by orders of magnitude, indicating that the smoothing of the Fermi-Dirac distribution at these temperatures suppresses the fluctuations in the energy spectrum. The errors are of the same order as in Fig. 4(d). (b) The same as in (a), but for kBT = 1/10. The errors are significantly smaller, since the temperature effectively smooths out the energy spectrum. FIG. 1 . 1(a) In blue, D B (µ)/W is shown for M = 1; it has been shifted up by 0.1 for greater visibility. As expected, it vanishes in the energy gap. In red, D b (µ) for the same parameters, which is the DOS on the edge. Since the system is not in a topological phase, the DOS at the edge also vanishes in the gap. (b) In blue, C B v (T )/W is shown as a function of T for M = 1, µ = 0. FIG. 2 . 2(a) The second derivative ∂ 2 φ/∂M 2 is shown as a function of M . A kink is visible at M = 0, indicating that the bulk undergoes a third-order phase transition (see the inlay, where the discontinuity in the third derivative is depicted). (b) The first derivative ∂φ0/∂M is shown as a function of M . 
A kink is visible at M = 0, indicating that the edge undergoes a second-order phase transition (the discontinuity in the second derivative is depicted in the inlay), which can be considered the order of the topological phase transition.
FIG. 4. (a) The relative error between Φ_W and Φ, for µ = T = 0 and M = −1. We have fitted along the interval 40 ≤ W ≤ 100. (b) Same as in (a), but the range of W has been changed. (c) Same as in (a), but for µ = −4/3. (d) Same as in (b), but for µ = −4/3.
FIG. 5. The relative error between Φ_W and Φ, for T = 0, µ = −4/3, and M = −1. We have fitted along the interval 500 ≤ W ≤ 550.
FIG. 6. (a) The relative error between Φ_W and Φ, for T = 0, µ = 0 and M = 0. We have fitted along the interval 90 ≤ W ≤ 100. It can be seen that the linear regime has not set in for this system size. (b) The same as in (a), but for 1000 ≤ W ≤ 1010. The linear regime has set in, and the error is negligible.
FIG. 7. (a) The relative error between Φ_W and Φ, for k_B T = 1/100, µ = −2 and M = 1. We have fitted along the interval 50 ≤ W ≤ 100.
ACKNOWLEDGMENTS
The authors would like to acknowledge A. Bernevig and L. Molenkamp for useful discussions. This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
[1] C. Kane and E. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[2] C. Kane and E. Mele, Phys. Rev. Lett. 95, 146802 (2005).
[3] X.-L. Qi and S.-C. Zhang, Phys. Today 63, 33 (2010).
[4] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[5] X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011), arXiv:1008.2026 [cond-mat.mes-hall].
[6] B. Bernevig, T. Hughes, and S.-C. Zhang, Science 314, 1757 (2006).
[7] M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science 318, 766 (2007).
[8] C. Brüne, C. X. Liu, E. G. Novik, E. M. Hankiewicz, H. Buhmann, Y. L. Chen, X. L. Qi, Z. X. Shen, S. C. Zhang, and L. W. Molenkamp, Phys. Rev. Lett. 106, 126803 (2011).
[9] A. Kitaev, in American Institute of Physics Conference Series, Vol. 1134, edited by V. Lebedev and M. Feigel'Man (Chernogolokova, 2009), pp. 22-30, arXiv:0901.2686 [cond-mat.mes-hall].
[10] S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, New J. Phys. 12, 065010 (2010).
[11] T. L. Hill, The thermodynamics of small systems, 2nd ed. (Dover Publishing, New York, 1994).
[12] One can also consider the dependence of the thermodynamics on V if it is not a fluctuating parameter, but then one would formally be comparing different thermodynamic systems, which is experimentally more feasible, but theoretically inelegant.
[13] R. V. Chamberlin, Phys. Rev. Lett. 82, 2520 (1999).
[14] R. Chamberlin, Nature 408, 337 (2000).
[15] R. Chamberlin, Entropy 17, 52 (2014), arXiv:1504.04754 [cond-mat.stat-mech].
[16] I. Latella, A. Pérez-Madrid, A. Campa, L. Casetti, and S. Ruffo, Phys. Rev. Lett. 114, 230601 (2015).
[17] J. Gibbs, Transactions of the Connecticut Academy of Arts and Sciences 3, 198 (1878).
[18] P. Di Francesco, P. Mathieu, and D. Senechal, Conformal Field Theory (Springer, New York, 1997).
[19] S. Hart, H. Ren, T. Wagner, P. Leubner, M. Mühlbauer, C. Brüne, H. Buhmann, L. Molenkamp, and A. Yacoby, Nat. Phys. 10, 638 (2014).
[20] M. Ezawa, Y. Tanaka, and N. Nagaosa, Sc. Rep. 3, 2790 (2013).
Construction and Iteration-Complexity of Primal Sequences in Alternating Minimization Algorithms

Quoc Tran-Dinh
10 Nov 2015, arXiv:1511.03305v1 (https://arxiv.org/pdf/1511.03305v1.pdf)

Abstract. We introduce a new weighted averaging scheme using "Fenchel-type" operators to recover primal solutions in the alternating minimization-type algorithm (AMA) for prototype constrained convex optimization. Our approach combines the classical AMA idea in [18] and Nesterov's prox-function smoothing technique without requiring the strong convexity of the objective function. We develop a new non-accelerated primal-dual AMA method and estimate its primal convergence rate both on the objective residual and on the feasibility gap. Then, we incorporate Nesterov's accelerated step into this algorithm and obtain a new accelerated primal-dual AMA variant endowed with a rigorous convergence rate guarantee. We show that the worst-case iteration-complexity of this algorithm is optimal (in the sense of first-order black-box models), without imposing the full strong convexity assumption on the objective.

Keywords: alternating minimization algorithm · smoothing technique · primal solution recovery · accelerated first-order method · constrained convex optimization

1 Introduction

This paper studies a new weighted-averaging strategy in alternating minimization-type algorithms (AMA) to recover a primal solution of the following constrained convex optimization problem:

  f* := min_{x:=(u,v) ∈ U×V} { f(x) := g(u) + h(v) : A u + B v = c },  (1)

where g : R^{p1} → R ∪ {+∞} and h : R^{p2} → R ∪ {+∞} are both proper, closed and convex (not necessarily strongly convex), p1 + p2 = p, A ∈ R^{n×p1}, B ∈ R^{n×p2}, c ∈ R^n, and U ⊂ R^{p1} and V ⊂ R^{p2} are two nonempty, closed and convex sets.
Problem (1) surprisingly covers a broad class of constrained convex programs, including composite convex minimization, general linear constrained convex optimization problems, and conic programs. Primal-dual methods handle problem (1) together with its dual formulation, and generate a primal-dual sequence that converges to a primal and dual solution of (1). Research on primal-dual methods has been studied extensively in the literature for many decades; see, e.g., [4,17,19] and the references quoted therein. However, such methods have attracted great attention in the past decade due to new applications in signal and image processing, economics, machine learning, and statistics. Various primal-dual methods have been rediscovered and extended, not only from algorithmic perspectives, but also in terms of their theoretical convergence guarantees. Despite this great effort in algorithmic development, the corresponding supporting theory has not been well developed, especially for algorithms with rigorous convergence guarantees and low per-iteration complexity. Perhaps the most natural approach to solving constrained problems of the form (1) is to apply first-order methods to the dual. By means of the Lagrange duality theory, we can formulate the dual problem of (1) as a convex problem, to which existing convex optimization techniques can be applied. Depending on the structural assumptions imposed on (1), the dual problem possesses useful properties that can be exploited to develop algorithms for the dual. For instance, we can use subgradient, gradient, and proximal-gradient schemes, as well as other proximal and splitting techniques, to solve this problem. Then, the primal solutions of (1) can be recovered from the dual solutions [10,20].
Among many other primal-dual splitting methods, the alternating minimization algorithm (AMA) proposed by Tseng [18] has become one of the most popular and powerful methods to solve (1) when g and h are nonsmooth and convex and either g or h is strongly convex. Unfortunately, to the best of our knowledge, there has been no optimization scheme to recover primal solutions of (1) in AMAs with convergence rate guarantees on both the primal objective residual and the feasibility gap. If g and h are nonsmooth, then numerical methods for solving (1) often rely on the proximal operators of g and h. Mathematically, the proximal operator of a proper, closed, and convex function ϕ : R^p → R ∪ {+∞} is defined as

  prox_ϕ(x) := argmin_z { ϕ(z) + (1/2)‖z − x‖² }.  (2)

If prox_ϕ can be computed efficiently, i.e., in closed form or by a polynomial-time algorithm, then we say that ϕ has a "tractable proximity" operator. There exist many smooth and nonsmooth convex functions with tractable proximity operators, as indicated in, e.g., [6,14]. The proximal operator is in fact a special case of the resolvent in monotone inclusions [16]. Principally, the optimality condition for (1) can be cast into a monotone inclusion [1,8]. By means of proximity operators and gradients, splitting approaches for monotone inclusions can be applied to solve such a problem [7,5,8]. However, due to this generalization, the convergence guarantees and convergence rates of these algorithms are often stated via a primal-dual gap or a residual metric joining both the primal and dual variables. Such convergence guarantees do not reveal the complexity bounds of the primal sequence for (1) at intermediate iterations when we terminate the algorithm at a desired accuracy. Our approach in this paper is briefly described as follows. First, since we work with non-strongly convex objectives g and h, we employ Nesterov's smoothing technique via prox-functions [13] to partially smooth the dual function.
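As a small, self-contained illustration of the proximal operator (our example, not one from the paper), the brute-force one-dimensional prox below is checked against the known closed form for ϕ = |·|, namely soft-thresholding sign(x)·max(|x| − 1, 0):

```python
import numpy as np

def prox(phi, x, lo=-10.0, hi=10.0, n=400001):
    # Brute-force 1-D proximal operator: argmin_z { phi(z) + 0.5*(z - x)^2 }
    # evaluated on a fine grid (phi must be vectorized).
    z = np.linspace(lo, hi, n)
    return z[np.argmin(phi(z) + 0.5 * (z - x) ** 2)]

# For phi = |.| the prox is soft-thresholding: sign(x) * max(|x| - 1, 0).
for x in (2.3, 0.4, -1.7):
    print(x, prox(np.abs, x), np.sign(x) * max(abs(x) - 1.0, 0.0))
```

The grid search agrees with the closed form up to the grid spacing; in practice one of course uses the closed form whenever it is available, which is exactly what "tractable proximity" means.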
Then, we apply the forward-backward splitting method to solve the smoothed dual problem, which is exactly the AMA method in [18]. Next, we introduce a new weighted averaging scheme using the Fenchel-type operators (cf. (7)) to generate the primal sequence simultaneously with the dual one. We then prove convergence rate guarantees for (1) in the primal variable, as opposed to the dual one as in [9]. Finally, by incorporating Nesterov's acceleration step into the forward-backward splitting method, we obtain an accelerated primal-dual variant for solving (1) with a primal convergence rate guarantee. Interestingly, we can show that the primal sequence converges to an optimal solution of (1) with the O(1/k²)-optimal rate provided that only g or h is strongly convex, but not the whole function f as in accelerated dual gradient methods [10], where k is the iteration counter.

Our contributions: Our specific contributions can be summarized as follows:

a) We propose to combine Nesterov's smoothing technique, the alternating minimization idea, and the weighted-averaging strategy to develop a new primal-dual AMA algorithm for solving (1) without a strong convexity assumption on g or h. We characterize the convergence rate on the absolute primal objective residual |f(x̄_k) − f*| and the feasibility gap ‖A ū_k + B v̄_k − c‖ for the averaging primal sequence {x̄_k}. By an appropriate choice of the smoothness parameter, we provide the worst-case iteration-complexity of this algorithm to obtain an ε-primal solution.

b) By incorporating Nesterov's accelerated step, we develop a new accelerated primal-dual AMA variant for solving (1), and characterize its worst-case iteration-complexity, which is optimal in the sense of first-order black-box models [12].
c) When either g or h is strongly convex, we recover the standard AMA algorithm as in [9], but with our averaging strategy, we obtain the O(1/k 2 )-convergence rate on |f (x k ) − f ⋆ | and Aū k + Bv k − c separably for the primal problem (1), not for its dual. Let us emphasize the following points of our contributions. First, we can view the algorithms presented in this paper as the ISTA and FISTA schemes [2] applied to the smoothed dual problem of (1) instead the original dual of (1) as in [9]. The convergence rate on the dual objective residual is well-known and standard, while the convergence rates on the primal sequence are new. Second, we adapt the weights in our averaging primal sequence (c.f. (9)) to the local Lipschitz constant via a back-tracking line-search, which potentially increases the empirical performance of the algorithms. Third, the averaging primal sequence is computed via an additional sharp-operator of h V (c.f. (7)) instead of the current primal iterate. This computation can be done efficiently (e.g., in a closed form) when h V has a decomposable structure. Paper organization: The rest of this paper is organized as follows. Section 2 briefly describes standard Lagrange duality framework for (1), and shows how to apply Nesterov's smoothing idea to the dual problem. The main results are presented in Sections 3 and 4, where the two new algorithms and their convergence are provided. Section 5 is devoted to investigating the strongly convex case. Concluding remarks are given in Section 6, while technical proof is moved to the appendix. 2 Primal-dual framework and smoothing technique First, we briefly present the Lagrange duality framework for (1). Then we show how to apply Nesterov's smoothing technique to smooth the dual function of (1). The Lagrange primal-dual framework Let x := (u, v) denote the primal variables, and D := {x ∈ U × V : Au + Bv = c} denote the feasible set of (1). 
We define the Lagrange function of (1) corresponding to the linear constraint Au + Bv = c as L(x, λ) := g(u) + h(v) + ⟨λ, c − Au − Bv⟩, where λ is the vector of Lagrange multipliers. Then, we can define the dual function d of (1) as

  d(λ) := min_{u∈U, v∈V} { g(u) + h(v) + ⟨λ, c − Au − Bv⟩ }.  (3)

Clearly, d can be split into three terms, d(λ) = d¹(λ) + d²(λ) + ⟨c, λ⟩, where

  d¹(λ) := min_{u∈U} { g(u) − ⟨A^T λ, u⟩ },   d²(λ) := min_{v∈V} { h(v) − ⟨B^T λ, v⟩ }.  (4)

Using d, we can define the dual problem of (1) as

  d* := max_{λ∈R^n} d(λ).  (5)

We say that problem (1) satisfies the Slater condition if

  ri(X) ∩ {Au + Bv = c} ≠ ∅,  (6)

where X := U × V and ri(X) is the relative interior of X [17]. In this paper, we require the following blanket assumptions, which are standard in convex optimization.

Assumption A.1. The functions g and h are both proper, closed, and convex (not necessarily strongly convex). The solution set X* of (1) is nonempty. The Slater condition (6) holds for (1).

It is well known that, under Assumption A.1, strong duality holds between (1) and (5), i.e., we have zero duality gap, which is expressed as f* − d* = 0. Moreover, for any feasible point (x, λ) ∈ dom(f) × R^n and any primal-dual solution (x*, λ*) with x* := (u*, v*) ∈ X*, we have L(x*, λ) ≤ L(x*, λ*) = f* = d* ≤ L(x, λ*) for all x ∈ X and λ ∈ R^n.

Now, let us consider the components d¹ and d² of (4). Indeed, we can write these components as

  d¹(λ) = − max_{u∈U} { ⟨A^T λ, u⟩ − g(u) } = −g*_U(A^T λ),
  d²(λ) = − max_{v∈V} { ⟨B^T λ, v⟩ − h(v) } = −h*_V(B^T λ),

where g*_U and h*_V are the Fenchel conjugates of g_U := g + δ_U and h_V := h + δ_V, respectively [17]. If we define two multivalued maps

  u^#(s) := argmax_{u∈U} { ⟨s, u⟩ − g(u) },   v^#(s) := argmax_{v∈V} { ⟨s, v⟩ − h(v) },  (7)

then the solution u*(λ) of d¹ in (4) is given by u*(λ) ∈ u^#(A^T λ) ≡ ∂g*_U(A^T λ). Similarly, the solution v*(λ) of d² in (4) is given by v*(λ) ∈ v^#(B^T λ) ≡ ∂h*_V(B^T λ).
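For intuition, these maximizers are often cheap for simple structures. The toy below uses the illustrative (assumed) choices g ≡ 0 and U = [0, 1]^n, for which u^#(s) simply activates the coordinates with s_i > 0, and the conjugate value g*_U(s) is the sum of the positive parts of s:

```python
import numpy as np

def sharp_box(s):
    # Sharp operator u^#(s) = argmax_{u in [0,1]^n} <s, u> - g(u) with g = 0:
    # the maximum sits at a vertex of the box, u_i = 1 wherever s_i > 0.
    # (At s_i = 0 the maximizer is not unique; we break the tie with 0.)
    return (s > 0).astype(float)

s = np.array([1.5, -0.2, 0.0, 3.0])
u = sharp_box(s)
print(u, s @ u)   # <s, u> equals the conjugate value g*_U(s) = sum of (s_i)_+
```

This is the kind of "oracle call" referred to above: one query returns an element of u^#(s), and evaluating ⟨s, u⟩ at that element gives the value of the conjugate.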
We call u^# and v^# the sharp-operators of g and h, respectively [20]. Each oracle call to d queries one element of the sharp-operators u^# and v^# at a given λ ∈ R^n. By using the saddle-point relation, we can show that f* ≤ L(x, λ*) = f(x) − ⟨Au + Bv − c, λ*⟩ ≤ f(x) + ‖Au + Bv − c‖ ‖λ*‖ for any x ∈ X. Hence,

  −‖λ*‖ ‖Au + Bv − c‖ ≤ f(x) − f* ≤ f(x) − d(λ).  (8)

In this paper, we only assume that the second dual component d² defined by (4) satisfies the following assumption.

Assumption A.2. The dual component d² defined by (4) is finite.

This assumption holds in particular when V is bounded. Moreover, v*(λ) is well-defined for any λ ∈ R^n. Throughout this paper, we assume that Assumptions A.1 and A.2 hold without referring to them again.

The primal weighted averaging sequence

Given a sequence of primal approximations {x̃_k}_{k≥0}, where x̃_k := (ũ_k, ṽ_k) ∈ X, we define the weighted averaging sequence {x̄_k}, with x̄_k := (ū_k, v̄_k), as

  ū_k := S_k^{-1} Σ_{i=0}^{k} w_i ũ_i,   v̄_k := S_k^{-1} Σ_{i=0}^{k} w_i ṽ_i,   S_k := Σ_{i=0}^{k} w_i,  (9)

where {w_i}_{i≥0} ⊂ R_{++} are the corresponding weights. To avoid storing the whole sequence {(ũ_k, ṽ_k)} in our algorithms, we can compute x̄_k recursively as

  ū_k := (1 − τ_k) ū_{k−1} + τ_k ũ_k  and  v̄_k := (1 − τ_k) v̄_{k−1} + τ_k ṽ_k,  ∀k ≥ 1,  (10)

where τ_k := w_k / S_k ∈ [0, 1], ū_0 := ũ_0, and v̄_0 := ṽ_0. Clearly, for any convex function f, we have f(x̄_k) ≤ S_k^{-1} Σ_{i=0}^{k} w_i f(x̃_i) by the well-known Jensen inequality.

Approximate solutions: Our goal is to approximate a solution x* of (1) by x*_ε in the following sense:

Definition 1. Given an accuracy level ε > 0, a point x*_ε := (u*_ε, v*_ε) ∈ X is said to be an ε-solution of (1) if

  |f(x*_ε) − f*| ≤ ε  and  ‖A u*_ε + B v*_ε − c‖ ≤ ε.  (11)

Here, we call |f(x*_ε) − f*| the [absolute] primal objective residual and ‖A u*_ε + B v*_ε − c‖ the primal feasibility gap.
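A quick numerical check, on random data, that the recursion (10) reproduces the direct weighted average (9):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 2.0, size=6)          # positive weights w_0..w_5
u_tilde = rng.standard_normal((6, 3))      # primal iterates u~_0..u~_5

# Direct weighted average, Eq. (9).
u_bar_direct = (w[:, None] * u_tilde).sum(0) / w.sum()

# Recursive update, Eq. (10): S_k = S_{k-1} + w_k, tau_k = w_k / S_k,
# initialized with u_bar_0 = u~_0 and S_0 = w_0.
u_bar, S = u_tilde[0].copy(), w[0]
for k in range(1, 6):
    S += w[k]
    tau = w[k] / S
    u_bar = (1 - tau) * u_bar + tau * u_tilde[k]

print(np.allclose(u_bar, u_bar_direct))
```

The recursion needs only the running average and the running weight sum S_k, which is why the algorithms below can maintain x̄_k in O(p) memory.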
The condition x*_ε ∈ X is in general not restrictive since, in many cases, X is a simple set (e.g., a box, a simplex, or a convex cone), so that membership in X can be guaranteed exactly by projection.

Smoothing the dual component

As mentioned earlier, we first focus on non-strongly convex functions g and h. In this case, we cannot directly apply the standard AMA [18] to solve (1). We smooth g by using a prox-function as follows. A continuous and strongly convex function p_U with strong convexity parameter µ_p > 0 is called a prox-function for U if U ⊆ dom(p_U) [13]. We consider the following smoothed function d¹_γ for d¹:

  d¹_γ(λ) := min_{u∈U} { g(u) − ⟨λ, Au⟩ + γ p_U(u) },  (12)

where γ > 0 is a smoothness parameter. It is well known that d¹_γ is concave and smooth. Moreover, as shown in [13], its gradient is given by ∇d¹_γ(λ) = −A u*_γ(λ), which is Lipschitz continuous with Lipschitz constant L^γ_{d¹} := ‖A‖²/(γ µ_p), where u*_γ(λ) is the unique solution of the minimization problem in (12). In addition, we have the estimate

  d¹_γ(λ) − γ D_U ≤ d¹(λ) ≤ d¹_γ(λ),  ∀λ ∈ R^n,  (13)

where D_U is the prox-diameter of U, i.e.,

  D_U := sup_{u∈U} p_U(u).  (14)

In order to develop algorithms, we require the following additional assumption.

Assumption A.3. The quantity D_U defined by (14) is finite, i.e., 0 ≤ D_U < +∞.

Clearly, if U is bounded, then Assumption A.3 is automatically satisfied. Under Assumption A.3, we consider the following convex problem:

  d*_γ := max_{λ∈R^n} { d_γ(λ) := d¹_γ(λ) + d²(λ) + ⟨c, λ⟩ }.  (15)

Using (13), we can see that d*_γ converges to d* as γ ↓ 0⁺. Hence, (15) can be considered as an approximation to the dual problem (5). We call (15) the smoothed dual problem of (1).

The non-accelerated primal-dual alternating minimization algorithm

Since ∇d¹_γ is Lipschitz continuous, we can apply the proximal-gradient method (ISTA [2]) to solve (15). This leads to the AMA scheme presented in [9,18].
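Before writing the scheme down, here is a one-dimensional illustration of the smoothing (our assumed example, not the paper's: g(u) = |u| with U = R, A = 1, and p_U(u) = u²/2, so µ_p = 1; this U is unbounded, so it illustrates only the smoothing itself, not Assumption A.3). The smoothed component has the closed form d¹_γ(λ) = −(|λ| − 1)₊²/(2γ), and the sketch verifies ∇d¹_γ(λ) = −A u*_γ(λ) by finite differences:

```python
import numpy as np

gamma = 0.5   # smoothness parameter; p_U(u) = u^2/2, so mu_p = 1

def u_star(lmb):
    # Unique minimizer of |u| - lmb*u + (gamma/2) u^2 (A = 1): soft-thresholding
    # at level 1, rescaled by gamma.
    return np.sign(lmb) * np.maximum(np.abs(lmb) - 1.0, 0.0) / gamma

def d1_gamma(lmb):
    # Smoothed dual component evaluated at its minimizer.
    u = u_star(lmb)
    return np.abs(u) - lmb * u + 0.5 * gamma * u**2

# Finite-difference gradient vs. the formula grad d1_gamma = -A * u*_gamma.
lam, h = 1.7, 1e-6
g_fd = (d1_gamma(lam + h) - d1_gamma(lam - h)) / (2 * h)
print(g_fd, -u_star(lam))
```

Here the gradient −u*_γ(λ) changes by at most 1/γ per unit change in λ, matching the stated Lipschitz constant ‖A‖²/(γ µ_p) = 1/γ; smaller γ gives a better approximation of d¹ but a stiffer gradient, which is the trade-off the choice γ = ε/(2 D_U) balances later.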
The main iteration of the alternating minimization algorithm (AMA) [18], applied to the primal problem corresponding to (15), can be written as

  û_{k+1} := argmin_{u∈U} { g(u) − ⟨A^T λ̂_k, u⟩ + γ p_U(u) } = ∇g*_γ(A^T λ̂_k),
  v̂_{k+1} := argmin_{v∈V} { h(v) − ⟨B^T λ̂_k, v⟩ + (η_k/2)‖c − A û_{k+1} − B v‖² },
  λ_{k+1} := λ̂_k + η_k (c − A û_{k+1} − B v̂_{k+1}),  (16)

where λ̂_k ∈ R^n is given, η_k > 0 is the penalty parameter, and g_γ(·) := g(·) + γ p_U(·). We define the quadratic surrogate of d¹ as follows:

  Q^γ_{L_k}(λ; λ̂_k) := d¹_γ(λ̂_k) + ⟨∇d¹_γ(λ̂_k), λ − λ̂_k⟩ − (L_k/2)‖λ − λ̂_k‖².  (17)

Then the following lemma provides a key estimate to prove the convergence of the algorithms in the sequel; its proof can be found in Appendix A.

Lemma 1. The smoothed dual component d¹_γ defined by (12) is concave and smooth. It satisfies the estimate

  d¹_γ(λ̂) + ⟨∇d¹_γ(λ̂), λ − λ̂⟩ − (L^γ_{d¹}/2)‖λ − λ̂‖² ≤ d¹_γ(λ),  ∀λ, λ̂ ∈ R^n,  (18)

where L^γ_{d¹} := ‖A‖²/(γ µ_p). Let λ_{k+1} be the point generated by (16) from λ̂_k and η_k. Then, (16) is equivalent to the forward-backward splitting scheme applied to the smoothed dual problem (15), i.e.,

  λ_{k+1} := prox_{(−η_k d²)}( λ̂_k + η_k ∇d¹_γ(λ̂_k) ).  (19)

In addition, with Q^γ_{L_k} defined by (17), if the following condition holds:

  d¹_γ(λ_{k+1}) ≥ Q^γ_{L_k}(λ_{k+1}; λ̂_k),  (20)

then, for any λ ∈ R^n, the following estimates hold:

  d_γ(λ_{k+1}) ≥ ℓ^γ_k(λ) + (1/η_k)⟨λ_{k+1} − λ̂_k, λ − λ̂_k⟩ + (1/η_k − L_k/2)‖λ̂_k − λ_{k+1}‖²
              ≥ d_γ(λ) + (1/η_k)⟨λ_{k+1} − λ̂_k, λ − λ̂_k⟩ + (1/η_k − L_k/2)‖λ̂_k − λ_{k+1}‖²,  (21)

where ℓ^γ_k(λ) := d¹_γ(λ̂_k) + ⟨∇d¹_γ(λ̂_k), λ − λ̂_k⟩ + d²(λ_{k+1}) + ⟨∇d²(λ_{k+1}), λ − λ_{k+1}⟩ + ⟨c, λ⟩, and ∇d²(λ_{k+1}) ∈ ∂d²(λ_{k+1}) is a subgradient of d² at λ_{k+1}.

Our next step is to recover an approximate primal solution x̄_k := (ū_k, v̄_k) of (1) using the weighted averaging scheme (9). Combining this strategy and (16), we can present the new primal-dual AMA algorithm as in Algorithm 1 below.

Algorithm 1 (Primal-dual alternating minimization algorithm)
Initialization:
  1. Choose γ := ε/(2 D_U), and L such that 0 < L ≤ L^γ_{d¹} := ‖A‖²/(γ µ_p).
  2. Choose an initial point λ_0 ∈ R^n.
  3. Set S_{−1} := 0, ū_{−1} := 0 and v̄_{−1} := 0.
for k := 0 to k_max do
  4. Compute ũ_k = û_{k+1} = u*_γ(λ_k) defined in (12).
  5. Choose η_k ∈ (0, 1/L^γ_{d¹}] and compute
       v̂_{k+1} := argmin_{v∈V} { h(v) − ⟨B^T λ_k, v⟩ + (η_k/2)‖c − A ũ_k − B v‖² }.
  6. Update λ_{k+1} := λ_k + η_k (c − A û_{k+1} − B v̂_{k+1}).
  7. Compute ṽ_k := v*(λ_{k+1}) ∈ v^#(B^T λ_{k+1}) defined in (7).
  8. Update S_k := S_{k−1} + w_k, with w_k := η_k, and τ_k := w_k / S_k.
  9. Update ū_k := (1 − τ_k) ū_{k−1} + τ_k ũ_k and v̄_k := (1 − τ_k) v̄_{k−1} + τ_k ṽ_k.
end for
Output: The sequence {x̄_k} with x̄_k := (ū_k, v̄_k).

In fact, we can use the Lipschitz constant L^γ_{d¹} = ‖A‖²/(γ µ_p) to set the constant step size η_k := 1/L^γ_{d¹} at Step 5. However, we can also choose η_k = L_k^{−1} adaptively via a back-tracking line-search in Algorithm 1 to guarantee condition (20), which usually performs better in practice than the constant step size. Algorithm 1 requires one more sharp-operator query of v at Step 7. As mentioned earlier, when h_V has a decomposable structure, computing this sharp operator can be done efficiently (e.g., in closed form or in a parallel/distributed manner). The following theorem shows the bounds on the objective residual f(x̄_k) − f* and the feasibility gap ‖A ū_k + B v̄_k − c‖ of (1) at x̄_k.

Theorem 1. Let {x̄_k} with x̄_k := (ū_k, v̄_k) be the sequence generated by Algorithm 1, and let L_{d¹} := ‖A‖²/µ_p. Then, the following estimates hold:

  |f(x̄_k) − f*| ≤ max{ L_{d¹}‖λ_0‖²/(γ(k+1)) + γ D_U,
                        2 L_{d¹}‖λ*‖‖λ_0 − λ*‖/(γ(k+1)) + ‖λ*‖ √( L_{d¹} D_U/(k+1) ) },
  ‖A ū_k + B v̄_k − c‖ ≤ 2 L_{d¹}‖λ_0 − λ*‖/(γ(k+1)) + √( L_{d¹} D_U/(k+1) ).  (22)

Consequently, if we choose γ := ε/(2 D_U), which is optimal, then the worst-case iteration-complexity of Algorithm 1 to achieve an ε-solution x̄_k of (1) in the sense of Definition 1 is O( (L_{d¹} D_U / ε²) R₀² ), where R₀ := max{ 2, 3‖λ*‖, 2‖λ_0‖, 2‖λ_0 − λ*‖ }.
Proof Since 0 < η i ≤ 1 L γ d 1 by Step 5 of Algorithm 1, for any λ ∈ R n , it follows from (21) that d γ (λ i+1 ) ≥ ℓ γ i (λ) + 1 η i λ i+1 − λ i , λ − λ i + 1 2η i λ i+1 − λ i 2 = ℓ γ i (λ) + 1 2η i λ i+1 − λ 2 − λ i − λ 2 ,(23) where ℓ γ i (λ) := d 1 γ (λ i ) + ∇d 1 γ (λ i ), λ −λ i + d 2 (λ i+1 ) + ∇d 2 (λ i+1 ), λ − λ i+1 + c, λ and ∇d 2 (λ i+1 ) ∈ ∂d 2 (λ i+1 ) is a subgradient of d 2 at λ i+1 . Next, we consider ℓ γ i (λ). We first note that, for any i = 0, · · · , k, we have d 1 γ (λ i )+ ∇d 1 γ (λ i ), λ−λ i = g(û i+1 )+γp U (û i+1 ) − Aû i+1 , λ i − Aû i+1 , λ − λ i = g(û i+1 ) − Aû i+1 , λ + γp U (û i+1 ).(24) Second, by Step 6 of Algorithm 1, we haveṽ i ∈ v ♯ (B T λ i+1 ), which implies d 2 (λ i+1 ) + ∇d 2 (λ i+1 ), λ − λ i+1 = h(ṽ i ) − Bṽ i , λ i+1 − Bṽ i , λ − λ i+1 = h(ṽ i ) − Bṽ i , λ .(25) Summing up (24) and (25) and using the definition of ℓ γ i , we obtain ℓ γ i (λ) = g(ũ i )+h(ṽ i )− Aũ i + Bṽ i −c,λ i + c−Aũ i − Bṽ i , λ −λ i +γp U (ũ i ) = f (x i ) − Aũ i + Bṽ i − c, λ + γp U (ũ i ).(26) By (13), we have d γ (λ) ≤ d(λ) + γD U ≤ d ⋆ + γD U :=d ⋆ γ for any λ ∈ R n . Substituting (26) into (23), subtracting tod ⋆ γ , and summing up the result from i = 0 to k, we obtain k i=0 η i d ⋆ γ − d γ (λ i+1 ) ≤ k i=0 η i d ⋆ γ − f (x i ) + Aũ i + Bṽ i − c, λ − γp U (ũ i ) + 1 2 λ 0 − λ 2 − λ k+1 − λ 2 .(27) On the one hand, we note that d(λ) ≤ d ⋆ = f ⋆ ≤ L(x, λ ⋆ ) = f (x) − Au + Bv − c, λ ⋆ for any λ ∈ R n and x ∈ X due to strong duality. Hence, Aū k + Bv k − c, λ ⋆ ≤ f (x k ) − d ⋆ . Moreover,d ⋆ γ − d γ (λ i+1 ) ≥ 0. On the other hand, using the convexity of f we have S k f (x k ) ≤ k i=0 w i f (x i ) and S k Aū k + Bv k − c, λ = k i=0 w i Aũ i + Bṽ i − c, λ for w i := η i . 
Combining these expressions into (27), and noting that 0 ≤ p U (ũ i ) ≤ D U , we can derive 0 ≤ k i=0 w i d ⋆ γ − f (x i ) + Aũ i + Bṽ i − c, λ − γp U (ũ i ) + 1 2 λ 0 − λ 2 ≤ S k d ⋆ − f (x k ) + Aū k + Bv k − c, λ + γD U + 1 2 λ 0 − λ 2 , which implies Aū k +Bv k −c, λ ⋆ ≤ f (x k )−d ⋆ ≤ Aū k +Bv k −c, λ + 1 2S k λ 0 −λ 2 +γD U . (28) Hence, we obtain Aū k + Bv k − c, λ ⋆ − λ − 1 2S k λ 0 − λ 2 − γD U ≤ 0,(29) for all λ ∈ R n . Since (29) holds for all λ ∈ R n , we can show that max λ∈R n Aū k + Bv k − c, λ ⋆ − λ − 1 2S k λ 0 − λ 2 − γD U ≤ 0,(30) By optimizing the left-hand side over λ ∈ R n and using λ 0 =λ 0 , we obtain S k Aū k + Bv k − c 2 + 2 Aū k + Bv k − c + r, λ 0 − λ ⋆ − γD U ≤ 0. Using the Cauchy-Schwarz inequality, we have Aū k +Bv k −c, λ 0 −λ ⋆ ≤ Aū k + Bv k − c λ 0 − λ ⋆ . Hence, the last inequality leads to Aū k + Bv k − c ≤ λ 0 −λ ⋆ + λ 0 −λ ⋆ 2 +γS k D U S k ≤ 2 λ 0 −λ ⋆ S k + γD U S k .(31) Now, since w i = η i ≥ γ L d 1 for i = 0 to k, where L d 1 := A 2 µ p . Hence, S k ≥ γ(k+1) L d 1 . Substituting this bound into (31), we obtain the second inequality of (22). To prove the first inequality of (22), we note from (28) and f ⋆ = d ⋆ that f (x k ) − f ⋆ ≤ Aū k + Bv k − c, λ + 1 2S k λ 0 − λ 2 + γD U . Taking λ = 0 n into this inequality, we get f (x k ) − f ⋆ ≤ 1 2S k λ 0 2 + γD U ≤ L d 1 γ(k + 1) λ 0 2 + γD U . Combining this inequality, (8), and the second estimate of (22), we obtain the first estimate of (22). Let us choose γ such that 2L d 1 r 0 γ(k+1) = L d 1 D U k+1 , where r 0 := max λ 0 − λ ⋆ , λ 0 . Then, γ = 2r 0 √ L d 1 √ D U (k+1) . Substituting this expression into (22), we obtain      |f (x k ) − f ⋆ | ≤ max 2r 0 √ L d 1 D U √ k+1 , 3 λ ⋆ √ L d 1 D U √ k+1 ≤ ǫ Aū k + Bv k − c ≤ 3 √ L d 1 D U √ k+1 ≤ ǫ. Consequently, we obtain the worst-case complexity of Algorithm 1 from the last estimates, which is O L d 1 D U ǫ 2 R 2 0 , where R 0 := max 2, 3 λ ⋆ , 2 λ 0 , 2 λ 0 − λ ⋆ . In this case, we can also show that γ = ǫ 2D U . 
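To make the iteration concrete, here is a minimal scalar sketch of the AMA step (16) combined with the weighted averaging of Algorithm 1, on the toy problem: minimize 0.5(u−a)² + 0.5(v−b)² subject to u + v = c (so A = B = 1). Since this g is strongly convex with µ_g = 1, no smoothing is needed (γ = 0, as in the strongly convex variant discussed later); all numeric values are illustrative choices, not taken from the paper. The exact solution is λ^⋆ = (c−a−b)/2, u^⋆ = a + λ^⋆, v^⋆ = b + λ^⋆.

```python
# Toy problem: min 0.5*(u-a)^2 + 0.5*(v-b)^2  s.t.  u + v = c, with A = B = 1.
# Exact solution: lam* = (c-a-b)/2 = 1, u* = 2, v* = 3 for the data below.
a, b, c = 1.0, 2.0, 5.0          # illustrative data
lam, eta = 0.0, 1.0              # eta <= 1/L_d1 = mu_g/||A||^2 = 1
u_bar = v_bar = S = 0.0
for k in range(50):
    u = a + lam                                  # u-step of (16)
    v = (b + lam + eta * (c - u)) / (1 + eta)    # penalized v-step of (16)
    lam += eta * (c - u - v)                     # dual update
    v_tilde = b + lam                            # sharp operator of h at lam_{k+1}
    S += eta                                     # weighted averaging, w_k = eta_k
    tau = eta / S
    u_bar = (1 - tau) * u_bar + tau * u
    v_bar = (1 - tau) * v_bar + tau * v_tilde
# (u, v, lam) converges to (2.0, 3.0, 1.0); (u_bar, v_bar) approaches it.
```

The last iterates here lock onto the solution almost immediately, while the averaged pair (ū_k, v̄_k) regains primal feasibility only at an O(1/(k+1)) rate, which is exactly the flavor of the feasibility bound above.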
Remark 1 If we apply a back-tracking line-search with a bisection strategy on η_k, then we have 0 < η_k ≤ 2/L^γ_{d^1} at Step 5 of Algorithm 1. In this case, the bounds in Theorem 1 still hold with L_{d^1} = 2‖A‖²/µ_p instead of L_{d^1} = ‖A‖²/µ_p.

4 The accelerated primal-dual alternating minimization algorithm

In this section, we incorporate Nesterov's accelerated step into Algorithm 1 as done in [9], but applied to (15), to obtain a new accelerated primal-dual AMA variant. Clearly, this algorithm can be viewed as the FISTA scheme [2] applied to the smoothed dual problem (15). Let t_0 := 1 and λ̂_0 := λ_0 ∈ R^n. The main step at iteration k of the accelerated AMA method is as follows:

    û_{k+1} := argmin_{u ∈ U} { g(u) − ⟨A^T λ̂_k, u⟩ + γ p_U(u) } = ∇g^*_γ(A^T λ̂_k),
    v̂_{k+1} := argmin_{v ∈ V} { h(v) − ⟨B^T λ̂_k, v⟩ + (η_k/2)‖c − A û_{k+1} − B v‖² },
    λ_{k+1} := λ̂_k + η_k (c − A û_{k+1} − B v̂_{k+1}),
    t_{k+1} := (1/2)(1 + √(1 + 4 t_k²)),
    λ̂_{k+1} := λ_{k+1} + ((t_k − 1)/t_{k+1}) (λ_{k+1} − λ_k),    (32)

where, again, g_γ(·) := g(·) + γ p_U(·). We now combine the accelerated AMA step (32) and the weighted averaging scheme (9) to construct a new accelerated primal-dual AMA method, presented as Algorithm 2 below. Similar to Algorithm 1, if we know the Lipschitz constant L^γ_{d^1} a priori, we can use η_k := 1/L^γ_{d^1}. However, we can also use a backtracking line-search to adaptively choose η_k := L_k^{−1} such that the condition (20) holds. We note that the complexity per iteration of Algorithm 2 remains essentially the same as that of Algorithm 1. The following theorem provides the bound on the absolute objective residual and the primal feasibility gap at the iterate x̄_k for Algorithm 2.

Theorem 2 Let {x̄_k} be the sequence generated by Algorithm 2 and L_{d^1} := ‖A‖²/µ_p.
Then, the following estimates hold:      |f (x k ) − f ⋆ | ≤ max 2L d 1 λ 0 2 γ(k+1)(k+2) + γD U , 8L d 1 λ ⋆ λ 0 −λ ⋆ γ(k+1)(k+2) + λ ⋆ 4L d 1 D U (k+1)(k+2) , Aū k + Bv k − c ≤ 8L d 1 λ 0 −λ ⋆ γ(k+1)(k+2) + 4L d 1 D U (k+1)(k+2) . (33) Consequently, if we choose γ := ǫ D U , which is optimal, then the worst-case iterationcomplexity of Algorithm 2 to achieve an ǫ-solutionx k of (1) in the sense of Defi- nition 1 is O √ L d 1 D U ǫ R 0 , where R 0 := max 4, 9 2 λ 0 , 9 2 λ 0 − λ ⋆ , 4 λ ⋆ . Algorithm 2 (Accelerated primal-dual alternating minimization algorithm) Initialization: 1. Choose γ := ǫ D U , and L such that 0 < L ≤ L γ d 1 := A 2 γµ p . 2. Choose an initial point λ 0 ∈ R n . 3. Set t 0 := 1 andλ 0 := λ 0 . Set S −1 := 0,ū −1 := 0 andv −1 := 0. for k := 0 to k max do 4. Computeũ k =û k+1 = u * γ (λ k ) defined in (15). 5. Choose η k ∈ 0, 1 L γ d 1 and computê v k+1 := arg min v∈V h(v) − B Tλ k , v + η k 2 Aũ k + Bv − c 2 . 6. Update λ k+1 :=λ k + η k (c − Aû k+1 − Bv k+1 ). 7. Update t k+1 := 0. (7). 9. Update S k := S k−1 + w k , with w k := η k t k , and τ k : 5 1 + (1 + 4t 2 k ) 1/2 andλ k+1 := λ k+1 + t k −1 t k+1 (λ k+1 −λ k ). 8. Computeṽ k := v * (λ k+1 ) ∈ v ♯ (B T λ k+1 ) defined in= w k S k . 10. Updateū k := (1 − τ k )ū k−1 + τ kũ k andv k := (1 − τ k )v k−1 + τ kṽ k . end for Output: The primal sequence x k withx k := (ū k ,v k ). Proof If we define τ k := 1 t k , then τ 0 = 1, and by Step 7 of Algorithm 2, one has τ 2 k+1 = (1 − τ k+1 )τ 2 k . Moreover, if we defineλ k := 1 τ k λ k − (1 − τ k )λ k , thenλ 0 =λ 0 = λ 0 . Using Step 7 of Algorithm 2, we can also deriveλ k+1 = 1 τ k+1 λ k+1 − (1 − τ k+1 )λ k+1 ) =λ k − 1 τ k λ k+1 −λ k . By (13), we have d γ (λ) ≤ d(λ)+γD U ≤ d ⋆ +γD U :=d ⋆ γ . Hence,d ⋆ γ −d γ (λ) ≥ 0 for any λ ∈ R n . For i = 0, · · · , k, let ℓ γ i (λ) := d 1 γ (λ i ) + ∇d 1 γ (λ i ), λ −λ i + d 2 (λ i+1 )+ ∇d 2 (λ i+1 ), λ−λ i+1 + c, λ . 
Then, from (21) with 0 < η i ≤ γL −1 d 1 , and ℓ γ i (λ i ) = d 1 γ (λ i )+ ∇d 1 γ (λ i ), λ i −λ i +d 2 (λ i+1 )+ ∇d 2 (λ i+1 ), λ i −λ i+1 + c, λ ≥ d 1 γ (λ i ) + d 2 (λ i ) + c, λ = d γ (λ i ), we havē d ⋆ γ −d γ (λ i+1 ) ≤d ⋆ γ −ℓ γ i (λ) − η −1 i λ i+1 −λ i , λ−λ i − 1 2η i λ i+1 −λ i 2 , d ⋆ γ − d γ (λ i+1 ) ≤d ⋆ γ −d γ (λ i )−η −1 i λ i+1 −λ i , λ i −λ i − 1 2η i λ i+1 −λ i 2 .(34) Multiplying the first inequality of (34) by τ i and the second one by (1 − τ i ) for τ i ∈ (0, 1) and summing the results up, we obtain d ⋆ γ − d γ (λ i+1 ) ≤ (1 − τ i )[d ⋆ γ − d γ (λ i )] + τ i [d ⋆ γ − ℓ γ i (λ)] + 1 η i λ i+1 −λ i ,λ i − (1 − τ i )λ i − τ i λ − 1 2η i λ i+1 −λ i 2 2 = (1 − τ i ) d ⋆ γ − d γ (λ i ) + τ i d ⋆ γ − ℓ γ i (λ) + τ i 2η i λ i − λ 2 − λ i − 1 τ i (λ i+1 −λ i ) − λ 2 ,(35)whereλ i := 1 τ i λ i − (1 − τ i )λ i . Now, letλ i+1 =λ i − 1 τ i (λ i+1 −λ i ) as stated above. Then, (35) leads tō d ⋆ γ −d γ (λ i+1 ) ≤ (1−τ i ) d ⋆ γ −d γ (λ i ) +τ i d ⋆ γ −ℓ γ i (λ) + τ 2 i 2η i λ i −λ 2 − λ i+1 −λ 2 . Now, since τ 2 i = (1 − τ i )τ 2 i−1 and η i ≤ η i−1 , we have η i (1−τ i ) τ 2 i ≤ η i−1 τ 2 i−1 . Then, sincē d ⋆ γ − d γ (λ i ) ≥ 0, the last inequality implies η i τ 2 i d ⋆ γ − d γ (λ i+1 ) ≤ η i−1 τ 2 i−1 d ⋆ γ − d γ (λ i ) + η i τ i d ⋆ γ − ℓ γ i (λ) + 1 2 λ i − λ 2 − λ i+1 − λ 2 . Summing up this inequality from i = 0 to k, and using the fact that τ 0 = 1, we obtain η k τ k d ⋆ γ − d γ (λ k+1 ) ≤ η 0 (1 − τ 0 ) τ 2 0 d ⋆ γ − d γ (λ k ) + k i=0 η i τ i d ⋆ γ − ℓ γ i (λ) + 1 2 λ 0 − λ 2 − λ k+1 − λ 2 ≤ k i=0 η i τ i d ⋆ γ − ℓ γ i (λ) + 1 2 λ 0 − λ 2 .(36) Similar to the proof of (26), we have ℓ γ i (λ) = g(ũ i ) + h(ṽ i ) − Aũ i + Bṽ i − c, λ + γp U (ũ i ). 
Next, using the convexity of g and h, and p U (ũ i ) ≥ 0, the last inequality implies k i=0 η i τ i d ⋆ γ − ℓ γ i (λ) = k i=0 η i τ i d ⋆ γ − g(ũ i ) − h(ṽ i ) + Aũ i + Bṽ i − c, λ − γp U (ũ i ) ≤ S k d ⋆ γ − g(ū k ) − h(v k ) + Aū k + Bv k − c, λ .(37) Substituting (37) into (36) and noting thatd ⋆ γ ≥ d γ (λ k+1 ), f (x k ) = g(ū k ) + h(v k ) and f ⋆ = d ⋆ =d ⋆ γ − γD U , we have f (x k ) − f ⋆ ≤ Aū k + Bv k − c, λ + 1 2S k λ 0 − λ 2 + γD U .(38) Moreover, we have f ⋆ ≤ L(x, λ ⋆ ) = f (x) − Au + Bv − c, λ ⋆ for x ∈ X . Substituting x :=x k , u :=ū k and v :=v k into this inequality we get f ⋆ ≤ f (x k ) − Aū k + Bv k − c, λ ⋆ .(39) Combining (38) and (39), we obtain Aū k + Bv k − c, λ ⋆ − λ − 1 2S k λ 0 − λ 2 − γD U ≤ 0, ∀λ ∈ R n .(40) Hence, by maximizing the left-hand side over λ ∈ R n , we finally get max λ∈R n Aū k + Bv k − c, λ ⋆ − λ − 1 2S k λ 0 − λ 2 − γD U ≤ 0, Solving the maximization problem in this inequality, we can show that Aū k + Bv k − c ≤ 2 λ 0 − λ ⋆ S k + γD U S k .(41) We note that t k updated by Step 6 satisfies: k+1 2 ≤ t k ≤ k+1, and 0 < η k ≤ γL −1 d 1 . Hence, S k = k i=0 w i = k i=0 t i η i ≥ γ k i=0 i+1 2L d 1 = γ(k+1)(k+2) 4L d 1 . Using this estimate into (41), we get the second estimate of (33). To prove the first estimate of (33), we note from (38) with λ := 0 n that f (x k ) − f ⋆ ≤ 1 2S k λ 0 2 + γD U ≤ 2L d 1 γ(k + 1)(k + 2) λ 0 2 + γD U . Combining this estimate, the second estimate of (33), and (8), we obtain the first estimate of (33). Let us choose γ > 0 such that 8L d 1 r 0 γ(k+1)(k+2) = 4L d 1 D U (k+1)(k+2) , where r 0 := max λ 0 , λ 0 − λ ⋆ . Then, γ = 4r 0 √ L d 1 √ D U (k+1)(k+2) . Substituting this γ into (33), we obtain      |f (x k ) − f ⋆ | ≤ max 9r 0 √ L d 1 D U 2 √ (k+1)(k+2) , 4 λ ⋆ √ L d 1 D U √ (k+1)(k+2) ≤ ǫ Aū k + Bv k − c ≤ 4 √ L d 1 D U √ (k+1)(k+2) ≤ ǫ. 
Hence, the worst-case complexity of Algorithm 2 to achieve the ǫ-solutionx k is O √ L d 1 D U ǫ R 0 , where R 0 := max 4, 9 2 λ 0 , 9 2 λ 0 − λ ⋆ , 4 λ ⋆ . In this case, we also have γ = ǫ D U . Remark 2 We note that the bounds in Theorems 1 and 2 only essentially depend on the prox-diameter D U of U , but not of V. Since we can exchange g and h in the alternating step, we can choose U or V that has smaller prox-diameter in our algorithms to smooth its corresponding objective. Application to strongly convex objectives We assume that either g or h is strongly convex. Without loss of generality, we can assume that g is strongly convex with the convexity parameter µ g > 0 but h remains non-strongly convex, then the dual component d 1 is concave and smooth. Its gradient ∇d 1 (λ) = −Au * (λ) is Lipschitz continuous with the Lipschitz constant L d 1 := A 2 µ g . In this case, we can modified Algorithms 1 and 2 at the following steps to capture this assumption. (7). -Step 1: Choose L such that 0 < L ≤ L d 1 := A 2 µ g . -Step 4: Computeũ k =û k+1 = u * (λ k ) = u ♯ (A Tλ k ) defined by-Step 5: Choose η k ∈ (0, L −1 d 1 ]. We call this modification the strongly convex variant of Algorithms 1 and 2, respectively. In this case, we obtain the following convergence result, which is a direct consequence of Theorems 1 and 2. Corollary 1 Let g be strongly convex with the convexity parameter µ g > 0. Assume that x k is the sequence generated by the strongly convex variant of Algorithm 1. Then    |f (x k ) − f ⋆ | ≤ A 2 µ g (k+1) max λ 0 2 , 2 λ ⋆ λ 0 − λ ⋆ , Aū k + Bv k − c ≤ 2 A 2 λ 0 −λ ⋆ µ g (k+1) . (42) Consequently, the worst-case iteration-complexity of this variant to achieve an ǫ- solutionx k of (1) is O A 2 R 0 µ g ǫ , where R 0 := max λ 0 2 , 2 λ ⋆ λ 0 − λ ⋆ . Alternatively, assume that x k is the sequence generated by the strongly convex variant of Algorithm 2. 
Then

    |f(x̄_k) − f^⋆| ≤ (2‖A‖²/(µ_g (k+1)(k+2))) max{ ‖λ_0‖², 4‖λ^⋆‖ ‖λ_0 − λ^⋆‖ },
    ‖A ū_k + B v̄_k − c‖ ≤ 8‖A‖² ‖λ_0 − λ^⋆‖ / (µ_g (k+1)(k+2)).    (43)

Consequently, the worst-case iteration-complexity of this variant to achieve an ǫ-solution x̄_k of (1) is O(‖A‖ √(R_0/(µ_g ǫ))), where R_0 := max{ 2‖λ_0‖², 8‖λ^⋆‖ ‖λ_0 − λ^⋆‖ }.

Remark 3 It is important to note that, even when h is not strongly convex, our accelerated primal-dual AMA algorithm still achieves the O(1/√ǫ) worst-case iteration-complexity, which is different from existing dual accelerated schemes [3,11,10,15]. In addition, if h is also strongly convex, then the sharp operator v^♯(·) of h_V is well-defined and single-valued without requiring Assumption A.2. We note that the results presented in Corollary 1 can be considered as primal-dual variants of the AMA methods in [9], while the results presented in Theorems 1 and 2 are an extension to the non-strongly convex case.

Concluding remarks

We have introduced a new weighted averaging scheme, and combined the AMA idea with Nesterov's smoothing technique to develop new primal-dual AMA methods, Algorithm 1 and Algorithm 2, for solving prototype constrained convex optimization problems of the form (1) without a strong convexity assumption. We have then incorporated Nesterov's accelerated step into Algorithm 1 to improve the worst-case iteration-complexity of the primal sequence from O(1/ǫ²) (resp., O(1/ǫ)) to O(1/ǫ) (resp., O(1/√ǫ)). Our complexity bounds are given directly for the primal objective residual and the primal feasibility gap of (1), which is new. Interestingly, the O(1/√ǫ) complexity bound is achieved with only the strong convexity of g or h, but not both of them. We will extend this idea to other splitting schemes, such as alternating direction methods of multipliers, and to other sets of assumptions, such as Hölder continuity of the dual gradient, in forthcoming work.

A Appendix: The proof of Lemma 1

The concavity and smoothness of d^1_γ is trivial [13].
In addition, the equivalence between the AMA scheme (16) and the forward-backward splitting method was proved in, e.g., [18,9]. Let g U ,γ := g γ + δ U and h V := h + δ V . We first write the optimality condition for the two convex subproblems in (16) as ∇g U ,γ (û k+1 ) − A Tλ k = 0, and ∇h V (v k+1 ) − B Tλ k − η k B T (c − Aû k+1 − Bv k+1 ). Using the third line of (16) we obtain from the last expressions that ∇g U ,γ (û k+1 ) = A Tλ k , and ∇h V (v k+1 ) = B T λ k+1 , which are equivalent tô u k+1 = ∇g U ,γ * (A Tλ k ), andv k+1 = ∇h V * (B T λ k+1 ). Multiplying these expressions by A and B, respectively, and adding them together, and then subtracting to c, we finally obtain η −1 k (λ k+1 −λ k ) = c − Aû k+1 − Bv k+1 = c − A∇g U ,γ * (A Tλ k ) − B∇h V * (B T λ k+1 ).(44) Now, from the definition (4) of d 1 γ and d 2 , we have A∇g U ,γ * (A Tλ k ) = ∇d 1 (λ k ) and B∇h V * (B T λ k+1 ) = ∇d 2 (λ k+1 ). Substituting these relations into (44), we get η −1 k (λ k+1 −λ k ) = c − ∇d 1 γ (λ k ) − ∇d 2 (λ k+1 ).(45) Next, under the condition (20), we can derive d 1 γ (λ k ) + ∇d 1 γ (λ k ), λ −λ k = d 1 γ (λ k )+ ∇d 1 γ (λ k ), λ k+1 −λ k + ∇d 1 γ (λ k ), λ−λ k+1 = Q γ L k (λ k+1 ;λ k ) + ∇d 1 γ (λ k ), λ − λ k+1 + L k 2 λ k+1 −λ k 2 (20) ≤ d 1 γ (λ k+1 ) + ∇d 1 γ (λ k ), λ − λ k+1 + L k 2 λ k+1 −λ k 2 .(46) Let ℓ γ k (λ) := d 1 γ (λ k ) + ∇d 1 γ (λ k ), λ −λ k + d 2 (λ k+1 ) + ∇d 2 (λ k+1 ), λ − λ k+1 + c, λ . Using this experesion in (46), and then combining the result with (45) and d γ (·) = d 1 γ (·) + d 2 (·) + c, · , we finally get ℓ γ k (λ) ≤ d γ (λ k+1 ) + ∇d 1 γ (λ k ) + ∇d 2 (λ k+1 ) − c, λ − λ k+1 + L k 2 λ k+1 −λ k 2 = d γ (λ k+1 ) − η −1 k (λ k+1 −λ k ), λ − λ k+1 + L k 2 λ k+1 −λ k 2 = d γ (λ k+1 ) − η −1 k (λ k+1 −λ k ), λ −λ k − 1 η k − L k 2 λ k+1 −λ k 2 , which is the first inequality of (21). 
The second inequality of (21) follows from the first one, d 1 γ (λ k ) + ∇d 1 γ (λ k ), λ −λ k ≥ d 1 γ (λ) and d 2 (λ k+1 ) + ∇d 2 (λ k+1 ), λ − λ k+1 ≥ d 2 (λ) due to the concavity of d 1 γ and d 2 , respectively. Primal Solution Recovery in Alternating Minimization Algorithms Convex analysis and monotone operators theory in Hilbert spaces. H Bauschke, P Combettes, Springer-VerlagBauschke, H., Combettes, P.: Convex analysis and monotone operators theory in Hilbert spaces. Springer-Verlag (2011) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. A Beck, M Teboulle, SIAM J. Imaging Sciences. 21Beck, A., Teboulle, M.: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sciences 2(1), 183-202 (2009) A fast dual proximal gradient algorithm for convex minimization and applications. A Beck, M Teboulle, Oper. Res. Letter. 421Beck, A., Teboulle, M.: A fast dual proximal gradient algorithm for convex minimization and applications. Oper. Res. Letter 42(1), 1-6 (2014) Constrained Optimization and Lagrange Multiplier Methods (Optimization and Neural Computation Series). D P Bertsekas, Athena Scientific. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods (Optimiza- tion and Neural Computation Series). Athena Scientific (1996) A first-order primal-dual algorithm for convex problems with applications to imaging. A Chambolle, T Pock, Journal of Mathematical Imaging and Vision. 401Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40(1), 120-145 (2011) Signal recovery by proximal forward-backward splitting. P Combettes, J.-C , P , Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer-VerlagCombettes, P., J.-C., P.: Signal recovery by proximal forward-backward splitting. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185-212. 
Springer-Verlag (2011) Variable metric forward-backward splitting with applications to monotone inclusions in duality. P L Combettes, B C Vu, Optimization. 639Combettes, P.L., Vu, B.C.: Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 63(9), 1289-1318 (2014) F Facchinei, J S Pang, Finite-dimensional variational inequalities and complementarity problems. Springer-Verlag1Facchinei, F., Pang, J.S.: Finite-dimensional variational inequalities and complementarity problems, vol. 1-2. Springer-Verlag (2003) Fast Alternating Direction Optimization Methods. T Goldstein, B Odonoghue, S Setzer, SIAM J. Imaging Sci. 73Goldstein, T., ODonoghue, B., Setzer, S.: Fast Alternating Direction Optimization Meth- ods. SIAM J. Imaging Sci. 7(3), 1588-1623 (2012) Iteration complexity analysis of dual first order methods for convex programming. I Necoara, A Patrascu, arXiv:1409.1462arXiv preprintNecoara, I., Patrascu, A.: Iteration complexity analysis of dual first order methods for convex programming. arXiv preprint arXiv:1409.1462 (2014) Applications of a smoothing technique to decomposition in convex optimization. I Necoara, J Suykens, IEEE Trans. Automatic control. 5311Necoara, I., Suykens, J.: Applications of a smoothing technique to decomposition in convex optimization. IEEE Trans. Automatic control 53(11), 2674-2679 (2008) Problem Complexity and Method Efficiency in Optimization. A Nemirovskii, D Yudin, Wiley InterscienceNemirovskii, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley Interscience (1983) Smooth minimization of non-smooth functions. Y Nesterov, Math. Program. 1031Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127-152 (2005) Proximal algorithms. N Parikh, S Boyd, Foundations and Trends in Optimization. 13Parikh, N., Boyd, S.: Proximal algorithms. 
Foundations and Trends in Optimization 1(3), 123-231 (2013) Dual fast projected gradient method for quadratic programming. R A Polyak, J Costa, J Neyshabouri, Optimization Letters. 74Polyak, R.A., Costa, J., Neyshabouri, J.: Dual fast projected gradient method for quadratic programming. Optimization Letters 7(4), 631-645 (2013) Monotone operators and the proximal point algorithm. R Rockafellar, SIAM Journal on Control and Optimization. 14Rockafellar, R.: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 14, 877-898 (1976) R T Rockafellar, Convex Analysis. Princeton University Press28Rockafellar, R.T.: Convex Analysis, Princeton Mathematics Series, vol. 28. Princeton University Press (1970) Relaxation methods for problems with strictly convex cost and linear constraints. P Tseng, D Bertsekas, Math. Oper. Research. 163Tseng, P., Bertsekas, D.: Relaxation methods for problems with strictly convex cost and linear constraints. Math. Oper. Research 16(3), 462-481 (1991) Primal-Dual Interior-Point Methods. S Wright, SIAM PublicationsPhiladelphiaWright, S.: Primal-Dual Interior-Point Methods. SIAM Publications, Philadelphia (1997) A universal primal-dual convex optimization framework. A Yurtsever, Q Tran-Dinh, V Cevher, Proc. of 29th Annual Conference on Neural Information Processing Systems (NIPS2015). of 29th Annual Conference on Neural Information essing Systems (NIPS2015)Montreal, CanadaYurtsever, A., Tran-Dinh, Q., Cevher, V.: A universal primal-dual convex optimization framework. Proc. of 29th Annual Conference on Neural Information Processing Systems (NIPS2015), Montreal, Canada, 2015.
[]
[ "High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium", "High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium" ]
[ "Alex H Barnett \nDepartment of Mathematics\nDartmouth College\n03755HanoverNH\n", "Bradley J Nelson \nInstitute for Computational and Mathematical Engineering\nStanford University\n94305StanfordCA\n", "J Matthew Mahoney \nDepartment of Neurological Sciences\nUniversity of Vermont\n05405BurlingtonVT\n" ]
[ "Department of Mathematics\nDartmouth College\n03755HanoverNH", "Institute for Computational and Mathematical Engineering\nStanford University\n94305StanfordCA", "Department of Neurological Sciences\nUniversity of Vermont\n05405BurlingtonVT" ]
[]
We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (∆+E + x 2 )u(x 1 , x 2 ) = 0 where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10 2 nodes, with an effort that is independent of the frequency parameter E. By combining with high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.
10.1016/j.jcp.2015.05.034
[ "https://arxiv.org/pdf/1409.7423v1.pdf" ]
52,837,329
1409.7423
051d8608a435806eb798e8b7f6673d509565d1cc
High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium

Alex H. Barnett, Department of Mathematics, Dartmouth College, Hanover, NH 03755
Bradley J. Nelson, Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA 94305
J. Matthew Mahoney, Department of Neurological Sciences, University of Vermont, Burlington, VT 05405

Keywords: scattering, acoustic, Helmholtz, graded-index, refraction, gravity, quantum, integral equation
2010 MSC: 65N38, 65N80, 34M60, 65D20

We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (∆ + E + x_2) u(x_1, x_2) = 0 where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10^2 nodes, with an effort that is independent of the frequency parameter E. By combining with high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.

Introduction

Problems involving time-harmonic waves in media whose wave speed or refractive index varies continuously in a layered fashion are common in both the natural and engineered worlds. In acoustics, underwater sound propagation [1,2], and environmental noise modeling in the presence of a thermal gradient [3] both involve continuously stratified wave speeds.
In electromagnetics, continuously stratified media occur in ionospheric propagation [4] and nano-scale optical devices (see [5] and references within). In elastodynamics, similar models play important roles in seismology, since wave speed grows in a piecewise continuous fashion with depth into the earth [6,Sec. 2.5.3], and in designing functionally graded materials [7]. In quantum physics the same equations as in acoustics arise when gravitational or electric fields influence the motion of fixed energy particles [8]. In each case, when the varying medium is acoustically large (many wavelengths across), or unbounded, accurate numerical solution of wave propagation and scattering remains challenging. We will solve the following scalar-wave exterior boundary value problem (BVP), where Ω ⊂ R^2 is a given bounded obstacle with smooth boundary ∂Ω, and f is smooth Dirichlet data on ∂Ω:

    (∆ + k(x_2)^2) u(x_1, x_2) = 0,    x := (x_1, x_2) ∈ R^2 \ Ω,    (1)
    u = f on ∂Ω,    (2)

where ∆ := ∂^2/∂x_1^2 + ∂^2/∂x_2^2 is the Laplace operator, with the specific vertical wavenumber variation k(x_2) given by

    k(x_2)^2 = E + x_2,    (3)

and outgoing radiation conditions for u. The latter, given in Definition 1, are required for uniqueness of the solution. In applications the potential u represents pressure, wavefunction, or a component of electric or magnetic field. The general relationship k = ω/c, where ω is frequency and c wave speed, means that in the frequency-domain (fixed ω) case, k is proportional to the refractive index and inversely proportional to the wave speed.
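Since k(x_2)^2 = E + x_2, the local wavelength at a height x_2 in the oscillatory region x_2 > −E is 2π/√(E + x_2), so waves shorten with height. A tiny helper makes this concrete (the values of E and x_2 below are illustrative, not taken from the paper):

```python
import math

# Local wavelength implied by the profile (3): k(x2)^2 = E + x2.
def local_wavelength(E, x2):
    k2 = E + x2
    if k2 <= 0:
        raise ValueError("evanescent region: x2 <= -E has no real wavelength")
    return 2 * math.pi / math.sqrt(k2)

wl0 = local_wavelength(100.0, 0.0)      # at height x2 = 0, k = 10
wl_up = local_wavelength(100.0, 300.0)  # higher up, k = 20: shorter waves
```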
In (3) the inverse square of wave speed (sometimes called sloth) is linear in the vertical (x_2) coordinate, a model found in seismology [6].
We will also solve the interior Dirichlet BVP, with applications to graded-index optics, and to transverse acoustic or optical modes in a bending waveguide in 3D approximated by an "equivalent profile" in which the square of refractive index varies linearly [12]. We propose boundary integral equations (BIE) as an efficient and accurate numerical method to solve (1)- (2). This demands being able to compute values and first derivatives of Φ(x, y), the fundamental solution to (1), where x, y ∈ R 2 are target and source points respectively. Recall the definition that, for a source point y ∈ R 2 , Φ(·, y) is the radiative solution to the PDE − (∆ x + k(x 2 ) 2 )Φ(x, y) = δ(x − y) , where δ is the Dirac delta distribution in R 2 . In contrast to the common situation, Φ is no longer an elementary or special function of distance |x − y|; this is clear in Fig. 2. A large part of our contribution is an efficient numerical method for evaluation of Φ, by applying quadrature to the Fourier transform of an analytical solution to the timedependent Schrödinger equation in a linear potential [13]. Unfortunately the integral is highly oscillatory, especially as E grows, thus we use deformation of the contour into the complex plane, passing through the saddle (stationary phase) points and using the trapezoid rule [15] to achieve exponential accuracy with effort independent of E. The saddle points will have an elegant interpretion as the classical ray travel times. The cost of each evaluation of Φ is only a few hundred complex exponential evaluations, hence we achieve typically 10 5 evaluations per second. 
Relation to previous work on frequency domain wave propagation in layered media

Accurate numerical propagation of high-frequency waves in a variable medium is numerically challenging: conventional "volume" discretization methods such as finite differencing (FD) [16] and finite elements (FEM) require several degrees of freedom per wavelength to achieve reasonable accuracy; moreover, in order to avoid "pollution errors" the degrees of freedom per wavelength must grow with frequency [17]. The resulting linear systems are so huge that iterative solvers are almost always used, and yet preconditioning has mostly been unsuccessful for the high-frequency Helmholtz equation, especially for high-order discretizations, and is a topic of current research [18]. The radiation condition must still be approximated via artificial absorbing boundary conditions (e.g. perfectly matched layers) [19] [9, Sec. 4.7]. At high frequencies, the ray approximation is useful [6] and geometric diffraction theory can approximate the interaction with simple obstacles. However, such approximations break down at turning points (such as at x_2 = −E) and for geometric details on the wavelength scale. Parabolic approximation (i.e. one-way wave equation) methods [20] handle only a limited range of propagation directions, and cannot account for back reflections. Several of these methods are reviewed in the underwater acoustic and elastic contexts in [21]. When the medium (PDE coefficient) is piecewise constant, reformulation as a boundary integral equation (BIE) [22, Ch. 3] [23] [9, Ch.
8] is popular due to several advantages:

• the unknowns live on the boundary (or material interfaces) rather than the volume; this reduction in dimension by one greatly reduces the number of unknowns N, especially at high frequencies, and simplifies the geometric issues (meshing, etc.);

• when a second-kind formulation is used, it remains well-conditioned (and hence iterative methods rapidly convergent) independent of the number of discretization nodes used;

• radiation conditions are already built into the representation and need not be enforced, unlike in FD or FEM;

• fast algorithms such as the fast multipole method [24] or fast direct solvers [25, 26] can reduce the solution time to O(N) for low frequencies;

• in the two-dimensional case, high-order quadratures on boundary curves are easy to implement [27, 28].

This has enabled the scattering from objects (in a uniform medium) thousands of wavelengths across to be solved efficiently to many digits of accuracy (e.g. see [29]). In contrast, we care about scattering in a continuously-varying medium. If this medium were constant outside a bounded region, a Lippmann-Schwinger (volume integral) equation [22, Ch. 8] could be used, or coupling of direct discretization methods to BIE [30, 31]. Tools also exist for BIE within media with a finite number of constant layers [32]. The method of the present paper extends the above advantages of BIEs to a particular problem where the stratified medium variation, and the resulting wave propagation, is smooth and unbounded in all directions. We are not aware of previous applications of BIE to such a case. The only similar work we know of is that of Premat-Gabillet in their environmental acoustics code Meteo-BEM [3], who use BIEs with the Green's function for a linear wave speed profile.
However, they approximate the Green's function using a discrete sum over 1D eigenfunctions, an approach that works only when waves are trapped by a ground plane; this would fail in the case of unbounded propagation. Also, since their BIE is of Fredholm first kind, the convergence rate of an iterative solver would be poor.

Remark 1. Our approach to evaluating the Green's function is reminiscent of the Sommerfeld integral (spectral representation) commonly used for layered media [9, Ch. 2], [32]. Yet, although both methods exploit numerical quadrature of a contour integral, they are distinct, with crucial differences. In the Sommerfeld approach the integration variable is a transverse wavenumber, and a vertical ODE has to be solved for each contour quadrature node; for the profile (3) this would demand Airy functions. The number of quadrature nodes needed grows linearly with wavenumber, for fixed source-target separation. In addition, the decay of the Sommerfeld integrand is known to be very slow when the vertical separation is small, demanding various windowing approximations [32]. In contrast, in our proposed scheme the integration variable represents time, the integrand involves only exponentials, and by choosing appropriate complex contours the number of nodes is independent of wavenumber. Of course, the Sommerfeld approach has the advantage over our scheme that, assuming the ODEs can be solved fast enough, arbitrary profiles k(x_2) could be handled.

Outline of the paper

We use the remainder of the introduction to state a radiation condition that allows a unique solution to our BVP (this is proved in Appendix A). In Sec. 2 we present an integral formula for the fundamental solution (4) for the PDE (1); here the radiation condition derives from causality in the time domain. We then use potential theory to reformulate the BVP as an integral equation on ∂Ω in Sec.
3, and present its high-order numerical solution, which demands many evaluations of the fundamental solution. Sec. 4 is the key part of the paper, in which we present efficient new contour quadrature algorithms for this task. In Sec. 5 we present numerical tests of convergence and speed for both the interior and exterior BVPs. We draw some conclusions and discuss future work in Sec. 6.

The radiation condition for the BVP

Recall that for the constant-k Helmholtz equation (∆ + k^2)u = 0 in R^2, the Sommerfeld radiation condition [22, (3.62)] is ∂u/∂r − iku = o(r^{−1/2}), holding uniformly in angle, where r := |x|. This corresponds to outgoing waves at infinity. It guarantees a unique solution to exterior BVPs [22, Sec. 3.2], for instance the case of Dirichlet data (2). Radiation conditions are also known for stratified media that are eventually constant or tend to a constant in upper and lower half-planes [33, 34], for variable media that tend towards a constant at large distances [35], and for scattering from unbounded rough surfaces in a uniform medium [36]. For our exterior gravity Helmholtz equation there are no trapped waveguide modes, because the refractive index is monotonic in x_2, simplifying the situation from that of [33, 34]. And yet, we have not been able to find a radiation condition in the literature that applies in our case, where the wavenumber is unbounded in one direction. Hence we propose the following new radiation condition, recalling the notation x = (x_1, x_2).

Definition 1 (Radiation condition).
A solution u to (1) in the exterior of a bounded domain Ω ⊂ R^2, with medium defined by (3), is called radiative if

\[ \lim_{x_2\to+\infty} \frac{1}{k(x_2)} \int_{-\infty}^{\infty} \Big| \frac{\partial u}{\partial x_2} - i k(x_2)\, u \Big|^2 dx_1 = 0 \tag{5} \]

\[ \lim_{x_2\to-\infty} \int_{-\infty}^{\infty} \Big( |u|^2 + \Big| \frac{\partial u}{\partial x_2} \Big|^2 \Big)\, dx_1 = 0 \tag{6} \]

\[ \lim_{L\to\infty} \lim_{x_1\to\pm\infty} \int_{-L}^{L} \Big( |u|^2 + \Big| \frac{\partial u}{\partial x_1} \Big|^2 \Big)\, dx_2 = 0 \tag{7} \]

The first condition states that the flux is eventually upwards-going on positive horizontal slices; the other two guarantee enough decay that the flux tends to zero on the sides and bottom of a large rectangular box. Note that these conditions could most likely be tightened; however, they are adequate for our purpose, namely to prove in Appendix A the following uniqueness result, analogous to [22, Thm. 3.7] for the Helmholtz equation. This places our BVP on a more rigorous footing.

Theorem 1. There is either zero or one radiative exterior solution to (1)-(3).

The fundamental solution and its ray interpretation

In this section we derive an integral formula for the fundamental solution for our PDE (1) in R^2, and give some of its properties. In fact, since it requires no extra effort, we work in R^n and then specialize to n = 2. Let x = (x', x_n), where x' = (x_1, . . . , x_{n−1}) is the transverse coordinate and x_n is the vertical one. The gravity Helmholtz equation in R^n is (∆ + E + x_n)u(x) = 0. Recall that the fundamental solution is defined by (4). We will exploit causality in the time domain to obtain a solution with physically correct radiation conditions, so we call this the "causal" fundamental solution (although see Remark 3).

Lemma 1 (Bracher et al. [8]). The causal fundamental solution to the gravity Helmholtz equation (∆ + E + x_n)u(x) = 0 in R^n with source point y ∈ R^n is given by

\[ \Phi(x, y) = \frac{i}{(4\pi i)^{n/2}} \int_0^\infty \frac{1}{t^{n/2}} \exp\Big[ i \Big( \frac{|x - y|^2}{4t} + \Big( \frac{x_n + y_n}{2} + E \Big) t - \frac{1}{12} t^3 \Big) \Big] \, dt . \tag{8} \]

Its proof exploits the fact that the time-dependent Schrödinger equation has an analytically known fundamental solution in a linear potential.
We will show that the integral in (8) is in fact the Fourier transform from time t to energy E; note that this is distinct from the more usual connection of frequency-domain fundamental solutions to the wave equation, for instance in the Cagniard-de Hoop method [9, Sec. 4.2]. For convenience we simplify and rephrase the derivation of Bracher et al. [8] in a more mathematical language, and in dimensionless units. Our definitions of the Fourier transform from time to energy will be, in terms of a general function f,

\[ \int_{-\infty}^{\infty} \tilde f(t)\, e^{iEt} \, dt = f(E) , \qquad \frac{1}{2\pi} \int_{-\infty}^{\infty} f(E)\, e^{-iEt} \, dE = \tilde f(t) . \]

Similarly, our definition for spatial Fourier transforms is

\[ \int_{\mathbb{R}^n} \hat f(k)\, e^{ik\cdot x} \, dk = f(x) , \qquad \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} f(x)\, e^{-ik\cdot x} \, dx = \hat f(k) . \]

We now prove the lemma.

Proof. We will isolate the last coordinate with the notation x = (x', x_n) and y = (y', y_n). Suppressing for now the y dependence, but making the dependence on E explicit, the fundamental solution obeys (∆ + E + x_n) Φ(x', x_n; E) = −δ(x − y). The Fourier transform from E (energy) to t (time) turns this into

\[ (\Delta + i\partial_t + x_n)\, \tilde\Phi(x', x_n; t) = -\delta(x - y)\, \delta(t) \tag{9} \]

which is the fundamental solution for the time-dependent Schrödinger equation in a linear potential. We may solve this exactly by performing a Fourier transform in space, from coordinates (x', x_n) to wavevector (k', K),

\[ \big( -|k'|^2 - K^2 + i(\partial_t + \partial_K) \big)\, \hat{\tilde\Phi}(k', K; t) = -(2\pi)^{-n} e^{-ik\cdot y}\, \delta(t) . \]

The only derivatives form an advection term causing constant unit speed drift in wavevector in the positive x_n direction, so we shift to a frame moving in wavevector, substituting κ = K − t (in physics this is called a gauge change [8, App. A]). To change from coordinates (k', K; t) to (k', κ; t) we then need ∂/∂t|_κ = ∂/∂t|_K + ∂/∂K. This gives the simple first-order ODE in time at each wavevector (k', κ) ∈ R^n,

\[ \big( -|k'|^2 - (\kappa + t)^2 + i\partial_t \big)\, \hat{\tilde\Phi}(k', \kappa; t) = -(2\pi)^{-n} e^{-i(k'\cdot y' + \kappa y_n)}\, \delta(t) . \]

For each (k', κ) ∈ R^n we seek a causal solution with \hat{\tilde\Phi}(k', κ; t) = 0 for all t < 0.
The right-hand side is an impulsive excitation at t = 0 which gives the ODE solution

\[ \hat{\tilde\Phi}(k', \kappa; t) = \frac{i\, e^{-i(k'\cdot y' + \kappa y_n)}}{(2\pi)^n} \exp\Big[ i \Big( -|k'|^2 t - \kappa^2 t - \kappa t^2 - \tfrac{1}{3} t^3 \Big) \Big] , \quad t > 0 . \]

Changing back to the original wavevector coordinates via κ = K − t gives

\[ \hat{\tilde\Phi}(k', K; t) = \frac{i\, e^{i(y_n t - \frac{1}{3} t^3)}}{(2\pi)^n}\, e^{-ik\cdot y} \exp\Big[ i \Big( -|k'|^2 t - K^2 t + K t^2 \Big) \Big] , \quad t > 0 . \]

The final exponential is an (imaginary) gaussian in Fourier space, whose inverse spatial Fourier transform is known exactly. The middle exponential term causes a real space translation by y. This gives, after simplification,

\[ \tilde\Phi(x, y; t) = \frac{i}{(4\pi i t)^{n/2}} \exp\Big[ i \Big( \frac{|x - y|^2}{4t} + \frac{x_n + y_n}{2}\, t - \frac{1}{12} t^3 \Big) \Big] , \quad t > 0 . \tag{10} \]

This is the fundamental solution to the time-dependent Schrödinger equation (9). An inverse Fourier transform in time returns to the frequency domain, giving the desired (8).

Remark 2 (plain Helmholtz equation). Applying the above technique to the constant-wavenumber Helmholtz equation (∆ + E)u = 0 gives the fundamental solution representation

\[ \Phi(x, y) = \frac{i}{(4\pi i)^{n/2}} \int_0^\infty \frac{1}{t^{n/2}} \exp\Big[ i \Big( \frac{|x - y|^2}{4t} + E t \Big) \Big] \, dt , \]

which is the same as (8) absent two terms. In the case n = 2, by changing variable to s = (2i√E/r) t, where r = |x − y|, we see that the above is the little-known Schläfli integral representation [37, (4) Sec. 6.21] for the radiative fundamental solution (i/4) H^{(1)}_0(√E r), where H^{(1)}_0 is the outgoing Hankel function of order zero.

Remark 3. We leave for future work a proof that the causal fundamental solution (8) satisfies our radiation condition in Definition 1, although physical intuition, the Helmholtz case, and numerical evidence strongly suggest that this is the case. A proof seems to demand stationary phase estimates beyond the scope of this work. A rigorous existence proof for the BVP (1)-(3) would follow, in an analogous fashion to [22, Thm. 3.9].
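As an independent sanity check (not part of the paper's code), the n = 2 radiative fundamental solution (i/4) H^{(1)}_0(√E r) of Remark 2 can be verified numerically: the sketch below builds J_0 and Y_0 from their standard ascending series, checks that (∆ + E)Φ ≈ 0 away from the source via a five-point finite-difference Laplacian, and checks the logarithmic singularity structure of (11), namely that Φ + (1/2π) log r tends to a constant as r → 0. All tolerances and test points are illustrative choices.

```python
import math

EULER_GAMMA = 0.5772156649015329

def J0(z, terms=30):
    # ascending series: J0(z) = sum_{m>=0} (-1)^m (z/2)^{2m} / (m!)^2
    s, term = 0.0, 1.0
    q = (z / 2.0) ** 2
    for m in range(terms):
        s += term
        term *= -q / ((m + 1) ** 2)
    return s

def Y0(z, terms=30):
    # ascending series: Y0 = (2/pi)[(ln(z/2)+gamma) J0(z)
    #                   + sum_{m>=1} (-1)^{m+1} H_m (z/2)^{2m} / (m!)^2]
    q = (z / 2.0) ** 2
    s, term, H = 0.0, 1.0, 0.0
    for m in range(1, terms):
        term *= q / m**2          # now (z/2)^{2m} / (m!)^2
        H += 1.0 / m              # harmonic number H_m
        s += (-1) ** (m + 1) * H * term
    return (2.0 / math.pi) * ((math.log(z / 2.0) + EULER_GAMMA) * J0(z) + s)

def Phi_helm(r, k):
    # radiative Helmholtz fundamental solution (i/4) H0^(1)(k r), n = 2
    z = k * r
    return 0.25j * (J0(z) + 1j * Y0(z))

E = 2.0
k = math.sqrt(E)
def P(x1, x2):
    return Phi_helm(math.hypot(x1, x2), k)

# five-point Laplacian check of (Delta + E) Phi = 0 away from the source
x0, h = (0.7, 0.4), 1e-3
lap = (P(x0[0]+h, x0[1]) + P(x0[0]-h, x0[1])
       + P(x0[0], x0[1]+h) + P(x0[0], x0[1]-h) - 4*P(*x0)) / h**2
pde_residual = abs(lap + E * P(*x0))

# log-singularity check: Phi + (1/2pi) log r should approach a constant
d1 = P(1e-3, 0.0) + math.log(1e-3) / (2*math.pi)
d2 = P(1e-5, 0.0) + math.log(1e-5) / (2*math.pi)
sing_gap = abs(d1 - d2)
```

The same finite-difference test applied to the gravity Helmholtz Φ of (8) would check the quadrature scheme of Sec. 4 end-to-end.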
The importance of (8) is that quadrature of this integral will provide us with an accurate numerical algorithm to evaluate the fundamental solution for n = 2 (Section 4). Finally we recall a property of Φ special to n = 2. Since the PDE has coefficients which vary as analytic functions of x_1 and x_2, the fundamental solution must have the form [38, Ch. 5]

\[ \Phi(x, y) = A(x, y)\, \frac{1}{2\pi} \log \frac{1}{|x - y|} + B(x, y) , \tag{11} \]

where A and B are analytic in both coordinates of both variables, and A(x, x) = 1 for all x ∈ R^2. Thus, as with the Laplace and Helmholtz equations, there is a (positive sign) logarithmic singularity at the source point.

Connection to ray dynamics, propagating and forbidden regions

In Fig. 2 we plot the fundamental solution, showing the different behaviors resulting from varying the height of the source location y at fixed energy E. In panel (a) the radiation from the source point is visible, as is interference between upwards and downwards propagating waves. In panel (b) the source is closer to the turning height x_2 = −E, and ray trajectories have been superimposed showing the connection to classical dynamics. We now review this connection (see e.g. [39], [40, Sec. 4.5], [6, Sec. 5.1], [41]). Consider the general variable-coefficient Helmholtz equation

\[ (\Delta + k(x)^2)\, u = 0 . \]

When k is locally large, inserting Keller's traveling wave ansatz u(x) = a(x) e^{iφ(x)} into the PDE gives to leading order the eikonal equation |∇φ| = k(x), whose characteristics are rays given by evolving Hamilton's equations (here a dot indicates a time derivative),

\[ \dot{x} = \nabla_p H , \qquad \dot{p} = -\nabla_x H , \tag{12} \]

with the Hamiltonian H(x, p) = |p|^2 + V(x) and potential V(x) = −k(x)^2 + E, with (conserved) total energy H = E, where E is any constant. (Here the kinetic energy term corresponds to a particle of mass 1/2.) Another way to express this is via quantization, or "quantum-classical correspondence", which associates the operator i∇ with the momentum variable p.
This rigorous connection is the topic of semiclassical analysis [42]. Returning to our case of stratified k, and the constant E, given by (3), then V(x) = −x_2, and we see that rays evolve under a constant "gravitational" force field −∇V(x) = (0, 1) in the vertical direction, i.e. Hamilton's equations are ẋ = 2p and ṗ = (0, 1). To model the fundamental solution Φ(·, y), rays are launched from the source y, with initial momentum ρ = (ρ_1, ρ_2), hence have the Galilean solution

\[ x_1(t) = y_1 + 2\rho_1 t , \qquad x_2(t) = y_2 + 2\rho_2 t + t^2 . \tag{13} \]

Fig. 2 suggests that such rays predict the wavefronts and caustics of Φ, and that Φ is small in the "classically forbidden" region, which we call region F, defined in the following.

Proposition 1. Rays obeying (12) with Hamiltonian H(x, p) = |p|^2 − x_2 launched from y with total energy E cannot reach the forbidden region F, which is defined by x = (x_1, x_2) such that

\[ \frac{|x - y|}{2} > \frac{x_2 + y_2}{2} + E , \tag{14} \]

whose boundary is the parabola with focus y and directrix x_2 = −y_2 − 2E. Rays can reach any point in the complement of this region, which we will label region A, for "classically allowed".

We provide a proof, simplifying that of Bracher et al. [8], that introduces the concept of travel time, crucial to the later numerical evaluation.

Proof. We substitute the formulae for ρ_1 and ρ_2 from (13) into the expression ρ_1^2 + ρ_2^2 = E + y_2, expressing that the initial total energy is H(y, ρ) = E, to get the quadratic equation in t^2,

\[ \frac{t^4}{4} - b t^2 + a = 0 \tag{15} \]

where for later simplicity we define

\[ a := \frac{|x - y|^2}{4} , \qquad b := \frac{x_2 + y_2}{2} + E . \tag{16} \]

The positive solutions to (15) give the possible ray travel times from y to x at fixed E, being

\[ t_\pm = +\sqrt{ 2\big( b \pm \sqrt{b^2 - a} \big) } . \tag{17} \]

No real solutions are possible precisely when √a > b, which gives (14). The boundary, written 2√a = 2b, states that the distance from y to x equals the distance from x to the directrix line x_2 = −y_2 − 2E, defining a parabola.
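The travel-time formula can be checked directly against the ray dynamics: given an allowed target, solving the Galilean motion (13) for the launch momentum at each travel time of (17) must recover a ray with total energy E, and a forbidden target must yield no real travel times. A minimal sketch (the specific points and tolerances are illustrative):

```python
import math

def travel_times(x, y, E):
    # eq. (16)-(17): a = |x-y|^2/4, b = (x2+y2)/2 + E
    a = ((x[0]-y[0])**2 + (x[1]-y[1])**2) / 4.0
    b = (x[1] + y[1]) / 2.0 + E
    disc = b*b - a
    if disc < 0 or b < 0:
        return []   # classically forbidden: no real positive travel times
    return [math.sqrt(2.0*(b - math.sqrt(disc))),
            math.sqrt(2.0*(b + math.sqrt(disc)))]

E, y, x = 1.0, (0.0, 0.0), (1.0, 0.5)   # an allowed configuration
energy_errs = []
for t in travel_times(x, y, E):
    # invert the Galilean solution (13) for the launch momentum rho
    rho1 = (x[0] - y[0]) / (2.0*t)
    rho2 = (x[1] - y[1] - t*t) / (2.0*t)
    # the ray must start with total energy E: rho1^2 + rho2^2 = E + y2
    energy_errs.append(abs(rho1**2 + rho2**2 - (E + y[1])))

forbidden = travel_times((5.0, 0.0), (0.0, 0.0), 1.0)  # violates (14)
```

The empty result for the forbidden point agrees with the parabola criterion (14), since there |x − y|/2 = 2.5 exceeds (x_2 + y_2)/2 + E = 1.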
At the parabolic boundary the two travel times coalesce, i.e. t_− = t_+, causing a caustic (singularity in density) for the rays, which manifests itself as large amplitudes in the fundamental solution; see Fig. 2(a)-(b). We show in Fig. 2(c) a case where the source itself lies in the forbidden region. Here there are no classical rays and the wave leakage into the propagating region is exponentially small, occurring only in a single upwards direction. Finally we emphasize that time evolution appears in two different settings in this section: the time t in the integral (8) arising from the time-dependent Schrödinger equation, and the time variable t in the classical dynamics. We have chosen the dimensionless units (i.e. particle mass 1/2) so that they correspond.

Conversion to a boundary integral equation, and its numerical solution

We will reformulate the exterior Dirichlet BVP (1)-(3) as a Fredholm second-kind integral equation on ∂Ω. Since it provides us a useful numerical test case, we also do the same for the interior BVP. Recall that by standard elliptic PDE theory, given a compact domain Ω, the interior Dirichlet BVP has a unique solution for all E except at a countable set (the Dirichlet eigenvalues of the operator −∆ − x_2) that accumulates only at infinity [43, Thms. 4.10, 4.12]. Given the fundamental solution Φ(x, y), and a "density" function τ on the boundary curve ∂Ω, we define the standard single- and double-layer potential representations,

\[ (S\tau)(x) := \int_{\partial\Omega} \Phi(x, y)\, \tau(y) \, ds_y , \qquad (D\tau)(x) := \int_{\partial\Omega} \frac{\partial \Phi(x, y)}{\partial n_y}\, \tau(y) \, ds_y , \tag{18} \]

where n(y) is the outward-pointing unit normal vector at the point y ∈ ∂Ω, and ds the usual arc length element. One may interpret y as a source point and x as a target. Since limits of such potentials on the curve itself may depend on from which side it is approached, we define

\[ v_\pm(x) := \lim_{h\to 0^+} v(x \pm h\, n(x)) . \]
Letting S : C(∂Ω) → C(∂Ω) be the boundary integral operator with kernel Φ(x, y), and D : C(∂Ω) → C(∂Ω) be the boundary integral operator with kernel ∂Φ(x, y)/∂n(y) taken in the principal value sense, we have the jump relations

\[ (D\tau)_\pm(x) = (D\tau \pm \tfrac{1}{2}\tau)(x) , \tag{19} \]

\[ (S\tau)_\pm(x) = (S\tau)(x) . \tag{20} \]

For the exterior problem we represent the solution by a combined-field ("CFIE") potential,

\[ u = (D - i\eta S)\tau , \tag{21} \]

and substituting this into the boundary condition (2), using the exterior jump relations, gives the BIE for the unknown density τ,

\[ (\tfrac{1}{2} I + D - i\eta S)\tau = f \qquad \text{(exterior BIE)} , \tag{22} \]

where I is the identity. This mixture of double- and single-layer prevents a spurious resonance problem (for η = 0 the operator would be singular at interior Neumann eigenvalues), making the BIE a robust method for the BVP. The choice of the constant η is not crucial, but it is commonly scaled with the wavenumber [27]; our wavenumber varies in space, and we choose the typical value η = √E. Note that the correct sign of η is crucial for rapid convergence of iterative solvers at high frequency. For the interior BVP, the CFIE is not (usually) needed, so we set η = 0 and get

\[ (-\tfrac{1}{2} I + D)\tau = f \qquad \text{(interior BIE)} . \tag{23} \]

Note that the operator S is compact, and when ∂Ω is smooth the operator D is compact, making the above BIEs of Fredholm second kind. This has the well-known advantages over first-kind BIEs of stability under discretization, and a benign spectrum leading to rapid convergence for the iterative solution of the resulting linear system.

Numerical solution: Nyström method and quadrature

We first parametrize the smooth closed curve ∂Ω by a 2π-periodic function z : [0, 2π) → R^2 such that z(t) ∈ ∂Ω and |z'(t)| ≠ 0 for all t ∈ R. Changing variable to the parameter t turns (22) into an integral equation on the periodic interval [0, 2π),

\[ \frac{1}{2}\tau(t) + \int_0^{2\pi} \Big[ \frac{\partial \Phi(z(t), z(s))}{\partial n_{z(s)}} - i\eta\, \Phi(z(t), z(s)) \Big] |z'(s)|\, \tau(s) \, ds = f(t) , \quad \forall t \in [0, 2\pi) \tag{24} \]

The reparametrization of (23) is similar.
We can write both of these integral equations in the standard form

\[ \tau(t) + \int_0^{2\pi} K(t, s)\, \tau(s) \, ds = g(t) , \quad \forall t \in [0, 2\pi) \tag{25} \]

In the exterior case, we see from the presence of Φ and from (11) that K has a logarithmically singular kernel, i.e. K(s, t) ∼ log |s − t|; in the interior case the kernel of K is continuous at the diagonal but has a weaker singularity of the form |s − t|^2 log |s − t|, as with the Helmholtz equation [22, Sec. 3.5]. To achieve high-order convergence in either case when the data g is smooth, we will need to use a quadrature scheme accurate for kernels containing a periodized log singularity of the form

\[ K(t, s) = K_1(t, s) \log\Big( 4 \sin^2 \frac{s - t}{2} \Big) + K_2(t, s) \tag{26} \]

where K_1 and K_2 are smooth and 2π-periodic in both of their arguments. We apply the Nyström method [44, Sec. 12.3] to approximate the solution of (25) by that of a linear system, based upon an underlying quadrature rule. For this we use the periodic trapezoid rule,

\[ \int_0^{2\pi} \phi(t) \, dt \approx \frac{2\pi}{N} \sum_{j=1}^N \phi(s_j) , \quad \text{where } s_j = 2\pi j / N . \tag{27} \]

Enforcing (25) at the nodes gives

\[ \tau(s_i) + \int_0^{2\pi} K(s_i, s)\, \tau(s) \, ds = g(s_i) , \quad \forall i = 1, \dots, N . \tag{28} \]

Were K to possess a smooth kernel (i.e. K_1 ≡ 0), superalgebraic convergence would be achieved by applying (27) to the above integral, to give the square N-by-N linear system

\[ \tau_i + \sum_{j=1}^N A_{ij} \tau_j = g_i , \quad \forall i = 1, \dots, N \tag{29} \]

with the elements of the matrix given by

\[ A_{ij} = \frac{2\pi}{N} K(s_i, s_j) , \tag{30} \]

and where τ_j approximates τ(s_j) and the right-hand side vector has elements g_j = g(s_j). However, for general singular kernels of the form (26), the formula (30) fails to be accurate, and the diagonal entries would be infinite. Yet it is still possible to design a set of quadrature nodes to approximate the integral in (28) to high accuracy for kernels of the form (26). This is done by replacing a few of the trapezoid nodes s_j near the singularity s_i by a new set of auxiliary nodes and weights; we choose 16th-order Alpert end-correction nodes [45], of which 30 are required (15 on either side of the singularity).
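Stepping back from the singular corrections for a moment, the plain smooth-kernel Nyström scheme (25), (27)-(30) takes only a few lines. The following self-contained sketch (an illustrative model problem, not the paper's code) solves τ(t) + (1/2π) ∫₀^{2π} cos(t − s) τ(s) ds = (3/2) cos t, whose exact solution is τ(t) = cos t since the integral operator maps cos to π cos; with a trigonometric kernel the periodic trapezoid rule is exact, so even N = 16 recovers the solution to rounding error.

```python
import math

def nystrom_solve(kernel, g, N):
    # Nystrom discretization with the periodic trapezoid rule (27):
    # tau_i + sum_j (2*pi/N) K(s_i, s_j) tau_j = g(s_i)   [cf. (29)-(30)]
    s = [2.0*math.pi*j/N for j in range(1, N+1)]
    A = [[(2.0*math.pi/N)*kernel(s[i], s[j]) + (1.0 if i == j else 0.0)
          for j in range(N)] for i in range(N)]
    rhs = [g(si) for si in s]
    # plain Gaussian elimination with partial pivoting
    for c in range(N):
        p = max(range(c, N), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c+1, N):
            m = A[r][c] / A[c][c]
            for cc in range(c, N):
                A[r][cc] -= m * A[c][cc]
            rhs[r] -= m * rhs[c]
    tau = [0.0]*N
    for r in range(N-1, -1, -1):
        tau[r] = (rhs[r] - sum(A[r][c]*tau[c] for c in range(r+1, N))) / A[r][r]
    return s, tau

# model problem with smooth kernel K(t,s) = cos(t-s)/(2*pi)
s, tau = nystrom_solve(lambda t, u: math.cos(t - u)/(2.0*math.pi),
                       lambda t: 1.5*math.cos(t), 16)
err = max(abs(tau[j] - math.cos(s[j])) for j in range(16))
```

For the log-singular kernels (26) arising here, the matrix entries near the diagonal would be replaced by the Alpert-corrected ones described in the text.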
The auxiliary node nearest the target point is at a distance of around 10^{−3} δ from this target point, where δ ≈ (2π/N)|z'(s_i)| is the local underlying node spacing. The values of τ at these auxiliary nodes are related to the neighboring few elements of the vector {τ_j}_{j=1}^N using local Lagrange interpolation. The net effect is that the matrix A takes the form (30) away from the diagonal, but with corrected entries near the diagonal. The full formulae are presented in [28, Sec. 4]. This gives, for kernels of the form (26), high-order convergence of the error between τ_j and the true solution samples τ(s_j), of O(N^{−16} log N), for either the exterior or interior BIEs of interest. For the convergence theory see [45, Cor. 3.8] for the end-correction scheme, and Kress [46, Ch. 12]. Once the linear system (29) has been solved, the vector τ = {τ_j}_{j=1}^N may be used to reconstruct the scattered potential at any target location sufficiently far from ∂Ω, by substituting the same trapezoid rule into the integrals (18) in the representation (21), to get

\[ u(x) = \frac{2\pi}{N} \sum_{j=1}^N \Big[ \frac{\partial \Phi(x, z(s_j))}{\partial n_{z(s_j)}} - i\eta\, \Phi(x, z(s_j)) \Big] |z'(s_j)|\, \tau_j \tag{31} \]

A rule of thumb is that this quadrature rule is accurate for all points at least 5δ from the boundary [47, Remark 6]. As before, for the interior case we set η = 0.

Evaluation of the fundamental solution

Filling the Nyström matrix A of the previous section, and evaluating the solution u via (31), both demand a large number of evaluations of Φ(x, y), from source points y that are either periodic trapezoid nodes z(s_j) or auxiliary nodes. When filling A the target points x are also the nodes z(s_i), thus for a small number of cases (O(N) of them) the distance |x − y| will be very small (e.g. 10^{−3} δ).
As promised, we base our evaluation of the fundamental solution on the n = 2 dimensional case of (8),

\[ \Phi(x, y) = \frac{1}{4\pi} \int_0^\infty \frac{1}{t} \exp\Big[ i \Big( \frac{|x - y|^2}{4t} + \Big( \frac{x_2 + y_2}{2} + E \Big) t - \frac{1}{12} t^3 \Big) \Big] dt = \frac{1}{4\pi} \int_0^\infty \frac{1}{t}\, e^{i\psi(t)} \, dt \tag{32} \]

where, recalling (16), the phase function ψ(t) = ψ_{a,b}(t) is defined by

\[ \psi(t) := \frac{a}{t} + b t - \frac{1}{12} t^3 . \tag{33} \]

To remove the pole at the origin, and place small and large t on an equal footing, we change variable via t = e^s to get

\[ \Phi(x, y) = \frac{1}{4\pi} \int_{-\infty}^{\infty} \exp\big( i \psi(e^s) \big) \, ds . \tag{34} \]

This integrand is shown in Fig. 3(b), for E = 20 and the source y and target x shown in Fig. 3(a). It is clearly highly oscillatory, and it becomes more so with increasing E, thus accurate integration along the real s axis would be prohibitively expensive. However, ψ(e^s), and hence the integrand, is analytic in the entire complex s plane. We thus use numerical saddle point integration [14, Sec. 5.5] [48] (related to, but simpler than, "numerical steepest descent" [49]), along a contour passing through the stationary phase (saddle) points and asymptotically tending to the correct regions of the plane. We have the following by direct differentiation of (33).

Proposition 2. Given a source y, target x, and energy E, the stationary phase points, that is, the solutions to ψ'(t) = 0, are precisely the classical ray travel times t_± already given by (16)-(17).

This connection between waves and rays is key to our efficient numerical evaluation of the integral (34). (Note that for a general potential function V(x) this is only approximately true, in the semi-classical or high-frequency limit; its exactness here reflects exact formulae for the propagation of the Gaussian when the potential is at most quadratic in the coordinates [13].) The exponent in (10) is i times the classical action S(x, y; t) of the path from y to x taking time t; inserting this into the last step in the proof of Lemma 1, we see that Φ(x, y) = (1/4π) ∫_0^∞ t^{−1} exp[ i( S(x, y; t) + Et ) ] dt, thus the phase function (33) is ψ(t) = S(x, y; t) + Et.
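The pole-removing effect of the exponential change of variable t = e^s in (34) can be seen on a toy, absolutely convergent analogue of (32) (a made-up integrand standing in for the oscillatory one): I = ∫₀^∞ t^{−1} e^{−(1/t + t)} dt becomes ∫_{−∞}^∞ e^{−2 cosh s} ds = 2 K₀(2), since dt = e^s ds absorbs the 1/t factor and the endpoint t = 0 is pushed to s = −∞. The plain trapezoid rule in s then converges extremely fast; the grid parameters below are illustrative choices.

```python
import math

def I_via_s(h=0.2, L=6.0):
    # trapezoid rule in s for int exp(-2 cosh s) ds; the integrand decays
    # doubly exponentially, so truncation at |s| = L is far below rounding
    n = int(L / h)
    return h * sum(math.exp(-2.0 * math.cosh(j * h)) for j in range(-n, n + 1))

I = I_via_s()
TWO_K0_2 = 2.0 * 0.11389387274953344   # reference value 2 K_0(2), modified Bessel K
```

Sixty-one nodes already match the Bessel reference to near machine precision; the oscillatory integral (34) needs the complex contour deformation described next to achieve comparable economy.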
A less well-known result from classical mechanics is that ∂S(x, y; t)/∂t |_{x,y} = −E_{x,y}(t), where E_{x,y}(t) is the energy required to complete the path in time t [41, Ex. 10.4(c)]. Thus ψ'(t) = 0 precisely when E_{x,y}(t) = E, that is, at the travel times for a ray at the particular energy E to pass from y to x. An example contour passing through the (real-valued) saddle points and ending in the correct regions of the plane is shown in Fig. 3(c). On such a contour the integral may be approximated to exponential accuracy using the trapezoid rule [15] (with respect to the variable parametrizing the contour), and the sum may be truncated once the values are sufficiently small.

Choice of saddle point contour

Since the integrand in (34) is entire, mathematically the choice of contour is irrelevant as long as its ends connect −∞ to +∞. However, for practical numerical evaluation the contour choice is crucial. Observe in Fig. 3(c) that the integrand is exponentially small in some regions, exponentially large in others, and that the borders between them are quite well defined. One may deform the limits of the contour to lie below the real axis, as long as one stays within the exponentially small regions adjoining the real axis (lower-left and lower-right in Fig. 3(c)). The contour must connect these limits, but to prevent catastrophic cancellation it must avoid the large regions, passing between small regions only via saddle points, and passing through these saddle points at an angle not too far from the steepest descent direction. In addition, an analytic contour shape is desirable, since the trapezoid rule is then exponentially convergent. See Fig. 3(c) and Fig. 4 for examples. The task remains to choose, for any parameters a and b, a good contour, and rules for choosing the trapezoid node spacing and truncation intervals. Our rules will depend on the existence and types of classical rays.
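Proposition 2 itself is easy to confirm numerically: plugging the travel times (17) into the derivative of the phase (33) must give zero. A minimal sketch (the configuration is an arbitrary allowed one):

```python
import math

def psi_prime(t, a, b):
    # derivative of the phase (33): psi(t) = a/t + b t - t^3/12
    return -a / t**2 + b - t**2 / 4.0

# an allowed configuration (b^2 >= a), with a, b from (16)
E, x, y = 3.0, (1.2, 0.4), (0.0, 0.0)
a = ((x[0]-y[0])**2 + (x[1]-y[1])**2) / 4.0
b = (x[1] + y[1]) / 2.0 + E

# travel times (17)
t_minus = math.sqrt(2.0*(b - math.sqrt(b*b - a)))
t_plus  = math.sqrt(2.0*(b + math.sqrt(b*b - a)))
stationarity = [abs(psi_prime(t, a, b)) for t in (t_minus, t_plus)]
```

Algebraically, ψ'(t) = 0 is equivalent to t⁴/4 − bt² + a = 0, which is exactly the travel-time quadratic (15), so the residuals vanish to rounding error.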
Recall the definition that the triple x, y, E is classically allowed (region A) if there are one or two rays connecting y to x at energy E in (real-valued) time, and otherwise forbidden (region F).

Classically allowed (region A): b^2 ≥ a

In this case, as in Fig. 3, there are two real saddle points, with steepest descent angles π/4 for s_− = log t_− (the root with smaller real part), and −π/4 for s_+ = log t_+. We parametrize contours by their real part α ∈ R, thus s = γ(α) := α + i g(α), hence γ'(α) = 1 + i g'(α), where the function g : R → R depends on the usual parameters a and b (16). The following analytic function g makes the contour pass through the two saddle points at angles not too far from ±π/4,

g(α) = 1 π + 1 2 tan −1 2(α − Re s − + c − ) − π 4 − 1 2 · 1 π + 1 6 tan −1 −4(α − Re s + − c + ) − π 12 − 1 2   (35)

with the constants c − := 1 2 tan( π 2 −2π 4+2π ) and c + := 1 4 tan( π 2 −6π 12+2π ). We do not claim it is optimal, but it serves our purpose well. Examples from this family are shown in Fig. 3(c) and Fig. 4(a). The leftward limit lim_{α→−∞} g(α) = −π/2 is designed to lie in the middle of the exponentially-small region to the left. This region has height π due to the 2π vertically periodic nature of the function e^{−s}, which dominates as Re s becomes highly negative. To the right the period becomes three times smaller, since e^{3s} is dominant, thus we chose lim_{α→∞} g(α) = −π/6. Note that it is essential to enter and exit through the correct periodic images on the left and right sides. When the saddle points coalesce (t_− = t_+ at the classical turning point, or boundary of A and F), the angles through the saddle points become flatter, as is needed to traverse smoothly through the small region; see the zoom in Fig. 4(b). However, when saddles are close to coalescing at high E, it is advantageous for accuracy to shift the contour down enough to avoid being close to the rapid oscillations on the real axis, whilst keeping the integrand not too large.
Hence, when |s_+ − s_−| < 0.1 we add the constant

\[ c_{\rm shift} := -i \min\Big( \frac{0.7}{\sqrt{E}} ,\; 0.1 \Big) \tag{36} \]

to γ. The resulting shift is visible in the figure.

Classically forbidden (region F): b^2 < a

Things get simpler when no real rays are possible: the saddle points s_± split away from the real s axis, and only the one with negative imaginary part is relevant. Let us call this point s_0. There are a couple of regimes to consider; see Fig. 4(c)-(f). We use the following contour when Im s_0 > −π/3,

\[ g(\alpha) = \operatorname{Im} s_0 + \Big[ \frac{\tan^{-1}(\alpha - \operatorname{Re} s_0) - \pi}{3} - \operatorname{Im} s_0 \Big] \Big( 1 - e^{-(\alpha - \operatorname{Re} s_0)^2} \Big) . \]

This has the same limits as (35), is designed to pass through s_0 horizontally (i.e. g'(Re s_0) = 0), and is shown in Fig. 4(c). The need for horizontal passage is to stay below the real axis when saddles are close to coalescing at high E. As above, we also apply the shift (36) when the saddle points are close. This is shown in the zoom in Fig. 4(d). When Im s_0 < −π/3, we are deep into the forbidden region (thus we call this region D ⊂ F). It lies below the hyperbola b = −√a/2 in the x plane, as shown in Fig. 3(a). In region D we use the simple contour

\[ g(\alpha) = \frac{\tan^{-1}(\alpha - \operatorname{Re} s_0) - \pi}{3} . \]

This lies above all saddle points, has the same limits as (35), and is shown in Fig. 4(e). When b < −√a, as occurs in region D with negative E and close source-target distances, the saddle points finally merge again onto the line Im s = −π/2. In this case we take s_0 to be the point with more negative real part, and use the above contour. This is shown in Fig. 4(f).

Truncation of the integration domain

With contour shapes now defined for all cases of a and b, we need rules to truncate the integral to a finite domain I ⊂ R, that is,

\[ \Phi(x, y) = \frac{1}{4\pi} \int_{-\infty}^{\infty} e^{i\psi(e^s)} ds = \frac{1}{4\pi} \int_{-\infty}^{\infty} e^{i\psi(e^{\gamma(\alpha)})} \gamma'(\alpha)\, d\alpha \approx \frac{1}{4\pi} \int_{I} e^{i\psi(e^{\gamma(\alpha)})} \gamma'(\alpha)\, d\alpha . \tag{37} \]

For efficiency, we wish I to enclose only the parts of R where the integrand is significant, which we define as exceeding a convergence parameter ε, which we set to 10^{−14}.
We exploit the fact that, along the contour, the integrand decays exponentially away from the saddle points. There are three types of behavior: (i) I comprises two intervals I_1 and I_2 that may be integrated independently, (ii) there are two saddle points but the integrand does not die to ε between them, so it must be handled as a single integration interval, and (iii) there is one saddle point, hence only a single "bump" and a single interval. For case (i), at high E the size of the intervals can be much smaller than their separation, so integrating them separately is crucial. All three cases are shown in Fig. 5(a). In region A, (i) and (ii) may occur; in regions F and D only (iii) occurs. The recipe for regions F and D, with one saddle s_0, is to initialize distances d_1 = d_2 = |Re s_0|/2 which define an interval [Re s_0 − d_1, Re s_0 + d_2]. If the integrand magnitude |e^{iψ(e^{γ(Re s_0 − d_1)})}| exceeds ε then we set d_1 to βd_1, where β is a "jump factor" constant, and repeat until the left end of the interval has integrand no larger than ε. The same is done for d_2 on the right end. We find that β = 1.3 is a good compromise between making jumps that don't produce an overly large interval, yet don't require too many extra integrand evaluations. The recipe for region A, with saddle points s_±, is to use a crude minimization of the integrand magnitude over [Re s_−, Re s_+], and if the minimum value exceeds ε, to use a single interval [Re s_− − d_1, Re s_+ + d_2], which is initialized and expanded as before. Otherwise two intervals I_1 and I_2 are used, centered at s_− and s_+ respectively, and each is expanded separately, as before. An example result is shown at the top of Fig. 5.
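The geometric interval-expansion recipe can be sketched generically: grow the two half-widths by the jump factor β until the integrand has fallen below ε at both endpoints. The sketch below uses a Gaussian bump standing in for the integrand magnitude along the contour, and a floor of 1 on the initial half-width (an added safeguard for centers near zero, not from the paper):

```python
import math

def expand_interval(f, center, eps=1e-14, beta=1.3):
    # grow [center - d1, center + d2] by the jump factor beta until
    # the integrand magnitude |f| is below eps at both endpoints
    d1 = d2 = max(abs(center) / 2.0, 1.0)   # floor added for tiny centers
    while abs(f(center - d1)) > eps:
        d1 *= beta
    while abs(f(center + d2)) > eps:
        d2 *= beta
    return center - d1, center + d2

# model integrand: a Gaussian bump around a "saddle" at s = 0.4
bump = lambda s: math.exp(-(s - 0.4)**2)
lo, hi = expand_interval(bump, 0.4)
```

With β = 1.3, the endpoint overshoots the minimal half-width (about 5.7 here) by at most a factor β, which is the compromise the text describes between interval size and the number of extra integrand evaluations.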
Choice of quadrature node spacing

For each interval I′ (= I, I₁ or I₂), we need rules to choose h, the quadrature node spacing in the trapezoid rule approximation to (34),

(1/4π) ∫_{I′} exp[iψ(e^{γ(α)})] γ′(α) dα ≈ (h/4π) Σ_{hj∈I′} exp[iψ(e^{γ(hj)})] γ′(hj).   (38)

A general rule is to scale h in proportion to the minimum width of any saddle points contained in I′. Let s0 be such a saddle point; then we define its width as

σ(s0) := |d²ψ(e^s)/ds²|_{s=s0}|^{−1/2}.

Setting a convergence parameter h0, we use a node spacing of

h = min(h_max, |I′|/n_min, σ h0),

where σ = σ(s0) in the case of one saddle, or σ = min[σ(s−), σ(s+)] in the case of two. The new numerical parameters here are h_max, the maximum allowed node spacing, and n_min, the minimum allowed number of nodes over the interval length |I′|. Both are needed to prevent h from becoming too large, since σ can be arbitrarily large, e.g. when saddles coalesce or when |x − y| is very small.

Derivatives of Φ

The formula for the entries of the matrix approximation to the double-layer operator D in Secs. 3 and 3.1 requires first derivatives of Φ(x, y) with respect to moving the source y. These are simple to evaluate from (33)-(34) by passing the derivative through the integral to give

∂Φ(x, y)/∂y₁ = (1/4π) ∫_{−∞}^{∞} −i (x₁ − y₁) (e^{−s}/2) exp[iψ(e^s)] ds,   (39)

∂Φ(x, y)/∂y₂ = (1/4π) ∫_{−∞}^{∞} −i [(x₂ − y₂)e^{−s} − e^s]/2 exp[iψ(e^s)] ds.   (40)

These may be evaluated with minimal extra effort along with Φ by including the extra factors in (38). Although these factors can grow exponentially in size, they do not affect the super-exponential decay away from saddle points of the integrand. We take care to include these factors when testing for decay of the integrand to ε in Sec. 4.2.

Convergence and speed tests

We now test the convergence of the above scheme for Φ and its derivatives. For true convergence, h0 must shrink while h_max also shrinks and n_min grows; in Fig.
5(b) we perform this test, over the large range of E and x parameters used in Fig. 5(b), 150000 evaluations in total. The upper graph shows a worst-case absolute error of around 3 × 10⁻¹¹ for h0 = 0.35, h_max = 0.05 and n_min = 43, which we thus find acceptable and fix as our standard choices. In fact, the lower graphs show that typical accuracies are much better, being 13 to 15 digits.

Remark 5. It is known that 24 nodes suffice to integrate the Gaussian via the trapezoid rule to double precision accuracy, e.g. [50, Remark 2]. Nearly twice this is needed to guarantee accuracy in our setting, we believe due to distortion of the phase around the saddle away from exactly quadratic, and the overshoot in interval size due to β exceeding 1. Note that we test absolute, not relative, errors in Φ: we believe that this is what is relevant for the solution of BIEs, and support this claim in the next section. Since Φ is exponentially small in the forbidden region, demanding small relative error there would require more effort, and is unnecessary.

In Fig. 6(a) we test the mean number of nodes n used for the contour integral over the test set, splitting the data for near distances |x − y| < 0.1, and for |x − y| ≥ 0.1. For the latter, only around 100 nodes are needed, with a slight decrease at large E. For near distances (hence a is small), the saddle point s− moves leftwards, and the width of the significant region around it grows, as shown in Fig. 4(a) and (f). We observe that here n grows like log(1/|x − y|). This explains n in the 200-600 range for near distances. The peak at E = 10 is due to I being a single large interval containing two saddles, one of which has a small width which demands a small h.

We implemented the code in C with OpenMP and a MEX interface (constructed via Mwrap) to MATLAB (version 2012b), and tested its speed on a desktop workstation with two quad-core Intel Xeon E5-2643 CPUs at 3.3 GHz.² Fig.
6(b) shows that at most E values we achieve a mean rate exceeding 10 5 evaluations per second (where we count Φ and its two derivatives as a single evaluation). For near distances this drops to around 60% of that. Dips at various E ranges are explained by the increased n. The CPU time is believed to be dominated by calls to the complex exponential, and arctangent, functions; memory usage is very small. Performance of the boundary value solver Convergence for interior Dirichlet BVP To solve the interior BVP corresponding to (1)-(3), firstly the parametrization of the curve ∂Ω, and a number N of boundary nodes, is chosen. Then the data vector g i = −2 f (s i ), i = 1, . . . , N is filled, and the Nyström matrix A is filled using (30) for entries away from the diagonal and the Alpert correction of Sec. 3.1 close to the diagonal, with kernel K(t, s) = −2 ∂Φ(z(t), z(s))/∂n z(s) |z (s)|, appropriate for the BIE (23). The dense linear system (29) is solved by direct Gaussian elimination to get the density {τ j } N j=1 , and the solution evaluated by direct summation (31). We test convergence using Dirichlet data f = u| ∂Ω coming from the analytic separation of variables solution where Ai is the Airy function of the first kind, for E = 10, with ∂Ω a smooth "trefoil" domain given by the polar function r(θ) = 5 + 1.5 cos(3θ), about 5 wavelengths across. Fig. 7(a) shows the domain, boundary nodes, and resulting BIE solution constructed via (31). In Fig. 7(b) we observe exponential convergence of the absolute solution error at interior points; we believe this rate is limited by the distance of the nearest points to ∂Ω rather than the convergence of the density. At N = 260 we reach 11-digit accuracy (the solution u has maximum size around 0.5). Filling A took 5 seconds, and the evaluation of u at 32841 interior points used to plot Fig. 7(a) took 70 seconds. 
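The separation-of-variables data used here, u(x) = cos(√E x₁) Ai(−x₂) from (41), satisfies (1) because ∂²u/∂x₁² = −E u while the Airy equation Ai″(z) = z Ai(z) gives ∂²u/∂x₂² = −x₂ u. A sketch verifying this numerically, generating Ai by RK4 integration of the Airy ODE from its standard values at 0 (no special-function library assumed; step sizes and the sample point are illustrative):

```python
import math

# Airy initial data: Ai(0) = 3^(-2/3)/Gamma(2/3), Ai'(0) = -3^(-1/3)/Gamma(1/3)
y = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)
v = -(3.0 ** (-1.0 / 3.0)) / math.gamma(1.0 / 3.0)

def f(z, y, v):
    """Airy ODE y'' = z*y written as a first-order system."""
    return v, z * y

dz = 1e-3
vals = [y]              # vals[j] approximates Ai(-j*dz)
z = 0.0
for _ in range(3000):   # RK4, stepping downward to z = -3
    k1 = f(z, y, v)
    k2 = f(z - dz/2, y - dz/2 * k1[0], v - dz/2 * k1[1])
    k3 = f(z - dz/2, y - dz/2 * k2[0], v - dz/2 * k2[1])
    k4 = f(z - dz, y - dz * k3[0], v - dz * k3[1])
    y -= dz/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v -= dz/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    z -= dz
    vals.append(y)

# PDE residual of u = cos(sqrt(E) x1) Ai(-x2) at a sample point (x1, x2)
E, x1, i = 10.0, 0.3, 1500                  # sample height x2 = i*dz = 1.5
x2 = i * dz
c = math.cos(math.sqrt(E) * x1)
ai_pp = (vals[i-1] - 2*vals[i] + vals[i+1]) / dz**2  # d^2/dx2^2 of Ai(-x2)
u = c * vals[i]
residual = -E * u + c * ai_pp + (E + x2) * u          # Delta u + (E + x2) u, ~0
```

As sanity checks, the integrated values reproduce Ai(0) and nearly vanish near the first Airy zero z ≈ −2.3381.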
u(x) = cos( √ Ex 1 )Ai(−x 2 ) ,(41) As an independent check of the discretization of the operators S and D on the boundary, by Green's representation formula [22, (2.5) Fig. 7(b); it is consistent with the high order of the Alpert scheme, and reaches 11-digit accuracy (each term, e.g. (D + 1 2 I)u − , has norm 1.5). Fig. 8 and described in Sec. 5.2. Evaluation time is for the solution u via (31), and is the mean value over a coarse grid covering the region shown. Error is the maximum absolute error over 100 points lying uniformly on a circle of radius 12 (i.e. a closest distance of 1 from ∂Ω), estimated by comparing to the converged values for N = 600. Convergence and timing for scattering problems For a scattering problem with given incident wave u inc , as explained in the introduction, the exterior BVP (1)-(2) is solved with f = −u inc . We solve the combined-field BIE (22) similarly to the interior case summarized in Sec. 5.1, except with data g i = 2 f (s i ), i = 1, . . . , N and kernel K(t, s) = 2 ∂Φ(z(t), z(s))/∂n z(s) − iηΦ(z(t), z(s)) |z (s)|. We test with two smooth scatterers which are chosen to be large enough (diameter of order E) that the wavelength has sizeable vertical variation across the object. We first test a small example, at E = 20, with shape given by the polar function r(θ) = 9 + 2 sin(5θ), which is about 15 wavelengths across at the typical wavenumber √ E. The incident wave is due to a single nearby source at x s . The convergence in Table 1 is consistent with exponential. The solution time is entirely dominated by evaluations of Φ, and is consistent with 10 5 evaluations per second. The fill time has not yet reached its asymptotic O(N 2 ), since the O(30N) Alpert correction entries are expensive due to their small source-target distances. The dense linear system solve is O(N 3 ), but insignificant in comparison. 
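The Nyström procedure summarized above (periodic trapezoid nodes, dense matrix fill, direct dense solve) can be illustrated on a toy second-kind equation. The smooth degenerate kernel and all names below are invented for the demonstration and are not the paper's scattering kernels:

```python
import math

N = 40
ts = [2 * math.pi * i / N for i in range(N)]

# Toy second-kind equation: tau(t) + (1/2pi) int_0^{2pi} cos(t-s) tau(s) ds = g(t),
# with g = 1.5 cos t, whose exact solution is tau = cos t.
g = [1.5 * math.cos(t) for t in ts]

# Nystrom matrix: identity plus trapezoid-weighted kernel samples,
# A_ij = delta_ij + (2pi/N) * cos(t_i - t_j)/(2pi)
A = [[(1.0 if i == j else 0.0) + math.cos(ts[i] - ts[j]) / N for j in range(N)]
     for i in range(N)]

def solve(A, b):
    """Dense direct solve by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

tau = solve(A, g)
err = max(abs(tau[i] - math.cos(ts[i])) for i in range(N))
```

Because the periodic trapezoid rule is exact for trigonometric polynomials, the discrete solution here equals the continuous one up to rounding; for the paper's smooth but non-polynomial kernels the analogous error instead decays super-algebraically in N.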
A strict O(N²) overall scaling is recovered by using an iterative solver; we applied GMRES [51] and found that 43 iterations were required for a residual of 10⁻¹². The total wave solution, shown in Fig. 8, took 4 minutes to evaluate at 84089 grid points, i.e. around 350 target points per second. Notice that the waves bend, and do not propagate below x₂ = −E = −20.

Finally, we test a similar but more challenging case, at E = 65, with shape r(θ) = 15 + 3 cos(10θ), about 50 wavelengths across. The convergence and timing are in Table 2 and the total wave solution is shown in Fig. 9.³ Again, 11 digits of accuracy are achieved at N = 2000 (relative to the typical size of u, which is of order 0.1). For GMRES, 59 iterations were needed to reach a residual of 10⁻¹², showing scarcely any growth from the lower-frequency example. The plot in Fig. 9 took around 50 minutes for 226000 target points, i.e. about 80 target points per second. The parabolic turning point for the source is clearly visible, as well as waves of lower amplitude that have been scattered and hence escape this parabola.

Conclusion and discussion

We have presented an efficient scheme for high-frequency scattering from smooth objects embedded in a stratified medium in which the inverse square of the wave speed varies linearly in the vertical coordinate (the "gravity Helmholtz equation"). Our high efficiency and accuracy come from combining numerical saddle point integration for an integral representation of the fundamental solution Φ with a boundary integral formulation and high-order quadrature rules for the singular kernels, allowing a problem 50 wavelengths in diameter to be solved to 11-digit accuracy in less than a minute on a desktop or laptop. Our detailed study of the saddle points (and their connection to classical ray dynamics) allows around 10⁵ evaluations of Φ per second, independent of the wavenumber.
Solution cost is dominated by evaluations of Φ, which is trivially parallelizable, and, once the matrix is filled, multiple incident waves at the same E can be solved with negligible extra cost. The scheme is strictly O(N 2 ) when an iterative solver (such as GMRES) is used; here convergence is rapid due to the second-kind formulation. In addition we placed the boundary value problem in the unbounded stratified medium on a more rigorous footing by deriving radiation conditions (Definition 1) such that the solution is unique. It remains to prove the conjecture that these are indeed satisfied by our causal Φ; this would give an existence proof for the BVP (Remark 3). In terms of future research, the sound-hard and transmission problems [22] are straightforward variants, as is the restriction to a half-space (reflected rays would need to be considered). The BIE operators we have constructed are also ideal for applying our medium's radiation boundary conditions to finite-element solvers. When the obstacle is no more than around 100 wavelengths across, much acceleration is possible: a kernel-independent fast multipole method (FMM) [52] could be used to apply A in each GMRES iteration, or a fast direct solver [26]; both would evaluate only O(N) as opposed to O(N 2 ) matrix elements. The former would also be much faster than direct summation for evaluation of u. We hope that our numerical saddle point integration techniques might prove useful for other (special) functions. The generalization to 3D will be easy, since Φ may be expressed directly using Airy functions [8]. A generalization to quadratic variation of the inverse square wave speed is also possible since the time-dependent Schrödinger Green's function is still known analytically [13]; this could be used for modeling guiding channels in underwater acoustics. 
Documented C/OpenMP and MATLAB/MEX codes, with which all tests were performed, are freely available at http://math.dartmouth.edu/∼ahb/software/lhelmfs.tgz This is a shifted Airy's equation, thus, in terms of Airy functions Ai and Bi, u(ξ, x 2 ) = α(ξ) Ai(−x 2 − E + ξ 2 ) + β(ξ) Bi(−x 2 − E + ξ 2 ) By unitarity of the Fourier transform, (A.1) implies lim x 2 →+∞ k(x 2 ) ∞ −∞ |û(ξ, x 2 )| 2 dξ = 0 (A.2) By the asymptotics Ai(−z) ∼ π −1/2 z −1/4 cos(2z 3/2 /3−π/4) and Bi(−z) ∼ −π −1/2 z −1/4 sin(2z 3/2 /3−π/4), as z → +∞ [53, (9.7.9), (9.7.11)], and k(x 2 ) ∼ √ x 2 , it follows that if α(ξ) or β(ξ) were nonzero on any open subset of R, then the limit (A.2) would be positive. Thus α and β are zero except possibly at a set of measure zero. Taking the inverse Fourier transform, u(x 1 , x 2 ) = 0 for all x 2 sufficiently large. Since (1) has analytic coefficients, its solutions are analytic in both variables. By unique continuation, u = 0 in all of R 2 \Ω. Next we need flux conservation, which states that, for any bounded region D ⊂ R 2 with boundary ∂D in which u satisfies (1) [22, (2.10)]. By the assumption of the theorem, the right-hand side is non-positive, so (A.1) holds, and Lemma 2 completes the proof. Finally, to prove the uniqueness of the radiative solution to the Dirichlet BVP (1)-(2), we need only that if u = 0 on ∂Ω, and u is a radiative solution, then u = 0 in R 2 \Ω. Given the remark in the proof [22,Thm 3.7] about the convergence of the normal derivative, the incoming flux is zero and the result follows from Theorem 2. We suspect that the above generalizes easily to more general profiles k(x 2 ). Figure 1 : 1Geometry for the scattering problem embedded in a stratified medium. Wave speed decreases (refractive index increases) in the vertical x 2 direction. Sec. 2.5.2.2]; in the electromagnetic case (3) corresponds to linear variation in permittivity [9, Sec. 2.5.1]. 
Figure 2: Real part of the fundamental solution Φ(·, y) plotted in R² for the case E = 5 and three choices of source location y: (a) y = (0, 10); (b) y = (0, 0); (c) y = (0, −10). In (b) we also show parabolic classical ray trajectories emanating from the source y, which themselves all lie within region A (a parabola with focus y), discussed in Sec. 2.1. Notice the color scale in (c) indicating the very small amplitude of the propagating beam.

whose approximation error for a 2π-periodic φ ∈ C∞(R) is super-algebraic, i.e. O(N⁻ᵐ) for each m > 0 [44, Cor. 9.27]. The first step in the Nyström method is to enforce (25) only at the nodes {s_i}, giving

Figure 3: Contour integration for the fundamental solution at E = 20, y = 0. (a) Re Φ(x, y) in the physical domain of x, showing the two ray paths to reach x = (20, 10), and the classically allowed (A), forbidden (F), and deep forbidden (D) regions. (b) Integrand of (34) on the real s axis, with the two stationary phase points s± = log t±. (c) Real part of the same integrand in the complex s plane, saddle points (white dots), and the 79 quadrature nodes used lying on the contour γ. In (a) and (c) the color scale is blue (negative) through green (zero) to red (positive); in (c) the color range covers [−1, 1].

Figure 4: Real part of integrands plotted in the complex s plane, for source y = 0, with saddle points (white dots) and numerical integration contours (grey) and nodes (black). (a) Region A (allowed) but source close to target, E = 20, x = (0.1, 0.2). (c) Region F (forbidden), E = 10, x = (1, −11). (b) and (d) Zoom in on coalescing saddle points: at E = 10³, with x = (2E − 1, 0) in (b) (just allowed), and x = (2E + 0.2, 0) in (d) (just forbidden). (e) Region D (deep forbidden), E = 1, x = (1, −5). (f) Region D but source close to target, E = −10, x = (0.1, 0.2).

Remark 4. There is a beautiful and deep physical reason lying behind Prop. 2, i.e. ψ′(t±) = 0.
The phase function (term in square brackets) in the time-dependent Schrödinger propagator (10) is the classical action S(x, y; t), defined as the time integral over [0, t] of the Lagrangian along the unique classical path from y to x taking precisely time t [8, Sec. 2] [41, Ch. 10].

Figure 5: (a) Magnitude of summand in (38) along the parametrized contour, showing three types of behavior. For case (i) the intervals I₁ and I₂ containing the saddle points (large dots) are shown at the top. (b) Convergence of absolute error in Φ, with respect to the quadrature spacing h0, also scaling h_max = 0.13 h0 and n_min = 15/h0. The source is y = 0, and targets are a set of 10⁴ points randomly distributed uniformly in angle and uniformly in the logarithm of distance from the origin, |x| ∈ [10⁻⁴, 10⁴]. For each target the set of E tested is [−100, −30, −10, −3, −1, 1, 3, 10, 30, 100, 300, 10³, 3 × 10³, 10⁴]. The maximum, mean, and median error is taken over the 1.5 × 10⁵ evaluations.

Figure 6: Efficiency of the numerical steepest descent algorithm as a function of frequency parameter E. (a) Mean number of quadrature nodes used, and (b) mean number of evaluations per second. In both graphs, + signs indicate E > 0 while signs indicate E < 0. Solid lines are for evaluation of Φ alone while dashed lines are for evaluation of Φ and its first partials. The source is y = 0, and averaging is done over 10⁴ targets randomly distributed uniformly in angle and uniformly in the logarithm of distance from the origin. For the darker (blue) lines |x| ∈ [10⁻¹, 10⁴], while for the lighter (green) lines only "near" distances |x| ∈ [10⁻⁴, 10⁻¹] are used.

Figure 7: (a) Plot of interior Dirichlet solution u as evaluated by (31) given the density from solving the BIE, with N = 260 (boundary nodes s_i shown as dots).
(b) Convergence of interior BVP solution: maximum absolute error ( signs) over 100 interior points chosen randomly to lie inside a copy of ∂Ω scaled by 0.8, so points are not too close to ∂Ω; boundary error ‖S(∂u/∂n)⁻ − (D + ½I)u⁻‖_{l²} (+ signs) for the Green's representation formula with the analytically known data ∂u/∂n and u on ∂Ω. See Sec. 5.1.

] [43, Thm. 6.10], u = S(∂u/∂n)⁻ − Du⁻ in Ω, and thus taking the evaluation point to ∂Ω from inside and applying (19)-(20), the boundary function S(∂u/∂n)⁻ − (D + ½I)u⁻ should vanish. We show convergence of its norm in

Figure 8: Real part of total wave u + u_inc for a Dirichlet scattering problem at E = 20, with u_inc(x) = Φ(x, x_s) with x_s = (−20, −10

Figure 9: Real part of total wave u + u_inc for a Dirichlet scattering problem at E = 65, with u_inc(x) = Φ(x, x_s) with x_s = (−30, −15). Around 11 digit accuracy relative to the typical solution size is achieved at N = 2000; see Sec. 5.2.

Im ∫_{∂D} ū u_n ds = 0,   (A.3)

where u_n = ∂u/∂n is the outward-pointing normal derivative. The left-hand side may be interpreted as the wave energy flux exiting the domain D. This follows simply from taking the imaginary part of Green's first identity ∫_{∂D} ū u_n ds = ∫_D (ū Δu + |∇u|²) dx after inserting Δu = −k(x₂)² u from the PDE. We now prove a result analogous to [22, Thm. 2.12].

Theorem 2 (Non-negative incoming flux). Let Ω ⊂ R² be a bounded domain. Let u solve (1) with medium (3) in R²\Ω, be radiative according to Definition 1, and have non-negative incoming flux, i.e., Im ∫_{∂Ω} ū u_n ds ≥ 0. Then u = 0 in R²\Ω.

Proof. Expanding the square in (

Applying (A.3) to the punctured rectangle (−M, M) × (−x₂, x₂)\Ω, by the decay conditions (6)-(7) and Cauchy-Schwarz the flux contributions from the bottom, left, and right sides vanish, which are identical to the Laplace and Helmholtz cases [22, Thm. 3.1 and p. 66]. For the proof we need the variable-coefficient elliptic PDE case [43, Thm.
6.11 and (7.5)]. The indirect BIE is constructed by making the "combined field integral equation" (CFIE) ansatz

Table 1: Convergence and timing for the small scattering problem shown in Fig. 8 and described in Sec. 5.2.

    N    A fill time (s)   dense solve time (s)   evaluation time per target (s)   error
   200         3.8                0.004                      0.0012                4.1e-05
   300         5.3                0.007                      0.0018                6.3e-08
   400         7.9                0.019                      0.0024                2.9e-10
   500        10.0                0.023                      0.0030                2.6e-12
   600        12.6                0.028                      0.0036                   -

). Around 11 digit accuracy relative to the typical solution size is achieved at N = 500; see Sec. 5.2.

    N    A fill time (s)   dense solve time (s)   evaluation time per target (s)   error
  1200         26                 0.09                       0.008                 5.7e-08
  1600         37                 0.20                       0.011                 2.1e-10
  2000         46                 0.31                       0.013                 2.9e-12
  2400         58                 0.42                       0.016                    -

Table 2: Convergence and timing for the large scattering problem shown in Fig. 9 and described in Sec. 5.2. Error is the maximum absolute error over 100 points lying uniformly on a circle of radius 19 (i.e. a closest distance of 1 from ∂Ω), estimated by comparing to the converged values for N = 2400.

We chose a unity constant in front of x₂ without loss of generality since adjusting this constant is equivalent to rescaling the domain Ω.

We also tested our codes on a laptop with a quad-core Intel i7-3720QM at 2.6 GHz and found speeds 70%-100% of those reported. Curiously, fill times on the laptop were slightly faster than for the desktop, but evaluation times were only 70% as fast.
Proceedings of the 1974 Workshop on Wave Propagation and Underwater Acoustics. J. B. Keller, J. S. Papadakisthe 1974 Workshop on Wave Propagation and Underwater AcousticsSpringer-Verlag70J. B. Keller, J. S. Papadakis (Eds.), Proceedings of the 1974 Workshop on Wave Propagation and Underwater Acoustics, Lecture Notes in Physics, 70, Springer-Verlag, 1977. Underwater acoustic modeling and simulation. P C Etter, CRC Press4th EditionP. C. Etter, Underwater acoustic modeling and simulation, 4th Edition, CRC Press, 2013. A new boundary-element method for predicting outdoor sound propagation and application to the case of a sound barrier in the presence of downwards refraction. E Premat, Y Gabillet, J. Acoust. Soc. Am. 1086E. Premat, Y. Gabillet, A new boundary-element method for predicting outdoor sound propagation and application to the case of a sound barrier in the presence of downwards refraction, J. Acoust. Soc. Am. 108 (6) (2000) 2775-2783. Optical and equivalent paths in a stratified medium, treated from a wave standpoint. D R Hartree, Proc. Roy. Soc. Lond. A. 131817D. R. Hartree, Optical and equivalent paths in a stratified medium, treated from a wave standpoint, Proc. Roy. Soc. Lond. A 131 (817) (1931) 428-450. Optics of subwavelength gradient nanofilms. A B Shvartsburg, V Kuzmiak, G Petite, Phys. Rep. 452A. B. Shvartsburg, V. Kuzmiak, G. Petite, Optics of subwavelength gradient nanofilms, Phys. Rep. 452 (2007) 33-88. C H Chapman, Fundamentals of Seismic Wave Propagation. Cambridge Universtiy PressC. H. Chapman, Fundamentals of Seismic Wave Propagation, Cambridge Universtiy Press, 2004. Fundamentals of functionally graded materials. S Suresh, A Mortensen, Maney Materials Science. S. Suresh, A. Mortensen, Fundamentals of functionally graded materials, Maney Materials Science, 1998. Three-dimensional tunneling in quantum ballistic motion. C Bracher, W Becker, S A Gurvitz, M Kleber, M S Marinov, Am. J. Phys. 66C. Bracher, W. Becker, S. A. Gurvitz, M. Kleber, M. S. 
Marinov, Three-dimensional tunneling in quantum ballistic motion, Am. J. Phys. 66 (1998) 38-48. W C Chew, Waves and Fields in Inhomogeneous Media. Wiley-IEEE PressW. C. Chew, Waves and Fields in Inhomogeneous Media, Wiley-IEEE Press, 1999. Tunneling from a 3-dimensional quantum well in an electric field: an analytical solution. B Gottlieb, M Kleber, J Krause, Z. Phys. A. 339B. Gottlieb, M. Kleber, J. Krause, Tunneling from a 3-dimensional quantum well in an electric field: an analytical solution, Z. Phys. A 339 (1991) 201-206. On the numerical solution of a hypersingular integral equation in scattering theory. R Kress, J. Comput. Appl. Math. 61R. Kress, On the numerical solution of a hypersingular integral equation in scattering theory, J. Comput. Appl. Math. 61 (1995) 345-360. Influence of curvature on the losses of doubly clad fibers. D Marcuse, Appl. Optics. 2123D. Marcuse, Influence of curvature on the losses of doubly clad fibers, Appl. Optics 21 (23) (1982) 4208-4213. E J Heller, Wavepacket dynamics and quantum chaology. Les Houches; North-Holland, AmsterdamChaos et physique quantiqueE. J. Heller, Wavepacket dynamics and quantum chaology, in: Chaos et physique quantique (Les Houches, 1989), North-Holland, Amsterdam, 1991, pp. 547-664. . A Gil, J Segura, N M Temme, Numerical Methods for Special Functions. A. Gil, J. Segura, N. M. Temme, Numerical Methods for Special Functions, SIAM, 2007. The exponentially convergent trapezoidal rule. L N Trefethen, J A C Weideman, SIAM Review. 563L. N. Trefethen, J. A. C. Weideman, The exponentially convergent trapezoidal rule, SIAM Review 56 (3) (2014) 385-458. R J Leveque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM. R. J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007. Is the pollution effect of the FEM avoidable for the Helmholtz equation considering high wave numbers?. I M Babuska, S A Sauter, SIAM J. Numer. Anal. 346I. M. Babuska, S. A. 
Sauter, Is the pollution effect of the FEM avoidable for the Helmholtz equation considering high wave numbers?, SIAM J. Numer. Anal. 34 (6) (1997) 2392-2423. Sweeping preconditioner for the Helmholtz equation: Moving perfectly matched layers, Multiscale Mod. B Engquist, L Ying, Sim. 92B. Engquist, L. Ying, Sweeping preconditioner for the Helmholtz equation: Moving perfectly matched layers, Multiscale Mod. Sim. 9 (2) (2011) 686-710. Absorbing boundary conditions for the numerical simulation of waves. B Engquist, A Majda, Math. Comp. 31B. Engquist, A. Majda, Absorbing boundary conditions for the numerical simulation of waves, Math. Comp. 31 (1977) 629-651. Splitting of operators, alternate directions, and paraxial approximations for the three-dimensional wave equation. F Collino, P Joly, SIAM J. Sci. Comput. 165F. Collino, P. Joly, Splitting of operators, alternate directions, and paraxial approximations for the three-dimensional wave equation, SIAM J. Sci. Comput. 16 (5) (1995) 1019-1048. Effective computational methods for wave propagation. N A Kampanis, V A Dougalis, J. A. EkaterinarisCRC PressBoca RatonN. A. Kampanis, V. A. Dougalis, J. A. Ekaterinaris (Eds.), Effective computational methods for wave propagation, CRC Press, Boca Raton, 2007. Inverse acoustic and electromagnetic scattering theory. D Colton, R Kress, Applied Mathematical Sciences. 93Springer-Verlag2nd EditionD. Colton, R. Kress, Inverse acoustic and electromagnetic scattering theory, 2nd Edition, Vol. 93 of Applied Mathematical Sciences, Springer- Verlag, Berlin, 1998. The numerical solution of integral equations of the second kind. K Atkinson, Cambridge University PressK. Atkinson, The numerical solution of integral equations of the second kind, Cambridge University Press, 1997. W Y Crutchfield, Z Gimbutas, G L , J Huang, V Rokhlin, N Yarvin, J Zhao, Remarks on the implementation of the wideband FMM for the Helmholtz equation in two dimensions. Providence, RI408W. Y. Crutchfield, Z. Gimbutas, G. 
L., J. Huang, V. Rokhlin, N. Yarvin, J. Zhao, Remarks on the implementation of the wideband FMM for the Helmholtz equation in two dimensions, Vol. 408 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2006, pp. 99-110. A sparse matrix arithmetic based on H-matrices; Part I: Introduction to H-matrices. W Hackbusch, Computing. 62W. Hackbusch, A sparse matrix arithmetic based on H-matrices; Part I: Introduction to H-matrices, Computing 62 (1999) 89-108. A Gillman, P Young, P Martinsson, A direct solver with O(N) complexity for integral equations on one-dimensional domains. 7A. Gillman, P. Young, P. Martinsson, A direct solver with O(N) complexity for integral equations on one-dimensional domains, Frontiers of Mathematics in China 7 (2) (2012) 217-247. Boundary integral equations in time-harmonic acoustic scattering. R Kress, Mathl. Comput. Modelling. 15R. Kress, Boundary integral equations in time-harmonic acoustic scattering, Mathl. Comput. Modelling 15 (1991) 229-243. High-order accurate Nyström discretization of integral equations with weakly singular kernels on smooth curves in the plane. S Hao, A H Barnett, P G Martinsson, P Young, Adv. Comput. Math. 401S. Hao, A. H. Barnett, P. G. Martinsson, P. Young, High-order accurate Nyström discretization of integral equations with weakly singular kernels on smooth curves in the plane, Adv. Comput. Math. 40 (1) (2014) 245-272. A fast direct solver for scattering problems involving elongated structures. P G Martinsson, V Rokhlin, J. Comput. Phys. 221P. G. Martinsson, V. Rokhlin, A fast direct solver for scattering problems involving elongated structures, J. Comput. Phys. 221 (2007) 288- 302. An analysis of the coupling of finite-element and Nyström methods in acoustic scattering. A Kirsch, P Monk, IMA J. Numer. Anal. 14A. Kirsch, P. Monk, An analysis of the coupling of finite-element and Nyström methods in acoustic scattering, IMA J. Numer. Anal. 14 (1994) 523-544. 
[]
[ "Towards Neurohaptics: Brain-Computer Interfaces for Decoding Intuitive Sense of Touch", "Towards Neurohaptics: Brain-Computer Interfaces for Decoding Intuitive Sense of Touch" ]
[ "Jeong-Hyun Cho [email protected] \nDept. Brain and Cognitive Engineering\nDept. Artificial Intelligence\nKorea University Seoul\nRepublic of Korea\n", "Myoung-Ki Kim \nDept. Brain and Cognitive Engineering\nKorea University Seoul\nRepublic of Korea LG Display\n", "Ji-Hoon Jeong [email protected] \nDept. Artificial Intelligence\nKorea University Seoul\nRepublic of Korea\n", "Seong-Whan Lee [email protected] \nKorea University Seoul\nRepublic of Korea\n" ]
[ "Dept. Brain and Cognitive Engineering\nDept. Artificial Intelligence\nKorea University Seoul\nRepublic of Korea", "Dept. Brain and Cognitive Engineering\nKorea University Seoul\nRepublic of Korea LG Display", "Dept. Artificial Intelligence\nKorea University Seoul\nRepublic of Korea", "Korea University Seoul\nRepublic of Korea" ]
[]
Noninvasive brain-computer interface (BCI) is widely used to recognize users' intentions. Especially, BCI related to tactile and sensation decoding could provide various effects on many industrial fields such as manufacturing advanced touch displays, controlling robotic devices, and more immersive virtual reality or augmented reality. In this paper, we introduce haptic and sensory perception-based BCI systems called neurohaptics. It is a preliminary study for a variety of scenarios using actual touch and touch imagery paradigms. We designed a novel experimental environment and a device that could acquire brain signals while touching designated materials to generate natural touch and texture sensations. Through the experiment, we collected the electroencephalogram (EEG) signals with respect to four different texture objects. Seven subjects were recruited for the experiment, and classification performances were evaluated using machine learning and deep learning approaches. Hence, we could confirm the feasibility of decoding actual touch and touch imagery from EEG signals to develop practical neurohaptics.
10.1109/bci51272.2021.9385331
[ "https://arxiv.org/pdf/2012.06753v2.pdf" ]
229,152,342
2012.06753
43c8ef399615b54c15b8354edf3a30fffd386c3e
Towards Neurohaptics: Brain-Computer Interfaces for Decoding Intuitive Sense of Touch Jeong-Hyun Cho [email protected] Dept. Brain and Cognitive Engineering Dept. Artificial Intelligence Korea University Seoul Republic of Korea Myoung-Ki Kim Dept. Brain and Cognitive Engineering Korea University Seoul Republic of Korea LG Display Ji-Hoon Jeong [email protected] Dept. Artificial Intelligence Korea University Seoul Republic of Korea Seong-Whan Lee [email protected] Korea University Seoul Republic of Korea

Towards Neurohaptics: Brain-Computer Interfaces for Decoding Intuitive Sense of Touch

Keywords: brain-computer interface, electroencephalogram, tactile information, haptic sensation analysis, touch imagery

Noninvasive brain-computer interface (BCI) is widely used to recognize users' intentions. Especially, BCI related to tactile and sensation decoding could provide various effects on many industrial fields such as manufacturing advanced touch displays, controlling robotic devices, and more immersive virtual reality or augmented reality. In this paper, we introduce haptic and sensory perception-based BCI systems called neurohaptics. It is a preliminary study for a variety of scenarios using actual touch and touch imagery paradigms. We designed a novel experimental environment and a device that could acquire brain signals while touching designated materials to generate natural touch and texture sensations. Through the experiment, we collected the electroencephalogram (EEG) signals with respect to four different texture objects. Seven subjects were recruited for the experiment, and classification performances were evaluated using machine learning and deep learning approaches. Hence, we could confirm the feasibility of decoding actual touch and touch imagery from EEG signals to develop practical neurohaptics.

I. INTRODUCTION

Brain-computer interfaces (BCIs) are systems that can decode brain signals to understand the intention and status of people. BCIs are also used to analyze various tactile sensations related to haptics. Numerous studies have attempted to understand electroencephalogram (EEG) signals because the signals contain significant information about the cognition of people [1]-[4]. For decades, EEG-based BCI research has focused on and investigated several paradigms for signal acquisition, such as movement-related cortical potential [2], [5], event-related potential (ERP) [6], [7], and motor imagery [8]-[10]. As applications of BCIs, robotic arm control [8], speller systems [11], [12], brain-controlled wheelchairs [13], and neurohaptics [14]-[16] have commonly been used for communication between humans and machines. However, neurohaptic studies are relatively few compared to those on the conventional paradigms. Recently, BCI-based haptic sensation has become one of the most interesting topics. The haptic sensation is an electrical sense of objects and surfaces felt through the nerves in the skin of the fingers. Before haptic sensation studies became active in the non-invasive BCI research field, invasive BCI-based haptic studies had begun first. Osborn et al. [14] developed a haptic feedback system that operates on a prosthetic basis. Based on the tactile information obtained through the electronic skin installed on the robot's prosthetic arm and the sensory recognition information of the subjects obtained through the BCI, the sensory information felt when holding a sharp object was made available to the brain by using transcutaneous nerve stimulation that could give electrical stimulation to the nerves. Ganzer et al.
[15] also implemented a bi-directional communication system between the brain and haptic feedback. Based on brain signals from an invasive BCI, they developed a system that could give patients adequate electrical feedback through functional electrical stimulation (FES) attached to the arm. When a patient with reduced finger sensation holds an object with a hand, the system amplifies the haptic perception through FES to help the patient make a better and stronger grasp. Tayeb et al. [16] proposed a non-invasive BCI system that could implement the same functionality as the studies described above. They developed a prosthetic arm that can restore the sense of touch and pain. Brain responses are analyzed and decoded to understand tactile sensory perception, including pain, and to identify activated brain regions. Neural activity can be used to design a prosthesis that mimics natural pain withdrawal behavior in humans. The overview in Fig. 1 illustrates what tactile information recent studies target for analysis, and for what purposes and by what processes EEG and haptic information are used. In this study, we measured EEG signals for four classes of actual touch and touch imagery ('Fabric', 'Glass', 'Paper', and 'Fur'). The classes used in the experimental paradigm consist of the most basic haptic perceptions for the analysis of touch sensation. To the best of our knowledge, this is the first attempt that demonstrates the feasibility of classifying high-level haptic perception across four classes to develop a high-performance classification model based on a deep learning approach. Second, we achieved robust classification performance in the 4-class touch perception compared with the chance-level accuracy (0.25). II. MATERIALS AND METHODS A. Participants Seven healthy subjects, who were naive BCI users, were recruited for the experiment (aged 25-38, all right-handed). Before the experiment, each subject was informed of the experimental protocols, paradigms, and purpose.
After they had understood, all of them provided written consent according to the Declaration of Helsinki. The experimental protocols and environments were reviewed and approved by the Institutional Review Board at Korea University [KUIRB-2020-0013-01]. B. Experimental Setup First, a monitor display for touch instruction was placed at a distance of 90 cm from the subjects. At the same time, we devised a control device that can give the subject a consistent and uniform tactile sensation with his hands fixed, as depicted in Fig. 2 (a). The subjects were asked to perform actual touch and touch imagery according to the four different tactile objects. Each trial was composed of four phases, namely rest, cue/preparation, actual touch, and imagery, as shown in Fig. 2 (b). In the rest phase, the subject took a comfortable rest while restraining eye and body movement. After the rest phase, the monitor displayed one of the texture images (a picture of one of the four texture objects) as a touch cue, and the subjects then prepared the touch imagery task according to the cue. The subjects stared at the fixation point for 3 s to avoid an afterimage effect. During the following 5 s, the subjects conducted a touch imagery task. We asked subjects to perform 200 trials in total (i.e., 50 trials × 4 classes). C. EEG Signal Acquisition We acquired the EEG signals via a BrainVision Recorder (BrainProducts GmbH, Germany). EEG signals were acquired using 64 Ag/AgCl electrodes following the 10/20 international system. The ground and reference channels were at the FCz and FPz positions, respectively. The sampling rate was 1,000 Hz, and a 60 Hz notch filter was applied to the acquired signals. All electrode impedances were kept below 10 kΩ during the experiment. D. Control Device for Tactile Sensation We designed a device for the experiment that could provide the subjects with a consistent and uniform texture feeling. As shown in Fig.
2 (a), the control device consists of a rail and a plate that travels horizontally above it. The subject's finger was able to feel the texture of the objects without moving itself, according to the experimental protocol. The actual touch was conducted by touching four different tactile objects, namely 'Fabric', 'Glass', 'Paper', and 'Fur'. The fabric cue indicated touching a smooth surface for the haptic sensation. In the touch imagery phase, the subjects imagined the fabric texture that they had experienced in the actual touch phase. The glass cue indicates a hard and slippery surface for the texture sensations. The paper cue presents a surface that is smooth but relatively less slippery compared to the glass texture. Finally, the fur cue indicates a texture which is smoother than the fabric class, and this surface texture has a characteristic of high friction because of the structure of the raw material. E. Data Analysis For data analysis, we compared the algorithm of common spatial patterns (CSP)-linear discriminant analysis (LDA), a popular method for motor- and haptic-related EEG decoding, and the architecture of EEGNet, which shows remarkable performance in classification tasks. We segmented the data into 5 s epochs for each trial, so we could prepare 5 s of actual touch and 5 s of touch imagery from a single trial. Then, the CSP algorithm [17] was applied to extract dominant spatial features for training. From the transformation matrix computed by CSP, the logarithmic variances of the projections onto the first three and the last three columns were used as features. LDA [18] was used as the classification method, classifying the four different classes using a one-versus-rest strategy. EEGNet, which is a convolutional neural network (CNN) architecture proposed by Lawhern et al. [19], used the preprocessed EEG data as input.
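Before turning to EEGNet, the CSP-plus-log-variance pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch on synthetic data, not the authors' code; the array sizes and the binary "class versus rest" split are our simplification of the one-versus-rest scheme:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def make_epochs(scale, n_trials=40, n_channels=8, n_samples=500):
    # Synthetic epochs; the "class" condition has extra variance in channel 0.
    epochs = rng.standard_normal((n_trials, n_channels, n_samples))
    epochs[:, 0, :] *= scale
    return epochs

X_class = make_epochs(3.0)   # epochs of one texture class
X_rest = make_epochs(1.0)    # pooled epochs of the remaining classes

def mean_cov(X):
    # Average per-trial spatial covariance, (channels x channels)
    return np.mean([x @ x.T / x.shape[1] for x in X], axis=0)

# CSP via a generalized eigendecomposition of the two covariances;
# keep the first three and the last three spatial filters, as in the paper.
vals, vecs = eigh(mean_cov(X_class), mean_cov(X_class) + mean_cov(X_rest))
W = np.hstack([vecs[:, :3], vecs[:, -3:]])       # (channels, 6)

def log_var_features(X):
    proj = np.einsum("cf,tcs->tfs", W, X)        # project each epoch
    v = proj.var(axis=2)                         # variance per spatial filter
    return np.log(v / v.sum(axis=1, keepdims=True))  # normalized log-variance

F_class = log_var_features(X_class)              # (trials, 6) feature vectors
F_rest = log_var_features(X_rest)
```

A one-versus-rest LDA, as used in the paper, would then be fit on such 6-dimensional feature vectors, one binary problem per texture class.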
No conventional feature extraction, such as CSP, was performed, and only significant features were expected to be extracted through the operation of the convolution layers. For a fair evaluation of classification performance, 5-by-5-fold cross-validation was used. III. RESULTS AND DISCUSSION As shown in Table I, we conducted an experiment of actual touch and touch imagery on seven subjects for 4-class classification. As a result, classification using EEGNet performed better than the CSP-LDA method; in particular, for actual touch, using EEGNet improved the average performance by about 0.1174. The difference in classification performance between actual touch and touch imagery was lower than expected. For example, the difference between the average accuracies of the actual touch and touch imagery classification is not large (0.1050-0.1836). In Fig. 3, the confusion matrices for the average classification performance of EEGNet are presented. Fig. 3 (a) is for actual touch and Fig. 3 (b) is for touch imagery. We could confirm that the decoding model performs classification clearly on actual touch, while its performance is relatively less effective on touch imagery. At the same time, we can see that the textures of glass and paper are more easily confused than the other classes. On the other hand, the textures of fabric and fur are relatively distinguishable. IV. CONCLUSION AND FUTURE WORKS In this paper, we designed an experimental environment for acquiring EEG data with respect to touch imagery. Through the experimental system, the subjects could perform touch imagery for the haptic analysis of various tactile information. We implemented four classes representing natural textures. The classification of touch imagery could contribute to developing neurohaptic systems for industry, VR/AR applications, and artificial intelligence. Fig. 4 shows an overview of the BCI-based bi-directional haptic system that we will challenge in the future.
The purpose is to provide real-time haptic feedback, along with analysis of motor intention, to assist the object cognition and application controllability of users for tasks needed in everyday life. As a result, the EEG classification performance needs to be higher. We will continue to test and adopt advanced deep learning approaches to make our haptic-related BCI system robust in real-world environments. This would greatly improve the interaction between users and BCIs.

ACKNOWLEDGMENT This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00432, Development of Non-Invasive Integrated BCI SW Platform to Control Home Appliances and External Devices by User's Thought via AR/VR Interface; No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)).

Fig. 1. Overview of decoding natural haptic perception and reconstructing the haptic feedback to develop haptic-related BCI systems.
Fig. 2. Experimental setup and protocol to record touch sensation. A self-made instrument for the purpose of obtaining EEG signals under consistent tactile sensation (a) and the experimental protocol for recording EEG signals related to the actual touch and the touch imagery (b).
Fig. 3. Confusion matrices of predicted and true labels for the corresponding classes. Result on actual touch (a) and result on touch imagery (b).
Fig. 4. Illustration of the bidirectional non-invasive BCI-based neurohaptic system and its application.

REFERENCES
[1] C. I. Penaloza and S. Nishio, "BMI control of a third arm for multitasking," Sci. Robot., vol. 3, no. 20, p. eaat1228, Jul. 2018.
[2] I. K. Niazi, N. Jiang, O. Tiberghien, J. F. Nielsen, K. Dremstrup, and D. Farina, "Detection of movement intention from single-trial movement-related cortical potentials," J. Neural Eng., vol. 8, no. 6, p. 066009, Oct. 2011.
[3] X. Ding and S.-W. Lee, "Changes of functional and effective connectivity in smoking replenishment on deprived heavy smokers: a resting-state fMRI study," PLoS One, vol. 8, no. 3, p. e59331, Mar. 2013.
[4] M.-H. Lee, J. Williamson, D.-O. Won, S. Fazli, and S.-W. Lee, "A high performance spelling system based on EEG-EOG signals with visual feedback," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 7, pp. 1443-1459, Jul. 2018.
[5] J.-H. Jeong, N.-S. Kwak, C. Guan, and S.-W. Lee, "Decoding movement-related cortical potentials based on subject-dependent and section-wise spectral filtering," IEEE Trans. Neural Syst. Rehabil. Eng., Mar. 2020.
[6] Y. Chen, A. D. Atnafu, I. Schlattner, W. T. Weldtsadik, M.-C. Roh, H. J. Kim, S.-W. Lee, B. Blankertz, and S. Fazli, "A high-security EEG-based login system with RSVP stimuli and dry electrodes," IEEE Trans. Inf. Forensics Security, vol. 11, no. 12, pp. 2635-2647, Jun. 2016.
[7] X. Zhu, H.-I. Suk, S.-W. Lee, and D. Shen, "Canonical feature selection for joint regression and multi-class identification in Alzheimer's disease diagnosis," Brain Imaging Behav., vol. 10, no. 3, pp. 818-828, Aug. 2015.
[8] J.-H. Kim, F. Bießmann, and S.-W. Lee, "Decoding three-dimensional trajectory of executed and imagined arm movements from electroencephalogram signals," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 23, no. 5, pp. 867-876, Dec. 2014.
[9] J.-H. Jeong, K.-T. Kim, D.-J. Kim, and S.-W. Lee, "Decoding of multi-directional reaching movements for EEG-based robot arm control," in Conf. Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), Bari, Italy, Oct. 2019, pp. 511-514.
[10] T.-E. Kam, H.-I. Suk, and S.-W. Lee, "Non-homogeneous spatial filter optimization for electroencephalogram (EEG)-based motor imagery classification," Neurocomputing, vol. 108, pp. 58-68, May 2013.
[11] D.-O. Won, H.-J. Hwang, S. Dähne, K. R. Müller, and S.-W. Lee, "Effect of higher frequency on the classification of steady-state visual evoked potentials," J. Neural Eng., vol. 13, no. 1, p. 016014, Dec. 2015.
[12] D.-O. Won, H.-J. Hwang, D.-M. Kim, K.-R. Müller, and S.-W. Lee, "Motion-based rapid serial visual presentation for gaze-independent brain-computer interfaces," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 2, pp. 334-343, Aug. 2017.
[13] K.-T. Kim, H.-I. Suk, and S.-W. Lee, "Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 3, pp. 654-665, Aug. 2016.
[14] L. E. Osborn, A. Dragomir, J. L. Betthauser, C. L. Hunt, H. H. Nguyen, R. R. Kaliki, and N. V. Thakor, "Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain," Sci. Robot., vol. 3, no. 19, 2018.
[15] P. D. Ganzer, S. C. Colachis 4th, M. A. Schwemmer, D. A. Friedenberg, C. F. Dunlap, C. E. Swiftney, A. F. Jacobowitz, D. J. Weber, M. A. Bockbrader, and G. Sharma, "Restoring the sense of touch using a sensorimotor demultiplexing neural interface," Cell, 2020.
[16] Z. Tayeb, R. Bose, A. Dragomir, L. E. Osborn, N. V. Thakor, and G. Cheng, "Decoding of pain perception using EEG signals for a real-time reflex system in prostheses: A case study," Sci. Rep., vol. 10, no. 1, pp. 1-11, 2020.
[17] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan, "Filter bank common spatial pattern (FBCSP) in brain-computer interface," in Conf. Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN), Hong Kong, China, Jun. 2008, pp. 2390-2397.
[18] J.-H. Cho, J.-H. Jeong, K.-H. Shim, D.-J. Kim, and S.-W. Lee, "Classification of hand motions within EEG signals for non-invasive BCI-based robot hand control," in Conf. Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), Miyazaki, Japan, Oct. 2018, pp. 515-518.
[19] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces," J. Neural Eng., vol. 15, no. 5, p. 056013, 2018.
[]
[ "On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems", "On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems" ]
[ "Bangti Jin ", "Zehui Zhou ", "Jun Zou " ]
[]
[]
Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory in the lens of regularization theory predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon, i.e., the convergence rate does not further improve with the solution regularity index when it is beyond a certain range. In this work, we present a refined convergence rate analysis of SGD, and prove that saturation actually does not occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis. In this part, we present several preliminary estimates and a refined error decomposition. Notation and preliminary estimates. We will employ the following index sets extensively. For any $k_1 \le k_2$ and $1 \le i \le k_2 - k_1 + 1$, let $J_{[k_1,k_2],i}$ denote the corresponding family of index sets. Note that the set $J_{[k_1,k_2],i}$ consists of (strictly monotone) multi-indices of length $i$, which arises naturally in the proof of Theorem 2.1 below. For $i = 0$, we adopt the convention $J_{[k_1,k_2],0} = \{\emptyset\}$ and $J_0 = \emptyset$. For a given $J_i$, the complementary set $J^c_{[k_1,k_2],i}$ is defined accordingly, where we omit the dependency on $J_i$ for notational simplicity. In particular, $J_{[k_1,k_2],1} = \{\{k_1\}, \ldots, \{k_2\}\}$ and $J^c_{[k_1,k_2],0} = \{k_1, \cdots, k_2\}$.
10.1137/20m1374456
[ "https://arxiv.org/pdf/2010.10916v2.pdf" ]
224,814,082
2010.10916
c5d20b13ddd44b056b4f0736d76bf26dbcbbd67d
On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems Bangti Jin Zehui Zhou Jun Zou

On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

Keywords: stochastic gradient descent, regularizing property, convergence rate, saturation, inverse problems

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory in the lens of regularization theory predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon, i.e., the convergence rate does not further improve with the solution regularity index when it is beyond a certain range. In this work, we present a refined convergence rate analysis of SGD, and prove that saturation actually does not occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis. In this part, we present several preliminary estimates and a refined error decomposition. Notation and preliminary estimates. We will employ the following index sets extensively. For any $k_1 \le k_2$ and $1 \le i \le k_2 - k_1 + 1$, let $J_{[k_1,k_2],i}$ denote the corresponding family of index sets. Note that the set $J_{[k_1,k_2],i}$ consists of (strictly monotone) multi-indices of length $i$, which arises naturally in the proof of Theorem 2.1 below. For $i = 0$, we adopt the convention $J_{[k_1,k_2],0} = \{\emptyset\}$ and $J_0 = \emptyset$. For a given $J_i$, the complementary set $J^c_{[k_1,k_2],i}$ is defined accordingly, where we omit the dependency on $J_i$ for notational simplicity. In particular, $J_{[k_1,k_2],1} = \{\{k_1\}, \ldots, \{k_2\}\}$ and $J^c_{[k_1,k_2],0} = \{k_1, \cdots, k_2\}$.

Introduction

In this paper, we consider the numerical solution of the following finite-dimensional linear inverse problem:

$Ax = y^\dagger$, (1.1)

where $A \in \mathbb{R}^{n \times m}$ is the system matrix representing the data formation mechanism, and $x \in \mathbb{R}^m$ is the unknown signal of interest. In the context of inverse problems, the matrix A is commonly ill-conditioned.
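To make this concrete, here is a small synthetic example of such an ill-conditioned system (a toy construction of ours, not from the paper), showing how naive inversion amplifies even tiny data noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Toy forward map with rapidly decaying singular values: A = U diag(s) V^T
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.6 ** np.arange(n)
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y_exact = A @ x_true

# Tiny noise of level delta = 1e-6 ...
xi = rng.standard_normal(n)
y_noisy = y_exact + 1e-6 * xi / np.linalg.norm(xi)

# ... is amplified by up to 1/s_min under direct inversion,
# which is why regularized (iterative) methods are needed instead.
x_naive = np.linalg.solve(A, y_noisy)
print(np.linalg.cond(A), np.linalg.norm(x_naive - x_true))
```

The condition number here is astronomically large, so the naively inverted solution is dominated by amplified noise even though the data perturbation is only of size 1e-6.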
When the matrix $A$ is rank-deficient, equation (1.1) may have infinitely many solutions. The reference solution $x^\dagger$ is taken to be the minimum norm solution relative to the initial guess $x_1$, i.e., $x^\dagger = \arg\min_{x \in \mathbb{R}^m} \{\|x - x_1\| \text{ s.t. } Ax = y^\dagger\}$, with $\|\cdot\|$ being the Euclidean norm of a vector (and also the spectral norm of a matrix). In practice, we only have access to a noisy version $y^\delta$ of the exact data $y^\dagger = Ax^\dagger$, i.e., $y^\delta = y^\dagger + \xi$, where $\xi \in \mathbb{R}^n$ denotes the noise in the data with a noise level $\delta := \|\xi\|$. We denote the $i$th row of the matrix $A$ by a column vector $a_i \in \mathbb{R}^m$, i.e., $A = [a_i^t]_{i=1}^n$ (with the superscript $t$ denoting the matrix/vector transpose), and the $i$th entry of the vector $y^\delta \in \mathbb{R}^n$ by $y^\delta_i$. Linear inverse problems of the form (1.1) arise in a broad range of applications, e.g., initial condition / source identification and optical imaging. A large number of numerical methods have been developed, prominently variational regularization [7,14] and iterative regularization [21]. Stochastic gradient descent (SGD) is one very promising numerical method for solving problem (1.1). In its simplest form, it reads as follows. Given an initial guess $x_1^\delta \equiv x_1 \in \mathbb{R}^m$, we update the iterate $x_{k+1}^\delta$ recursively by

$x_{k+1}^\delta = x_k^\delta - \eta_k ((a_{i_k}, x_k^\delta) - y_{i_k}^\delta) a_{i_k}$, $k = 1, 2, \cdots$, (1.2)

where the random row index $i_k$ is drawn i.i.d. uniformly from the index set $\{1, \cdots, n\}$, $\eta_k > 0$ is the stepsize at the $k$th iteration, and $(\cdot,\cdot)$ denotes the Euclidean inner product. We denote by $\mathcal{F}_k$ the filtration generated by the random indices $\{i_1, \ldots, i_{k-1}\}$, define $\mathcal{F}$ by $\mathcal{F} = \cup_{k \in \mathbb{N}} \mathcal{F}_k$, and let $(\Omega, \mathcal{F}, \mathbb{P})$ be the associated probability space. The notation $\mathbb{E}[\cdot]$ denotes taking expectation with respect to the filtration $\mathcal{F}$. The SGD iterate $x_k^\delta$ is random, and measurable with respect to $\mathcal{F}_k$. SGD is a randomized version of the classical Landweber method [23]

$x_{k+1}^\delta = x_k^\delta - \eta_k n^{-1} A^t (A x_k^\delta - y^\delta)$, $k = 1, 2, \cdots$, (1.3)

which is identical with the gradient descent applied to the following objective functional

$J(x) = (2n)^{-1} \|Ax - y^\delta\|^2$. (1.4)

When compared with the Landweber method in (1.3), SGD (1.2) employs only one data pair $(a_{i_k}, y_{i_k}^\delta)$ instead of all data pairs, and thus it enjoys excellent scalability with respect to the data size. It is worth noting that due to the ill-conditioning of $A$ and the presence of noise in the data $y^\delta$, the exact minimizer of $J(x)$ is not of interest.

Since its first proposal by Robbins and Monro [29] for statistical inference, SGD has received a lot of attention in many diverse research areas (see the monograph [22] for various asymptotic results). Due to its excellent scalability, the interest in SGD and its variants has grown explosively in recent years in machine learning, and its accelerated variants, e.g., ADAM, have been established as the workhorse in many challenging deep learning training tasks [2,3]. It has also achieved great success in inverse problems, e.g., in computed tomography (known as algebraic reconstruction techniques or randomized Kaczmarz method [13,27,31,17]) and optical tomography [4].

The theoretical analysis of SGD for solving inverse problems is still in its infancy. Let $e_k^\delta = x_k^\delta - x^\dagger$ be its error with respect to the minimum-norm solution $x^\dagger$. Only very recently, the regularizing property was proved in [18]: when equipped with a priori stopping rules, the mean squared error $\mathbb{E}[\|e_k^\delta\|^2]^{1/2}$ of the SGD iterate $x_k^\delta$ converges to zero as $\delta$ tends to zero, and further, under the canonical power type source condition (see (1.5) in Assumption 1.1 below), it converges to zero at a certain rate.
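A minimal numerical sketch contrasting the SGD update (1.2) with the Landweber update (1.3) on a small synthetic least-squares problem (our own toy setup; the stepsizes are chosen only to satisfy the usual stability bounds):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20
A = rng.standard_normal((n, m)) / np.sqrt(n)   # rows a_i with small norms
x_true = rng.standard_normal(m)
y = A @ x_true + 1e-2 * rng.standard_normal(n)

def sgd(A, y, n_iter, c0=0.5, alpha=0.1):
    # SGD iteration (1.2) with polynomially decaying stepsize eta_k = c0 * k^-alpha
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        i = rng.integers(A.shape[0])                       # row drawn uniformly
        x = x - c0 * k ** (-alpha) * (A[i] @ x - y[i]) * A[i]
    return x

def landweber(A, y, n_iter, eta):
    # Landweber iteration (1.3): full gradient of J(x) = ||Ax - y||^2 / (2n)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - eta / A.shape[0] * A.T @ (A @ x - y)
    return x

x_sgd = sgd(A, y, 10000)
eta_lw = 0.9 * n / np.linalg.norm(A, 2) ** 2               # keeps eta * ||B|| < 1
x_lw = landweber(A, y, 200, eta_lw)
```

One SGD step touches a single row of A, whereas one Landweber step touches all n rows; this is the scalability trade-off discussed in the text.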
However, the result predicts that SGD can suffer from an undesirable saturation phenomenon for smooth solutions (i.e., with ν > 1 2 ): E[ e δ k 2 ] 1 2 converges at most at a rate O(δ 1 2 ), which is slower than that achieved by Landweber method [7,Chapter 6]; see also [15] for a posteriori stopping using the discrepancy principle and numerical illustration on the saturation phenomenon for SGD. Thus, SGD is suboptimal for "smooth" inverse solutions with ν > 1 2 . This phenomenon is attributed to the inherent computational variance of the SGD approximation x δ k , which arises from the use of a random gradient estimate in place of the true gradient. To the best of our knowledge, it remains unclear whether the saturation phenomenon is intrinsic to SGD. In this work, we revisit the convergence rate analysis of SGD with a polynomially decaying stepsize schedule for small initial stepsize c 0 , and aim at addressing the saturation phenomenon, under the standard source condition. First we state the standing assumptions for the analysis of SGD. The choice in (i) is commonly known as a polynomially decaying stepsize schedule. Part (ii) is the classical source condition, which represents a type of smoothness of the initial error x † − x 1 (with respect to the matrix B), and the condition on B is easily achieved by rescaling the problem. Source type conditions are needed in order to derive convergence rate, without which the convergence can be arbitrarily slow [7]. Loosely speaking, it restricts x † − x 1 to a suitable subspace which enables quantitatively bounding the approximation error. Note that the condition generally is insufficient to ensure a contractive map for the Landweber method. Below we shall focus on the case ν > 1 2 , for which the current analysis [18] exhibits the saturation phenomenon, as mentioned above. (iii) assumes that the forward map A takes a special form. Alternatively, it can be viewed as SGD applied to a preconditioned version of problem (1.1). 
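One way to read the structural condition A = ΣV^t of Assumption 1.1(iii): for any A with SVD UΣV^t, dropping the column-orthonormal factor U leaves B = n^{-1} A^t A unchanged, so the condition only constrains the row structure entering the stochastic part of the analysis. A small numeric check of this (sizes and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 12
A = rng.standard_normal((n, m))

# SVD A = U S V^t; the structured matrix of Assumption 1.1(iii) keeps S V^t.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_struct = np.diag(s) @ Vt     # Sigma diagonal nonnegative, V column orthonormal

# Dropping the orthonormal factor U does not change B = n^{-1} A^t A.
B = A.T @ A / n
B_struct = A_struct.T @ A_struct / n
```

Note that replacing A by ΣV^t also transforms the data and changes which rows SGD samples, so the two iterations are not identical; only the deterministic operator B is preserved, consistent with the discussion of Remark 2.2.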
To validate this condition, we present some numerical results for typical inverse problems in Subsection 4.2, which indicates that this structure is irrelevant to the performance of SGD in the sense that it performs nearly identically on the problems with or without this structure. Thus this restriction is due to the limitation of the proof technique; see Remark 2.2 for the obstruction in the proof in the general case. Assumption 1.1. Let B = n −1 A t A with B ≤ 1. The following assumptions hold. (i) The stepsize η j = c 0 j −α , j = 1, · · · , α ∈ [0, 1), with c 0 ≤ min (max i a i 2 ) −1 , 1 and c 0 B ≤ (2e) −1 . (ii) There exist w ∈ R m and ν > 1 2 such that the exact solution x † satisfies x † − x 1 = B ν w. (1.5) ( iii) The matrix A = ΣV t with Σ being diagonal and nonnegative and V column orthonormal. Now we can state the main result of the work. By choosing the stopping index k(δ) in accordance with the (unknown) regularity index ν as k = O( w δ −1 ) 2 (1+2ν)(1−α) , the result implies a convergence rate E[ e δ k(δ) 2 ] 1 2 ≤ c w 1 1+2ν δ 2ν 1+2ν , which is identical with that for Landweber method [7,Chapter 6]. Thus, under the given condition, the aforementioned saturation phenomenon does not occur for SGD. This result partly settles the saturation phenomenon, and complements existing analysis [18,19]. Theorem 1.1. Let Assumption 1.1 hold, and c 0 be sufficiently small. Then there exist constants c * and c * * , which depend on ν, n, c 0 and α, such that E[ e δ k 2 ] ≤ c * k −2ν(1−α) w 2 + c * * δ 2 k 1−α . The condition c 0 being sufficiently small can be made more precise as c 0 = O(n −1 ). Note that the stepsize choice O(n −1 ) has been extensively used in the convergence analysis of stochastic gradient descent with random shuffling [33,11,30]. In Theorem 1.1, the constant condition on c 0 is not given explicitly. 
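The source condition (1.5) can be realized numerically by applying a fractional power of B to an arbitrary source element w. A minimal sketch via the eigendecomposition of the symmetric positive semidefinite B; the sizes, seed, and the value of ν are illustrative assumptions, and the clip only guards tiny negative eigenvalues from round-off:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, nu = 40, 10, 1.5
A = rng.standard_normal((n, m))
B = A.T @ A / n
B = B / np.linalg.norm(B, 2)      # rescale so that ||B|| <= 1

lam, Q = np.linalg.eigh(B)        # eigendecomposition of symmetric B
lam = np.clip(lam, 0.0, None)     # guard round-off negatives
B_nu = (Q * lam ** nu) @ Q.T      # fractional power B^nu

w = rng.standard_normal(m)
x1 = np.zeros(m)
x_true = x1 + B_nu @ w            # exact solution obeying (1.5)
```

Since ν > 1/2 damps the components of w along small eigenvalues of B, constructions of this type produce the "smooth" solutions for which the saturation question is relevant.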
When α = 0, the following condition is sufficient 2(1 + φ(2 ))nc 2−2 0 ≤ 1, for some ∈ ( 1 2 , 1), (1.6) and the function φ being defined in Lemma 2.2 below; see Theorem 3.1. The numerical experiments in Section 4 indicate that with a small initial stepsize c 0 , SGD can indeed deliver reconstructions with accuracy comparable with that by the Landweber method, for a range of regularity index and noise levels, and in the absence of the smallness condition on c 0 , the results obtained by SGD are indeed suboptimal. These numerical results indicate the necessity and sufficiency of a small stepsize for achieving the optimal convergence rate. The general strategy of proof is to decompose the error e δ k := x δ k − x † into three components (with x k being the SGD iterate for exact data y † ) x δ k − x † = (E[x k ] − x † ) + (E[x δ k ] − E[x k ]) + (x δ k − E[x δ k ]), which represent respectively approximation error due to early stopping, propagation error due to data noise, and stochastic error due to randomness of gradient estimate, and then to bound the terms by bias-variance decomposition and the triangle inequality as E[ x δ k − x † 2 ] ≤ 2 E[x k ] − x † 2 + 2 E[x δ k ] − E[x k ] 2 + E[ x δ k − E[x δ k ] 2 ]. In our analysis, we refine this decomposition by repeatedly expanding the random iterate noise within the third term and applying the bias-variance decomposition up to the th fold; see Theorem 2.1 for the details. In the decomposition, Assumption 1.1(iii) is used in an essential manner to arrive at a simple recursion. It improves the existing analysis [18,19] for SGD in the sense that the stochastic component is further decomposed. Then the analysis proceeds by bounding the first two components separately, and the third component by recursion, which in turn all involve lengthy computation of certain summations. It is noteworthy that for the case of a constant stepsize, the convergence analysis can be greatly simplified; see Section 3.4 for the details. 
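The three-component error bound above can be checked empirically: for paired SGD runs (the same index stream applied to exact and noisy data), the sample analogue of the bias-variance/triangle bound holds exactly, not just in expectation. A small Monte Carlo sketch; all sizes, seeds, and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, K, runs = 20, 5, 200, 400
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
y_exact = A @ x_true
y_noisy = y_exact + 0.05 * rng.standard_normal(n)

def sgd(y, idx, c0=0.01, alpha=0.1):
    """Run K steps of (1.2) along a prescribed index stream idx."""
    x = np.zeros(m)
    for k, i in enumerate(idx, start=1):
        x = x - c0 * k ** (-alpha) * (A[i] @ x - y[i]) * A[i]
    return x

Xe, Xd = [], []
for _ in range(runs):
    idx = rng.integers(n, size=K)          # shared index stream: paired runs
    Xe.append(sgd(y_exact, idx))
    Xd.append(sgd(y_noisy, idx))
Xe, Xd = np.array(Xe), np.array(Xd)

# Sample analogues of the approximation, propagation, and stochastic errors.
lhs = np.mean(np.sum((Xd - x_true) ** 2, axis=1))
approx = np.sum((Xe.mean(0) - x_true) ** 2)
propag = np.sum((Xd.mean(0) - Xe.mean(0)) ** 2)
stoch = np.mean(np.sum((Xd - Xd.mean(0)) ** 2, axis=1))
rhs = 2 * approx + 2 * propag + stoch
```

The inequality lhs <= rhs is deterministic for the samples, since the empirical bias-variance decomposition is an identity and the cross term is removed by the inequality ||a + b||^2 <= 2||a||^2 + 2||b||^2.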
Last, we situate the current work within the large body of literature on SGD. The convergence of SGD has been studied extensively in several senses; besides the aforementioned results for inverse problems, two lines of research are closely related to this work: optimization and statistical learning. In the context of optimization, when the objective function is strictly convex, many results on the convergence of the iterates to the global minimizer are available; see, e.g., [16] for matching lower and upper bounds, and the references therein for further results. Note that J(x) in (1.4) is not strictly convex. In general, the convergence of SGD is often measured by the optimality gap (i.e., the gap between the expected objective value and the optimal value) or by the magnitude of the gradient. See the survey [3] for a recent overview of this line of research, including advances on nonconvex problems. Very recently, the work [8] proved the local convergence of SGD, with rates, to minima of the objective function, while avoiding convexity or contractivity assumptions. It is noteworthy that these results cannot be compared directly with the convergence rates given in Theorem 1.1, since the global minimizer of the objective function J(x) is not of practical interest, due to the ill-conditioning of A. This represents one essential difference between the results from optimization and those from regularization theory. The second line of research concerns the generalization error in reproducing kernel Hilbert spaces in statistical learning theory [34,32,5,26,28,25]. These works aim at establishing upper bounds on the generalization error of SGD or its variants (often combined with a suitable averaging scheme), which differs from an error bound on the iterate itself. Nonetheless, the high-level idea of the analysis is similar: both use the bias-variance decomposition to bound the relevant quantities, which often depend on source type conditions of the kind given in Assumption 1.1(ii). 
One major technical novelty of this work is to develop a recursive version of the bias-variance decomposition for the mean squared error. The rest of the paper is organized as follows. In Section 2, we derive a novel error decomposition, and then in Section 3, we give the convergence rate analysis, by bounding the three error components of the SGD iterate x δ k . Finally, in Section 4, we provide some illustrative numerical experiments to complement the theoretical analysis. Throughout, the notation c, with or without a subscript, denotes a generic constant, which may differ at each occurrence, but it is always independent of the iteration number k (and the random index i k ) and the noise level δ. For i > k 2 − k 1 + 1, we adopt the convention J [k1,k2],i = {∅}, J i = ∅, J c [k1,k2],i = ∅. The next lemma collects useful identities on the summation over the indices J [1,k],i+1 . Lemma 2.1. The following identities hold: Ji+1∈J [1,k],i+1 = Ji∈J [2,k],i ji−1 ji+1=1 = k−i ji+1=1 Ji∈J [j i+1 +1,k],i . (2.1) Proof. The identities are direct from the definition: Ji+1∈J [1,k],i+1 = k j1=i+1 · · · ji−1−1 ji=2 ji−1 ji+1=1 = Ji∈J [2,k],i ji−1 ji+1=1 , Ji+1∈J [1,k],i+1 = k−i ji+1=1 k−i+1 ji=ji+1+1 · · · k j1=j2+1 = k−i ji+1=1 Ji∈J [j i+1 +1,k],i . This shows directly the assertion. We use the following elementary inequality extensively. Lemma 2.2. For any k ∈ N and s ∈ R, there holds k j=1 j −s ≤        2 1−s (1 − s) −1 k 1−s , s < 0, (1 − s) −1 k 1−s , s ∈ [0, 1), 2 max(ln k, 1), s = 1, s(s − 1) −1 , s > 1. (2.2) Throughout, we denote the constant and power on the right hand side of the inequality (2.2) by φ(s) and k max(1−s,0) , respectively, with the shorthand k max(0,0) = max(ln k, 1). The next result bounds the spectral norm of the matrix product Π J (B)B s , which, for each index set J, is defined by (with the convention Π ∅ (B) = I) Π J (B) = j∈J (I − η j B).∈ J [k ,k], with k ≤ k, 0 ≤ < k + 1 − k , Π J c [k ,k], (B)B s ≤ s s (ec 0 ) −s (k + 1 − k − ) −s k αs . 
Proof. For any s > 0 and J ∈ J [k ,k], with k ≤ k, 0 ≤ < k + 1 − k , there holds Π J c [k ,k], (B)B s ≤ sup λ∈Sp(B) |λ s Π J c [k ,k], (λ)| = sup λ∈Sp(B) λ s i∈J c [k ,k], (1 − η i λ), where Sp(B) denotes the spectrum of B. For any x ∈ R, there holds the inequality 1 − x ≤ e −x , and thus λ s i∈J c [k ,k], (1 − η i λ) ≤ λ s i∈J c [k ,k], e −ηiλ = λ s e −λ i∈J c [k ,k], ηi . For the function g(λ) = λ s e −λa , with a > 0, the maximum is attained at λ * = sa −1 , with a maximum value s s (ea) −s . Then setting a = i∈J c (i) For any k ≥ 2, α ∈ [0, 1) and 2 ≤ i ≤ k, there holds Ji∈J [1,k],i i t=1 j −2α t ≤ φ(2α) i (k max(1−2α,0) ) i . (ii) For any j = 0, · · · , k − 1 and i = 1, · · · , k − j, we have Ji∈J [j+1,k],i 1 ≤ (k − j) i i! . Proof. Assertion (i) follows from (2.2) as Ji∈J [1,k],i i t=1 j −2α t = k j1=i j1−1 j2=i−1 · · · ji−1−1 ji=1 j −2α i i−1 t=1 j −2α t ≤ i t=1 k jt=1 j −2α t ≤ φ(2α)k max(1−2α,0) i . By the definition of the index set J [j+1,k],i , we have the identity Ji∈J [j+1,k],i 1 = k−i+1 ji=j+1 · · · k−1 j2=j3+1 k j1=j2+1 1 ≤ k−1 ji=j+1 · · · k−1 j2=j3+1 k j1=j2+1 1 = k−1 ji=j+1 · · · k−1 j2=j3+1 (k − j 2 ). then assertion (ii) follows by repeatedly applying the inequality T t=1 t s ≤ (s + 1) −1 (T + 1) s+1 , ∀T ∈ N, s ≥ 0. This completes the proof of the lemma. Error decomposition Now we derive an important error decomposition. Below, we denote the SGD iterates for the exact data y † and noisy data y δ by x k and x δ k , respectively, and also use the following shorthand notation: A = n − 1 2 A,ξ = n − 1 2 ξ,δ = n − 1 2 δ, and e k = x k − x † . The following result plays a central role in the convergence analysis. Theorem 2.1. 
Under Assumption 1.1(iii), for any 0 ≤ < k, the following error decomposition holds E[ e δ k+1 2 ] ≤ i=0 I δ i,1 + i=0 I δ i,2 + (I δ ) c , (2.3) where the terms I δ i,j , i = 0, 1, · · · , , j = 1, 2, are defined by I δ 0,1 = 2 Π J c [1,k],0 (B)e 1 2 ,I δ 0,2 = 2δ 2 k j=1 η j Π J c [j+1,k],0 (B)B 1 2 2 + (n − 1) k j=1 η 2 j Π J c [j+1,k],0 (B)B 1 2 2 , I δ i,1 = 2 i+1 (n − 1) i Ji∈J [1,k],i i t=1 η 2 jt Π J c [1,k],i (B)B i e 1 2 , ∀1 ≤ i ≤ , I δ i,2 = 2 i+1 (n − 1) iδ2 Ji∈J [2,k],i i t=1 η 2 jt ji−1 ji+1=1 η ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 + (n − 1) ji−1 ji+1=1 η 2 ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 , ∀1 ≤ i ≤ , (I δ ) c = 2 +1 (n − 1) +1 J +1 ∈J [1,k], +1 +1 t=1 η 2 jt E[ Π J c [j +1 +1,k], (B)B +1 e δ j +1 2 ]. Proof. Recall that J i = {j 1 , · · · , j i } for any i ≥ 1 and J 0 = ∅. By the definition of the SGD iteration (1.2), we have e δ k = (I − η k−1 B)e δ k−1 + η k−1 H k−1 = Π J c [1,k−1],0 (B)e δ 1 + k−1 j=1 η j Π J c [j+1,k−1],0 (B)H j , (2.4) where H j is defined by H j = Be δ j − (a ij , x δ j ) − y δ ij a ij = B − a ij a t ij )e δ j + ξ ij a ij . (2.5) By bias-variance decomposition and triangle inequality, we have E[ e δ k+1 2 ] = E[e δ k+1 ] 2 + E[ E[x δ k+1 ] − x δ k+1 2 ] ≤ 2 E[e k+1 ] 2 + 2 E[x k+1 − x δ k+1 ] 2 + E[ E[x δ k+1 ] − x δ k+1 2 ]. (2.6) It is known that the following estimates hold [18] E[e k+1 ] = Π J c [1,k],0 (B)e 1 , E[x k+1 − x δ k+1 ] ≤δ k j=1 η j Π J c [j+1,k],0 (B)B 1 2 , E[ x δ k+1 − E[x δ k+1 ] 2 ] = k j=1 η 2 j E[ Π J c [j+1,k],0 (B)A t N δ j 2 ],(2.7) with the iteration noise N δ j (at the jth SGD iteration) given by N δ j = n −1 (Ax δ j − y δ ) − ((a ij , x δ j ) − y δ ij )b ij , where b i = (0, . . . , 0, 1, 0, . . . , 0) t ∈ R n denotes the ith canonical Cartesian basis vector. Let N δ j = ((a ij , x δ j ) − y δ ij )b ij = ((a ij , e δ j ) − ξ ij )b ij . Then the iteration noise N δ j can be rewritten as N δ j = E[Ñ δ j |F j ] −Ñ δ j . 
Next we claim that under Assumption 1.1(iii), there holds Π J c [j+1,k],0 (B)A t (Ae δ j − ξ) 2 = n i=1 Π J c [j+1,k],0 (B)A t (a i , e δ j ) − ξ i b i 2 . (2.8) Actually, in view of Assumption 1.1(iii), for any 1 ≤ j ≤ k, there hold Ae δ j − ξ = n i=1 (a i , e δ j ) − ξ i b i and Π J c [j+1,k],0 (B)A t = V Π J c [j+1,k],0 (n −1 Σ t Σ)Σ t . Then the claim (2.8) follows from these two identities and column orthonormality of V as Π J c [j+1,k],0 (B)A t (Ae δ j − ξ) 2 = V n i=1 Π J c [j+1,k],0 (n −1 Σ t Σ)Σ t (a i , e δ j ) − ξ i b i 2 = n i=1 V Π J c [j+1,k],0 (n −1 Σ t Σ)Σ t (a i , e δ j ) − ξ i b i 2 = n i=1 Π J c [j+1,k],0 (B)A t (a i , e δ j ) − ξ i b i 2 . Thus, by the bias-variance decomposition and the definitions of the notationĀ andξ etc, we have, for any j = 1, · · · , k, E[ Π J c [j+1,k],0 (B)A t N δ j 2 |F j ] = E[ Π J c [j+1,k],0 (B)A tÑ δ j 2 |F j ] − Π J c [j+1,k],0 (B)A t E[Ñ δ j |F j ] 2 = 1 n n i=1 Π J c [j+1,k],0 (B)A t (a i , e δ j ) − ξ i b i 2 − Π J c [j+1,k],0 (B) A t n (Ae δ j − ξ) 2 =(n − 1) Π J c [j+1,k],0 (B)(Be δ j −Ā tξ ) 2 . By the Cauchy-Schwarz inequality, the identity Π J c [j+1,k],0 (B)Ā t 2 = Π J c [j+1,k],0 (B)B 1 2 2 , and the triangle inequality, we deduce from (2.7) that E[ x δ k+1 − E[x δ k+1 ] 2 ] = k j=1 η 2 j E E[ Π J c [j+1,k],0 (B)A t N δ j 2 |F j ] =(n − 1) k j=1 η 2 j E[ Π J c [j+1,k],0 (B)(Be δ j −Ā tξ ) 2 ] ≤2(n − 1) k j=1 η 2 j E[ Π J c [j+1,k],0 (B)Be δ j 2 ] + Π J c [j+1,k],0 (B)Ā tξ 2 ≤2(n − 1) k j=1 η 2 j E[ Π J c [j+1,k],0 (B)Be δ j 2 ] + 2(n − 1)δ 2 k j=1 η 2 j Π J c [j+1,k],0 (B)B 1 2 2 . By the definitions of I δ 0,1 , I δ 0,2 and (I δ 0 ) c , we have E[ e δ k+1 2 ] ≤ I δ 0,1 + I δ 0,2 + (I δ 0 ) c . Next further expanding {e δ j } k j=2 in the expression of (I δ 0 ) c using (2.4) gives (I δ 0 ) c = 2(n − 1)η 2 1 Π J c [2,k],0 (B)Be δ 1 2 + 2(n − 1) k j1=2 η 2 j1 E Π J c [j 1 +1,k],0 (B)B Π J c [1,j 1 −1],0 (B)e δ 1 + j1−1 j2=1 Π J c [j 2 +1,j 1 −1],0 (B)η j2 H j2 2 . 
Now using the definition of H j in (2.5), we obtain j1−1 j2=1 Π J c [j 2 +1,j 1 −1],0 (B)η j2 H j2 = j1−1 j2=1 η j2 Π J c [j 2 +1,j 1 −1],0 (B)(B − a ij 2 a t ij 2 )e δ j2 + j1−1 j2=1 η j2 Π J c [j 2 +1,j 1 −1],0 (B)ξ ij 2 a ij 2 . Thus, we can further bound (I δ 0 ) c by (I δ 0 ) c ≤2(n − 1)η 2 1 Π J c [2,k],0 (B)Be δ 1 2 + 4(n − 1) k j1=2 η 2 j1 E Π J c [j 1 +1,k],0 (B)B j1−1 j2=1 η j2 Π J c [j 2 +1,j 1 −1],0 (B)ξ ij 2 a ij 2 2 + 4(n − 1) k j1=2 η 2 j1 E Π J c [j 1 +1,k],0 (B)B Π J c [1,j 1 −1],0 (B)e δ 1 + j1−1 j2=1 η j2 Π J c [j 2 +1,j 1 −1],0 (B)(B − a ij 2 a t ij 2 )e δ j2 2 ≤2(n − 1)η 2 1 Π J c [2,k],0 (B)Be δ 1 2 + 4(n − 1) k j1=2 η 2 j1 E[ j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)Bξ ij 2 a ij 2 2 ] II1 + 4(n − 1) k j1=2 η 2 j1 E[ Π J c [1,k],1 (B)Be δ 1 + j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)B(B − a ij 2 a t ij 2 )e δ j2 2 ] II2 . Next we simplify the two terms II 1 and II 2 . Under Assumption 1.1(iii), direct computation gives, for any j 1 = 2, · · · , k and j, j = 1, · · · , j 1 − 1, E[ Π J c [j +1,k],1 (B)Bξ i j a i j , Π J c [j+1,k],1 (B)Bξ ij a ij ] = n Π J c [j +1,k],1 (B)BĀ tξ , Π J c [j+1,k],1 (B)BĀ tξ , j = j, Π J c [j +1,k],1 (B)BĀ tξ , Π J c [j+1,k],1 (B)BĀ tξ , j = j. (2.9) Indeed, the case j = j follows directly. Meanwhile, under Assumption 1.1(iii), we have Π J c [j+1,k],1 (B)BA t ξ = V Π J c [j+1,k],1 (n −1 Σ t Σ)(n −1 Σ t Σ)Σ t n i=1 ξ i b i , and V Σ t ξ i b i = A t ξ i b i = a i ξ i . This and the column orthonormality of the matrix V imply n Π J c [j+1,k],1 (B)BĀ tξ 2 = n −1 V Π J c [j+1,k],1 (n −1 Σ t Σ)(n −1 Σ t Σ)Σ t n i=1 ξ i b i 2 =n −1 n i=1 V Π J c [j+1,k],1 (n −1 Σ t Σ)(n −1 Σ t Σ)Σ t ξ i b i 2 = n −1 n i=1 Π J c [j+1,k],1 (B)Ba i ξ i 2 =E[ Π J c [j+1,k],1 (B)Bξ ij a ij 2 ]. 
This and the bias-variance decomposition imply that the term II 1 can be simplified to II 1 = j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)BĀ tξ 2 + (n − 1) j1−1 j2=1 η 2 j2 Π J c [j 2 +1,k],1 (B)BĀ tξ 2 ≤δ 2 j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 + (n − 1)δ 2 j1−1 j2=1 η 2 j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 . Further, by the measurability of e δ j with respect to F j , we have E[ e δ 1 , (B − a ij a t ij )e δ j ] = e δ 1 , E[E[(B − a ij a t ij )e δ j |F j ]] = 0, ∀j, (2.10) since e δ 1 is deterministic, and similarly, E[ (B − a i j a t i j )e δ j , (B − a ij a t ij )e δ j ] =E[ (B − a i j a t i j )e δ j , E[(B − a ij a t ij )e δ j |F j ] ] = 0, ∀j < j. (2.11) Consequently, there holds II 2 = Π J c [1,k],1 (B)Be δ 1 2 + j1−1 j2=1 η 2 j2 E[ Π J c [j 2 +1,k],1 (B)B(B − a ij 2 a t ij 2 )e δ j2 2 ]. Combining these estimates with the definitions of the quantities I δ 1,1 , I δ 1,2 and (I δ 1 ) c gives (I δ 0 ) c ≤ 2(n − 1)η 2 1 Π J c [2,k],0 (B)Be δ 1 2 + 4(n − 1)δ 2 k j1=2 η 2 j1 j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 + (n − 1) j1−1 j2=1 η 2 j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 + 4(n − 1) k j1=2 η 2 j1 Π J c [1,k],1 (B)Be δ 1 2 + j1−1 j2=1 η 2 j2 E[ Π J c [j 2 +1,k],1 (B)B(B − a ij 2 a t ij 2 )e δ j2 2 ] ≤4(n − 1) k j1=1 η 2 j1 Π J c [1,k],1 (B)Be δ 1 2 + 4(n − 1)δ 2 k j1=2 η 2 j1 j1−1 j2=1 η j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 + (n − 1) j1−1 j2=1 η 2 j2 Π J c [j 2 +1,k],1 (B)B 3 2 2 + 4(n − 1) k j1=2 η 2 j1 j1−1 j2=1 η 2 j2 E[ Π J c [j 2 +1,k],1 (B)B(B − a ij 2 a t ij 2 )e δ j2 2 ] =I δ 1,1 + I δ 1,2 + (I δ 1 ) c . Similar to the analysis of (I δ 0 ) c , by repeating the argument, we obtain (I δ 1 ) c =4(n − 1) 2 k j1=2 j1−1 j2=1 η 2 j1 η 2 j2 E[ Π J c [j 2 +1,k],1 (B)B 2 e δ j2 2 ]. In general, we can derive (I δ ) c = 2 +1 (n − 1) J +1 ∈J [1,k], +1 +1 t=1 η 2 jt E[ Π J c [j +1 +1,k], (B)B (B − a ij +1 a t ij +1 )e δ j +1 2 ]. Then repeating the preceding argument, and noting the relation e δ 1 = e 1 complete the proof. Remark 2.1. 
In Theorem 2.1, Assumption 1.1(iii) plays a central role in the refined error decomposition, at two places, i.e., (2.8) and (2.9). Intuitively, the condition essentially assumes low correlation between the rows of the matrix A, in analogy to the mutual coherence condition in compressed sensing [6]. The numerical experiments in Section 4 indicate that SGD performs comparably with or without this assumption. Remark 2.2. It is instructive to see the obstruction in extending the argument of Theorem 2.1 to a general matrix A with exact data (i.e., ξ = 0), in the absence of Assumption 1.1(iii). Let the singular value decomposition of A be A = U ΣV t , with Σ ∈ R n×m being diagonal with positive diagonal entries {σ i } r i=1 (with r ≤ min(m, n) being the rank, ordered nonincreasingly) and U = [u 1 , · · · , u n ]∈ R n×n and V = [v 1 , · · · , v m ]∈ R m×m being column orthonormal. Now consider the right-hand side and left-hand side, denoted by RHS and LHS, respectively, of the crucial identity (2.8) with a random index set J and a random vector e ∈ R m (by suppressing the subscripts). In view of the identity a t i = b t i A, we have LHS = V Π J (n −1 Σ t Σ)Σ t U t Ae 2 = DU t Ae 2 = n j=1 (d j u t j (Ae)) 2 = r j=1 d 2 j (u t j (Ae)) 2 , RHS = n i=1 V Π J (n −1 Σ t Σ)Σ t U t b i b t i Ae 2 = n i=1 DU t b i (Ae) i 2 = r j=1 d 2 j n i=1 (u ji (Ae) i ) 2 , with the diagonal matrix D given by D = Π J (n −1 Σ t Σ)Σ t := diag(d 1 , · · · , d n ) , with the first r entries being strictly positive. Since the index set J is arbitrary, the existence of a constant c (independent of J) such that RHS ≤ cLHS essentially requires n i=1 (u ji (Ae) i ) 2 ≤ c(u t j (Ae)) 2 , j = 1, . . . , r. Since Ae = r =1 σ u v t e, the above inequality is equivalent to n i=1 (u ji (Ae) i ) 2 ≤ c(σ j v t j e) 2 . (2.12) When Assumption 1.1 (iii) does not hold, there exist some j ≤ r and two nonzero elements u ji1 , u ji2 . Now we take any e ∈ R m such that v t j e = 0 and (Ae) i1 = 0 or (Ae) i2 = 0. 
Then the left hand side of (2.12) is strictly positive, and the right hand side vanishes. Thus, there is no constant c such that this inequality holds. This shows the delicacy of the analysis for a general matrix A. Nonetheless, the numerical experiments in Section 4 indicate that the saturation phenomenon actually also does not occur for a general matrix, so long as the stepsize c 0 is sufficiently small. Thus, we believe that the restriction is due to the limitation of the proof technique. Note that the convergence analysis in Section 3 remain valid provided that relaxed versions of the identities (2.8) and (2.9) hold but with different constants in the final estimate. The proof of Theorem 2.1 also gives the following error decomposition for exact data y † . Corollary 2.1. For any 0 ≤ < k, the following error decomposition holds E[ e k+1 2 ] ≤ i=0 I i + (I ) c , (2.13) where the terms I i , i = 0, 1, · · · , , are defined by I 0 = Π J c [1,k],0 (B)e 1 2 , I i = (n − 1) i Ji∈J [1,k],i i t=1 η 2 jt Π J c [1,k],i (B)B i e 1 2 , ∀1 ≤ i ≤ , (I ) c = (n − 1) +1 J +1 ∈J [1,k], +1 +1 t=1 η 2 jt E[ Π J c [j +1 +1,k], (B)B +1 e j +1 2 ]. In view of Theorem 2.1, the error E[ e δ k+1 2 ] can be decomposed into three components: approximation error i=0 I δ i,1 , propagation error i=0 I δ i,2 , and stochastic error (I δ ) c . Here we have slightly abused the terminology for approximation and propagation errors, since the approximation error only depends on the regularity of the exact solution x † (indicated by the source condition (1.5) in Assumption 1.1(ii)), whereas the propagation error is determined by the noise level. With the choice = 0, the decomposition recovers that in [18,19]. When compared with the classical error decomposition for the Landweber method, the summands for ≥ 1 arise from the stochasticity of the iterates (due to the random row index at each iteration), so is the stochastic error (I δ ) c . 
This refined decomposition is crucial to analyze the saturation phenomenon (under suitable conditions on the initial stepsize). Below we first derive bounds on the first two terms in Propositions 3.1 and 3.2, and then we prove optimal convergence rates of SGD by mathematical induction in Section 3.3. Convergence rate analysis In this section, we present the convergence rate analysis, and establish Theorem 1.1. The proof proceeds by first analyzing the approximation error and propagation error in Sections 3.1 and 3.2, respectively, and then bound the mean squared error E[ e δ k 2 ] via mathematical induction. We also give an alternative (simplified) convergence analysis for the case α = 0 in Section 3.4. Bound on the approximation error We begin with bounding the approximation error i=0 I δ i,1 for any fixed ≥ ν. The summand I δ 0,1 is the usual approximation error (for Landweber method), and the remaining terms arise from the random row index. Thus, the approximation error decays at the optimal rate. h 0 (k) = 2(ν + ) 2 nφ(2α)k −2(1−α)+max(1−2α,0) . Then for any integer ≥ ν, α ∈ [0, 1) and k ≥ 2 , there holds i=0 I δ i,1 ≤ c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 , with the constant c ν, ,α,n = 4(ν + ) 2ν , if h 0 (k) ≤ 1 2 , 2(ν + ) 2ν i=0 (h 0 (2 )) i , otherwise. Proof. In view of the source condition (1.5) and Lemma 2.3, we have I δ 0,1 = 2 Π J c [1,k],0 (B)e 1 2 ≤ 2 Π J c [1,k],0 (B)B ν 2 w 2 ≤ 2ν 2ν (ec 0 ) −2ν k −2ν(1−α) w 2 . Similarly, for any 1 ≤ i ≤ , i t=1 η 2 jt Π J c [1,k],i (B)B i e 1 2 ≤ i t=1 η 2 jt Π J c [1,k],i (B)B ν+i 2 w 2 ≤ ( ν+i e ) 2(ν+i) c −2ν 0 k 2(ν+i)α w 2 i t=1 j −2α t (k − i) −2(ν+i) . By the definition of I δ i,1 , since k ≥ 2 , k − i ≥ k 2 , for i = 1, . . . , , by Lemma 2.4(i), I δ i,1 ≤ 2 i+1 n i ( ν+i e ) 2(ν+i) c −2ν 0 k 2(ν+i)α w 2 Ji∈J [1,k],i i t=1 j −2α t (k − i) −2(ν+i) ≤ 2(2e −1 ) 2ν (ν + i) 2ν c −2ν 0 k −2ν(1−α) w 2 8e −2 (ν + i) 2 nφ(2α)k −2(1−α)+max(1−2α,0) i . Clearly, the quantity in the square bracket is bounded by h 0 (k). 
Next we treat the two cases h 0 (k) ≤ 1 2 and h 0 (k) > 1 2 separately. If h 0 (k) ≤ 1 2 , we deduce i=0 I δ i,1 ≤ 2(ν + ) 2ν c −2ν 0 k −2ν(1−α) w 2 i=0 h 0 (k) i ≤ c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 . Further, when h 0 (k) > 1 2 , since k ≥ 2 , we have h 0 (k) ≤ h 0 (2 ), and thus obtain i=0 I δ i,1 ≤ 2(ν + ) 2ν c −2ν 0 k −2ν(1−α) w 2 i=0 h 0 (2 ) i ≤ c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 . Finally, combining the last two estimates completes the proof. Remark 3.1. For any k satisfying h 0 (k) ≤ 1 2 , the constant c ν, ,α,n is actually independent of α and n. Further, if k < 2 , then by setting to 0, we obtain I δ 0,1 ≤ 2 1−2ν ν 2ν c −2ν 0 k −2ν(1−α) w 2 . Bound on the propagation error Now we bound the propagation error i=0 I δ i,2 , which arises from the presence of the data noise ξ. The summands for ≥ 1 arise from the stochasticity of the SGD iterates x δ k . We bound each summand I δ i,2 , i = 0, . . . , , separately, equivalently the following two quantities for k ≥ 4i: I(i, k) := Ji∈J [2,k],i i t=1 η 2 jt ji−1 ji+1=1 η ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 , (3.1) II(i, k) := Ji∈J [2,k],i i t=1 η 2 jt ji−1 ji+1=1 η 2 ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 ,(3.2) with the convention J0∈J [2,k],0 0 t=1 η 2 jt = 1 and j 0 = k + 1. The condition k ≥ 4i implies the following two elementary estimates: k − j i+1 − i ≥ k 4 , j i+1 = 1, 2, · · · , [ k 2 ], (3.3) k − j i+1 ≤ (i + 1)(k − j i+1 − i), j i+1 = [ k 2 ] + 1, . . . , k − i − 1. (3.4) First we bound I(i, k). The notation [·] denotes taking the integral part of a real number. Lemma 3.1. Let I(i, k) be defined in (3.1), and Assumption 1.1 be fulfilled. Then for any fixed i ∈ N and k ≥ 4i, the following estimate holds I(i, k) ≤2 2 2α−1 e −1 (2i + 1)c 0 (2e −1 ) 2 (2i + 1) 2 φ(2α)k −2(1−α)+max(1−2α,0) i + 25c max(i,1) 0 2 2α−1 e −1 (i + 2) 2 k −α i φ(α) 2 k 1−α . Proof. We abbreviate I(i, k) as I. 
By triangle inequality and Lemma 2.3, for any s ∈ (0, i + 1 2 ] and any j i+1 = k − i (when j i+1 = k − i, j i = k − i + 1, · · · , j 1 = k), we have ji−1 ji+1=1 η ji+1 Π J c [j i+1 +1,k],i (B)B s ≤ ji−1 ji+1=1 η ji+1 Π J c [j i+1 +1,k],i (B)B s ≤s s (ec 0 ) −s k αs ji−1 ji+1=1 η ji+1 (k − j i+1 − i) −s , By the identity (2.1), and since the quantity k−i−1 ji+1=1 η ji+1 s s (ec 0 ) −s (k − j i+1 − i) −s k αs 2 is independent of the indices {j 1 , · · · , j i }, there holds I 1 2 ≤ Ji+1∈J [k−i,k],i+1 i+1 t=1 η 2 jt B s 2 1 2 + s s (ec 0 ) −s k αs k−i−1 ji+1=1 η ji+1 (k − j i+1 − i) −s Ji∈J [j i+1 +1,k],i i t=1 η 2 jt 1 2 =c i+1 0 k j=k−i j −α B s + s s (ec 0 ) −s k αs k−i−1 ji+1=1 η ji+1 (k − j i+1 − i) −s Ji∈J [j i+1 +1,k],i i t=1 η 2 jt 1 2 . The two terms on the right are denoted by I 0 and I . For i ≥ 1, setting s = i 2 + 1 ≤ i + 1 2 in the first term, the inequalities k − i ≥ 3 4 k and c 0 B ≤ (2e) −1 imply that I 0 ≤ c i 2 0 (2e) − i 2 −1 ( 4 3 ) (i+1)α k −(i+1)α ≤ e −1 2 2α−1 e −1 c 0 k −α i 2 . Likewise, for i = 0, setting s = i + 1 2 gives I 0 ≤ (2e) − 1 2 c 1 2 0 k −α . Next we split I into two summations I 1 and I 2 over the index j i+1 , one from 1 to [ k 2 ], and the other from [ k 2 ] + 1 to k − i − 1, respectively. It suffices to bound I 1 and I 2 . First, setting s to i + 1 2 in I 1 and then applying the inequality Ji∈J [j i+1 +1,k],i i t=1 j −2α t ≤ Ji∈J [1,k],i i t=1 j −2α t , and the estimate (3.3) lead to I 1 ≤( 2i+1 2e ) i+ 1 2 c 1 2 0 k (i+ 1 2 )α ( k 4 ) −(i+ 1 2 ) [ k 2 ] ji+1=1 j −α i+1 Ji∈J [1,k],i i t=1 j −2α t 1 2 . Then by Lemma 2.4(i) and the estimate (2.2), we obtain I 1 ≤2 α−1 (2e −1 ) 1 2 (2i + 1) 1 2 c 1 2 0 φ(α) (2e −1 ) 2 (2i + 1) 2 φ(2α)k −2(1−α)+max(1−2α,0) i 2 k 1−α 2 . For the term I 2 , we analyze the cases i = 0 and i ≥ 1 separately. Since c 0 B ≤ (2e) −1 , cf. 
Assumption 1.1, if i = 0, then, Lemma 2.3 with s = 1 2 gives I 2 ≤ ( 1 2e ) 1 2 c 1 2 0 k α 2 k−1 j=[ k 2 ]+1 j −α (k − j) − 1 2 , Now the estimate (2.2) implies k−1 j=[ k 2 ]+1 j −α (k − j) − 1 2 ≤ ( k 2 ) −α 2( k 2 ) 1 2 ≤ 2( k 2 ) 1 2 −α . Consequently, when i = 0, we have Meanwhile, when i ≥ 1, setting s = i 2 + 1 ≤ i + 1 2 in Lemma 2.3 gives I 2 ≤ 2 α e −I 2 ≤( i 2 +1 e ) i 2 +1 c i 2 0 k α( i 2 +1) ( k 2 ) −(i+1)α I 2 , with I 2 := k−i−1 ji+1=[ k 2 ]+1 (k − j i+1 − i) −( i 2 +1) Ji∈J [j i+1 +1,k],i i t=1 1 1 2 . Now Lemma 2.4(ii), and the estimates (3.4) and (2.2) yield I 2 ≤ 1 i! 1 2 k−i−1 ji+1=[ k 2 ]+1 (k − j i+1 − i) −( i 2 +1) (k − j i+1 ) i 2 ≤ (i + 1) i 2 i! 1 2 k−i−1 ji+1=[ k 2 ]+1 (k − j i+1 − i) −1 ≤ 2(i + 1) i 2 i! 1 2 max(ln k, 1). Combining the last two identities gives I 2 ≤ 2 α (i + 2)i! − 1 2 e −1 max(ln k, 1) 2 2α−1 e −1 c 0 (i + 2) 2 k −α i 2 , i ≥ 1.k −s max(ln k, 1) ≤ s −1 ,(3.5) with s = 1−α 2 , we obtain I 2 ≤ 12e −1 φ(α) 2 2α−1 e −1 c 0 (i + 2) 2 k −α i 2 k 1−α 2 , i ≥ 1, and thus for i ≥ 1, there holds I 2 + I 0 ≤ 13e −1 φ(α) 2 2α−1 e −1 c 0 (i + 2) 2 k −α i 2 k 1−α 2 ≤ 5φ(α) 2 2α−1 e −1 c 0 (i + 2) 2 k −α i 2 k 1−α 2 . The bounds on I 1 and I 2 + I 0 and the triangle inequality complete the proof. The next result bounds the quantity II(i, k). Lemma 3.2. Let II(i, k) be defined in (3.2), and Assumption 1.1 hold. Then for any fixed i ∈ N, and k ≥ 4i, the following estimate holds II(i, k) ≤ ec0 2(2i+1) (4e −2 (2i + 1) 2 φ(2α)k −2(1−α)+max(1−2α,0) ) i+1 + 3φ(α) 2 2α−1 e −1 c 0 (i + 1) 2 k −α i+1 k 1−α . Proof. Like before, we abbreviate II(i, k) to II. By (2.1), II can be rewritten as II = k−i ji+1=1 Ji∈J [j i+1 +1,k],i i t=1 η 2 jt η 2 ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 . Now we split the summation into three terms, i.e., j i+1 = k − i, one from j i+1 = 1 to [ k 2 ] and one from j i+1 = [ k 2 ] + 1 to k − i − 1, denoted by II 0 , II 1 and II 2 , respectively. Since k − i ≥ 3 4 k, B ≤ 1 and c 0 B ≤ (2e) −1 , cf. 
Assumption 1.1(i), we obtain that, for any i ≥ 0, II 0 = c 2i+2 0 k j=k−i j −2α B i+ 1 2 2 ≤ c i+1 0 (c 0 B ) i+1 (k − i) −2(i+1)α ≤ 2 2α−1 e −1 c 0 k −2α i+1 . By Lemma 2.3 with s = i + 1 2 and (3.3), II 1 ≤( 2i+1 2e ) 2i+1 c 0 k (2i+1)α [ k 2 ] ji+1=1 (k − j i+1 − i) −(2i+1) Ji∈J [j i+1 +1,k],i i+1 t=1 j −2α t ≤( 2i+1 2e ) 2i+1 c 0 k (2i+1)α ( k 4 ) −(2i+1) [ k 2 ] ji+1=1 Ji∈J [j i+1 +1,k],i i+1 t=1 j −2α t Meanwhile, Lemma 2.4 and the estimate (2.2) imply [ k 2 ] ji+1=1 Ji∈J [j i+1 +1,k],i i+1 t=1 j −2α t ≤ (φ(2α)k max(1−2α,0) ) i+1 . The last two estimates together imply II 1 ≤ ec0 2(2i+1) (4e −2 (2i + 1) 2 φ(2α)k −2(1−α)+max(1−2α,0) ) i+1 k 1−α . Now we bound the term II 2 . In this case, we analyze the cases i = 0 and i ≥ 1 separately. When i = 0, by Lemma 2.3, II 2 = k−1 j=[ k 2 ]+1 η 2 j Π J c [j+1,k],0 (B)B 1 2 2 ≤ c 0 k α 2e k−1 j=[ k 2 ]+1 j −2α (k − j) −1 . The estimates (2.2) and (3.5) with s = 1 − α imply k−1 j=[ k 2 ]+1 j −2α (k − j) −1 ≤ 2( k 2 ) −2α max(ln k, 1) ≤ 2 2α+1 φ(α)k 1−3α . The last two estimates together show that for i = 0, there holds II 2 ≤ 2 2α e −1 c 0 φ(α)k 1−2α . Next, when i ≥ 1, by Lemma 2.3 with s = i+1 2 , II 2 ≤ k−i−1 ji+1=[ k 2 ]+1 Ji∈J [j i+1 +1,k],i i t=1 η 2 jt η 2 ji+1 Π J c [j i+1 +1,k],i (B)B i+1 2 2 ≤( i+1 2e ) i+1 k (i+1)α c i+1 0 ( k 2 ) −2(i+1)α k−i−1 ji+1=[ k 2 ]+1 Ji∈J [j i+1 +1,k],i (k − j i+1 − i) −(i+1) . Now Lemma 2.4(ii), and (2.2) imply k−i−1 ji+1=[ k 2 ]+1 Ji∈J [j i+1 +1,k],i (k−j i+1 − i) −(i+1) ≤ k−i−1 ji+1=[ k 2 ]+1 (k − j i+1 − i) −(i+1) (k − j i+1 ) i i! ≤ (i + 1) i i! k−i−1 ji+1=[ k 2 ]+1 (k − j i+1 − i) −1 ≤ 2(i + 1) i i! max(ln k, 1). Combining the last two bounds with (3.5) with s = 1 − α leads to II 2 ≤ 2 i! φ(α) 2 2α−1 e −1 c 0 (i + 1) 2 k −α i+1 k 1−α , i ≥ 1. Clearly, the preceding discussion shows that the last inequality holds actually also for i = 0. Therefore, the bounds on II 0 , II 1 and II 2 complete the proof of the lemma. 
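The elementary estimate (2.2) of Lemma 2.2, invoked repeatedly in the proofs of Lemmas 3.1 and 3.2 above, is easy to confirm numerically. A small sketch; the test values of s and k are illustrative:

```python
import math

def rhs_22(s, k):
    """Right-hand side of the elementary estimate (2.2) in Lemma 2.2."""
    if s < 0:
        return 2 ** (1 - s) / (1 - s) * k ** (1 - s)
    if s < 1:
        return k ** (1 - s) / (1 - s)
    if s == 1:
        return 2 * max(math.log(k), 1.0)
    return s / (s - 1)

# Check sum_{j=1}^k j^{-s} <= rhs_22(s, k) on each branch of (2.2).
for s in (-0.5, 0.0, 0.3, 1.0, 1.7):
    for k in (1, 2, 10, 100, 1000):
        partial_sum = sum(j ** (-s) for j in range(1, k + 1))
        assert partial_sum <= rhs_22(s, k) + 1e-12
```

For s = 0 the bound is attained with equality, and for s > 1 the bound s/(s - 1) is uniform in k, which matches the role it plays in the summations over stepsizes in the proofs.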
h 1 (k) = 2(2 + 1) 2 nφ(2α)k −2(1−α)+max(1−2α,0) and h 2 (k) = 2 2α−1 ( + 2) 2 nc 0 k −α . Then for any fixed ∈ N, and k ≥ 4 , there holds i=0 I δ i,2 ≤ c ,α,n,c0δ 2 k 1−α , with the constant c ,α,n,c0 given by c ,α,n,c0 =      (2 4 ( + 1)c 0 + 203)φ(α) 2 , if h 1 (k), h 2 (k) ≤ 1 2 , 8( + 1)c 0 +1 i=0 h 1 (4 ) i + 103 +1 i=0 h 2 (4 ) i φ(α) 2 , otherwise, Proof. For i = 0, 1, · · · , , we bound the summands I δ i,2 by I δ i,2 ≤2 i+1 n iδ2 Ji∈J [2,k],i i t=1 η 2 jt ji−1 ji+1=1 η ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 + 2 i+1 n i+1δ2 Ji∈J [2,k],i i t=1 η 2 jt ji−1 ji+1=1 η 2 ji+1 Π J c [j i+1 +1,k],i (B)B i+ 1 2 2 . The two terms on the right hand side, denoted by I δ i,2,1 and I δ i,2,2 , can be bounded using Lemmas 3.1 and 3.2, respectively. Indeed, for I δ i,2,1 , Lemma 3.1 yields that for any k ≥ 4 ≥ 4i, I δ i,2,1 ≤4 2 2α−1 e −1 (2i + 1)c 0 2 3 e −2 (2i + 1) 2 nφ(2α)k −2(1−α)+max(1−2α,0) i + 25 2 2α e −1 (i + 2) 2 nc 0 k −α i φ(α) 2δ2 k 1−α . Thus, by the definitions of h 1 (k), h 2 (k), for any k ≥ 4 , if h 1 (k) ≤ 1 2 and h 2 (k) ≤ 1 2 , then the condition i ≤ implies i=0 I δ i,2,1 ≤ 4 2 2α−1 e −1 (2 + 1)c 0 i=0 h 1 (k) i + 25 i=0 h 2 (k) i φ(α) 2δ2 k 1−α ≤ 4 2 2α e −1 (2 + 1)c 0 + 50 φ(α) 2δ2 k 1−α . Meanwhile, if k does not satisfy the condition, by the monotonicity of h 1 (k) and h 2 (k) in k, we have h 1 (k) ≤ h 1 (4 ) and h 2 (k) ≤ h 2 (4 ), and consequently, i=0 I δ i,2,1 ≤ 4 2 2α−1 e −1 (2 + 1)c 0 i=0 h 1 (4 ) i + 25 i=0 h 2 (4 ) i φ(α) 2δ2 k 1−α . Next we bound the term I δ i,2,2 . Actually, by Lemma 3.2, for any k ≥ 4 ≥ 4i, there holds I δ i,2,2 ≤ ec0 2(2i+1) (2 3 e −2 (2i + 1) 2 nφ(2α)k −2(1−α)+max(1−2α,0) ) i+1 + 3φ(α)(2 2α e −1 (i + 1) 2 nc 0 k −α ) i+1 δ 2 k 1−α . Then repeating the preceding arguments yield i=0 I δ i,2,2 ≤        2c 0 + 3φ(α) δ 2 k 1−α , if h 1 (k), h 2 (k) ≤ 1 2 , 2c 0 i=0 h 1 (4 ) i+1 + 3φ(α) i=0 h 2 (4 ) i+1 δ 2 k 1−α , otherwise. 
Now combining the bounds on i=0 I δ i,2,1 and i=0 I δ i,2,2 yields the desired assertion. Remark 3.2. If k < 4 , we can replace by 0. By Assumption 1.1(i), c 0 < 1, repeating the argument of the proposition and Lemmas 3.1 and 3.2 yields I δ 0,2 ≤ 2c 0 n(φ(2α) + 3φ(α)) + 11φ(α) 2 δ 2 k 1−α . Note that in the conditions h 0 (k), h 1 (k), h 2 (k) ≤ 1 2 , h 0 and h 1 , apart from the factor φ(2α), do not depend sensitively on the exponent α, but for α close to zero, h 2 (k) ≤ 1 2 essentially requires a small c 0 = O(n −1 ), and further the larger is, and the smaller c 0 should be in order to fulfill the conditions. The latter condition also appears in the proof of Theorem 1.1 below. Bound on the error E[ e δ k 2 ] To prove Theorem 1.1, we need a useful technical estimate, where the notation k max(0,0) denotes ln k. Note the restricted range of s is sufficient for the proof of Theorem 1.1. , 1 − 2α), +∞), and k ≥ 4 with = ( + 1) and η = η( + 1), the following two estimates hold: J +1 ∈J [k− ,k], +1 +1 t=1 η 2 jt B +1 2 (k − ) −s (3.6) ≤max(2 s , 1)(2 2α−2η e −2η c 2−2η 0 k −2α ) +1 k −s , [ k 2 ] j +1 =1 J ∈J [j +1 +1,k], +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B +1 2 j −s +1 (3.7) ≤c s 4e −1 2 φ(2α)c 2−2 0 k −s (s) +1 k −s , k− −1 j +1 =[ k 2 ]+1 J ∈J [j +1 +1,k], +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B +1 2 j −s +1 (3.8) ≤ max(2 s ,1)φ(2 η − ) ( +1)! (2 2α e −2η ( + 1) 2η η c 2−2η 0 k −sη ) +1 k −s , with the constant c s , the exponents s (s) and s η respectively defined by c s = 2 s , if s ≤ 0, φ(2α+s) φ(2α) , if s > max(0, 1 − 2α), s (s) = 2 (1 − α) − max(1 − 2α, 0) − ( + 1) −1 max s − max(1 − 2α, 0), 0 , s η = (2 − 2η)α − max(1 − 2η, 0). Proof. The proof is similar to that of Lemma 3.2. We denote the three terms on the left hand side by I 0 , I 1 and I 2 , respectively. It is easy to check that, for any η ∈ [0, 1], with the inequalities c 0 B ≤ (2e) −1 , B ≤ 1, cf. 
Assumption 1.1(i), and k − ≥ 3 4 k, I 0 = J +1 ∈J [k− ,k], +1 +1 t=1 η 2 jt B +1 2 (k − ) −s ≤ (2 2α−2η e −2η c 2−2η 0 k −2α ) +1 max(2 s , 1)k −s .I 1 ≤ [ k 2 ] j +1 =1 J ∈J [j +1 +1,k], +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B 2 j −s +1 ≤( e ) 2 c 2(1− )( +1) 0 k 2 α [ k 2 ] j +1 =1 J ∈J [j +1 +1,k], +1 t=1 j −2α t (k − j +1 − ) −2 j −s +1 . Now by the estimates (3.3) and (2.2), and Lemma 2.4(i), [ k 2 ] j +1 =1 J ∈J [j +1 +1,k], +1 t=1 j −2α t (k − j +1 − ) −2 j −s +1 ≤( k 4 ) −2 [ k 2 ] j +1 =1 j −(2α+s) +1 J ∈J [1,k], t=1 j −2α t ≤( k 4 ) −2 [ k 2 ] j +1 =1 j −(2α+s) +1 φ(2α)k max(1−2α,0) . Direct computation with (2.2) gives [ k 2 ] j +1 =1 j −(2α+s) +1 ≤ 2 s φ(2α)( k 2 ) max(1−2α,0) k −s , s ≤ 0, φ(2α+s) φ(2α) (φ(2α)k max(1−2α,0) )k s−max(1−2α,0) k −s , s > max(0, 1 − 2α). These two estimates together give (3.7). Similarly, Lemma 2.3 with s = η yields I 2 ≤( η e ) 2 η c 2(1−η)( +1) 0 k 2 η α k− −1 j +1 =[ k 2 ]+1 J ∈J [j +1 +1,k], +1 t=1 j −2α t (k − j +1 − ) −2 η j −s +1 . Note that for j +1 = [ k 2 ] + 1, . . . , k − − 1, j −s +1 ≤ max(1, 2 s )k −s , and thus Lemma 2.4(ii), and the estimates (2.2) and (3.4) give k− −1 j +1 =[ k 2 ]+1 J ∈J [j +1 +1,k], +1 t=1 j −2α t (k − j +1 − ) −2 η j −s +1 ≤ max(1, 2 s )( k 2 ) −2( +1)α k −s k− −1 j +1 =[ k 2 ]+1 J ∈J [j +1 +1,k], 1 (k − j +1 − ) −2 η ≤ max(1, 2 s ) ( +1) ! ( k 2 ) −2( +1)α k −s k− −1 j +1 =[ k 2 ]+1 (k − j +1 − ) −2 η + ≤ max(1, 2 s ) ( +1) φ(2 η − ) ! ( k 2 ) −2( +1)α k −s k max((1−2η)( +1),0) , where the last line follows from (2.2) and the identity −2 η + + 1 = (1 − 2η)( + 1). Combining the preceding estimates yields the bound (3.8), and completes the proof of the lemma. Now, we can prove the order-optimal convergence rate of SGD in Theorem 1.1. Proof of Theorem 1.1. Let r k = E[ e δ k 2 ]. We prove that for any ∈ ( 1 2 , 1], there exist c * and c * * such that r k ≤ c * k −β + c * * δ 2 k γ , (3.9) with β = min(2ν, 1 + (2 − 1)( + 1))(1 − α) and γ = 1 − α. 
Then the desired assertion holds by choosing ∈ ( 1 2 , 1) and ∈ N such that (2 − 1)( + 1) ≥ 2ν − 1. The proof proceeds by mathematical induction. We treat the cases (i) α ∈ [0, 1 2 ) ∪ ( 1 2 , 1) and (ii) α = 1 2 separately. First we consider case (i). If k ≤ 4 , the estimate (3.9) holds for any sufficiently large c * and c * * . Assume that it holds up to some k ≥ 4 , and we prove it for k + 1. It follows from Theorem 2.1 and Propositions 3.1 and 3.2 that r k+1 ≤c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 + c ,α,n,c0δ 2 k 1−α + (2n) +1 J +1 ∈J [1,k], +1 +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B +1 2 r j +1 . Applying the induction hypothesis r j ≤ c * j −β + c * * δ 2 j γ , j = 1, 2, · · · , k, to the recursion gives r k+1 ≤c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 + c ,α,n,c0δ 2 k 1−α + (2n) +1 c * I + (2n) +1 c * * δ 2 II, with I := J +1 ∈J [1,k], +1 +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B +1 2 j −β +1 , II := J +1 ∈J [1,k], +1 +1 t=1 η 2 jt Π J c [j +1 +1,k], (B)B +1 2 j γ +1 . Using (2.1), we split each of I and II into three terms over the index j +1 , one for j +1 = k − , one from 1 to [ k 2 ], and one from [ k 2 ] + 1 to k − − 1, respectively. Now by Lemma 3.3, r k+1 ≤c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 + c ,α,n,c0δ 2 k 1−α (3.10) + c * ξ 1 (k)(k + 1) −β + c * * δ 2 ξ 2 (k)(k + 1) γ , where the functions ξ 1 and ξ 2 are given by (for any , η ∈ [ 1 2 , 1], which implies 1 2 ≤ η ) ξ 1 (k) = 2 β φ(2α+β) φ(2α) (2 1+2 2 φ(2α)nc 2−2 0 k −s (β) ) +1 + 2 2β cη ( +1)! (2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 k −sη ) +1 , ξ 2 (k) =(2 1+2 2 φ(2α)nc 2−2 0 k −s (−γ) ) +1 + cη ( +1)! 2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 k −sη +1 , with the constants c η = 1 + φ(2 η − ), , η , s (·), and s η defined in Lemma 3.3. By choosing 1 ≥ = η > 1 2 and such that (2 − 1)( + 1) ≥ 2ν − 1, we have s (β), s (−γ), s η ≥ 0, and ξ 1 (k) ≤ c 1 := 2 β φ(2α+β) φ(2α) (2 1+2 2 φ(2α)nc 2−2 0 ) +1 + 2 2β cη ( +1)! (2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 ) +1 , ξ 2 (k) ≤ c 2 :=(2 1+2 2 φ(2α)nc 2−2 0 ) +1 + cη ( +1)! 
2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 +1 . For small c 0 , c 1 , c 2 ≤ 1 2 hold, and then (3.9) follows by setting c * = 2 2ν+1 c −2ν 0 c ν, ,α,n w 2 and c * * = 2c ,α,n,c0 . This proves the theorem for case (i). In case (ii), repeating the preceding argument, by choosing 1 ≥ = η > 1 2 and ≥ 1 such that (2 − 1)( + 1) ≥ 2ν − 1 gives k −s (β) = k − +max(0,0)+( +1) −1 max(β−max(0,0),0) ≤ k − +( +1) −1 β ln k ≤ k − 1 4 ln k ≤ 4, k −s (−γ) = k − +max(0,0)+( +1) −1 max(−γ−max(0,0),0) ≤ k − ln k ≤ −1 and k −sη ≤ 1. Then repeating the preceding argument shows that the assertion holds when ξ 1 (k) ≤ c 1 := 2 β φ(2α+β) φ(2α) (2 3+2 2 φ(2α)nc 2−2 0 ) +1 + 2 2β cη ( +1)! (2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 ) +1 ≤ 1 2 , ξ 2 (k) ≤ c 2 :=(2 1+2 −1 2 φ(2α)nc 2−2 0 ) +1 + cη ( +1)! 2 2α+1 e −2η ( + 1) 2η η nc 2−2η 0 +1 ≤ 1 2 , which can be satisfied with sufficiently small c 0 . This completes the proof of the theorem. Remark 3.3. In practice, it is desirable to take small α. With the choice α = 0 and = η ∈ ( 1 2 , 1), the proof requires that the initial stepsize c 0 satisfy 2 β φ(β)(2 1+2 2 nc 2−2 0 ) +1 + 2 2β cη ( +1)! (2e −2η ( + 1) 2η η nc 2−2η 0 ) +1 ≤ 1 2 , (2 1+2 2 nc 2−2 0 ) +1 + cη ( +1)! 2e −2η ( + 1) 2η η nc 2−2η 0 +1 ≤ 1 2 . These two conditions are fulfilled provided that nc min(2ν,1)+1 ) [18] for SGD. In particular, it proves that SGD with small initial stepsizes is actually order optimal. Remark 3.4. One may slightly refine Theorem 1.1. Indeed, for any α ∈ (0, 1), let H 1 (k) := 2 3 max(ν, + 1) 2 nφ(2α)k −(1−α)+max(1−2α,0) , H 2 (k) := 2 2α−1 ( + 2) 2 nc 0 k −α ln k. Note that the following inequalities hold h 0 (k) ≤ H 1 (k), h 1 (k) ≤ H 1 (k) and h 2 (k) ≤ H 2 (k). Then there exists some k 0 , dependent of α, n, , ν and c 0 , such that H 1 (k), H 2 (k) ≤ 1 2 for any k ≥ k 0 . The claim (3.9) shows that the assertion holds for any k ≤ k 0 with sufficiently large c * and c * * . Then we refine the estimate by mathematical induction. 
Assume that the assertion up to some k ≥ k 0 , and prove it for k + 1. Since k ≥ k 0 , h i (k) ≤ 1 2 , i = 1, 2, 3, it follows from the estimate (3.10) with β = min(2ν, 1 + (2 − 1)( + 1))(1 − α), γ = 1 − α, = 1 and η = 1 2 and Lemma 3.3, that r k+1 ≤c ν, ,α,n c −2ν 0 k −2ν(1−α) w 2 + c ,α,n,c0δ 2 k 1−α + c * ξ 1 (k)(k + 1) −β + c * * δ 2 ξ 2 (k)(k + 1) 1−α , with the functions ξ 1 (k) and ξ 2 (k) given by ξ 1 (k) = 2 β φ(2α+β) φ(2α) (2 3 ( + 1) 2 φ(2α)nk −2(1−α)+max(1−2α,0)+( +1) −1 (β−max(1−2α,0)) ) +1 + 3·2 2β ( +1)! (2 2α−1 ( + 1) 2 nc 0 k −α ln k) +1 , ξ 2 (k) =(2 3 ( + 1) 2 φ(2α)nk −2(1−α)+max(1−2α,0) ) +1 + 3 ( +1)! 2 2α−1 ( + 1) 2 nc 0 k −α ln k +1 . Since ( + 1) −1 (β − max(1 − 2α, 0)) ≤ 1 − α, the terms on the right hand side can be bounded by either H 1 (k) or H 2 (k), and thus for k ≥ k 0 , we have H 1 (k), H 2 (k) ≤ 1 2 , and consequently ξ 1 (k) ≤2 −( +1) ( 2 β φ(2α+β) φ(2α) + 3·2 2β ( +1)! ) and ξ 2 (k) ≤ 2 −( +1) (1 + 3 ( +1)! ). By choosing suitable ≥ 2ν − 2 (dependent of ν), we can ensure ξ 1 (k), ξ 2 (k) ≤ 1 2 , and then taking the constants c * and c * * as before yield the desired assertion. Error analysis for α = 0 In this part, we revisit the case α = 0 separately, and derive an error bound directly with more explicit constants. Lemma 3.4. Let Assumption 1.1(i) and (iii) holds with α = 0. Further, suppose that the following condition holds 2(1 + φ(2 ))nc 2−2 0 ≤ 1, for some ∈ ( 1 2 , 1). (3.11) Then for any s ≥ 0, there holds E[ (I − c 0 B) s B − 1 2 (Be δ k −Ā tξ ) 2 ] ≤ 2 (I − c 0 B) k−1 2 +s B − 1 2 (Be 1 −Ā tξ ) 2 . Proof. We prove the assertion by mathematical induction. When k = 1, the inequality holds trivially true for any s ≥ 0. Now we assume that it holds up to some k − 1 ≥ 1, and prove it for k. With the condition α = 0, η j = c 0 and Π J c [j,j ],0 (B) = (I − c 0 B) j −j+1 for any j ≥ j ≥ 1. By the definitions of H j and N δ j , we can rewrite H j as H j =Ā tξ + A t N δ j . 
Consequently, we derive from (2.4) that for any s ≥ 0 (I − c 0 B) s B − 1 2 (Be δ k −Ā tξ ) =(I − c 0 B) s B − 1 2 (I − c 0 B) k−1 Be δ 1 −Ā tξ + c 0 k−1 j=1 (I − c 0 B) k−j−1 BĀ tξ + c 0 k−1 j=1 (I − c 0 B) k−j−1 BA t N δ j =(I − c 0 B) k−1+s B − 1 2 (Be δ 1 −Ā tξ ) + c 0 k−1 j=1 (I − c 0 B) k−j−1+s B 1 2 A t N δ j , in view of the identity c 0 E[ (I − c 0 B) s B − 1 2 (Be δ k −Ā tξ ) 2 ] = (I − c 0 B) k−1+s B − 1 2 (Be δ 1 −Ā tξ ) 2 + c 2 0 k−1 j=1 E[ (I − c 0 B) k−j−1+s B 1 2 A t N δ j 2 ]. Next we denote the summation by I(s). Then the argument for N δ j in the proof of Theorem 2.1 and the condition B ≤ 1 imply that for any ∈ ( 1 2 , 1), I(s) ≤ nc 2 0 k−1 j=1 E[ (I − c 0 B) k−j−1+s B 1 2 (Be δ j −Ā tξ ) 2 ] ≤nc 2 0 (I − c 0 B) −1 k−1 j=1 (I − c 0 B) k−j−1 2 B 2 E[ (I − c 0 B) k−j 2 +s B − 1 2 (Be δ j −Ā tξ ) 2 ]. With the identity (I − c 0 B) −1 = (1 − c 0 B ) −1 and the induction hypothesis E[ (I − c 0 B) k−j 2 +s B − 1 2 (Be δ j −Ā tξ ) 2 ] ≤2 (I − c 0 B) k−1 2 +s B − 1 2 (Be 1 −Ā tξ ) 2 , j = 1, . . . , k − 1, we deduce I(s) ≤ 2nc 2 0 1 − c 0 B (I − c 0 B) k−1 2 +s B − 1 2 (Be 1 −Ā tξ ) 2 k−1 j=1 (I − c 0 B) k−j−1 2 B 2 . By Lemma 2.3 and the estimate (2.2), k−1 j=1 (I − c 0 B) k−j−1 2 B 2 = B 2 + k−2 j=1 (I − c 0 B) k−j−1 B 2 ≤( 2 ec0 ) 2 k−1 j=1 (k − j − 1) −2 ≤ ( 2 ec0 ) 2 (1 + φ(2 )). This, the assumption c 0 B ≤ (2e) −1 and the condition (3.11) imply I(s) ≤ (I − c 0 B) k−1 2 +s B − 1 2 (Be 1 −Ā tξ ) 2 . This completes the induction step of the proof and thus also the proof of the lemma. Last, we can state a refined error estimate for the case α = 0. E[ e δ k 2 ] ≤ c 0 * k −2ν w 2 + c 0 * * δ 2 k, with constants c 0 * = 2( 2ν ec0 ) 2ν + 6nc 0 ( 2(2ν+1) ec0 ) 2ν+1 and c 0 * * = 3 + 6nc 0 . Proof. Under condition (3.11), the proof of Theorem 2.1 and Lemma 3.4 imply E[ e δ k 2 ] ≤2 (I − c 0 B) k−1 e 1 2 + 2c 2 0δ 2 k−1 j=1 (I − c 0 B) k−j−1 B 1 2 2 + nc 2 0 k−1 j=1 E[ (I − c 0 B) k−j−1 (Be δ j −Ā tξ ) 2 ]. Below we denote the last term by II. 
Note that
\[
\mathbb{E}[\|(I-c_0B)^{k-j-1}(Be^\delta_j-\bar A^t\xi)\|^2]
\le (1-c_0\|B\|)^{-1}\|(I-c_0B)^{\frac{k-j-1}{2}}B^{\frac12}\|^2\,
\mathbb{E}[\|(I-c_0B)^{\frac{k-j}{2}}B^{-\frac12}(Be^\delta_j-\bar A^t\xi)\|^2]
\le 2(1-c_0\|B\|)^{-1}\|(I-c_0B)^{k-j-1}B\|\,
\|(I-c_0B)^{\frac{k-1}{2}}B^{-\frac12}(Be_1-\bar A^t\xi)\|^2.
\]
By Lemma 2.3, the assumption $c_0\|B\|\le(2e)^{-1}$, and the estimates (2.2) and (3.5), we have
\[
\sum_{j=1}^{k-1}\|(I-c_0B)^{k-j-1}B\|
\le \|B\|+(ec_0)^{-1}\sum_{j=1}^{k-2}(k-j-1)^{-1}
\le 3(ec_0)^{-1}\max(\ln k,1)\le 3(ec_0)^{-1}k.
\]
Meanwhile, by the Cauchy-Schwarz inequality, we have
\[
\|(I-c_0B)^{\frac{k-1}{2}}(B^{\frac12}e_1-B^{-\frac12}\bar A^t\xi)\|^2
\le 2\big(e_1^t(I-c_0B)^{k-1}Be_1+\delta^2\|(I-c_0B)^{\frac{k-1}{2}}\|^2\big).
\]
Combining the preceding estimates with Assumption 1.1(ii) and Lemma 2.3 leads to
\[
{\rm II}\le 6nc_0k\big(\|(I-c_0B)^{k-1}B^{2\nu+1}\|\|w\|^2+\delta^2\big)
\le 6nc_0\big(\big(\tfrac{2(2\nu+1)}{ec_0}\big)^{2\nu+1}k^{-2\nu}\|w\|^2+\delta^2 k\big).
\]
Similarly, there hold
\[
\|(I-c_0B)^{k-1}e_1\|^2\le \big(\tfrac{2\nu}{ec_0}\big)^{2\nu}k^{-2\nu}\|w\|^2
\quad\mbox{and}\quad
\Big\|\sum_{j=1}^{k-1}(I-c_0B)^{k-j-1}B^{\frac12}\Big\|^2
\le\Big(\sum_{j=1}^{k-1}\|(I-c_0B)^{k-j-1}B^{\frac12}\|\Big)^2
\le \tfrac32 c_0^{-2}k.
\]
Combining the preceding estimates completes the proof of the theorem.

Numerical experiments and discussions

In this section, we provide numerical experiments to complement the analysis. To this end, we employ three examples, denoted by s-phillips (mildly ill-posed), s-gravity (severely ill-posed) and s-shaw (severely ill-posed), adapted from phillips, gravity and shaw in the public MATLAB package Regutools [10] (available at http://people.compute.dtu.dk/pcha/Regutools/, last accessed on August 20, 2020). These examples are Fredholm/Volterra integral equations of the first kind, discretized by means of either a Galerkin approximation with piecewise constant basis functions or quadrature rules; all are discretized into a linear system of size n = m = 1000. To explicitly control the regularity index ν in the source condition (1.5), we generate the true solution x† by
\[
x^\dagger = \frac{(A^tA)^\nu x_e}{\|(A^tA)^\nu x_e\|_\infty},
\]
where x_e is the exact solution provided by the package, and \(\|\cdot\|_\infty\) denotes the maximum norm of vectors. In the tests, the exponent ν is taken from the set {0, 1, 2, 4}.
Note that the exponent ν in the source condition (1.5) is slightly larger than the ν defined above, due to the inherent regularity of x_e (which is less than one half for all examples). The exact data y† is generated by y† = Ax†, and the noisy data y^δ by
\[
y^\delta_i := y^\dagger_i + \varepsilon\|y^\dagger\|_\infty\,\xi_i,\qquad i=1,\dots,n,
\]
where the ξ_i are i.i.d. standard Gaussian random variables, and ε > 0 represents the relative noise level (the exact noise level being δ = ‖y^δ − y†‖). SGD is always initialized with x_1 = 0, and the maximum number of epochs is fixed at 9e5, where one epoch refers to n SGD iterations. All statistical quantities presented below are computed from 100 independent runs. To verify the order optimality of SGD, we evaluate it against an order-optimal regularization method with infinite qualification, i.e., the Landweber method [7, Chapter 6]: it is the population version of SGD and converges steadily while enjoying order optimality, and thus it serves as a good benchmark for comparison in terms of the convergence rate. (However, one may employ any other order-optimal method.) It is initialized with x_1 = 0 and uses the constant stepsize 1/‖A‖², which can be much larger than the stepsize taken by SGD. The numerical results for the three examples are shown in Tables 1-3, where e_sgd = E[‖x^δ_{k_sgd} − x†‖²] denotes the mean squared error achieved at the k_sgd-th iteration (counted in epochs) by SGD, and e_lm = ‖x^δ_{k_lm} − x†‖² and k_lm denote the squared ℓ² error and the stopping index for the Landweber method. The stopping indices k_sgd and k_lm are taken such that the corresponding error is smallest along the iteration trajectory for SGD and the Landweber method, respectively. This choice of the stopping index is motivated by the lack of provably order-optimal a posteriori stopping rules for SGD. The initial stepsize c0 is also indicated in the tables, in the form of a multiple of the constant c = 1/max_i(‖a_i‖²).
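The construction of the test problems described above can be sketched as follows. This is an illustrative Python/NumPy sketch under our own assumptions, not the code used for the experiments: the forward map A is a random stand-in matrix (the actual examples come from Regutools), and the names make_test_problem, x_e and eps are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in forward map; the paper uses discretized integral operators
# (phillips, gravity, shaw from Regutools), not reproduced here.
n = 200
A = rng.standard_normal((n, n)) / n

def make_test_problem(A, x_e, nu, eps, rng):
    """Source-conditioned solution and noisy data as described in the text:
    x_dag = (A^t A)^nu x_e / ||(A^t A)^nu x_e||_inf and
    y_i^delta = y_i^dag + eps * ||y^dag||_inf * xi_i with xi_i ~ N(0, 1)."""
    B = A.T @ A
    z = np.linalg.matrix_power(B, nu) @ x_e if nu > 0 else x_e.copy()
    x_dag = z / np.abs(z).max()           # normalize in the maximum norm
    y_dag = A @ x_dag                     # exact data
    xi = rng.standard_normal(y_dag.shape)
    y_delta = y_dag + eps * np.abs(y_dag).max() * xi
    return x_dag, y_dag, y_delta

x_e = rng.standard_normal(n)
x_dag, y_dag, y_delta = make_test_problem(A, x_e, nu=1, eps=1e-2, rng=rng)
delta = np.linalg.norm(y_delta - y_dag)   # the exact noise level
```

The exponent ν enters only through the integer matrix power, matching the test set {0, 1, 2, 4}.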
In the experiments, we consider two decay rates for the stepsize schedule, i.e., α = 0 and α = 0.1. First we comment on the SGD results. Clearly, for each fixed ν, the mean squared error e_sgd (and also e_lm) decreases to zero as the noise level tends to zero, but more iterations are needed to reach the optimal error, and the decay rate depends on the regularity index ν roughly as the theoretical prediction O(δ^{4ν/(2ν+1)}). The larger the regularity index ν, the faster the error decays and the fewer iterations are needed to reach the optimal error. The results obtained by SGD with α = 0 and α = 0.1 are largely comparable with each other, but generally the former imposes a more stringent condition on the initial stepsize c0 than the latter in order to achieve comparable accuracy. This is attributed to the fact that polynomially decaying stepsize schedules have a built-in variance reduction mechanism as the iteration proceeds. Nonetheless, at low regularity (indicated by ν = 0 in the tables), the initial stepsize can be taken independent of n. Next we compare the results of SGD with the Landweber method. For all regularity indices, SGD, with either a constant or a decaying stepsize schedule, can achieve an accuracy comparable with that of the Landweber method, provided that the initial stepsize c0 for SGD is taken to be of order O(n^{-1}). Generally, the larger the index ν, the smaller the value c0 should be taken in the stepsize schedule, in order to fully realize the benefit of smooth solutions. This observation agrees well with the observation in Remark 3.2. These observations hold for all three examples, which have different degrees of ill-posedness, and thus they are fully in line with the convergence analysis in Section 3.
Numerical results for general A

In order to shed further light on the convergence behavior of SGD, we present numerical results with four different values of c0 (i.e., min(c, nc*), 10c*, c* and c*/10, with c* from the tables) in Figs. 4.1 and 4.2, for the examples with ν = 1 and exact and noisy data, respectively. In the case of exact data, the mean squared error e_sgd consists of only the approximation and stochastic errors, and it decreases to zero as the iteration proceeds. With a large initial stepsize, the error e_sgd decreases fast during the initial iterations, but only at a slow rate O(k^{-(1-α)}), whereas with a small c0 the initial decay is much slower. The asymptotic decay rate matches the optimal decay O(k^{-2ν(1-α)}) only when c0 decreases to O(n^{-1}); otherwise the error exhibits only a slower decay O(k^{-min(2ν,1)(1-α)}), and thus an undesirable saturation phenomenon. Note that for small c0, the asymptotic decay O(k^{-2ν(1-α)}) kicks in only after a sufficient number of iterations, which agrees with the condition h_0(k) ≤ 1/2, etc., in the analysis. Further, there is an interesting transition layer for medium c0 (but still of order O(n^{-1})), for which the error first exhibits the desired asymptotic decay and then eventually shifts back to a slower decay rate. The presence of the wide transition region indicates that the optimal convergence can still be achieved for noisy data, even if the employed c0 is larger than the critical value suggested by the theoretical analysis in Section 3. These observations hold for both constant and polynomially decaying stepsize schedules. These numerical results show that a small initial stepsize c0 is necessary for overcoming the saturation phenomenon of SGD. These empirical observations remain largely valid also for noisy data in Fig. 4.2.
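When reading trajectories such as those in Figs. 4.1-4.2, the asymptotic decay rate can be estimated by a least-squares fit in log-log coordinates. The following small sketch is our own diagnostic (not part of the reported experiments) and recovers the exponent of a synthetic power-law trajectory.

```python
import numpy as np

def decay_rate(errors, ks):
    """Least-squares slope of log(error) versus log(iteration index),
    i.e. the empirical exponent in errors ~ k**rate."""
    return np.polyfit(np.log(ks), np.log(errors), 1)[0]

# Synthetic trajectory with the known rate k^{-1.5}; for instance,
# 2*nu*(1 - alpha) = 1.5 corresponds to nu = 0.75 and alpha = 0.
ks = np.arange(10, 1000)
errs = ks.astype(float) ** (-1.5)
rate = decay_rate(errs, ks)
```

In practice one would restrict the fit to the asymptotic regime (late iterations), since the pre-asymptotic phase follows a different slope.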
It is observed that the asymptotic decay rate is higher for smaller initial stepsizes, but now only up to a certain iteration number, due to the presence of the propagation error, which increases monotonically as the iteration proceeds and eventually dominates the total error. This leads to the familiar semi-convergence behavior in the second and third columns of Fig. 4.2. The proper balance between the decaying approximation error and the increasing propagation error determines the attainable accuracy. One clearly observes that the larger c0 is, the faster the asymptotic decay kicks in, but also the quicker the SGD iterate starts to diverge, which can greatly compromise the attainable accuracy along the trajectory, leading to the undesirable saturation phenomenon. When the initial stepsize c0 becomes smaller, the attainable accuracy improves steadily. In particular, with a sufficiently small c0, the attained error is optimal (but of course at the expense of a much increased computational complexity). This observation naturally leads to the important question whether it is possible to design novel stepsize schedules (possibly not of polynomially decaying type) that enjoy both fast pre-asymptotic and fast asymptotic convergence behavior.

On Assumption 1.1(iii)

The convergence analysis in Section 3 requires Assumption 1.1(iii). This appears largely to be a limitation of the analysis technique. To illustrate this, we compare the results for a system with a general matrix A with those for one that satisfies Assumption 1.1(iii). The latter can be constructed from the former as follows. Let A = UΣV^t be the singular value decomposition of A. Then we replace A by Ã = U^tA and y^δ by ỹ^δ = U^ty^δ, so that Ã satisfies Assumption 1.1(ii)-(iii). The numerical results for s-phillips are shown in Table 4.
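The transformation just described can be sketched as follows; the matrix size and data below are illustrative assumptions of ours, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in system; the experiments use the Regutools examples.
n = 50
A = rng.standard_normal((n, n)) / n
x_dag = rng.standard_normal(n)
y_delta = A @ x_dag + 1e-3 * rng.standard_normal(n)

# A = U S V^t; replace A by A_tilde = U^t A (= S V^t) and y^delta by
# U^t y^delta, so that A_tilde satisfies Assumption 1.1(ii)-(iii).
U, s, Vt = np.linalg.svd(A)
A_tilde = U.T @ A
y_tilde = U.T @ y_delta

# Orthogonal invariance: the full least-squares functional is unchanged,
# while the rows (and hence the sampled stochastic gradients) differ.
res = np.linalg.norm(A @ x_dag - y_delta)
res_tilde = np.linalg.norm(A_tilde @ x_dag - y_tilde)
```

Since U is orthonormal, the two systems share the same least-squares geometry; only the row structure seen by SGD changes.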
It is observed that the results obtained by SGD with A and Ã are largely comparable with each other for all noise levels and smoothness indices, especially when the amount of data noise is not too small. Although not presented, the observations are identical for the other examples, including multiplying the matrix A by an arbitrary orthonormal matrix, so long as c0 is sufficiently small. These observations are also confirmed by the corresponding trajectories: the trajectories of the mean squared error for the three examples with ν = 1 for A and Ã nearly overlay each other when the data noise is not too small (as in most practical inverse problems); cf. Fig. 4.3. For exact data, the trajectories overlap up to a certain point around 1e-4 (which depends on the value of c0), and the level at which they flatten off is observed to decrease further when choosing a smaller c0. One interesting open question is thus to establish the saturation-overcoming phenomenon without Assumption 1.1(iii), as the experimental results suggest.

Concluding remarks

In this work, we have presented a refined convergence rate analysis of stochastic gradient descent with a polynomially decaying stepsize schedule for linear inverse problems, using a finer error decomposition. The analysis indicates that the saturation phenomenon exhibited by existing analyses actually does not occur, provided that the initial stepsize c0 is sufficiently small. The analysis is also confirmed by several numerical experiments, which show that with a small c0, the accuracy of SGD is indeed comparable with that of the order-optimal Landweber method. The numerical experiments also show that Assumption 1.1(iii) is actually not needed for order optimality, so long as the initial stepsize c0 is sufficiently small, although the analysis requires the condition. One outstanding issue is to close this gap between the mathematical theory and the practical performance.
The study naturally leads to the question whether there is a "large" stepsize schedule that can achieve optimal convergence rates. The numerical experiments indicate that, within polynomially decaying stepsize schedules, a small value of c0 seems necessary for order optimality, but this leaves open nonpolynomial ones, e.g., stagewise SGD [35, 9]. Intuitively, the small initial stepsize can be viewed as a form of implicit variance reduction, and thus it is also of interest to analyze existing explicit variance reduction techniques, e.g., SVRG [20] and SAG [24]. The current work discusses only deterministic noise; naturally it is also of interest to extend the analysis to the case of random noise. See, e.g., the works [1, 12] for relevant results for statistical inverse problems in a Hilbert space setting.

Figure 4.1: The convergence trajectory of the SGD error with different initial stepsizes c0 for the examples with ν = 1. The top and bottom rows are for α = 0 and α = 0.1, respectively.

Figure 4.2: The convergence trajectory of the SGD error (with α = 0) with different initial stepsizes c0 for the examples with ν = 1. The top and bottom rows are for the noise levels 1e-2 and 5e-2, respectively.

Figure 4.3: The convergence of the error e versus the iteration number for the examples with ν = 1, computed using A and Ã. The rows, from top to bottom, are for exact data and the noise levels 1e-3 and 5e-2, respectively.

Table 1: Comparison between SGD and LM for s-phillips.

              SGD (α = 0)                   SGD (α = 0.1)                 LM
ν   noise   c0        e_sgd    k_sgd     c0       e_sgd    k_sgd      e_lm     k_lm
0   1e-3    4c/n      1.66e-2  4691.28   c/30     1.67e-2  2176.23    1.65e-2  5851
    5e-3    4c/n      9.35e-2  782.10    c/30     9.49e-2  336.33     9.28e-2  1036
    1e-2    4c/n      1.29e-1  204.90    c/30     1.32e-1  69.69      1.28e-1  249
    5e-2    4c/n      5.42e-1  108.90    c/30     5.57e-1  34.11      5.34e-1  136
1   1e-3    c/n       3.48e-4  539.19    c/n      2.88e-4  2089.62    2.28e-4  157
    5e-3    c/n       3.69e-3  73.44     c/n      3.32e-3  218.94     2.74e-3  20
    1e-2    c/n       6.64e-3  57.81     c/n      6.12e-3  166.47     5.12e-3  16
    5e-2    c/n       3.52e-2  29.40     c/n      3.31e-2  80.79      3.16e-2  8
2   1e-3    c/(30n)   7.02e-5  2115.54   c/(20n)  5.48e-5  5912.91    3.22e-5  19
    5e-3    c/(30n)   4.47e-4  1197.48   c/(20n)  4.13e-4  3201.63    3.76e-4  11
    1e-2    c/(30n)   1.09e-3  938.70    c/(20n)  1.04e-3  2441.85    9.82e-4  8
    5e-2    c/(30n)   2.92e-2  636.51    c/(20n)  2.90e-2  1597.56    1.57e-2  5
4   1e-3    c/(30n)   9.77e-5  1966.38   c/(20n)  6.91e-5  3291.18    1.30e-5  8
    5e-3    c/(30n)   7.55e-4  879.51    c/(20n)  6.97e-4  2263.89    3.83e-4  6
    1e-2    c/(30n)   2.56e-3  785.94    c/(20n)  2.50e-3  1996.83    1.42e-3  5
    5e-2    c/(30n)   5.23e-2  596.73    c/(20n)  5.21e-2  1489.29    2.49e-2  3

Table 2: Comparison between SGD and LM for s-gravity.

              SGD (α = 0)                   SGD (α = 0.1)                 LM
ν   noise   c0        e_sgd    k_sgd     c0       e_sgd    k_sgd      e_lm     k_lm
0   1e-3    c/20      9.37e-2  1000.50   c/10     9.39e-2  1894.14    9.39e-2  27201
    5e-3    c/20      3.29e-1  86.43     c/10     3.29e-1  134.85     3.27e-1  2515
    1e-2    c/20      5.81e-1  34.11     c/10     5.80e-1  34.17      5.73e-1  793
    5e-2    c/20      2.23e0   5.61      c/10     2.22e0   6.03       2.07e0

Table 3: Comparison between SGD and LM for s-shaw.

              SGD (α = 0)                   SGD (α = 0.1)                 LM
ν   noise   c0        e_sgd    k_sgd     c0       e_sgd    k_sgd      e_lm     k_lm
0   1e-3    c         2.81e-1  2704.92   2c       2.81e-1  5853.54    2.81e-1  760983
    5e-3    c         5.37e-1  67.14     2c       5.33e-1  94.92      5.25e-1  18588
    1e-2    c         7.08e-1  42.42     2c       6.98e-1  60.18      6.67e-1  12385
    5e-2    c         3.91e0   10.59     2c       3.66e0   14.91      2.91e0   3392
1   1e-3    2c/n      1.21e-4  275.70    4c/n     1.26e-4  453.00     5.95e-5  144
    5e-3    2c/n      1.45e-3  142.05    4c/n     1.48e-3  202.50     1.26e-3  71
    1e-2    2c/n      5.75e-3  113.01    4c/n     5.62e-3  148.11     5.21e-3  54
    5e-2    2c/n      1.51e-1  64.77     4c/n     1.54e-1  97.02      1.47e-1  36
2   1e-3    2c/n      1.53e-4  255.27    4c/n     1.29e-4  746.46     6.36e-5  50
    5e-3    2c/n      2.00e-3  84.60     4c/n     1.73e-3  235.08     1.51e-3  37
    1e-2    2c/n      6.43e-3  64.77     4c/n     6.05e-3  172.32     5.71e-3  30
    5e-2    2c/n      8.17e-2  11.88     4c/n     8.00e-2  29.49      7.08e-2  5
4   1e-3    c/(30n)   5.79e-5  1966.38   c/(10n)  5.92e-5  2863.35    3.13e-5  9
    5e-3    c/(30n)   6.00e-4  941.19    c/(10n)  6.06e-4  1116.81    3.71e-4  5
    1e-2    c/(30n)   1.99e-3  828.45    c/(10n)  2.00e-3  1002.93    1.01e-3  4
    5e-2    c/(30n)   3.61e-2  645.75    c/(10n)  3.61e-2  746.67     6.45e-3  1

Table 4: Comparison between SGD with α = 0 for s-phillips, with A and Ã.
              SGD with A          SGD with Ã
ν   noise   c0        e        k         e        k
0   1e-3    4c/n      1.66e-2  4691.28   1.65e-2  4738.4
    5e-3    4c/n      9.35e-2  782.10    9.28e-2  835.35
    1e-2    4c/n      1.29e-1  204.90    1.28e-1  198.75
    5e-2    4c/n      5.42e-1  108.90    5.40e-1  111.85
1   1e-3    c/n       3.48e-4  539.19    2.29e-4  507.55
    5e-3    c/n       3.69e-3  73.44     2.87e-3  71.2
    1e-2    c/n       6.64e-3  57.81     5.72e-3  57.75
    5e-2    c/n       3.52e-2  29.40     3.84e-2  30.4
2   1e-3    c/(30n)   7.02e-5  2115.54   3.49e-5  2021.6
    5e-3    c/(30n)   4.47e-4  1197.48   3.66e-4  1186.10
    1e-2    c/(30n)   1.09e-3  938.70    9.90e-4  934.75
    5e-2    c/(30n)   2.92e-2  636.51    2.94e-2  639.60
4   1e-3    c/(30n)   9.77e-5  1966.38   2.49e-5  1103.00
    5e-3    c/(30n)   7.55e-4  879.51    6.34e-4  869.40
    1e-2    c/(30n)   2.56e-3  785.94    2.43e-3  781.80
    5e-2    c/(30n)   5.23e-2  596.73    5.24e-2  597.60

Acknowledgments

The authors would like to thank the two anonymous referees for their many constructive comments, which have greatly helped improve the quality of the paper.

References

[1] N. Bissantz, T. Hohage, A. Munk, and F. Ruymgaart. Convergence rates of general regularization methods for statistical inverse problems and applications. SIAM J. Numer. Anal., 45(6):2610-2636, 2007.
[2] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Y. Lechevallier and G. Saporta, editors, Proc. CompStat'2010, pages 177-186. Springer, Heidelberg, 2010.
[3] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Rev., 60(2):223-311, 2018.
[4] K. Chen, Q. Li, and J.-G. Liu.
[]
[ "Road User Detection in Videos", "Road User Detection in Videos" ]
[ "Hughes Perreault [email protected] \nPolytechnique Montréal Montréal\nCanada, Canada\n", "Guillaume-Alexandre Bilodeau [email protected] \nPolytechnique Montréal Montréal\nCanada, Canada\n", "Nicolas Saunier [email protected] \nPolytechnique Montréal Montréal\nCanada, Canada\n", "Pierre Gravel [email protected] \nPolytechnique Montréal Montréal\nCanada, Canada\n" ]
[ "Polytechnique Montréal Montréal\nCanada, Canada", "Polytechnique Montréal Montréal\nCanada, Canada", "Polytechnique Montréal Montréal\nCanada, Canada", "Polytechnique Montréal Montréal\nCanada, Canada" ]
[]
Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single frame detectors, but using explicit optical flow usually does not.
null
[ "https://arxiv.org/pdf/1903.12049v1.pdf" ]
85,543,454
1903.12049
522b7c32b261be98ab341697c9d7550f265b6967
Road User Detection in Videos

Hughes Perreault ([email protected]), Guillaume-Alexandre Bilodeau ([email protected]), Nicolas Saunier ([email protected]) and Pierre Gravel ([email protected])
Polytechnique Montréal, Montréal, Canada

Keywords: object detection, video object detection, road users

I. INTRODUCTION

Automatic road user detection is used by an increasing number of applications. In the context of traffic monitoring and intelligent transportation systems (ITS), for example, detecting road users can provide traffic counts, speeds and travel times for traffic state estimation, and incident detection. The research presented in this paper is based on the following observations. First, it is reasonable to expect that most objects of interest are moving. Second, traffic monitoring images contain a large number of small vehicles, for example in the farthest areas of the camera field of view.
Finally, given the often large number of vehicles on the road and urban furniture, and typical camera positions, occlusion is a frequent phenomenon (an example of occlusion is shown in Figure 1). All these challenges for road user detection could be better addressed by considering multiple frames instead of a single frame at a time. Current research in object detection is mostly focused on single frame detectors, even though many applications provide video streams. The most widely known state-of-the-art detectors work with a single frame at a time, while detection performance could be improved with some modifications. In this work, a novel approach for road user detection in videos is developed and evaluated. The proposed approach is generic and can be integrated into most existing object detection methods. We show that by using two frames, one does not need to compromise on speed, and detection accuracy can be improved. In the first model, we train the network to learn how to combine two frames of a video to perform object detection. We do that by concatenating the two images and feeding the result as an input to the deep network that performs the feature extraction. In our second model, we feed the network with the concatenation of the target image and its optical flow. The second model tries to accelerate the learning process, and in a way assumes that what the first proposed model learns is to associate detected movement with the presence of an object. Results show this might not be all there is to it. The two models are trained, evaluated and compared with a baseline in different training settings. Our contribution is demonstrated through the improved performance of the proposed models over the baseline on three road user datasets.

Figure 1. In this difficult case of occlusion, our model RetinaNet-Double detects the vehicle (blue) while the baseline does not. It is reasonable to think that the motion of the vehicle helped the model.
This paper is organized as follows: first, we present a brief literature review in section II; then, our models and contributions are explained in section III; we follow by presenting the datasets we used and the experiments we conducted in section IV, before discussing our results in section V and finally concluding in section VI.

II. RELATED WORK AND BACKGROUND

Object detection in a single image has been widely studied and is the subject of many research papers every year. There have been some major breakthroughs on this topic in recent years. The classical methods, those created before the deep learning era, were surpassed by a large margin. This review will focus on deep learning-based methods. For single frame object detection, there are two main categories: the one-stage approach and the two-stage approach. The difference between the two is that the two-stage approach uses an object proposal phase, in which the best candidates are selected for further processing, while the one-stage approach detects objects directly. We will then review object detection in videos. We will finish the literature review by presenting some work that has inspired us, on optical flow estimation by deep neural networks (DNNs).

Convolutional neural networks: convolutional neural networks (CNNs) have caused a revolution in the field of object detection. AlexNet [1] was the first network to beat classical methods on ImageNet [2], by using clever strategies such as dropout [3], batch normalization [4] and rectified linear units. VGG [5] was also extremely influential, as it established good practices by its simplicity and elegance. For deeper networks, ResNets [6] use skip connections to build a network out of residual blocks, which help propagate the gradient. This allows ResNets to go as deep as 150 layers.
For faster networks, MobileNets [7] use the idea of depthwise separable convolutions in order to speed up computation and save memory. The result is a network suitable for vision applications that can run on mobile devices.

Two-stage approach: two-stage detectors have a separate object proposal phase. They were the first ones to incorporate DNNs in their architecture. R-CNN [8] used selective search [9] to generate object proposals, and used a CNN, such as VGG, to classify each proposal. This architecture was accurate but very slow, due to having to run the whole DNN on every single proposal. R-CNN was very influential and was the first of a family of state-of-the-art object detectors. Several improvements were then proposed to make these two stages completely trainable end-to-end and to make them share most of the computation. Fast R-CNN [10] solves the bottleneck of having to run each proposal through the entire DNN by introducing an ROI pooling layer that extracts the relevant features for each proposal out of the feature map of the whole image. That way, the DNN only has to run once over the whole image. After that, Faster R-CNN [11] removed the limitation of having to use an external object proposal method by introducing the RPN, a DNN that generates object proposals. The RPN shares the vast majority of its layers with Fast R-CNN, creating a unified and end-to-end trainable object detector. Several variants of these architectures were released, each addressing one or more specific problems, for example multi-scale detection [12] and a faster two-stage detector [13].

One-stage approach: the one-stage approach addresses the bottleneck of the two-stage approach, the computation required for each proposal. It does so by simply removing the object proposal phase, hence the name. The first accurate one-stage object detector was also the first to work at real-time speed, YOLO [14], [15].
YOLO works by dividing the image into a regular grid, and having each cell of the grid predict two bounding boxes for objects. The loss function of YOLO is a combination of a classification and a localization loss. SSD [16] proposed an improvement on YOLO by using a simpler architecture. Notably, SSD introduced the idea of using anchor boxes on the feature maps, as in the region proposal network of Faster R-CNN [11]. In SSD, a sliding window is performed on the feature maps using the anchor boxes, and detection is done at every single location. RetinaNet [17] works in a very similar fashion to SSD, but introduces a new loss function dubbed "focal loss", which aims to correct the imbalance between background and foreground examples in the training of one-stage detectors. To improve multi-scale detection, RetinaNet also uses a pyramid of features and detects at multiple levels of this pyramid. To this day, it is impossible to state whether one family of object detectors has won over the other, and the competition for the best and fastest detector is ongoing.

Object detection in video: contrary to object detection on single images, this task has seen less research. Here we present some of the most notable recent works on this topic. Recently, Liu & Zhu [18] used a long short-term memory (LSTM) network to propagate and refine feature maps between frames, allowing them to detect objects a lot faster while keeping a precision similar to a single frame detector. Before them, Zhu et al. [19] used optical flow information to propagate feature maps to certain frames in order to save computation time. These two articles focus on reusing computation between frames in order to save time, based on the premise that there is a lot of similarity and continuity between frames. In comparison, our work focuses on improving the precision and recall of the detection by combining information. By using deformable convolutions instead of optical flow training, Kim et al.
[20] trained a model to compute an offset between frames, and are thus able to sample features from nearby preceding and following frames to help detect objects in the current frame. Their model is particularly good in cases of occlusion or blurriness in the video.

Optical flow by DNNs: one of the most notorious works in optical flow learning is without a doubt FlowNet [21]. This work presented the first end-to-end network that learns to generate optical flow from a pair of images. The authors proposed two models, FlowNetSimple and FlowNetCorr, both trained on a digitally constructed dataset built from 3D models of chairs. The dataset was created by moving these chairs over different backgrounds. The models take as input a pair of consecutive images. FlowNetSimple works by simply concatenating the images and letting the network learn how to combine them. This is the inspiration for one of our models. FlowNetCorr, on the other hand, works by computing a correlation map between the high-level representations of the two images. A second version, FlowNet 2.0 [22], proposed several improvements over FlowNet, most notably stacking several slightly different architectures one after the other to refine the flow.

III. PROPOSED METHOD

A. Problem Statement

The task that we want to solve is as follows: given a target image, a preceding frame and a set of labels, locate with a bounding box and classify every object in the target image that corresponds to one of the labels. Using the immediately preceding frame is not mandatory, but the model may not use any future frames in order to work online.

B. Overview

To capitalize on consecutive frames in the video setting, the input stream of RetinaNet was changed using concatenation (see Figure 2). The added inputs are either optical flow or a preceding frame. When using a preceding frame, the idea is that the network will learn by itself the best way to combine the two images and extract features for object detection.
The motivation to use a direct concatenation, rather than two streams concatenated later, comes from FlowNetSimple [21], where it was shown to be sufficient to train a CNN to learn motion. When using optical flow, the best information the network can learn is assumed to be movement, which is fed directly to it. By using two different models as well as the baseline, we have a good way of comparing them and finding out which is best for different kinds of classes or cases.

C. Baseline: RetinaNet

In this work, multiple architectures were implemented and tested. The baseline is the state-of-the-art architecture RetinaNet [17] with VGG-16 [5] and ResNet50 [23] as backbone feature extractors. We used both alternatively in our experiments in order to show that our models do not depend on a specific feature extractor. RetinaNet is a model that uses a CNN to create a feature pyramid network, that is, a pyramid of feature maps at different scales. On each of the pyramid levels, it runs a sliding window with multiple anchor boxes at different scales and aspect ratios, and it feeds these boxes into a classification and a box regression subnetwork. It keeps the best of these detections as its final output. RetinaNet was chosen over other architectures due to its relatively high speed and very good performance. For training the model, the Adam optimizer from Keras [24] was used, with a smooth L1 loss for regression, and the focal loss

FL(p_t) = −α_t (1 − p_t)^γ log(p_t)    (1)

for classification, where γ can be seen as a factor that reduces the contribution of easy examples to the loss. In this work, γ is set to 2. α_t is the inverse class frequency, and is used so that underrepresented classes have more weight in the training. p_t is the predicted probability p if the prediction corresponds to the ground-truth label, and 1 − p otherwise.
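As a minimal sketch (ours, not the authors' implementation), this loss for a single binary prediction can be written as follows; `alpha_t = 0.25` is only a placeholder for the inverse class frequency used in the paper:

```python
import math

def focal_loss(p, y, alpha_t=0.25, gamma=2.0):
    """Focal loss of equation (1) for one binary prediction.

    p: predicted probability of the positive class,
    y: ground-truth label (1 or 0),
    alpha_t: class-balancing weight (placeholder for the inverse
             class frequency used in the paper),
    gamma: focusing parameter (set to 2 in the paper).
    """
    p_t = p if y == 1 else 1.0 - p  # p_t as defined in the text
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction (easy example) contributes far less
# to the loss than a confident wrong one (hard example):
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

With γ = 0 and α_t = 1, this reduces to the standard cross-entropy, which makes the role of the modulating factor (1 − p_t)^γ explicit.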
Intuitively, this means that if the predicted probability is high and the prediction is correct, or if the predicted probability is low and the prediction is incorrect, the loss will be mostly unaffected. These are the easy examples. Otherwise, the example is considered hard and the loss is amplified. The initial learning rate was set to 1e-5.

D. RetinaNet-Double

The first proposed architecture is dubbed RetinaNet-Double. Inspired by FlowNetSimple [21], the architecture of the backbone model was modified so that it takes as input two images that have been concatenated channel-wise, resulting in a DNN that takes a six-channel image as input. The rest of the RetinaNet model is left unchanged. This results in a model that is only slightly slower than its counterpart, as the number of parameters is practically the same. Our goal is to make the network learn how to combine data from two images to improve its performance. It is trained end-to-end on detection accuracy, and thus learns the features most useful for object detection and classification. It is not trained with optical flow ground-truths, but rather with detection ground-truths, meaning that the bounding box annotations and labels of the target image are used. This is the goal: the network must be trained with the standard classification and regression loss in order to learn how to correlate both images to perform better detection. More formally, given a target image I_t and a preceding frame I_p, the input of the network is constructed as

Input = Concatenate(I_p, I_t)    (2)

which results in a tensor of shape

w × h × 2c    (3)

where w, h and c are respectively the width, height and number of channels of I_t and I_p.

E. RetinaNet-Flow

The second proposed architecture, dubbed RetinaNet-Flow, uses external optical flow data generated with OpenCV [25], specifically with the Farneback optical flow method [26].
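As an illustrative sketch (variable names and the random stand-in flow field are ours, not the paper's), the six-channel inputs of both proposed models can be assembled with numpy; in a real pipeline the dense flow would come from OpenCV's `cv2.calcOpticalFlowFarneback` rather than a random array:

```python
import numpy as np

h, w = 540, 960                      # UA-Detrac frame size used in the paper
i_p = np.zeros((h, w, 3), np.uint8)  # preceding frame I_p (stand-in)
i_t = np.zeros((h, w, 3), np.uint8)  # target frame I_t (stand-in)

# RetinaNet-Double, equations (2)-(3): channel-wise concatenation,
# giving a w x h x 2c tensor (six channels for RGB frames).
double_input = np.concatenate([i_p, i_t], axis=-1)

# RetinaNet-Flow: a dense flow field has one (dx, dy) vector per pixel.
# In practice it would be computed with, e.g.,
#   cv2.calcOpticalFlowFarneback(gray_p, gray_t, None,
#                                0.5, 3, 15, 3, 5, 1.2, 0)
# a random field stands in here so the sketch is self-contained.
flow = np.random.randn(h, w, 2).astype(np.float32)
norm = np.linalg.norm(flow, axis=-1, keepdims=True)      # flow magnitude
flow_image = np.concatenate([flow, norm], axis=-1)       # X, Y and norm
flow_input = np.concatenate([flow_image, i_t], axis=-1)  # six channels
```

Both inputs have the same shape, so the same backbone architecture (with a widened first convolution) can consume either one.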
The model is similar to RetinaNet-Double, but takes as input the concatenation of the target image and a dense optical flow image. The network learns to associate movement with object presence. Dense optical flow is generated using the target image and a previous one, and the resolution of the resulting flow image is the same as the target. No optical flow ground-truth is used for training. We keep the X and Y components as well as the norm of the flow vector for each pixel. Again, given I_t and I_p, we construct the input of RetinaNet-Flow by using

Input = Concatenate(DenseFlow(I_p, I_t), I_t)    (4)

where the shape of the resulting tensor is the same as in equation (3).

Figure 2. Presentation of our compared models: the first one is our baseline, a simple RetinaNet; the second one is RetinaNet-Double, which uses as input a concatenation of consecutive images; and the third one is RetinaNet-Flow, which uses as input a concatenation of the optical flow and the target image. RetinaNet builds feature maps at different resolutions and uses a sliding window approach on these feature maps with different anchor boxes to find objects. These anchor boxes pass through a classification and a regression sub-network.

F. Choice of preceding frame

We conducted an experiment to determine which preceding frame yielded the best results. This problem is defined as finding the best i if the target frame is at time t and the preceding frame at time t − i. We trained three models on UA-Detrac with i = 1, i = 3 and i = 5. Our conclusion is that since the three models did not differ significantly on the validation set, it is better to use i = 1, as it is the least restrictive and most intuitive of all the possible i's. Also, in a real-time application, it would allow us to keep fewer images in memory.

IV. EXPERIMENTS AND RESULTS

A. Datasets

To train the proposed models, consecutive images from videos are needed.
The models were trained and evaluated on three datasets: KITTI "3 temporally preceding frames" [27], UA-Detrac [28] and the Unmanned Aerial Vehicle Benchmark (UAV) [29]. These were chosen specifically because they contain consecutive frames of moving road users.

UA-Detrac is a dataset created to benchmark object detection and multi-object tracking models in the context of traffic surveillance (Fig. 3). All images have a 960x540 pixel resolution, and the models were trained at that resolution. Data is organized by video sequence, with a fixed camera pointed at a road. The dataset contains 70000 images annotated with class labels and bounding boxes. There are four different labels in total: car, bus, van and other.

KITTI is a dataset containing street-level images that can be used to train autonomous driving systems (Fig. 4). Images are 1224x370 pixels, to contain all the information useful for driving. The subset that we used, "3 temporally preceding frames", contains approximately 7500 triplets of consecutive frames. Eight classes are present in this dataset: Car, Cyclist, Misc, Pedestrian, Person_sitting, Tram, Truck and Van.

UAV is a dataset of traffic scene videos in various conditions of weather, altitude and occlusion, obtained by drones (Fig. 5). It contains about 80000 annotated video frames. This dataset is particularly challenging due to its large number of small objects, high vehicle density and camera motion.

UA-Detrac was randomly split into training and validation sets, with a ratio of 80% and 20% respectively. The models are evaluated remotely on a server, on their own test set. KITTI was randomly split into training, validation and test sets, with ratios of 60%, 20% and 20% respectively. The class distribution stays roughly the same in every set. Training is done using only the training set, and overfitting and accuracy are monitored on the validation set during training.
Results are reported on the test set, for which models are only run once for evaluation. For UAV, we chose the same training/test split proposed in the development kit of the project, and we evaluated our results with this development kit as well.

B. Experiments

For a thorough evaluation, our proposed models were trained and evaluated in different ways. In the first training case, we trained every model from scratch, using randomly initialized weights, and compared them on UA-Detrac and KITTI. In the second training case, we used transfer learning from UA-Detrac to KITTI. In that case, the last classification layer is skipped, since the datasets do not contain the same number of classes. Finally, we trained our models starting from pre-trained weights on ImageNet [2] and compared them with state-of-the-art models on the challenging UAV dataset. On that dataset, the three models were also trained from scratch in order to have a fair comparison between them. Indeed, for fairness, pre-trained weights should not be used for comparing with the baseline, since the proposed models are new architectures for object detection that have to learn from scratch. The experiments were focused on comparing the proposed models with the RetinaNet baseline model trained in the same conditions as the new models. Since the datasets used are relatively small, the results are lower than what can be seen on benchmarks where pre-trained weights are used. RetinaNet-Flow was not trained on the KITTI dataset, since the camera is moving in most examples and the resulting optical flow is not appropriate for object detection.

C. Results

All results reported in tables I, II and III have been obtained with a minimum IOU of 0.7 for UA-Detrac and UAV, and 0.5 for KITTI, as defined in their evaluation protocols. The IOU is the intersection over union, or the Jaccard index. The IOU between two rectangles is defined as the area of their intersection divided by the area of their union.
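This criterion can be sketched as follows (a minimal implementation of ours, with boxes given as (x1, y1, x2, y2) corners; not the benchmarks' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union (Jaccard index) of two axis-aligned
    rectangles given as (x1, y1, x2, y2) corner coordinates."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 region: intersection 1, union 7,
# so IOU = 1/7, below both the 0.5 (KITTI) and 0.7 (UA-Detrac/UAV)
# thresholds, and the detection would not count as a true positive.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```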
The mAP computed for both datasets is the mean over all labels of the AP, the AP being the average precision given the recall and precision curves, that is, the area under the precision-recall curve. The backbone feature extractor used for KITTI and UAV is VGG-16 [5], and ResNet50 [23] is used for UA-Detrac.

Table I presents the results obtained on the UA-Detrac dataset. One can notice that when trained from scratch, RetinaNet-Double shows a significant improvement over the baseline, of 8 percentage points. However, RetinaNet-Flow did not perform as expected and scored 6 points lower than the baseline on the test set. We can interpret this result in a few ways. The network can make good use of two frames in order to make better predictions. However, there might be too much noise in this dataset to use optical flow directly for detection, like trees moving in the wind and pedestrians on the side of the roads. An interesting finding is that it is better to let the network learn end-to-end how to combine frames instead of feeding it optical flow directly.

Table II presents the results obtained on the KITTI dataset. One can see that when training from scratch, RetinaNet-Double achieves better results than the baseline, especially on the smaller object classes. For the Cyclist, Pedestrian and Person_sitting classes, the proposed model significantly outperforms the baseline model. When using pre-trained weights from UA-Detrac, the proposed model can achieve similar results by training for about half the time of the training from scratch, with the same mAP difference in the results, which confirms the advantages of RetinaNet-Double.

Table III presents the results obtained on the UAV dataset.
While looking at the results for the models using pre-trained weights on ImageNet [2] (RN (baseline), RN-D, RN-F), one must keep in mind that the very first layer of both RN-D and RN-F did not use pre-trained weights, putting them in a somewhat unfair comparison setting. Even so, we can see that our models are still competitive in terms of accuracy in this very challenging practical setting, outperforming Faster R-CNN but not surpassing the pre-trained SSD and R-FCN. The potential advantage of using pre-trained weights can be seen by comparing RN (baseline) and RN-from-scratch (baseline), and this also explains why RN (baseline) obtains better results than RN-D and RN-F.

Table I. mAP reported on the UA-Detrac test set, for our two proposed models and the baseline. RN stands for RetinaNet, D for Double and F for Flow. Epochs is the number of epochs trained.

When looking at the results for the models trained from scratch, we can see that RetinaNet-Double outperforms RetinaNet again by 0.6% point mAP. However, RetinaNet-Flow still performs lower than the baseline, at 0.5% points below. This can be partially explained by the frequent motion of the camera in the videos, making the optical flow noisy. Again, these results show that it is better to train a network end-to-end to combine two frames rather than feeding it optical flow directly. Overall, our RetinaNet-Double outperforms the baseline RetinaNet consistently on three datasets when trained in the same settings. This clearly shows that it is possible for a CNN to learn how to combine frames of a video, and that we should take advantage of that whenever possible. However, the lack of pre-trained weights makes it difficult to compare with current state-of-the-art benchmark results. We also demonstrated that using the optical flow directly does not help the network, as it might be too noisy and induce errors.

V. DISCUSSION

The learning process can be sped up with transfer learning.
Transfer learning was achieved on the KITTI dataset by first training a model on UA-Detrac, and fine-tuning on KITTI. Through transfer learning, the models learn much faster than when training from scratch; on KITTI, it takes about half the time to train the model.

A. Detailed Analysis

CNNs are often seen as black boxes with little knowledge of what they learn. It is therefore important to explain a few reasons why the proposed model achieves better performance than the baseline.

Small objects: small objects are harder to locate precisely and classify than large objects, for obvious reasons. There is a very large number of small objects in the UA-Detrac dataset. Having two frames can help to address this challenge in multiple ways. If the object is moving, movement can be associated with the presence of an object. Combining features from two frames can help refine the features and thus classify more precisely. Figure 6 shows a concrete example where RetinaNet-Double correctly detects a small car while the baseline does not.

Motion blur: motion blur is frequent in traffic surveillance images. A blurry object can be harder to detect and classify. If the detector does not recognize it as one object of the pre-determined classes, the blurry object will be invisible to it. With two frames, movement can be associated with the presence of an object, and help overcome this challenge.

Occlusion: occlusion happens quite often in traffic videos, either by other vehicles or by road furniture, as can be seen in figure 1. It is quite important to continue detecting vehicles even if they are partially hidden. Here, three elements can help us if we have access to more than one frame: 1) movement, 2) having more refined features and 3) the possibility that the vehicle is not hidden as much in the preceding frame. If the object is not as occluded in one of the two frames, then the network can learn to put more importance on the features of that frame.
Motion: Motion can serve to increase the probability of the presence of a road user, or in the opposite case the absence of motion can serve to decrease the probability of the presence of an object. In the example shown in figure 7, the baseline model confused a billboard with a road user, while RetinaNet-Double did not. B. Limitations of the Proposed Models The most obvious limitation of our models is that they rely on a pair of images to improve accuracy. In settings where every object is moving, this is beneficial. But to ensure that performance does not go down too much when objects stop moving, the models should be more thoroughly evaluated on other datasets. As long as the models see enough stationary objects during training, this should not be a problem. The key is to have a well-balanced training dataset. Another limitation is the need for pairs of images to run our model, which would become useless if only single frames are available. Nevertheless, in such situations, the model may simply be used with the same image twice as input. Preliminary tests in that regard show that performance does not go down significantly compared to the baseline when the model is used in that way. This could allow users to avoid re-training an entire other model just for the cases where pair of images are not available. C. Generalization of the Proposed Models One of the distinct advantages of the proposed approach is that it generalizes to most object detection method. Since there is no change in the dimension of the features used by either the RPN of the two-stage approach or the sliding window of the one-stage approach, it could be used in almost any object detection method, and future experiments could determine for which it is the most useful. In fact, we could go further and provide pre-trained weights for pairs of images for multiple feature extractors to make experiments faster and more convenient. 
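The single-frame fallback described above (feeding the same image twice) amounts to simple input plumbing. The sketch below assumes an input-level stacking of the two RGB frames into six channels; this is our illustration of the idea rather than the exact fusion point used inside RetinaNet-Double:

```python
import numpy as np

def make_pair_input(frame_t, frame_tm1=None):
    """Stack two RGB frames of shape (H, W, 3) into an (H, W, 6) array
    for a pair-based detector. When only a single frame is available,
    the same frame is repeated, mirroring the single-image fallback
    discussed in the text."""
    if frame_tm1 is None:
        frame_tm1 = frame_t
    assert frame_t.shape == frame_tm1.shape and frame_t.shape[-1] == 3
    return np.concatenate([frame_tm1, frame_t], axis=-1)
```

Because the fused tensor keeps the spatial dimensions of a single frame, the rest of the detector is unchanged, which is what makes the approach easy to graft onto other architectures.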
It would be very interesting, for example, to see a Faster R-CNN or R-FCN architecture use these models and compare with their baseline counterparts.

VI. CONCLUSION

Two novel models were introduced for road user detection and classification in videos, with improved performance on three datasets for RetinaNet-Double. These models were trained in different training settings, and compared with a baseline RetinaNet model. When trained from scratch, RetinaNet-Double achieves better mAP than the baseline, while RetinaNet-Flow shows that using optical flow for detection does not perform well and that end-to-end learning of combining frames is better. A comparison is done with state-of-the-art models on UAV, showing that when using pre-trained weights, our proposed architectures do not surpass the baseline model but compare well despite not using pre-trained weights for the first layer. Future work includes pre-training the proposed models on a large amount of video data, integrating the same ideas with other object detection models and using different backbone feature extractors.

Figure 3. Example frame of UA-Detrac and its ground truth annotations.
Figure 4. Example frame of KITTI and its ground truth annotations.
Figure 5. Example frame of UAV and its ground truth annotations.
Figure 6. An example of RetinaNet-Double (blue) performing better on small objects than RetinaNet (red).
Figure 7. The baseline RetinaNet (red) wrongly classifies the billboard as a road user, while RetinaNet-Double (blue) does not.

Table I. mAP reported on the UA-Detrac test set, for our two proposed models and the baseline. RN stands for RetinaNet, D for Double and F for Flow. Epochs is the number of epochs trained.

Model                         Epochs  Overall  Easy    Medium  Hard    Cloudy  Night   Rainy   Sunny
RN-D-from-scratch             20      54.69%   80.98%  59.13%  39.23%  59.88%  54.62%  41.11%  77.53%
RN-from-scratch (baseline)    20      46.28%   67.79%  49.42%  34.47%  55.92%  40.99%  37.39%  56.43%
RN-F-from-scratch             20      40.70%   60.38%  44.94%  28.57%  48.94%  34.97%  32.43%  54.80%

Table II. mAP reported on the KITTI test set with (from-ua-detrac) and without (from-scratch) transfer learning. RN stands for RetinaNet and D for Double. Epochs is the number of epochs trained. mAP is the mean of the AP over all classes. Under each class name is the AP for that class.

Model                         Epochs  mAP     Car     Cyclist  Misc    Pedestrian  Person_sitting  Tram    Truck   Van
RN-D-from-scratch             130     72.08%  86.95%  56.80%   67.23%  53.57%      49.67%          89.21%  92.19%  81.00%
RN-from-scratch (baseline)    130     71.61%  87.12%  54.67%   69.67%  51.06%      45.13%          90.04%  93.35%  81.82%
RN-D-from-ua-detrac           60      68.29%  87.59%  51.15%   63.68%  54.30%      39.93%          85.62%  90.42%  73.63%
RN-from-ua-detrac (baseline)  60      67.68%  87.26%  46.67%   60.18%  47.16%      44.93%          87.93%  89.73%  77.60%

Table III. mAP reported on the UAV test set. RN stands for RetinaNet, D for Double and F for Flow. The models are either trained from scratch when mentioned, or use pre-trained weights on ImageNet [2] otherwise.

Model                         Overall
R-FCN [13]                    34.35%
SSD [16]                      33.62%
RN (baseline)                 33.48%
RN-D                          31.15%
RN-F                          30.41%
Faster-RCNN [11]              22.32%
RON [30]                      21.59%
RN-D-from-scratch             26.88%
RN-from-scratch (baseline)    26.28%
RN-F-from-scratch             24.87%

ACKNOWLEDGMENT

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), and the support of Genetec. The authors would like to thank Paule Brodeur for insightful discussions.

REFERENCES

[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.
[3] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[4] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning, 2015, pp. 448-456.
[5] S. Liu and W. Deng, "Very deep convolutional neural network based image classification using small training sample size," in 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Nov 2015, pp. 730-734.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[7] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "Mobilenets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
[9] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, "Selective search for object recognition," International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013.
[10] R. Girshick, "Fast R-CNN," in The IEEE International Conference on Computer Vision (ICCV), December 2015.
[11] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems 28. Curran Associates, Inc., 2015, pp. 91-99.
[12] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[13] J. Dai, Y. Li, K. He, and J. Sun, "R-FCN: Object detection via region-based fully convolutional networks," in Advances in Neural Information Processing Systems 29. Curran Associates, Inc., 2016, pp. 379-387.
[14] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[15] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[16] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision. Springer, 2016, pp. 21-37.
[17] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[18] M. Liu and M. Zhu, "Mobile video object detection with temporally-aware feature maps," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[19] X. Zhu, Y. Xiong, J. Dai, L. Yuan, and Y. Wei, "Deep feature flow for video recognition," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[20] G. Bertasius, L. Torresani, and J. Shi, "Object detection in video with spatiotemporal sampling networks," in The European Conference on Computer Vision (ECCV), September 2018.
[21] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, "Flownet: Learning optical flow with convolutional networks," in The IEEE International Conference on Computer Vision (ICCV), December 2015.
[22] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, "Flownet 2.0: Evolution of optical flow estimation with deep networks," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[23] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[24] F. Chollet et al., "Keras," https://keras.io, 2015.
[25] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.
[26] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in Scandinavian Conference on Image Analysis. Springer, 2003, pp. 363-370.
[27] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[28] L. Wen, D. Du, Z. Cai, Z. Lei, M.-C. Chang, H. Qi, J. Lim, M.-H. Yang, and S. Lyu, "UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking," arXiv CoRR, vol. abs/1511.04136, 2015.
[29] D. Du, Y. Qi, H. Yu, Y. Yang, K. Duan, G. Li, W. Zhang, Q. Huang, and Q. Tian, "The unmanned aerial vehicle benchmark: Object detection and tracking," arXiv preprint arXiv:1804.00518, 2018.
[30] T. Kong, F. Sun, A. Yao, H. Liu, M. Lu, and Y. Chen, "RON: Reverse connection with objectness prior networks for object detection," in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 2017, p. 2.
A SPATIO-TEMPORAL ADAPTIVE PHASE-FIELD FRACTURE METHOD

Nicolas A. Labanda, Luis Espath, and Victor M. Calo

Abstract. We present an energy-preserving mechanic formulation for dynamic quasi-brittle fracture in an Eulerian-Lagrangian formulation, where a second-order phase-field equation controls the damage evolution. The numerical formulation adapts in space and time to bound the errors, solving the mesh-bias issues these models typically suffer. The time-step adaptivity estimates the temporal truncation error of the partial differential equation that governs the solid equilibrium. The second-order generalized-α time-marching scheme evolves the dynamic system. We estimate the temporal error by extrapolating a first-order approximation of the present time-step solution using previous ones with backward difference formulas; the estimate compares the extrapolation with the time-marching solution. We use an adaptive scheme built on a residual minimization formulation in space. We estimate the spatial error by enriching the discretization with elemental bubbles; then, we localize an error indicator norm to guide the mesh refinement as the fracture propagates. The combined space and time adaptivity allows us to use low-order linear elements in problems involving complex stress paths. We efficiently and robustly use low-order spatial discretizations while avoiding mesh bias in structured and unstructured meshes. We demonstrate the method's efficiency with numerical experiments that feature dynamic crack branching, where the capacity of the adaptive space-time scheme is apparent. The adaptive method delivers accurate and reproducible crack paths on meshes with fewer elements.

DOI: 10.1016/j.cma.2022.114675
arXiv: 2110.03305
1. Introduction

Pioneering experimental work [1,2,3,4,5] studied dynamic fracture, stating the basic concepts of initiation, steady-state propagation, and interactions with stress waves.
Building on this work, a landslide of papers proposing numerical models and formulations was published by the engineering community, using cohesive elements [6,7], enriched discontinuous elements [8], extended finite elements [9,10], and embedded discontinuities [11,12], among many other approaches. Despite the advances in the field, some of these methods require computationally expensive features, such as interface elements; others, such as the embedded and extended approaches, are difficult to extend to three-dimensional simulations. In short, state-of-the-art simulation methods are incapable of circumventing the mesh-bias problem on coarse meshes. Since the early 2000s, an alternative fracture model has become popular; it avoids strong discontinuities by introducing a phase field that models damage, with its evolution coupled to the solid equilibrium of Euler-Lagrange mechanical descriptions. This approach couples two partial differential equations that allow low-order finite elements to simulate complex crack paths efficiently when the mesh can capture the crack propagation by minimum potential paths. Francfort et al. [13,14] proposed phase-field models for fracture mechanics, which were later applied to dynamic crack propagation in several remarkable contributions [15,16,17,18]. The main drawback of this description of dynamic fracture propagation is that it requires extremely fine meshes to capture the crack topology accurately. Many mesh-adaptive algorithms seek to improve the computational efficiency of the method by avoiding unnecessary refinements. The first mesh-adaptive method for Euler-Lagrange formulations of steady fracture was a predictor-corrector approach [19,20]. Recently, [21] proposed an isogeometric adaptive scheme for higher-order phase-field formulations of dynamic fracture.
Irreversible fracture propagation processes require adaptive time-step strategies that control sudden energy releases and, consequently, error blow-ups. We need small time steps to accurately reproduce the crack branching, while large steps can reproduce elastic or unloading processes. Many time-adaptive strategies exist for phase-field equations in the context of the Allen-Cahn and Cahn-Hilliard equations [22,23,24,25,26]. Most estimate the error by comparing solutions obtained with integrators of different temporal accuracy. Therefore, these approaches compute the solution of the time-marching scheme twice, which is computationally expensive. Recently, [27] proposed a time-adaptive method for the Cahn-Hilliard equation that estimates the truncation error of the time-marching procedure using backward difference formulas. This ingenious proposal results in a simple, cheap, and robust method for large problems, where the error estimation requires simple extrapolations from previous time-step solutions. We believe that a unified spatial and temporal adaptivity must control the time-marching problems and avoid mesh bias. Nevertheless, to the best of our knowledge, only one publication deals with these challenges for Euler-Lagrange formulations of dynamic fracture problems. In [21], the authors propose a simple approach to simulate dynamic crack propagation that uses the number of Newton-Raphson iterations to control the step size; they increase the time-step size when the step converges in less than four iterations or otherwise reduce it. Herein, we develop a thermodynamically consistent Euler-Lagrange space-and-time adaptive formulation. We evolve the dynamical system using the generalized-α method, solving the differential systems with a staggered scheme. We estimate the formulation's temporal error from the truncation error of the second-order time integrator using backward difference formulas.
We estimate the error at the current time step by comparing it with an extrapolation of the solution from the previous time steps. We build an adaptive spatial discretization using a residual minimization formulation for the phase-field equation that estimates the error by measuring the distance between low-order finite elements and a bubble-enriched solution space [28,29,30,31,32,33]. We calculate the solution error in an appropriate norm that avoids mesh bias and allows efficient refinements to reduce the total computational cost while delivering solutions insensitive to the mesh and time-step sizes.

We organize the paper as follows: Section 2 formulates the dynamic fracture problem using an energetically preserving Euler-Lagrangian approach along with a classical formulation, stating the strong and weak formulations of both cases. Section 3 details the time-integration scheme considered, while Section 4 formulates the proposed mesh-and-time-step adaptive scheme, including a detailed algorithm. These sections are the main contribution of this paper. Section 5 presents a set of numerical experiments involving crack propagation and branching, which demonstrate the advantages of our proposal, obtaining better results on meshes with much fewer elements. We draw conclusions in Section 6.

2. Dynamic fracture modeling

In what follows, B denotes a fixed region of three-dimensional space with boundary ∂B oriented by an outward unit normal n. We denote H^n(B) as the Sobolev space of L²(B)-integrable functions endowed with nth L²(B)-integrable derivatives, (·,·)_B as the L² inner product over the physical region B with boundary ∂B, and ⟨·,·⟩_∂B as the L² inner product on the boundary ∂B.
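Before the model is detailed, the temporal-error control summarized in the introduction can be sketched in a few lines. The extrapolation, tolerance, and step-update rule below are our simplifications (constant step size, a generic second-order controller), not the exact formulas of the method:

```python
import numpy as np

def adapt_step(u_nm1, u_n, u_np1, dt, tol=1e-3, rho=0.9):
    """One step of backward-difference-based error control: a cheap
    first-order extrapolation of the new solution from the two previous
    ones is compared against the solution delivered by the second-order
    time integrator; the relative gap drives the next step size.
    Names and the safety factor rho are illustrative."""
    u_ext = u_n + (u_n - u_nm1)            # BDF-style extrapolation (constant dt)
    err = np.linalg.norm(u_np1 - u_ext) / max(np.linalg.norm(u_np1), 1e-14)
    dt_new = rho * dt * np.sqrt(tol / max(err, 1e-14))  # second-order controller
    accept = err <= tol
    return accept, dt_new
```

When the estimated error exceeds the tolerance, the step is rejected and repeated with the smaller step size; during smooth elastic or unloading phases the estimate stays small and the step grows.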
The phase-field theory by Fried & Gurtin [34] augments the field equations with a momentum balance

(1) ∇·ξ + π + γ = 0, and ∇·σ + b = 0,

where ξ is the microstress, π and γ are, respectively, the internal and external microforces, σ is the Cauchy stress, b = −ρ₀ü + ρ₀g collects the inertial and body forces, ρ₀ is the referential mass density, and u is the displacement (a superposed dot denotes time differentiation). Restricting attention to isothermal processes, where variations in temperature ϑ are negligible, that is,

(2) ϑ = ϑ₀ ≡ constant,

the free-energy density, in terms of the internal energy ε and entropy ζ densities, reads

(3) ψ = ε − ϑ₀ζ,

and the pointwise free-energy imbalance has the form

(4) ψ̇ + πφ̇ − ξ·∇φ̇ − σ:ε̇ ≤ 0.

Guided by the presence of the power-conjugate pairings πφ̇, ξ·∇φ̇, and σ:ε̇, with the small strain tensor ε := sym(∇u), we consider constitutive equations that deliver the free-energy density ψ, internal microforce π, microstress ξ, and the Cauchy stress σ at each point x in B and time t in terms of the values of the phase field ϕ, its gradient ∇ϕ, and its time derivative φ̇ at that point and time.

2.1. Thermodynamically consistent formulation. We deal with internal constraints using the ideas of Capriz [35], who modeled continua with microstructure. Recently, da Silva et al. [36] applied them to brittle fracture, additively decomposing flux/stress-like quantities into a part that is constitutively determined (active) and a powerless (reactive) contribution. That is,

(5) π := πₐ + πᵣ, ξ := ξₐ + ξᵣ, and σ := σₐ + σᵣ.

Here, the reactive terms are powerless, that is,

(6) πᵣφ̇ = 0, ξᵣ·∇φ̇ = 0, and σᵣ:ε̇ = 0.

Thus, the free-energy imbalance (4) specializes to

(7) ψ̇ + πₐφ̇ − ξₐ·∇φ̇ − σₐ:ε̇ ≤ 0.

Following Coleman & Noll [37], we enforce the satisfaction of the dissipation inequality (4) in all processes. Therefore, we require that:

(i) The free-energy density ψ, given by a constitutive response function ψ̂, is independent of φ̇:

(8) ψ = ψ̂(ϕ, ∇ϕ, ε).

(ii) The active microstress ξₐ and active Cauchy stress σₐ are, respectively, given by constitutive response functions ξ̂ₐ and σ̂ₐ that derive from the response function ψ̂:

(9) ξₐ = ξ̂ₐ(ϕ, ∇ϕ, ε) = ∂ψ̂(ϕ, ∇ϕ, ε)/∂(∇ϕ), σₐ = σ̂ₐ(ϕ, ∇ϕ, ε) = ∂ψ̂(ϕ, ∇ϕ, ε)/∂ε.
Therefore, we require that: (i) The free-energy density ψ, given by a constitutive response functionψ, is independent ofφ: (8) ψ =ψ(ϕ, ∇ϕ, ). (ii) The active microstress ξ a and active Cauchy stress σ a are, respectively, given by constitutive response functionsξ a andσ a that derive from the response functionψ: (9) ξ a =ξ a (ϕ, ∇ϕ, ) = ∂ψ(ϕ, ∇ϕ, ) ∂(∇ϕ) , σ a =σ a (ϕ, ∇ϕ, ) = ∂ψ(ϕ, ∇ϕ, ) ∂ , Following da Silva et al. [36], the microstructural changes that the phase field describes are irreversible. We achieve irreversibility with the constraintφ ≤ 0, given that ϕ = 0 represents the damaged material whereas ϕ = 1 is undamaged. Conversely, ∇φ and˙ are unconstrained, thus (10) ξ r := 0, and σ r := 0. Moreover, (11) π r := arbitrary ifφ = 0, 0 ifφ ≤ 0. (iii) The internal microforce π, given by a constitutive response functionπ, splits additively into a contribution derived from the response functionψ and a dissipative contribution that, in contrast toψ,ξ a , andσ a , depends onφ and must be consistent with a residual dissipation inequality: (12)      π =π(ϕ, ∇ϕ,φ, ) = − ∂ψ(ϕ, ∇ϕ, ) ∂ϕ + π dis (ϕ, ∇ϕ,φ, ), π dis (ϕ, ∇ϕ,φ, )φ ≤ 0. In view of the constitutive restrictions (8)- (12), the response function for the free-energy density serves as a thermodynamic potential for the microstress, the hypermicrostress, and the equilibrium contribution to the internal microforce. Therefore, a complete description of the response of a material belonging to the class in question requires scalar-valued response functionsψ and π dis . Whereasψ depends only on ϕ, ∇ϕ, and ∇ 2 ϕ, π dis depends also onφ. Moreover, π dis must satisfy the residual dissipation inequality (12) 2 for all possible choices of ϕ, ∇ϕ, andφ. We now assign a suitable constitutive response for π a . 
If ϕ̇ = 0, the internal constraints are inactive; thus

(13) π_a = −∂ψ̂(ϕ, ∇ϕ, ϵ)/∂ϕ, if ϕ̇ = 0,

and, from (12), we have that

(14) π = −∂ψ̂(ϕ, ∇ϕ, ϵ)/∂ϕ + π_r.

Using (9), (10), (11), and (12) in the field equation (1)₁, we obtain an evolution equation for the phase field:

(15) { −π_dis(ϕ, ∇ϕ, ϕ̇, ϵ) if ϕ̇ < 0; −π_r if ϕ̇ = 0 } = ∇ · ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂(∇ϕ) − ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂ϕ + γ.

Equation (15) is a nonconserved phase-field equation, a generalization of the Allen-Cahn (Ginzburg-Landau) equation. Microstructural changes occur when ϕ̇ < 0. Moreover, with the bracket operator ⟨x⟩ = ½(x + |x|), the phase-field equation (15) results in

(16) −π_dis(ϕ, ∇ϕ, ϕ̇, ϵ) = ⟨ −∇ · ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂(∇ϕ) + ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂ϕ − γ ⟩,

and

(17) −π_r = ⟨ ∇ · ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂(∇ϕ) − ∂ψ̂(ϕ, ∇ϕ, ϵ)/∂ϕ + γ ⟩.

Here, we let the small-strain tensor, with spectral decomposition ϵ := Σ_{ι=1}^{n} ϵ_ι m_ι ⊗ m_ι, assume the decomposition

(18) ϵ = ϵ⁺ + ϵ⁻,

where

(19) ϵ⁺ := Σ_{ι=1}^{n} ⟨ϵ_ι⟩ m_ι ⊗ m_ι, with ϵ⁻ = ϵ − ϵ⁺.

The elastic free energy decomposes into

(20) ψ₀(ϵ) := ψ₀⁺(ϵ) + ψ₀⁻(ϵ),

where

(21) ψ₀⁺(ϵ) = ½ λ ⟨tr ϵ⟩² + µ tr((ϵ⁺)²), ψ₀⁻(ϵ) = ½ λ ⟨−tr ϵ⟩² + µ tr((ϵ⁻)²),

with λ and µ the Lamé coefficients. The terms of (20) are, respectively, the energies related to traction, and to compression and shear. In particular, we choose ψ̂ and π_dis according to

(22) ψ̂(ϕ, ∇ϕ, ϵ) = ψ₀⁺(ϵ) g(ϕ) + ψ₀⁻(ϵ) + f(ϕ) + ½ g_c ℓ² |∇ϕ|², π_dis(ϕ, ∇ϕ, ϕ̇, ϵ) = −β ϕ̇,

where f is a function of ϕ, and g_c > 0, ℓ > 0, β > 0 are problem-specific constants. Here, ψ₀ is the elastic free energy of the undamaged material, g_c is a parameter that depends on the Griffith energy G_c, ℓ carries dimensions of length, and β carries dimensions of (dynamic) viscosity. Moreover, f and g satisfy

(23) for all 0 ≤ ϕ ≤ 1: f(1) = f′(1) = 0, f′(ϕ) < 0, g(0) = 0, g(1) = 1, g′(ϕ) > 0,

where g(ϕ) is the degradation function.
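To make the energy split concrete, the following is a minimal NumPy sketch of the spectral strain decomposition (18)-(19) and the tensile/compressive split (20)-(21). The function names are illustrative assumptions, not the paper's implementation; the Macaulay bracket plays the role of ⟨·⟩ above.

```python
import numpy as np

def bracket_plus(x):
    """Macaulay bracket <x> = (x + |x|)/2."""
    return 0.5 * (x + abs(x))

def energy_split(eps, lam, mu):
    """Return (psi_plus, psi_minus) for a symmetric small-strain tensor eps,
    using the spectral decomposition eps = sum_i eps_i m_i (x) m_i."""
    eigvals, eigvecs = np.linalg.eigh(eps)
    # Tensile part: keep only positive principal strains (Eq. 19).
    eps_plus = sum(bracket_plus(l) * np.outer(v, v)
                   for l, v in zip(eigvals, eigvecs.T))
    eps_minus = eps - eps_plus
    tr = np.trace(eps)
    # Volumetric + deviatoric contributions of each branch (Eq. 21).
    psi_plus = 0.5 * lam * bracket_plus(tr) ** 2 + mu * np.trace(eps_plus @ eps_plus)
    psi_minus = 0.5 * lam * bracket_plus(-tr) ** 2 + mu * np.trace(eps_minus @ eps_minus)
    return psi_plus, psi_minus
```

Only `psi_plus` is degraded by g(ϕ) in (22), so a purely compressed state stores no crack-driving energy under this split.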
Granted that ψ̂ and π_dis are as defined in (22), the thermodynamic restrictions (9) and (12) yield

(24) ξ = g_c ℓ² ∇ϕ, π = −ψ₀⁺(ϵ) g′(ϕ) − f′(ϕ) + { −β ϕ̇ if ϕ̇ < 0; π_r if ϕ̇ = 0 },

with the superposed prime denoting differentiation with respect to ϕ. Using the particular constitutive relations (24) in the field equation (1), we obtain the Allen-Cahn (Ginzburg-Landau) equation

(25) { β ϕ̇ if ϕ̇ < 0; −π_r if ϕ̇ = 0 } = g_c ℓ² Δϕ − ψ₀⁺(ϵ) g′(ϕ) − f′(ϕ) + γ,

where Δ = ∇ · ∇ denotes the Laplacian and π_r is specified by the right-hand side of (25) in case ϕ̇ = 0. In what follows, we set γ = 0 and define the function f(ϕ) as

(26) f(ϕ) := ½ g_c (ϕ − 1)².

The strong form of the consistent problem of a deformable body undergoing dynamic fracture reads: find u ∈ R^d and ϕ ∈ R such that

(27) ρ0 ü − g(ϕ) ∇ · σ = ρ0 g in B × I,
ℓ² Δϕ − (ψ₀⁺(ϵ)/G_c) g′(ϕ) − f′(ϕ) = { η ϕ̇ if ϕ̇ < 0; −π_r if ϕ̇ = 0 } in B × I,
u(x, t) = u_D on ∂B_D × I,
σ · n = t̄ on ∂B_N × I,
∇ϕ · n = 0 on ∂B_N × I,
u(x, 0) = u₀ in B,
u̇(x, 0) = u̇₀ in B.

Here, I is the total time window, t̄ is the traction on an Euler-Cauchy cut with outward unit normal n, η = β/g_c, and g is a body force. In this strong form, we assume the Cauchy stress equals the active stress, σ = σ_a.

2.2. Model reduction by a history-field variable. The last set of equations produces a branched, case-wise solution that leads to larger and more complex numerical systems. We therefore introduce a history-field variable that enforces the irreversible nature of the process while reducing the model, following [38, 39, 16]. Let H denote the strain-energy history, given by

(28) H := max( ψ₀⁺(ϵ(x)), H_f(x) ), where H_f(x) = max_{s ∈ [0, t]} ψ₀⁺(ϵ(x, s)).

Thus, the phase-field equation reduces to

(29) β ϕ̇ = g_c ℓ² Δϕ − H g′(ϕ) − f′(ϕ).
Finally, the strong form of the reduced problem of a deformable body undergoing dynamic fracture reads: find u ∈ R^d and ϕ ∈ R such that

(30) ρ0 ü − g(ϕ) ∇ · σ = ρ0 g in B × I,
η ϕ̇ + (H/G_c) g′(ϕ) + f′(ϕ) − ℓ² Δϕ = 0 in B × I,
u(x, t) = u_D on ∂B_D × I,
σ · n = t̄ on ∂B_N × I,
∇ϕ · n = 0 on ∂B_N × I,
u(x, 0) = u₀ in B,
u̇(x, 0) = u̇₀ in B.

We define two degradation functions g: a quadratic one,

(31) g(ϕ) = ϕ²,

and a cubic one,

(32) g(ϕ) = S(ϕ³ − ϕ²) + 3ϕ² − 2ϕ³,

where S is a shape parameter that controls the sharpness of the phase-field interface.

Staggered generalized-α time integrator

We evolve the partial differential equation system (30) using a second-order generalized-α implicit time-marching method [40]. The weak problem statement is

(33) find (u, ϕ) ∈ U × P such that (v; ρ0 ü)_B + a(v; u, ϕ) = f(v) for all v ∈ V, and (q; η ϕ̇)_B + b(q; u, ϕ) = g(q) for all q ∈ Q,

where the Lagrangian (momentum) forms read

(34) a(v; u, ϕ) = (∇v, g(ϕ) σ(u))_B and f(v) = (v, ρ0 g)_B + (v, t̄)_{∂B_N},

while the Eulerian (phase-field) forms read

(35) b(q; u, ϕ) = ℓ² (∇q, ∇ϕ)_B + (q, (H/G_c) dg(ϕ)/dϕ + ϕ)_B and g(q) = (q, 1)_B.

The test space for the solid equilibrium equation is

(36) V := { u ∈ L²(B) | ∇u ∈ L²(B), u = u_D on ∂B_D },

with L² the space of square-integrable functions on the domain B, while the test space for the phase-field equation is

(37) Q := H¹(B) := { ϕ ∈ L²(B) | ∇ϕ ∈ L²(B) }.

The integrability of equation (33) requires that ü and f belong to the dual of the test space, V* = H⁻¹(B) by the Riesz representation theorem, while ϕ̇ ∈ Q*. With these definitions, we introduce the trial spaces

(38) U := { u ∈ V | ü ∈ V* } and P := { ϕ ∈ Q | ϕ̇ ∈ Q* }.

Solving the fully coupled system (33) requires a considerable amount of computational resources, mainly during space refinement.
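The irreversibility device of Eq. (28) and the degradation functions (31)-(32) above reduce to a few lines of code. The sketch below is illustrative; the helper names are ours, not the paper's.

```python
def update_history(psi_plus, H_prev):
    """Strain-energy history field (Eq. 28): monotonically non-decreasing,
    so past peak tensile energy keeps driving the crack."""
    return max(psi_plus, H_prev)

def g_quadratic(phi):
    """Quadratic degradation function (Eq. 31)."""
    return phi ** 2

def g_cubic(phi, S):
    """Cubic degradation function (Eq. 32); S controls interface sharpness."""
    return S * (phi ** 3 - phi ** 2) + 3.0 * phi ** 2 - 2.0 * phi ** 3
```

Both degradation functions satisfy g(0) = 0 and g(1) = 1, as required by (23).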
Thus, we use a staggered scheme that solves the solid equilibrium equations independently from the phase-field equations within a Picard iteration [41]. In every step, we first solve the equilibrium equation to obtain a displacement field; then, we solve the phase-field equation using these displacements. This staggered technique allows us to use different time integrators for each equation. We use a semi-discrete formulation in the time interval t₀ < t₁ < ... < t_n < ... < t_f and define the time-step size as ∆t_n = t_n − t_{n−1}. A generalized-α time-marching scheme for u(t_n), ü(t_n), ϕ(t_n), and ϕ̇(t_n), denoted u_n, ü_n, ϕ_n, and ϕ̇_n respectively, allows us to state (for more details, see [42]):

(39) u_{n+α_f^c} = u_n + α_f^c [[u]], ü_{n+α_m^c} = ü_n + α_m^c [[ü]],

for the second-order equation, while for the first-order equation we get [43]

(40) ϕ_{n+α_f^j} = ϕ_n + α_f^j [[ϕ]], ϕ̇_{n+α_m^j} = ϕ̇_n + α_m^j [[ϕ̇]],

where [[•]] = •_{n+1} − •_n is the variable's time increment. We expand u_{n+1} and ϕ_{n+1} in Taylor series as

(41) [[u]] = Σ_{j=1}^{2} (∆t^j/j!) ∂^j u/∂t^j |_n + β^c ∆t² [[ü]],
(42) [[ϕ]] = ∆t ( ϕ̇_n + γ^j [[ϕ̇]] ),

while the velocity update is

(43) [[u̇]] = ∆t ( ü_n + γ^c [[ü]] ).

We define the parameters of (39), (41), (42), and (43) in terms of the spectral radii of the amplification matrix, ρ∞^c for the second-order equation and ρ∞^j for the first-order equation, the only user-defined parameters, as [44]

(44) α_f^c = 1/(1 + ρ∞^c), α_f^j = 1/(1 + ρ∞^j), α_m^c = (2 − ρ∞^c)/(ρ∞^c + 1), α_m^j = ½ (3 − ρ∞^j)/(ρ∞^j + 1), γ^c = ½ + α_m^c − α_f^c, γ^j = ½ + α_m^j − α_f^j, β^c = ¼ (1 + α_m^c − α_f^c)².

As we solve the equations in a staggered manner, we can choose the equations' spectral radii independently. Here, we set ρ∞ = ρ∞^c = ρ∞^j.
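The parameter table (44) maps the single user input ρ∞ to all integrator coefficients; a minimal sketch (hypothetical helper names):

```python
def galpha_second_order(rho):
    """Generalized-alpha parameters for the second-order (momentum) equation,
    from the spectral radius rho of the amplification matrix (Eq. 44)."""
    a_f = 1.0 / (1.0 + rho)
    a_m = (2.0 - rho) / (1.0 + rho)
    gamma = 0.5 + a_m - a_f
    beta = 0.25 * (1.0 + a_m - a_f) ** 2
    return a_f, a_m, gamma, beta

def galpha_first_order(rho):
    """Generalized-alpha parameters for the first-order (phase-field) equation."""
    a_f = 1.0 / (1.0 + rho)
    a_m = 0.5 * (3.0 - rho) / (1.0 + rho)
    gamma = 0.5 + a_m - a_f
    return a_f, a_m, gamma
```

For ρ∞ = 1 both sets collapse to midpoint-rule-like coefficients (α_f = α_m = γ = ½), while ρ∞ < 1 introduces high-frequency damping.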
The staggered generalized-α method applied to (33) results in the discrete system

(45) (v; ρ0 ü^{k+1}_{n+α_m^c})_B + a(v; u^{k+1}_{n+α_f^c}, ϕ^k_{n+1}) = f_{n+α_f^c}(v) for all v ∈ V, and (q; η ϕ̇^{k+1}_{n+α_m^j})_B + b(q; u^{k+1}_{n+1}, ϕ^{k+1}_{n+α_f^j}) = g_{n+α_f^j}(q) for all q ∈ Q,

where the superscript k denotes the staggered iteration. We linearize the functionals as

(46) a(v; u^{k+1}_{n+α_f^c}, ϕ^k_{n+1}) = a(v; u_n, ϕ^k_{n+1}) + α_f^c →a(v, [[u^{k+1}]]; u_n, ϕ^k_{n+1}),
(47) b(q; u^{k+1}_{n+1}, ϕ^{k+1}_{n+α_f^j}) = b(q; u^{k+1}_{n+1}, ϕ_n) + α_f^j →b(q, [[ϕ^{k+1}]]; u^{k+1}_{n+1}, ϕ_n),

where →• denotes the Gâteaux derivative of the functional, that is,

(48) →a(v, [[u]]; u, ϕ) = d/dε a(v; u + ε[[u]], ϕ)|_{ε=0}, and
(49) →b(q, [[ϕ]]; u, ϕ) = d/dε b(q; u, ϕ + ε[[ϕ]])|_{ε=0}.

Substituting (46)-(47) into (45), the semi-discrete problem reads: given u_n, u̇_n, ü_n, ϕ_n, ϕ̇_n, and a time-step size ∆t, find ([[u]], [[ϕ]]) ∈ U × P such that

(50) (v; ρ0 [[u^{k+1}]])_B + β^c ∆t² (α_f^c/α_m^c) →a(v, [[u^{k+1}]]; u_n, ϕ^k_{n+1}) = f^{Gα}(v) for all v ∈ V,
(51) (q; η [[ϕ^{k+1}]])_B + γ^j ∆t (α_f^j/α_m^j) →b(q, [[ϕ^{k+1}]]; u^{k+1}_{n+1}, ϕ_n) = g^{Gα}(q) for all q ∈ Q,

with the right-hand sides

(52) f^{Gα}(v) = (β^c ∆t²/α_m^c) f_{n+α_f^c}(v) + ρ0 ( v; (α_m^c/(2β^c) − 1) ü_n + (α_m^c/(β^c ∆t)) u̇_n )_B − a(v; u_n, ϕ^k_{n+1}),
(53) g^{Gα}(q) = (γ^j ∆t/α_m^j) g_{n+α_f^j}(q) − b(q; u^{k+1}_{n+1}, ϕ_n) + η (α_m^j/γ^j − 1)(q; ϕ̇_n)_B.

Algorithm 1 summarizes the staggered solution of the resulting formulation, along with the space-time adaptive scheme.
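The staggered (Picard) iteration behind (45) can be sketched abstractly: alternately solve momentum for u with ϕ frozen, then the phase field for ϕ with u frozen, until both increments fall below tol_stg. The solver callables below are stand-ins for the actual finite-element solves, so this is a structural sketch only.

```python
def staggered_step(u0, phi0, solve_momentum, solve_phase, tol_stg, max_iter=50):
    """One time step of the staggered scheme: Picard-iterate the two solves
    until max(|du|, |dphi|) <= tol_stg. Returns (u, phi, iterations)."""
    u, phi = u0, phi0
    for k in range(max_iter):
        u_new = solve_momentum(u, phi)      # momentum solve, phi frozen
        phi_new = solve_phase(u_new, phi)   # phase-field solve, u frozen
        du = abs(u_new - u)
        dphi = abs(phi_new - phi)
        u, phi = u_new, phi_new
        if max(du, dphi) <= tol_stg:
            break
    return u, phi, k + 1
```

With scalar toy "solvers" the loop contracts geometrically, mimicking how the coupled FE iteration converges.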
Algorithm 1. Space-and-time adaptivity: staggered solution of the equilibrium and phase-field equations.
Data: u_n, u_{n−1}, u_{n−2}, u̇_n, ü_n, ∆t_{n+1}, ∆t_n, ∆t_{n−1}, and an initial mesh M⁰_{n+1}.
Result: updated variables u_{n+1}, u̇_{n+1}, ü_{n+1}, ∆t_{n+1}, and the final mesh M_{n+1}.
1. While t_{n+1} ≤ t_f:
2.   Initialize the mesh M⁰_{n+1}.
3.   While ε_{n+1} ≥ tol_mesh, iterate j ← j + 1:
4.     Solve the staggered system (50)-(51); calculate δu = ‖u^{k+1}_{n+1} − u^k_{n+1}‖ and δϕ = ‖ϕ^{k+1}_{n+1} − ϕ^k_{n+1}‖.
5.     If max(δu, δϕ) ≤ tol_stg: calculate the temporal error τ^{Gα}(t_n + ∆t_{n+1}) with (56) and E(t_n + ∆t_{n+1}) with (57).
6.       If E(t_n + ∆t_{n+1}) ≤ tol_max: compute the space error ε_{n+1} by solving (61); if ε_{n+1} ≥ tol_mesh, mark elements using ε_{n+1}, refine, and update the mesh M^{j+1}_{n+1} ← M^j_{n+1}(ε_{n+1}).
7.   Update the current time step t_{n+1} ← t_n + ∆t_{n+1}; update u_{n+1}, u̇_{n+1}, and ü_{n+1}; shift u_n ← u_{n+1}, u̇_n ← u̇_{n+1}, and ü_n ← ü_{n+1}.
8.   If E(t_n + ∆t_{n+1}) < tol_min, increase the step for the next increment, ∆t_{n+1} ← F(E(t_{n+1}), ∆t_{n+1}, tol); if E exceeds tol_max, reduce the time-step size ∆t_{n+1} ← F(E(t_{n+1}), ∆t_{n+1}, tol), go to step 3, and restart the staggered solution with the new time step.

Fully automatic space-and-time adaptivity

Temporal adaptivity. Since the phase-field equation is not always time-dependent, we propose an adaptive scheme in terms of the model's solid equilibrium equation, extending the concept proposed in [27] to second-order time derivatives. We express the local truncation error of the generalized-α time integrator using a Taylor expansion as

(54) τ^{Gα}(t_{n+1}) = (∆t²_{n+1} (∆t_n + ∆t_{n−1})/6) u‴(t_{n+1}) + O(∆t⁴).

We save the solutions u_{n+1}, u_n, u_{n−1}, and u_{n−2} from the generalized-α scheme; then we approximate the third derivative in (54) using the third-order backward difference formula (BDF3)

(55) u‴(t_{n+1}) ≈ (1/∆t²_{n+1}) [ (u_{n+1} − u_n)/∆t_{n+1} − (1 + ∆t_{n+1}/∆t_n)(u_n − u_{n−1})/∆t_n + ∆t_{n+1}/(∆t_n ∆t_{n−1}) (u_{n−1} − u_{n−2}) ].
We obtain the local truncation error of the generalized-α scheme for equations with second-order time derivatives by substituting (55) into (54):

(56) τ^{Gα}(t_{n+1}) ≈ ((∆t_n + ∆t_{n−1})/6) [ (u_{n+1} − u_n)/∆t_{n+1} − (1 + ∆t_{n+1}/∆t_n)(u_n − u_{n−1})/∆t_n + ∆t_{n+1}/(∆t_n ∆t_{n−1}) (u_{n−1} − u_{n−2}) ],

where ∆t_{n+1} = t_{n+1} − t_n, ∆t_n = t_n − t_{n−1}, and ∆t_{n−1} = t_{n−1} − t_{n−2} are the time increments of the previous steps. Finally, we compute a weighted local truncation error and use it as an error indicator [45]:

(57) E(t_{n+1}) = [ (1/N) Σ_{i=1}^{N} ( τ^{Gα}_i(t_{n+1}) / ( ρ_abs + ρ_rel max(|u_{n+1}|_i, |u_{n+1}|_i + |τ^{Gα}(t_{n+1})|_i) ) )² ]^{1/2},

where ρ_abs and ρ_rel are user-defined parameters called the absolute and relative tolerances, respectively. For all the examples presented in this paper, ρ_abs = ρ_rel = 10⁻⁴. The time-step adaptivity simply follows

(58) ∆t^{k+1}_{n+1}(t_{n+1}) = F(E(t_{n+1}), ∆t^k_{n+1}, tol) = ρ_tol ( tol/E(t_{n+1}) )^{1/2} ∆t^k_{n+1},

where k is the time-step refinement level and ρ_tol is a safety-factor parameter that we set to 0.9 in all applications. We summarize the time-adaptive scheme as a global procedure in Algorithm 1, where we define two tolerances, tol_max and tol_min, that limit the range of reduction or increase of the time-step size.

Spatial adaptivity. We propose an adaptive residual-minimization method, which measures the error in a proper dual norm. We use low-order elements to approximate the solution and enrich the solution space with bubbles to estimate the solution error. These bubbles allow us to localize the error measurement and guide refinement. The phase-field equation and its characteristic length bound the mesh size; thus, our procedure seeks to avoid mesh bias as the crack propagates by estimating the phase-field error. The phase field is a scalar function; thus, the spatial error control uses the smallest representative equation system to estimate the error, delivering significant computational savings.
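The error estimate (56) and step controller (58) are cheap to evaluate, since they reuse four stored solutions. A minimal scalar sketch (names are ours, and a scalar u stands in for the nodal vector):

```python
def truncation_error(u3, u2, u1, u0, dt2, dt1, dt0):
    """BDF3-based local truncation error of Eq. (56).
    u3 is the newest solution (t_{n+1}); dt2 = t_{n+1}-t_n,
    dt1 = t_n-t_{n-1}, dt0 = t_{n-1}-t_{n-2}."""
    term = ((u3 - u2) / dt2
            - (1.0 + dt2 / dt1) * (u2 - u1) / dt1
            + dt2 / (dt1 * dt0) * (u1 - u0))
    return (dt1 + dt0) / 6.0 * term

def new_dt(err, dt, tol, rho_tol=0.9):
    """Step-size update of Eq. (58): scale dt by sqrt(tol/err)
    with a safety factor rho_tol."""
    return rho_tol * (tol / err) ** 0.5 * dt
```

For data sampled from a quadratic trajectory the third derivative vanishes, so the estimate is (up to round-off) zero, which is a quick sanity check of the formula.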
First, we introduce a conforming partition M_h = {T_i}_{i=1}^{N} of the domain B into N disjoint finite elements T_i, such that

(59) B_h = ∪_{i=1}^{N} T_i satisfies B = int(B_h).

We define P^p(T_i), with p ≥ 0, as the set of polynomials of degree p on the element T_i, and consider the polynomial space

(60) P^p(B_h) = { ϕ ∈ L²(B) | ϕ|_{T_i} ∈ P^p(T_i), for all i = 1, ..., N }.

Let Q^B_h be a finite-dimensional space built with functions on B_h. The residual-minimization problem for the phase-field equation under the generalized-α time integrator reads: find (ε_h, ϕ_h) ∈ P^B_h × P_h such that

(61) (w_h; ε_h)_{Q^{∆t}_h} + b^{Gα}_h(w_h; u_{h,n+1}, ϕ_h) = g^{Gα}_h(w_h) for all w_h ∈ Q^B_h, and b^{Gα}_h(q_h; u_{h,n+1}, ε_h) = 0 for all q_h ∈ Q_h,

where P^B_h is the discrete bubble-enriched space

(62) P^B_h = { ϕ_h ∈ Q^B_h | ϕ̇_h ∈ Q^{B*}_h },

and (·; ·)_{Q^{∆t}_h} is the inner product induced by the norm

(63) ‖ϕ_h‖²_{Q^{∆t}_h} = η ‖ϕ_h‖²_{L²} + (α^j_f ∆t γ^j/α^j_m) ‖ϕ_h‖²_{Q_h}, with ‖ϕ_h‖²_{Q_h} = (1 + l₀/G_c) ‖ϕ_h‖²_{L²} + l₀² ‖∇ϕ_h‖²_{L²}.

The functional

(64) b^{Gα}_h(w_h; u_{h,n+1}, ϕ_h) = (w_h; η[[ϕ_h]])_B + γ^j ∆t (α^j_f/α^j_m) →b_h(w_h, [[ϕ_h]]; u_{h,n+1}, ϕ_{h,n})

is the left-hand side of (51), and ε_h = R⁻¹_{Q^B_h} ( g^{Gα}_h(w_h) − b^{Gα}_h(w_h; u_{h,n+1}, ϕ_h) ) ∈ Q^B_h is an error representation. The residual representation also has the property

(65) ‖ε_{h,n+1}‖_{Q^{∆t}_h} = ‖B^{Gα}_h (θ_{h,n+1} − ϕ_{h,n+1})‖_{(Q^{∆t}_h)*},

where θ_{h,n+1} ∈ P^B_h is the solution of the phase-field equation (51) in the bubble-enriched space, while B^{Gα}_h : Q^B_h → Q^{B*}_h is defined as [33]

(66) ⟨w_h; B^{Gα}_h z⟩_{Q^B_h × Q^{B*}_h} = b^{Gα}_h(w_h; u_h, z).

The bubble-enriched space P^B_h ⊂ Q^B_h allows us to localize the error estimation to mark and refine the mesh in terms of the phase-field error. Figure 1 (a) shows a linear triangular element, while Figures 1 (b) and (c) show a third-order bubble function that enriches the discrete space.
In all examples, we approximate all unknown fields with linear functions and then enrich these spaces with bubbles to estimate the spatial error.

4.3. Adaptive space-and-time procedure. This section describes the adaptive procedure to provide the reader with all the necessary tools for a straightforward implementation. We implement the solver in the open-source package FEniCS [46], combined with the FIAT package to deal with different quadratures [47]. Algorithm 1 describes the general calculation layout for the proposed model, yielding a reliable, user-independent numerical method for simulating dynamic fracture processes. We assume the user provides the algorithm with an initial mesh M⁰_{n+1} as an iteration starting point, together with an initial time-step size ∆t⁰ to start the calculation procedure. Thereon, the time-adaptive approach bounds the temporal error between tol_min and tol_max. Furthermore, the user needs to specify two more tolerances: tol_stg, which controls the staggered solution of Equations (50) and (51), and tol_mesh, which controls the spatial mesh-refinement process. In all numerical experiments, we select for refinement those elements within a certain percentage of the element with maximum error. We summarize the approach as follows:

(67) refine all T_i ∈ M_h such that ε_{n+1}(T_i) ≥ χ max(ε_{n+1}),

where χ ∈ [0, 1] is a user-defined parameter. When χ approaches 1, we refine fewer elements in each iteration, while as it tends to 0, we refine most elements. Our experience shows that this element-selection method delivers the most efficient discretizations for χ ∈ [0.1, 0.3].

Numerical Experiments

5.1. Significance of an error-based time-adaptive scheme. In this section, we study the dynamic branching problem, focusing on the impact of the time-adaptive scheme in avoiding spurious solutions. We use a fixed, fine mesh to isolate the influence of the time-adaptive scheme on the process.
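Returning to the refinement rule of Eq. (67), the maximum-strategy marking is a one-liner over the per-element error indicators; the sketch below is illustrative (the function name is ours).

```python
def mark_elements(errors, chi):
    """Maximum-strategy marking (Eq. 67): return the indices of all elements
    whose error indicator is at least chi times the largest indicator."""
    threshold = chi * max(errors)
    return [i for i, e in enumerate(errors) if e >= threshold]
```

Smaller χ marks more elements per iteration; χ in [0.1, 0.3] matches the range the text reports as most efficient.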
Figure 2 also includes the material parameters: elastic modulus E = 208 MPa, Poisson ratio ν = 0.3, fracture energy G_c = 0.5 N/m², and density ρ = 2,400 kg/m³. We set the characteristic length equal to the minimum element size, ℓ = 5 mm, and use a quadratic degradation function; we refer to the minimum element size as the mesh size. In particular, we set the staggered iteration-number threshold to 10, reducing the time-step size to satisfy the bound. Figure 3 (b) shows the error E(t_{n+1}) and the time-step-size ∆t evolution. We divide this plot into three well-defined stages: stage 1, where crack nucleation happens; stage 2, where a single crack propagates and which finishes when it branches in two; and stage 3, where two independent cracks propagate. The final crack configuration at time t = 25 ms is asymmetric, even though the problem is symmetric; the asymmetry is due to error accumulation during the time-marching process. The most significant increase in the error evolution occurs during stage 1. This simple test shows that time-step control guided by iteration counts can lead to unreliable results. In contrast, Figure 4 shows the resulting phase field when we use the error-based approach of Section 4 to adapt the time-step size, with tol_max = 10⁻³. Figures 4 (a)-(c) display three snapshots of the phase-field solution on the finest mesh at each stage, for times t = 5 ms, t = 10 ms, and t = 25 ms, respectively. Figure 4 (d) shows the error and time-step-size evolution. The figure shows that most of the error accumulation occurs during the first stage, which results in a significant time-step-size reduction during the first 10 ms of the simulation. Beyond this point, the time-step size remains unchanged until the end of the simulation, when the crack pathway is already traced. This simulation demonstrates that an error-based time-adaptive approach delivers reliable results, with a symmetric final phase-field path. 5.2.
Crack branching with space-and-time adaptivity. In this numerical experiment, we use our space-and-time adaptive scheme to simulate the problem that Figure 2 shows. The initial regular mesh has 4,096 elements with a mesh size of 45 mm; we set the localization length to ℓ = 5 mm and the maximum tolerance for the time integrator to tol_max = 5 × 10⁻³. Figure 5 (a) shows the time evolution of the number of elements (blue) and the minimum mesh size (red). The final mesh has a minimum element size of 0.95 mm and a total of 115,653 elements, fewer than half the number we used in the previous example. Figure 5 (b) shows the temporal evolution of the error E(t_{n+1}) (blue) and the time-step size ∆t_{n+1} (red). We divide the evolution into four well-defined stages. The first stage represents the elastic regime, where a coarse mesh can appropriately describe the evolution. The second stage contains the crack onset, where the algorithm refines the mesh to capture the notch-tip evolution, wherein we need a smaller element size; we introduce a refinement cut-off in this case to avoid unnecessarily small elements. The third stage represents single-branch propagation, where the element number increases faster while the time-step size also decreases considerably. The fourth stage depicts the crack branching and the propagation of two fractures; in this last stage, the algorithm deploys the maximum number of elements.

5.3. Dissipation analysis and initial-mesh sensitivity. We now focus on the bias induced by the initial mesh, which we analyze by comparing structured versus unstructured meshes. We consider a cubic degradation function in this case, inducing the fracture with a traction force σ(t) = 8 kN. We compare the dissipated energy for both mesh types, computed following [39], that is,

(68) D = ∫_{B₀} (G_c/2) [ (1/ℓ)(1 − ϕ)² + ℓ |∇ϕ|² ] dB₀.
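The dissipated-energy functional (68) can be approximated with any quadrature; the sketch below uses a simple 1D midpoint rule over nodal phase-field values, purely for illustration of the integrand.

```python
def dissipated_energy(phi, dx, g_c, ell):
    """Approximate D = int Gc/2 [ (1-phi)^2/ell + ell*|phi'|^2 ] dx
    on a uniform 1D grid of nodal values phi with spacing dx."""
    total = 0.0
    for i in range(len(phi) - 1):
        pm = 0.5 * (phi[i] + phi[i + 1])      # midpoint phase-field value
        grad = (phi[i + 1] - phi[i]) / dx     # cell-wise gradient
        total += 0.5 * g_c * ((1.0 - pm) ** 2 / ell + ell * grad ** 2) * dx
    return total
```

An intact body (ϕ ≡ 1) dissipates nothing, and a fully damaged region contributes G_c/(2ℓ) per unit length, consistent with (68).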
Figure 8 (a) shows the evolution for an unstructured mesh, starting with 2,464 elements. By construction, this mesh contains highly irregular elements near the notch tip; see, for example, Figure 8 (b). We use the same numerical and material parameters as above, with a cubic degradation function instead of the quadratic one of the previous cases. In both cases, we force the refinement procedure to resolve the phase-field interface with at least five elements. Furthermore, Figure 8 (b) shows the free-fracture-energy evolution for both meshes. Although the structured mesh starts propagating the fracture after the unstructured one, the dissipation velocity is higher for the structured mesh. Our method is irreversible, that is,

(69) Ḋ(t_{n+1}) ≥ 0,

implying that the free fracture energy of the system at time t_{n+1} is lower than or equal to that at the previous time step. Figure 10 shows the snapshot sequence for both initial meshes, while Figure 11 shows the phase-field evolution for both cases. The final mesh and phase-field configurations are almost identical in the two cases, demonstrating the method's robustness even when starting from highly irregular meshes.

Conclusions

We present a space-and-time adaptive method for dynamic fracture problems based on Eulerian-Lagrangian formulations. First, we describe a thermodynamically consistent fracture model based on phase-field theory, which allows us to formulate the problem from an energetic point of view. Then, we describe a staggered solution scheme that uses second-order generalized-α time-marching methods for the first- and second-order time derivatives. Furthermore, we detail a time-adaptive method that estimates the temporal error using backward difference formulas built from previously computed time steps and the generalized-α updates.
This strategy results in a simple equation that calculates the truncation error from previous solutions of the equilibrium equations; a weighted estimate of this truncation error allows us to design an error-based time-adaptive process. We combine this time-adaptive method with a mesh-adaptive method based on residual minimization of the phase-field equation to deliver a fully automatic space-and-time adaptive strategy for dynamic fracture simulation. We use a bubble-enriched finite element space to estimate the residual error in a proper norm. The general algorithm we propose solves all equations in the proposed residual-minimization scheme at each time step. We detail important algebraic aspects and the refinement criteria we consider for this class of dynamic fracture-branching problems. We use three challenging dynamic fracture-propagation problems to demonstrate the efficiency and robustness of our method. First, we study the robustness of our temporal error estimation for time adaptivity in a notched plate subjected to a traction that induces a dynamic cracking process. We use a regular mesh with 262,144 elements and a mesh size of 5 mm. In particular, we compare two time-adaptive strategies: one driven by the iteration count against our error-based approach. Our tests show that the iteration-count scheme leads to asymmetrical solutions in symmetric problems, whereas our error-based time-adaptive strategy is robust and delivers symmetric solutions on the same mesh. The second example solves the same problem and studies the influence of the adaptive mesh strategy on the simulation solutions. In this case, we start from a coarse mesh and allow the mesh adaptivity to track the crack path during the fracture's dynamic evolution. Our space-and-time adaptive scheme obtains a final mesh with 115,653 elements and a mesh size of 0.95 mm; this final mesh uses less than half the elements the regular mesh requires.
The method significantly reduces the overall computational cost while delivering a better resolution of the crack path and the crack-tip dynamics. The last example analyzes the energy dissipation of the adaptive strategy, showing that the proposed algorithm respects the problem's irreversible nature. In particular, we consider a cubic degradation function and study the dynamic crack evolution that structured and unstructured meshes deliver. We use meshes with a similar number of degrees of freedom. We build the structured mesh to be regular with smooth element-size transitions. In contrast, the unstructured mesh is highly irregular, the product of automatic mesh generation. Nevertheless, the mesh bias introduced by the initial mesh is negligible; our formulation deals even with initial meshes that contain almost flat elements. In conclusion, we show that our method is robust, accurate, and computationally efficient for fracture problems involving inertial effects in the context of Euler-Lagrangian formulations.

Algorithm 1. Space-and-time adaptivity: staggered solution of the equilibrium and phase-field equations (see Section 4).
Figure 1. Low-order space Q and bubble-enriched space Q^B in 2D: (a) space Q; (b) space Q^B; (c) bubble shape function.
Figure 2. Dynamic fracture in a notched plate: problem setup.

Figure 2 shows a notched plate of dimension 1 m × 2 m that contains an initial planar notch of 50 cm. We subject the plate to a vertical traction force σ(t) = 10 kN and use a constant uniform mesh of 262,144 elements, with 263,938 degrees of freedom for the displacement field and 131,969 for the phase field, totaling 395,907 for the overall problem.

Figure 3. Dynamic fracture branching, time adaptivity based on iteration count: phase-field snapshots and the evolution of the error E(t_{n+1}) and the time-step size ∆t.

Figure 3 (a) shows the final crack pattern when we control ∆t by the iteration number in the staggered solution scheme.

Figure 4. Dynamic fracture branching, time adaptivity based on error control: phase field at t = 10 ms and t = 25 ms, and the evolution of the error E(t_{n+1}) and the time-step size ∆t.
Figure 5. Space-and-time adaptivity: evolution of the error E(t_{n+1}) and the time-step size ∆t.
Figure 6. Crack-tip velocity: time adaptivity versus space-and-time adaptivity.

Figure 6 compares the crack-tip velocity between the time-adaptive example presented in the previous section and the space-and-time adaptive case. The figures show similar results with four well-defined sections: elastic regime, crack nucleation, and propagation of one and then two branches. Also, the crack propagates at around 60% of the Rayleigh wave speed of the solid body. Furthermore, Figure 7 shows a snapshot timeline of the simulation results, with the phase field and the computed mesh in each case. For time t = 3 ms, Figures 7 (c) and (d) show the crack nucleation, while at t = 10 ms, Figures 7 (e) and (f) show the crack branching. Finally, Figures 7 (g) to (j) show the final stages of the problem, where two cracks propagate, producing a symmetric crack profile by properly resolving the dynamics locally at the crack tips.

Figure 7. Space-and-time adaptive simulation of branching fracture: mesh and phase-field evolution at t = 0, 3, 10, 15, and 28 ms.
Figure 8. Space-and-time adaptive simulation of branching fracture (cubic degradation function): mesh and phase-field evolution.

Figure 8 (a) shows the temporal evolution of the element number and the mesh size for both types of meshes.
In the case of the structured mesh, the refinement starts earlier than in the unstructured one, since the crack tip's resolution needs to improve to capture the crack onset accurately. The final element numbers are 195,273 for the unstructured mesh and 166,529 for the structured one.

Figure 9. Structured versus unstructured (highly irregular) meshes: initial meshes with notch-tip details; (a) element-number and mesh-size evolution; (b) free-fracture-energy evolution.
Figure 10. Space-and-time adaptive simulation of branching fracture (cubic degradation function): mesh evolution for the unstructured and structured meshes at t = 130, 140, 160, and 173 ms.
Figure 11. Space-and-time adaptive simulation of branching fracture (cubic degradation function): phase-field evolution for the unstructured and structured meshes at t = 130, 140, 160, and 173 ms.

References

K. Ravi-Chandar and W. G. Knauss. An experimental investigation into dynamic fracture: I. Crack initiation and arrest. International Journal of Fracture, 25(4):247-262, 1984.

K. Ravi-Chandar and W. G. Knauss. An experimental investigation into dynamic fracture: II. Microstructural aspects. International Journal of Fracture, 26:65-80, 1984.

K. Ravi-Chandar and W. G. Knauss. An experimental investigation into dynamic fracture: III. On steady-state crack propagation and crack branching. International Journal of Fracture, 26:141-154, 1984.

K. Ravi-Chandar and W. G. Knauss. An experimental investigation into dynamic fracture: IV. On the interaction of stress waves with propagating cracks. International Journal of Fracture, 26:189-200, 1984.

M. Ramulu and A. S. Kobayashi. Mechanics of crack curving and branching - a dynamic fracture analysis. International Journal of Fracture, 27:187-201, 1985.

G. T. Camacho and M. Ortiz. Computational modelling of impact damage in brittle materials. International Journal of Solids and Structures, 33(20):2899-2938, 1996.

A. Pandolfi and M. Ortiz. An efficient adaptive procedure for three-dimensional fragmentation simulations. Engineering with Computers, 18:148-159, 2002.

T. Belytschko, H. Chen, J. Xu, and G. Zi. Dynamic crack propagation based on loss of hyperbolicity and a new discontinuous enrichment. International Journal for Numerical Methods in Engineering, 58(12):1873-1905, 2003.

J.-H. Song, P. M. A. Areias, and T. Belytschko. A method for dynamic crack and shear band propagation with phantom nodes. International Journal for Numerical Methods in Engineering, 67(6):868-893, 2006.

J.-H. Song and T. Belytschko. Cracking node method for dynamic fracture with finite elements. International Journal for Numerical Methods in Engineering, 77(3):360-385, 2009.

F. Armero and C. Linder. Numerical simulation of dynamic fracture using finite elements with embedded discontinuities. International Journal for Numerical Methods in Engineering, 160:119-141, 2009.

C. Linder and F. Armero. Finite elements with embedded branching. Finite Elements in Analysis and Design, 45(4):280-293, 2009.

G. A. Francfort and J.-J. Marigo. Revisiting brittle fracture as an energy minimization problem. Journal of the Mechanics and Physics of Solids, 46(8):1319-1342, 1998.

B. Bourdin, G. A. Francfort, and J.-J. Marigo. Numerical experiments in revisited brittle fracture. Journal of the Mechanics and Physics of Solids, 48(4):797-826, 2000.

M. J. Borden, C. V. Verhoosel, M. A. Scott, T. J. R. Hughes, and C. M. Landis. A phase-field description of dynamic brittle fracture. Computer Methods in Applied Mechanics and Engineering, 217-220:77-95, 2012.

M. Hofacker and C. Miehe. A phase field model of dynamic fracture: Robust field updates for the analysis of complex crack patterns. International Journal for Numerical Methods in Engineering, 93(3):276-301, 2013.

K. Paul, C. Zimmermann, T. X. Duong, and R. A. Sauer. Isogeometric continuity constraints for multi-patch shells governed by fourth-order deformation and phase field models. Computer Methods in Applied Mechanics and Engineering, 370:113219, 2020.

A. Pandolfi, K. Weinberg, and M. Ortiz. A comparative accuracy and convergence study of eigenerosion and phase-field models of fracture. 2021.

T. Heister, M. F. Wheeler, and T. Wick. A primal-dual active set method and predictor-corrector mesh adaptivity for computing fracture propagation using a phase-field approach. Computer Methods in Applied Mechanics and Engineering, 290:466-495, 2015.

P. Areias, T. Rabczuk, and M. A. Msekh. Phase-field analysis of finite-strain plates and shells including element subdivision. Computer Methods in Applied Mechanics and Engineering, 312:322-350, 2016.

K. Paul, C. Zimmermann, K. K. Mandadapu, T. J. R. Hughes, C. M. Landis, and R. A. Sauer. An adaptive space-time phase field formulation for dynamic fracture of brittle shells based on LR NURBS. Computational Mechanics, 65:1039-1062, 2020.

H. Gómez, V. M. Calo, Y. Bazilevs, and T. J. R. Hughes. Isogeometric analysis of the Cahn-Hilliard phase-field model. Computer Methods in Applied Mechanics and Engineering, 197(49):4333-4352, 2008.

H. Gomez and T. J. R. Hughes. Provably unconditionally stable, second-order time-accurate, mixed variational methods for phase-field models. Journal of Computational Physics, 230(13).
Journal of Computational Physics, 230(13):5310-5327, 2011. Isogeometric analysis of the advective cahn-hilliard equation: Spinodal decomposition under shear flow. Ju Liu, Luca Dedè, A John, Micheal J Evans, Thomas J R Borden, Hughes, Journal of Computational Physics. 242Ju Liu, Luca Dedè, John A Evans, Micheal J Borden, and Thomas J.R. Hughes. Isogeometric analysis of the advective cahn-hilliard equation: Spinodal decomposition under shear flow. Journal of Computational Physics, 242:321-350, 2013. Second order schemes and time-step adaptivity for allen-cahn and cahn-hilliard models. Francisco Guillén, - González, Giordano Tierra, Computers and Mathematics with Applications. 688Francisco Guillén-González and Giordano Tierra. Second order schemes and time-step adaptivity for allen-cahn and cahn-hilliard models. Computers and Mathematics with Applications, 68(8):821-846, 2014. Isogeometric analysis of the cahn-hilliard equation -a convergence study. Markus Kästner, Philipp Metsch, René De Borst, Journal of Computational Physics. 305Markus Kästner, Philipp Metsch, and René de Borst. Isogeometric analysis of the cahn-hilliard equation -a convergence study. Journal of Computational Physics, 305:360-371, 2016. An energy-stable time-integrator for phase-field models. P Vignal, N Collier, L Dalcin, D L Brown, V M Calo, Computer Methods in Applied Mechanics and Engineering. 316P. Vignal, N. Collier, L. Dalcin, D.L. Brown, and V.M. Calo. An energy-stable time-integrator for phase-field models. Computer Methods in Applied Mechanics and Engineering, 316:1179-1214, 2017. Automatically adaptive, stabilized finite element method via residual minimization for heterogeneous, anisotropic advection-diffusion-reaction problems. Roberto J Cier, Sergio Rojas, Victor M Calo, Computer Methods in Applied Mechanics and Engineering. 385114027Roberto J. Cier, Sergio Rojas, and Victor M. Calo. 
Automatically adaptive, stabilized finite element method via residual minimization for heterogeneous, anisotropic advection-diffusion-reaction problems. Computer Methods in Applied Me- chanics and Engineering, 385:114027, 2021. Automatically adaptive stabilized finite elements and continuation analysis for compaction banding in geomaterials. Roberto J Cier, Thomas Poulet, Sergio Rojas, Manolis Veveakis, Victor M Calo, International Journal for Numerical Methods in Engineering. Roberto J. Cier, Thomas Poulet, Sergio Rojas, Manolis Veveakis, and Victor M. Calo. Automatically adaptive stabilized finite elements and continuation analysis for compaction banding in geomaterials. International Journal for Numerical Methods in Engineering, 2021. A nonlinear weak constraint enforcement method for advectiondominated diffusion problems. Roberto J Cier, Sergio Rojas, Victor M Calo, 2021. Special issue honoring G.I. Taylor Medalist Prof. Arif Masud. 112103602Mechanics Research CommunicationsRoberto J. Cier, Sergio Rojas, and Victor M. Calo. A nonlinear weak constraint enforcement method for advection- dominated diffusion problems. Mechanics Research Communications, 112:103602, 2021. Special issue honoring G.I. Taylor Medalist Prof. Arif Masud. Goal-oriented adaptivity for a conforming residual minimization method in a dual discontinuous galerkin norm. Sergio Rojas, David Pardo, Pouria Behnoudfar, Victor M Calo, Computer Methods in Applied Mechanics and Engineering. 377113686Sergio Rojas, David Pardo, Pouria Behnoudfar, and Victor M. Calo. Goal-oriented adaptivity for a conforming residual minimization method in a dual discontinuous galerkin norm. Computer Methods in Applied Mechanics and Engineering, 377:113686, 2021. A stable discontinuous galerkin based isogeometric residual minimization for the stokes problem. Marcin Loś, Sergio Rojas, Maciej Paszyński, Ignacio Muga, Victor M Calo, Computational Science -ICCS 2020. Valeria V. Krzhizhanovskaya, Gábor Závodszky, Michael H. 
Lees, Jack J. Dongarra, Peter M. A. Sloot, Sérgio Brissos, and João TeixeiraChamSpringer International PublishingMarcin Loś, Sergio Rojas, Maciej Paszyński, Ignacio Muga, and Victor M. Calo. A stable discontinuous galerkin based isogeometric residual minimization for the stokes problem. In Valeria V. Krzhizhanovskaya, Gábor Závodszky, Michael H. Lees, Jack J. Dongarra, Peter M. A. Sloot, Sérgio Brissos, and João Teixeira, editors, Computational Science -ICCS 2020, pages 197-211, Cham, 2020. Springer International Publishing. An adaptive stabilized conforming finite element method via residual minimization on dual discontinuous galerkin norms. M Victor, Alexandre Calo, Ignacio Ern, Sergio Muga, Rojas, Computer Methods in Applied Mechanics and Engineering. 363112891Victor M. Calo, Alexandre Ern, Ignacio Muga, and Sergio Rojas. An adaptive stabilized conforming finite element method via residual minimization on dual discontinuous galerkin norms. Computer Methods in Applied Mechanics and Engineering, 363:112891, 2020. Continuum theory of thermally induced phase transitions based on an order parameter. E Fried, Gurtin, Physica D: Nonlinear Phenomena. 683E Fried and ME Gurtin. Continuum theory of thermally induced phase transitions based on an order parameter. Physica D: Nonlinear Phenomena, 68(3):326-343, 1993. Continua with microstructure. series: Springer tracts in natural philosophy. G Capriz, Springer-Verlag3592New YorkG Capriz. Continua with microstructure. series: Springer tracts in natural philosophy. New York: Springer-Verlag, 35:92, 1989. Sharp-crack limit of a phase-field model for brittle fracture. N Da Milton, SilvaJr, P Fernando, Eliot Duda, Fried, Journal of the Mechanics and Physics of Solids. 6111Milton N da Silva Jr, Fernando P Duda, and Eliot Fried. Sharp-crack limit of a phase-field model for brittle fracture. Journal of the Mechanics and Physics of Solids, 61(11):2178-2195, 2013. 
The thermodynamics of elastic materials with heat conduction and viscosity. Archive for Rational Mechanics and Analysis. Bd Coleman, Noll, 13BD Coleman and W Noll. The thermodynamics of elastic materials with heat conduction and viscosity. Archive for Rational Mechanics and Analysis, 13(1):167-178, 1963. A phase field model for rate-independent crack propagation: Robust algorithmic implementation based on operator splits. Christian Miehe, Martina Hofacker, Fabian Welschinger, Computer Methods in Applied Mechanics and Engineering. 19945Christian Miehe, Martina Hofacker, and Fabian Welschinger. A phase field model for rate-independent crack propagation: Robust algorithmic implementation based on operator splits. Computer Methods in Applied Mechanics and Engineering, 199(45):2765-2778, 2010. High-accuracy phase-field models for brittle fracture based on a new family of degradation functions. Juan Michael Sargado, Eirik Keilegavlen, Inga Berre, Jan Martin Nordbotten, Journal of the Mechanics and Physics of Solids. 111Juan Michael Sargado, Eirik Keilegavlen, Inga Berre, and Jan Martin Nordbotten. High-accuracy phase-field models for brittle fracture based on a new family of degradation functions. Journal of the Mechanics and Physics of Solids, 111:458- 489, 2018. Explicit time integration algorithms for structural dynamics with optimal numerical dissipation. M Gregory, Jintai Hulbert, Chung, Computer Methods in Applied Mechanics and Engineering. 1372Gregory M. Hulbert and Jintai Chung. Explicit time integration algorithms for structural dynamics with optimal numerical dissipation. Computer Methods in Applied Mechanics and Engineering, 137(2):175 -188, 1996. A phase-field description of dynamic brittle fracture. Michael J Borden, Clemens V Verhoosel, Michael A Scott, J R Thomas, Chad M Hughes, Landis, Computer Methods in Applied Mechanics and Engineering. Michael J. Borden, Clemens V. Verhoosel, Michael A. Scott, Thomas J.R. Hughes, and Chad M. Landis. 
A phase-field description of dynamic brittle fracture. Computer Methods in Applied Mechanics and Engineering, 217-220:77-95, 2012. A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method. Jintai Chung, G M Hulbert, Journal of Applied Mechanics. 602Jintai Chung and GM Hulbert. A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method. Journal of Applied Mechanics, 60(2):371-375, 1993. A generalized-α method for integrating the filtered navier-stokes equations with a stabilized finite element method. Kenneth E Jansen, Christian H Whiting, Gregory M Hulbert, Computer Methods in Applied Mechanics and Engineering. 1903Kenneth E. Jansen, Christian H. Whiting, and Gregory M. Hulbert. A generalized-α method for integrating the filtered navier-stokes equations with a stabilized finite element method. Computer Methods in Applied Mechanics and Engineering, 190(3):305-319, 2000. Isogeometric fluid-structure interaction: theory, algorithms, and computations. Y Bazilevs, V M Calo, T J R Hughes, Computational Mechanics. 43Y. Bazilevs, V.M. Calo, and T.J.R. Hughes. Isogeometric fluid-structure interaction: theory, algorithms, and computations. Computational Mechanics, 43:3-37, 2008. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Ernst Hairer, Gerhard Wanner, Springer14Ernst Hairer and Gerhard Wanner. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, volume 14. Springer, 2010. Jan Martin S Alnaes, Johan Blechta, August Hake, Benjamin Johansson, Anders Kehlet, Chris Logg, Johannes Richardson, Marie E Ring, Garth N Rognes, Wells, The FEniCS project version 1.5. Archive of Numerical Software. 3Martin S Alnaes, Jan Blechta, Johan Hake, August Johansson, Benjamin Kehlet, Anders Logg, Chris Richardson, Johannes Ring, Marie E Rognes, and Garth N Wells. The FEniCS project version 1.5. 
Archive of Numerical Software, 3(100):9-23, 2015. Algorithm 839: Fiat, a new paradigm for computing finite element basis functions. Robert C Kirby, ACM Trans. Math. Softw. 304Robert C. Kirby. Algorithm 839: Fiat, a new paradigm for computing finite element basis functions. ACM Trans. Math. Softw., 30(4):502-516, December 2004.
[]
Understanding Human Mobility Flows from Aggregated Mobile Phone Data

Caterina Balzotti (Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche, Rome, Italy)
Andrea Bragagnini (TIM Services Innovation, Italy)
Maya Briani (Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche, Rome, Italy)
Emiliano Cristiani (Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche, Rome, Italy)

Abstract: In this paper we deal with the study of travel flows and patterns of people in large populated areas. Information about the movements of people is extracted from coarse-grained aggregated cellular network data without tracking mobile devices individually. Mobile phone data are provided by the Italian telecommunication company TIM and consist of density profiles (i.e. the spatial distribution) of people in a given area at various instants of time. By computing a suitable approximation of the Wasserstein distance between two consecutive density profiles, we are able to extract the main directions followed by people, i.e. to understand how the mass of people distributes in space and time. The main applications of the proposed technique are the monitoring of daily flows of commuters, the organization of large events and, more in general, traffic management and control.

DOI: 10.1016/j.ifacol.2018.07.005 · arXiv: 1803.00814
Keywords: cellular data, presence data, Wasserstein distance, earth mover's distance

INTRODUCTION

For many years researchers have used data from cellular networks to extrapolate useful information about social dynamics. The interested reader can find in the survey paper Blondel et al. (2015) an exhaustive list of possible uses of such data. The main reason for this large interest lies in the fact that, nowadays, basically all of the people in the developed world own a mobile phone (with or without internet connection).
Therefore, we can get a complete view of the positions of people by considering the location of the fixed antennas each device is connected to. Moreover, the huge amount of available data counterbalances in part the fact that device positioning techniques generally provide poor spatial and temporal accuracy (much lower than GPS, for example).

In this paper we are interested in models and methods for inferring activity-based human mobility flows from mobile phone data. Among the papers which investigate the usage of mobile phone data in this direction, many involve Call Detail Records or similar types of data, see, e.g., Becker et al. (2013); Iqbal et al. (2014); Järv et al. (2014); Gonzalez et al. (2008); Jiang et al. (2017); Naboulsi et al. (2013); Zheng et al. (2016). Other papers use aggregated data such as those coming from Erlang measurements, see, e.g., Calabrese et al. (2011); Reades et al. (2009); Sevtsuk and Ratti (2010).

This work was supported by funding from project MIE - Mobilità Intelligente Ecosostenibile (CTN01 00034 594122), Cluster "Tecnologie per le Smart Communities".

In this paper, instead, the data consist of density profiles (i.e. the spatial distribution) of people in a given area at various instants of time. Mobile devices are not singularly tracked; rather, their logs are aggregated in order to obtain the total number of users in a given area. Such data, not publicly available at the moment, are provided by the Italian telecommunication company TIM. The goal of the paper is to "assign a direction" to the presence data. In fact, the mere representation of time-varying density of people clearly differentiates attractive from repulsive or neutral areas but does not give any information about the directions of flows of people. In other words, we are interested in a "where-from-where-to" type of information, which reveals travel flows and patterns of people.
The goal is pursued by computing a suitable approximation of the Wasserstein distance (also known as 'earth mover's distance' or 'Mallows distance') between two consecutive density profiles. The computation of the Wasserstein distance gives, as a by-product, the optimal flow which, in our case, coarsely corresponds to the main directions followed by people, i.e. how the mass of people distributes in space and time. It is useful to note here that the same methodology is investigated in the recent paper Zhu et al. (2018), where similar phone data are used and similar results are obtained.

The applicability of this approach is a priori questionable since it is based on many assumptions that are, in general, very far from true. Let us mention here the fact that people can move in any direction of the space, neglecting hard obstacles, and that they are indistinguishable and interchangeable. Moreover, we are not able to distinguish vehicular from pedestrian traffic. In spite of these strong assumptions, the numerical simulations presented here show that our approach leads to very meaningful results, and then it can actually be employed in traffic management and control. We think that the main applications of the technique proposed here can be the monitoring of daily flows of commuters and the organization of large events.

DATASET

TIM provides estimates of mobile phone presence in a given area in raster form: the area under analysis is split into a number of elementary territory units (ETUs) of the same size (about 150 × 150 m² in urban areas). The estimation algorithm does not singularly recognize users and does not track them using GPS. It simply counts the number of phones attached to network nodes and, knowing the location and radio coverage of the nodes, estimates the number of TIM users within each ETU at any time. TIM has now a market share of 30%, with about 29.4 million mobile lines in Italy (AGCOM, Osservatorio sulle comunicazioni 2/2017).
The data we worked with refer to the area of the province of Milan (Italy), which is divided into 198,779 ETUs, distributed in a rectangular grid 511 × 389. Data span six months (February, March and April of 2016 and 2017). The entire period is divided into time intervals of 15 minutes, therefore we have 96 data per day per ETU in total. In Fig. 1 we graphically represent presence data at a fixed time. We observe that the peak of presence is located in correspondence of the Milan city area.

Fig. 2 shows the presences in the province of Milan in a typical working day. The curve in the image decreases during the night, increases in the day-time and decreases again in the evening. These variations are due to two main reasons: first, the arrival to and departure from Milan's province of visitors and commuters; second, the fact that when a mobile phone is switched off or is not communicating for more than six hours, its localization is lost. The presence value that most represents the population of the province is observed around 9 pm, when an equilibrium between traveling and phone usage is reached. This value changes between working days and weekends, but it is always in the order of 1.3 × 10⁶.

Fig. 2. Trend of presences in the province of Milan during a typical working day.

Fig. 3 shows the trend of presence data during April 2017. We can observe a cyclical behavior: in the working days the number of presences in the province is significantly higher than during the weekends. It is interesting to note the presence of two low-density periods on April 15-18 and on April 22-26, 2017, determined respectively by Easter and the long weekend for Italy's Liberation Day holiday.

MATHEMATICAL MODEL

Our purpose is to analyze the flow of people from presence data.
To do that, let us first introduce the Monge-Kantorovich mass transfer problem, see Villani (2008), which can be easily explained as follows: given a sandpile with mass distribution ρ₀ and a pit with equal volume and mass distribution ρ₁, find a way to minimize the cost of transporting the sand into the pit. The cost for moving mass depends on both the distance from the point of origin to the point of arrival and the amount of mass moved along that path. We are interested in minimizing this cost by finding the optimal paths to transport the mass from the initial to the final configuration. This approach goes through the notion of Wasserstein distance, see again Villani (2008).

In the space $\mathbb{R}^n$ equipped with the Euclidean metric, let $\rho_0$ and $\rho_1$ be two density functions such that $\int_{\mathbb{R}^n} \rho_0 = \int_{\mathbb{R}^n} \rho_1$. For all $p \in [1, +\infty)$, the $L^p$-Wasserstein distance between $\rho_0$ and $\rho_1$ is

$$W_p(\rho_0, \rho_1) = \min_{T \in \mathcal{T}} \left( \int_{\mathbb{R}^n} \| T(x) - x \|^p_{\mathbb{R}^n} \, \rho_0(x)\, dx \right)^{1/p} \qquad (1)$$

where

$$\mathcal{T} := \left\{ T : \mathbb{R}^n \to \mathbb{R}^n \ : \ \int_B \rho_1(x)\,dx = \int_{\{x : T(x) \in B\}} \rho_0(x)\,dx, \ \forall\, B \subset \mathbb{R}^n \text{ bounded} \right\}.$$

$\mathcal{T}$ is the set of all possible maps which transfer the mass from one configuration to the other. The physical interpretation of this definition is naturally related to the solution of the Monge-Kantorovich problem, since the Wasserstein distance corresponds to the minimal cost needed to rearrange the initial distribution $\rho_0$ into the final distribution $\rho_1$.

Remark 1. We are not interested in the actual value of the Wasserstein distance $W_p$; instead we look for the optimal map $T^*$ which realizes the arg min in (1), and represents the paths along which the mass is transferred.

Following Briani et al. (2017), we reformulate the mass transfer problem on a graph $G$ with $N$ nodes. This procedure gives an approximation of the Wasserstein distance (1) and provides an algorithm for computing optimal paths. Starting from an initial mass $m^0_j$ and a final mass $m^1_j$, for $j = 1, \dots$
$\dots, N$, distributed on the graph nodes, we aim at rearranging in an optimal manner the first mass into the second one. We denote by $c_{jk}$ the cost to transfer a unit mass from node $j$ to node $k$, and by $x_{jk}$ the (unknown) mass moving from node $j$ to node $k$. The problem is then formulated as

$$\min \sum_{j,k=1}^{N} c_{jk}\, x_{jk} \quad \text{such that} \quad \sum_{k=1}^{N} x_{jk} = m^0_j \ \forall j, \quad \sum_{j=1}^{N} x_{jk} = m^1_k \ \forall k, \quad x_{jk} \ge 0.$$

Defining

$$x = (x_{11}, x_{12}, \dots, x_{1N}, x_{21}, \dots, x_{2N}, \dots, x_{N1}, \dots, x_{NN})^T,$$
$$c = (c_{11}, c_{12}, \dots, c_{1N}, c_{21}, \dots, c_{2N}, \dots, c_{N1}, \dots, c_{NN})^T,$$
$$b = (m^0_1, \dots, m^0_N, m^1_1, \dots, m^1_N)^T,$$

and the matrix

$$A = \begin{pmatrix} \mathbf{1}_N & 0 & 0 & \dots & 0 \\ 0 & \mathbf{1}_N & 0 & \dots & 0 \\ 0 & 0 & \mathbf{1}_N & \dots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \dots & \mathbf{1}_N \\ I_N & I_N & I_N & \dots & I_N \end{pmatrix},$$

where $I_N$ is the $N \times N$ identity matrix and $\mathbf{1}_N = (1\ 1\ \dots\ 1)$ ($N$ times), our problem is written as a standard linear programming (LP) problem: minimize $c^T x$, under the conditions $Ax = b$ and $x \ge 0$, see (Santambrogio, 2015, Sec. 6.4.1) and (Sinha, 2005, Chap. 19). The result of the algorithm is a vector $x^* := \arg\min c^T x$ whose elements $x^*_{jk}$ represent how much mass moves from node $j$ to node $k$ employing the minimum-cost mass rearrangement.

APPLICATION TO HUMAN MOBILITY FLOWS

In this section we describe the application of the LP-based mass transfer problem to TIM data. First of all, we exploit the subdivision into ETUs of the province of Milan (see Section 2), considering a graph whose nodes coincide with the centers of such ETUs. We assume that each node is connected to all the others. Therefore, the amount of people located in each ETU $j$ represents the mass $m_j$ to be moved. Solving the LP problem with two consecutive (in time) mass distributions $m^0$ and $m^1$, we get the optimal path followed by people to move from the first configuration to the second one.

Now we focus on the definition of the cost function $c$. This function is related to the distance between the starting point and the arrival point, so it would make sense to use the standard Euclidean distance. On the other hand, this choice can lead to nonphysical optimal displacements, as we can see in the following example.

Example 1. We have to move three unit masses one cell to the right, using the Euclidean distance as cost function. In the first scenario (see Fig.
4a) all masses move one cell to the right, while in the second scenario (see Fig. 4b) the leftmost mass moves three cells to the right and the other two are frozen. Although the two mass movements are different, the Wasserstein distance is the same and equal to three. Small movements seem to better describe the flow of large crowds. Therefore, in order to select small movements rather than large ones, we slightly modify the cost function as follows:

$$c(P, Q) = \| P - Q \|^{1+\varepsilon}_{\mathbb{R}^2}, \qquad (2)$$

where $P$ and $Q$ are the centers of two ETUs (nodes of the graph) and $\varepsilon > 0$ is a small parameter. By means of the parameter $\varepsilon$ (0.1 in our simulations) we increase the cost of large movements in favor of small ones.

Remark 2. Recalling the definition of the Wasserstein distance, the mass flowing along the graph must be preserved in time, i.e. $\sum_j m^0_j = \sum_j m^1_j$. The data we work with do not strictly verify this property, so we have modified the mass in two different ways:

1. distributing the excess mass along the boundary of the considered area;
2. distributing the excess mass uniformly in the considered area.

In both cases the mass modification allows the algorithm to be correctly implemented but, by analyzing the results, we have found that it is better to proceed by distributing the excess mass uniformly.

As already mentioned in the Introduction, people's behavior does not match the assumptions on which the optimal mass transfer problem is originally built. Besides the fact that people cannot freely move in space, in general the crowd does not move in such a way as to minimize the total displacement as a whole (even if a sort of "minimal-effort" assumption could be realistic for single persons). In the next section we will see that these deviations from the constitutive assumptions seem to be, at least to some extent, negligible.
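To make the effect of the exponent $1+\varepsilon$ concrete, the toy configuration of Example 1 can be solved as a small transport LP. The sketch below uses Python with scipy.optimize.linprog (the paper itself uses Matlab's linprog); the four-node line graph and the two mass vectors come from Example 1, while the variable names and solver choice are ours.

```python
import numpy as np
from scipy.optimize import linprog

# Example 1 as a tiny transport LP: three unit masses on a line of four
# nodes must end up shifted one cell to the right.
pos = np.arange(4.0)                      # node coordinates
m0 = np.array([1.0, 1.0, 1.0, 0.0])      # initial mass distribution
m1 = np.array([0.0, 1.0, 1.0, 1.0])      # final mass distribution
N = len(pos)
eps = 0.1                                 # the paper's epsilon in cost (2)

# Cost c_jk = |P_j - P_k|^(1+eps): long moves are slightly over-penalized,
# which breaks the tie between the two scenarios of Fig. 4.
C = np.abs(pos[:, None] - pos[None, :]) ** (1.0 + eps)

# Build A x = b with x = (x_11, ..., x_1N, ..., x_NN)^T:
# first N rows: sum_k x_jk = m0_j; last N rows: sum_j x_jk = m1_k.
A = np.zeros((2 * N, N * N))
for j in range(N):
    A[j, j * N:(j + 1) * N] = 1.0
    A[N + j, j::N] = 1.0
b = np.concatenate([m0, m1])

res = linprog(C.ravel(), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
plan = res.x.reshape(N, N)                # plan[j, k] = mass moved j -> k
print(np.round(plan, 6))
```

With eps = 0.1 the optimum is unique and moves each mass one cell to the right (total cost 3), whereas with the plain Euclidean cost (eps = 0) the single long jump of Fig. 4b would be equally optimal.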
NUMERICAL RESULTS

The LP problem is solved using as inputs all the pairs $(m^0, m^1)$ corresponding to the number of people at two consecutive time instants for the whole day (95 LP problems in total each day). We denote by $(x^*)^n$, $n = 0, \dots, 94$, the solution of the LP problem between time instants $t_n$ and $t_{n+1}$, where $t_n = 00\!:\!00 + n \cdot 15\,\text{min}$. Only movements larger than the daily average $M$ are drawn, with $M$ defined as

$$M := \frac{1}{N_{nz}} \sum_{n=0}^{94} \sum_{\substack{j,k=1 \\ j \neq k}}^{N} (x^*)^n_{jk},$$

where $N_{nz}$ is the number of non-zero values. Note that for $j = k$, the value $x^*_{jj}$ gives the mass which remains in ETU $j$ between the two times. The following example explains why we exclude these values from the set of significant movements.

Example 2. Let us consider the graph with two nodes shown in Fig. 5, which has mass 12 in node 1 and mass 16 in node 2 at time $t_0$. We assume that in the time interval between $t_0$ and $t_1$ a mass equal to 8 is moved from node 1 to node 2 and a mass equal to 10 is moved from node 2 to node 1. At time $t_1$ both node 1 and node 2 have mass 14. The vector $x$ which describes the flow of mass is

$$x_{11} = 4, \quad x_{12} = 8, \quad x_{21} = 10, \quad x_{22} = 6,$$

while the LP algorithm gives as a solution

$$x^*_{11} = 12, \quad x^*_{12} = 0, \quad x^*_{21} = 2, \quad x^*_{22} = 14.$$

This is because the algorithm has only information about the initial and final mass distributions and solves a minimum problem. Therefore, since the cost of a null shift is certainly preferable to any other movement, the elements $x^*_{jj}$ generally have large values, but they do not represent a real mass transfer and are not significant for the flow analysis.

In the following figures flows will be represented by arrows that join departure and arrival ETUs. The gray level of the arrows depends on the intensity of the flow, i.e. the amount of people actually moving. To implement the algorithm we have used Matlab, in particular its function linprog for solving the LP problems.
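The post-processing step above (drop the diagonal, compute the daily average $M$, keep only movements above it) can be sketched as follows. The stack of 95 transport plans here is synthetic random data standing in for the solved LP problems, and all variable names are ours, not from the paper's Matlab code.

```python
import numpy as np

# Synthetic stand-in for the 95 daily transport plans (x*)^n on N nodes:
# a sparse non-negative (95, N, N) array.
rng = np.random.default_rng(0)
N = 6
plans = rng.random((95, N, N)) * (rng.random((95, N, N)) < 0.2)

# Zero the diagonal: x_jj is mass that stays in ETU j (cf. Example 2)
# and must not be counted as a movement.
off = plans.copy()
idx = np.arange(N)
off[:, idx, idx] = 0.0

# Daily average M over the non-zero off-diagonal entries.
nonzero = off[off > 0]
M = nonzero.sum() / nonzero.size

# Only movements larger than M are kept: these are the arrows drawn
# in the figures, weighted by the amount of mass actually moving.
significant = [(n, j, k, off[n, j, k])
               for n in range(95) for j in range(N) for k in range(N)
               if off[n, j, k] > M]
print(M, len(significant))
```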
Finally, note that we are not able to analyze the whole area of the province of Milan. This is because, considering the whole graph, the matrix $A$ would have size $2N \times N^2 \sim 1.5 \times 10^{16}$ and would be unmanageable for both the amount of memory required and the computing times. For this reason, we either analyzed smaller areas, focusing on the most significant ones, or we considered large areas, aggregating data of neighboring ETUs.

Test 1. Macroscopic scale: flows of commuters

The area shown in Figs. 6-7 is a 40 × 24 rectangle contained in the province of Milan that has been obtained by aggregating ETUs into groups of 6 × 6. The pictures show the main flows on a generic working day in the morning and in the evening. It is clear that the flows are directed towards and from the city of Milan, and are mainly determined by commuters. In particular, we can see movements from/to the left of the province to Milan.

Test 2. Aggregated flow along main roads

We have chosen 8 main roads which lead to the city of Milan in order to visualize only the flows along some predefined directions. To this end, we have localized the ETUs in a neighborhood of the roads and summed all the flows pointing from these ETUs to the others in the neighborhood. Finally, we have aggregated the resulting flow along the considered roads, see Fig. 8.

Fig. 8. Test 2: ETUs around one of the selected main roads whose flows are aggregated and gathered along the road.

The result obtained by this process can be seen in Figs. 9-10. The considered area is a 44 × 28 rectangle contained in the province of Milan that has been obtained by aggregating ETUs into groups of 3 × 3. The pictures show the main flows along the eight specifically defined directions on a generic working day between 8:15 and 8:30 am and between 5:45 and 6:00 pm. By observing the images we can easily identify the main directions of the flow and the roads with more traffic load.

Test 3.
Flows influenced by a large event. In this test we show the effects of a large event on urban mobility. The event we analyzed is the exhibition of the Salone del Mobile, held every April at the Fiera Milano exhibition center in Rho, near Milan. The area in Figs. 11-13 is a 31 × 31 square contained in the Rho area and centered around Fiera Milano. We show three different behaviors of the flows during the exhibition. Fig. 11 shows the main flows to Fiera Milano at the opening of the exhibition in the morning; the most significant arrows are directed to the exhibition. Fig. 12 shows the main flows during lunch time; in this case we find very few arrows because there are no really significant movements and no preferred directions. Finally, Fig. 13 shows a behavior similar to the morning, with the direction of the flow reversed, due to the closure of the exhibition. It is interesting to note that, both in the morning and in the evening, the most intense flows are in the South-East part of the map, in correspondence with the roads that join the city of Milan with Fiera Milano.

Test 4. Microscopic scale: accesses to the exhibition. In this last test we consider a very small area to catch the flows to/from the ETUs corresponding to the access points of Fiera Milano. We show the first day of the exhibition of the Salone del Mobile. We define incoming and outgoing flows as follows: the incoming flows are given by the sum of the flows from the outside of the exhibition to the gates and of the flows from the gates to the inside of the exhibition; the outgoing flows are given by the sum of the flows from the inside of the exhibition to the gates and of the flows from the gates to the outside of the exhibition. Fig. 14 shows the incoming and the outgoing flows as a function of time during the whole day. By looking at the plots we can identify which gate is the most used by the visitors.
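The incoming/outgoing definition above translates directly into a few lines of array code. The Python sketch below (the index sets and the sample flow matrix are illustrative assumptions, not data from the paper) computes both quantities for a single LP solution x^*.

```python
import numpy as np

def gate_flows(x, outside, gates, inside):
    """Incoming/outgoing flow through gate ETUs, following the definition in Test 4.

    incoming = flows outside -> gates  +  flows gates -> inside
    outgoing = flows inside  -> gates  +  flows gates -> outside
    """
    incoming = x[np.ix_(outside, gates)].sum() + x[np.ix_(gates, inside)].sum()
    outgoing = x[np.ix_(inside, gates)].sum() + x[np.ix_(gates, outside)].sum()
    return incoming, outgoing

# toy 4-ETU example: 0 = outside, 1 = gate, 2 and 3 = inside
x = np.array([[0.0, 5.0, 0.0, 0.0],   # outside -> gate: 5
              [0.0, 0.0, 3.0, 1.0],   # gate -> inside: 4
              [0.0, 2.0, 0.0, 0.0],   # inside -> gate: 2
              [0.0, 0.0, 0.0, 0.0]])
inc, out = gate_flows(x, outside=[0], gates=[1], inside=[2, 3])
```

With this toy matrix the incoming flow is 9 (5 entering the gate plus 4 passing inside) and the outgoing flow is 2.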
CONCLUSIONS This paper aimed at understanding how masses of people distribute over large areas by means of a coarse estimation of their locations at consecutive snapshots. Despite the strong constitutive assumptions, the Wasserstein distance allows one to extract useful information and deserves further investigation. Future work will aim at applying this approach to construct O-D matrices from the optimal map and to control and estimate traffic states. Comparisons with other techniques and the link to different types of transportation metrics will also be investigated. At the same time, more efficient implementations will be considered and analysed.

Fig. 1. 3D-plot of the number of TIM users in each ETU of Milan's province on April 18, 2017.
Fig. 3. Trend of presences in the province of Milan during April 2017.

Σ_j x_{jk} = m^1_k ∀k and x_{jk} ≥ 0. Defining x = (x_{11}, x_{12}, ..., x_{1N}, x_{21}, ..., x_{2N}, ..., x_{N1}, ..., x_{NN})^T, c = (c_{11}, c_{12}, ..., c_{1N}, c_{21}, ..., c_{2N}, ..., c_{N1}, ..., c_{NN})^T, b = (m^0_1, ..., m^0_N, m^1_1, ..., m^1_N)^T, and the matrix A.

Fig. 4. Different mass movements with equal Wasserstein distance.
Fig. 5. Example of mass movements in a graph with two nodes between time t_0 and t_1.
Fig. 6. Test 1: main flows around Milan's area during a generic working day in the morning.
Fig. 7. Test 1: main flows around Milan's area during a generic working day in the evening.
Fig. 9. Test 2: flows in Milan's area along selected main roads during a generic working day in the morning.
Fig. 10. Test 2: flows in Milan's area along selected main roads during a generic working day in the evening.
Fig. 11. Test 3: flows directed to the area of the exhibition between 9:45 and 10:00 am.
Fig. 12. Test 3: flows at the area of the exhibition between 1:00 and 1:15 pm.
Fig. 13. Test 3: flows leaving the area of the exhibition between 5:45 and 6:00 pm.
Fig. 14. Test 4: incoming and outgoing flows from the West, South and East gates of Fiera Milano on the first day of the exhibition.

Zhu et al. (2018) was published after the submission of this paper and we became aware of it during the review process.

Becker, R., Cáceres, R., Hanson, K., Isaacman, S., Loh, J.M., Martonosi, M., Rowland, J., Urbanek, S., Varshavsky, A., and Volinsky, C. (2013). Human mobility characterization from cellular network data. Communications of the ACM, 56(1), 74-82.
Blondel, V.D., Decuyper, A., and Krings, G. (2015). A survey of results on mobile phone datasets analysis. EPJ Data Science, 4(1), 10.
Briani, M., Cristiani, E., and Iacomini, E. (2017). Sensitivity analysis of the LWR model for traffic forecast on large networks using Wasserstein distance. Communications in Mathematical Sciences, in press.
Calabrese, F., Colonna, M., Lovisolo, P., Parata, D., and Ratti, C. (2011). Real-time urban monitoring using cell phones: A case study in Rome. IEEE Transactions on Intelligent Transportation Systems, 12(1), 141-151.
Gonzalez, M.C., Hidalgo, C.A., and Barabasi, A.L. (2008). Understanding individual human mobility patterns. Nature, 453, 779-782.
Iqbal, M.S., Choudhury, C.F., Wang, P., and González, M.C. (2014). Development of origin-destination matrices using mobile phone call data. Transportation Research Part C, 40, 63-74.
Järv, O., Ahas, R., and Witlox, F. (2014). Understanding monthly variability in human activity spaces: A twelve-month study using mobile phone call detail records. Transportation Research Part C, 38, 122-135.
Jiang, S., Ferreira, J., and González, M.C. (2017). Activity-based human mobility patterns inferred from mobile phone data: A case study of Singapore. IEEE Transactions on Big Data, 3(2), 208-219.
Naboulsi, D., Fiore, M., and Stanica, R. (2013). Human mobility flows in the city of Abidjan. In 3rd International Conference on the Analysis of Mobile Phone Datasets (NetMob 2013), 1-8. Boston, United States.
Reades, J., Calabrese, F., and Ratti, C. (2009). Eigenplaces: analysing cities using the space-time structure of the mobile phone network. Environment and Planning B: Planning and Design, 36(5), 824-836.
Santambrogio, F. (2015). Optimal transport for applied mathematicians. Birkhäuser, NY.
Sevtsuk, A. and Ratti, C. (2010). Does urban mobility have a daily routine? Learning from the aggregate data of mobile networks. Journal of Urban Technology, 17(1), 41-60.
Sinha, S. (2005). Mathematical Programming: Theory and Methods. Elsevier.
Villani, C. (2008). Optimal transport: old and new, volume 338. Springer Science & Business Media.
Zheng, Y., Wu, W., Zeng, H., Cao, N., Qu, H., Yuan, M., Zeng, J., and Ni, L.M. (2016). Telcoflow: Visual exploration of collective behaviors based on Telco data. In 2016 IEEE International Conference on Big Data (Big Data), 843-852.
Zhu, D., Huang, Z., Shi, L., Wu, L., and Liu, Y. (2018). Inferring spatial interaction patterns from sequential snapshots of spatial distributions. International Journal of Geographical Information Science, 32(4), 783-805.
[]
[ "COVARIANT HAMILTONIAN FIELD THEORY", "COVARIANT HAMILTONIAN FIELD THEORY" ]
[ "Jürgen Struckmeier [email protected] \nGesellschaft für Schwerionenforschung (GSI)\nGesellschaft für Schwerionenforschung (GSI)\nJohann Wolfgang Goethe-Universität Frankfurt am Main\nPlanckstr. 1, Max-von-Laue-Str. 1, Planckstr. 164291, 60438, 64291Darmstadt, Frankfurt am Main, DarmstadtGermany, Germany, Germany\n", "Andreas Redelbach [email protected] \nGesellschaft für Schwerionenforschung (GSI)\nGesellschaft für Schwerionenforschung (GSI)\nJohann Wolfgang Goethe-Universität Frankfurt am Main\nPlanckstr. 1, Max-von-Laue-Str. 1, Planckstr. 164291, 60438, 64291Darmstadt, Frankfurt am Main, DarmstadtGermany, Germany, Germany\n" ]
[ "Gesellschaft für Schwerionenforschung (GSI)\nGesellschaft für Schwerionenforschung (GSI)\nJohann Wolfgang Goethe-Universität Frankfurt am Main\nPlanckstr. 1, Max-von-Laue-Str. 1, Planckstr. 164291, 60438, 64291Darmstadt, Frankfurt am Main, DarmstadtGermany, Germany, Germany", "Gesellschaft für Schwerionenforschung (GSI)\nGesellschaft für Schwerionenforschung (GSI)\nJohann Wolfgang Goethe-Universität Frankfurt am Main\nPlanckstr. 1, Max-von-Laue-Str. 1, Planckstr. 164291, 60438, 64291Darmstadt, Frankfurt am Main, DarmstadtGermany, Germany, Germany" ]
[]
A consistent, local coordinate formulation of covariant Hamiltonian field theory is presented. Whereas the covariant canonical field equations are equivalent to the Euler-Lagrange field equations, the covariant canonical transformation theory offers more general means for defining mappings that preserve the form of the field equations than the usual Lagrangian description. It is proved that Poisson brackets, Lagrange brackets, and canonical 2-forms exist that are invariant under canonical transformations of the fields. The technique to derive transformation rules for the fields from generating functions is demonstrated by means of various examples. In particular, it is shown that the infinitesimal canonical transformation furnishes the most general form of Noether's theorem. We furthermore specify the generating function of an infinitesimal space-time step that conforms to the field equations.
10.1142/s0218301308009458
[ "https://arxiv.org/pdf/0811.0508v5.pdf" ]
14,984,135
0811.0508
d6a76dccdd733a86c54fb21b4266f2ddaf0e83ec
COVARIANT HAMILTONIAN FIELD THEORY

Jürgen Struckmeier ([email protected]) and Andreas Redelbach ([email protected])
Gesellschaft für Schwerionenforschung (GSI), Planckstr. 1, 64291 Darmstadt, Germany
Johann Wolfgang Goethe-Universität Frankfurt am Main, Max-von-Laue-Str. 1, 60438 Frankfurt am Main, Germany

November 4, 2008. Received 18 July 2007. International Journal of Modern Physics E, © World Scientific Publishing Company.

Keywords: field theory; Hamiltonian density; covariant. PACS numbers: 11.10.Ef, 11.15.Kc

A consistent, local coordinate formulation of covariant Hamiltonian field theory is presented. Whereas the covariant canonical field equations are equivalent to the Euler-Lagrange field equations, the covariant canonical transformation theory offers more general means for defining mappings that preserve the form of the field equations than the usual Lagrangian description. It is proved that Poisson brackets, Lagrange brackets, and canonical 2-forms exist that are invariant under canonical transformations of the fields. The technique to derive transformation rules for the fields from generating functions is demonstrated by means of various examples. In particular, it is shown that the infinitesimal canonical transformation furnishes the most general form of Noether's theorem. We furthermore specify the generating function of an infinitesimal space-time step that conforms to the field equations.

1. Introduction

Relativistic field theories and gauge theories are commonly formulated on the basis of a Lagrangian density L (Refs. 1-4).
The space-time evolution of the fields is obtained by integrating the Euler-Lagrange field equations that follow from the four-dimensional representation of Hamilton's action principle. A characteristic feature of this approach is that the four independent variables of space and time are treated on equal footing, which automatically ensures the description to be relativistically correct. This is reflected by the fact that the Lagrangian density L depends, apart from a possible explicit dependence on the four space-time coordinates x^µ, on the set of fields φ_I and equally on all four derivatives ∂_µφ_I of those fields with respect to the space-time coordinates, i.e. L = L(φ_I, ∂_µφ_I, x^µ). Herein, the index I enumerates the individual fields that are involved in the given physical system. When the transition to a Hamiltonian description is made in textbooks, the equal footing of the space-time coordinates is abandoned (Refs. 5, 1, 2). In these presentations, the Hamiltonian density H is defined to depend on the set of scalar fields φ_I and on one set of conjugate scalar fields π_I that counterpart the time derivatives of the φ_I. Keeping the dependencies on the three spatial derivatives ∂_iφ_I of the fields φ_I, the functional dependence of the Hamiltonian is then defined as H = H(φ_I, π_I, ∂_iφ_I, x^µ). The canonical field equations then emerge as time derivatives of the scalar fields φ_I and π_I. In other words, the time variable is singled out of the set of independent space-time variables. While this formulation is doubtlessly valid and obviously works for the purpose pursued in these presentations, it closes the door to a full-fledged Hamiltonian field theory. In particular, it appears to be impossible to formulate a theory of canonical transformations on the basis of this particular definition of a Hamiltonian density.
On the other hand, numerous papers have been published that formulate a covariant Hamiltonian description of field theories where, similar to the Lagrangian formalism, the four independent variables of space-time are treated equally. These papers are generally based on the pioneering works of De Donder (Ref. 6) and Weyl (Ref. 7). The key point of this approach is that the Hamiltonian density H is now defined to depend on a set of conjugate 4-vector fields π^{Iµ} that counterbalance the four derivatives ∂_µφ_I of the Lagrangian density L, so that H = H(φ_I, π^{Iµ}, x^µ). Corresponding to the Euler-Lagrange equations of field theory, the canonical field equations then take on a symmetric form with respect to the four independent variables of space-time. This approach is commonly referred to as "multisymplectic" or "polysymplectic field theory", thereby labelling the covariant extension of the symplectic geometry of the conventional Hamiltonian theory (Refs. 8-15). Mathematically, the phase space of multisymplectic Hamiltonian field theory is defined within modern differential geometry in the language of "jet bundles" (Refs. 16, 17). Obviously, this theory has not yet found its way into mainstream textbooks. One reason for this is that the differential geometry approach to covariant Hamiltonian field theory is far from being straightforward and raises mathematical issues that are not yet clarified (see, for instance, the discussion in Ref. 12). Furthermore, the approach is obviously not unique: there exist various options to define geometric objects such as Poisson brackets (Refs. 8, 11, 14). As a consequence, any discussion of the matter is unavoidably shifted into the realm of mathematics. With the present paper, we do not pursue the differential geometry path but provide a local coordinate treatise of De Donder and Weyl's covariant Hamiltonian field theory. The local description enables us to keep the mathematics on the level of tensor calculus.
Nevertheless, the description is chart-independent and thus applies to all local coordinate systems. With this property, our description is sufficiently general from the point of view of physics. Similar to textbooks on Lagrangian gauge theories, we maintain a close tie to physics throughout the paper. Our paper is organized as follows. In Sec. 2, we give a brief review of De Donder and Weyl's approach to Hamiltonian field theory in order to render the paper self-contained and to clarify notation. After reviewing the covariant canonical field equations, we evince the Hamiltonian density H to represent the eigenvalue of the energy-momentum tensor and discuss the non-uniqueness of the field vector π^{Iµ}. The main benefit of the covariant Hamiltonian approach is that it enables us to formulate a consistent theory of covariant canonical transformations. This is demonstrated in Sec. 3. Strictly imitating the point mechanics' approach (Refs. 18, 19), we set up the transformation rules on the basis of a generating function by requiring the variational principle to be maintained (Ref. 20). In contrast to point mechanics, the generating function F_1^µ now emerges in our approach as a 4-vector function. We recover a characteristic feature of canonical transformations by deriving the symmetry relations of the mutual partial derivatives of original and transformed fields. By means of covariant Legendre transformations, we show that equivalent transformation rules are obtained from generating functions F_2^µ, F_3^µ, and F_4^µ. Very importantly, each of these generating functions gives rise to a specific set of symmetry relations of original and transformed fields. The symmetry relations set the stage for proving that 4-vectors of Poisson and Lagrange brackets exist that are invariant with respect to canonical transformations of the fields. We furthermore show that each vector component of our definition of a (1, 2)-tensor, i.e.
a "4-vector of 2-forms" ω^µ, is invariant under canonical transformations, which establishes Liouville's theorem of canonical field theory. We conclude this section by deriving the field theory versions of the Jacobi identity, Poisson's theorem, and the Hamilton-Jacobi equation. Similar to point mechanics, the action function S^µ of the Hamilton-Jacobi equation is shown to represent a generating function S^µ ≡ F_2^µ that is associated with the particular canonical transformation that maps the given Hamiltonian into an identically vanishing Hamiltonian. In Sec. 4, examples of Hamiltonian densities are reviewed and their pertaining field equations are derived. As the relativistic invariance of the resulting field equations is ensured if the Hamiltonian density H is a Lorentz scalar, various equations of relativistic quantum field theory are demonstrated to embody, in fact, canonical field equations. In particular, the Hamiltonian density engendering the Klein-Gordon equation manifests itself as the covariant field theory analog of the harmonic oscillator Hamiltonian of point mechanics. Section 5 starts by sketching simple examples of canonical transformations of Hamiltonian systems. Similar to the case of classical point mechanics, the main advantage of the Hamiltonian over the Lagrangian description is that the canonical transformation approach is not restricted to the class of point transformations, i.e., to cases where the transformed fields φ'_I only depend on the original fields φ_I. The most general formulation of Noether's theorem is, therefore, obtained from a general infinitesimal canonical transformation. As an application of this theorem, we show that an invariance with respect to a shift in a space-time coordinate leads to a corresponding conserved current that is given by the pertaining column vector of the energy-momentum tensor.
By specifying its generating function, we furthermore show that an infinitesimal step in space-time which conforms to the canonical field equations itself establishes a canonical transformation. Similar to the corresponding time-step transformation of point mechanics, the generating function is mainly determined by the system's Hamiltonian density. It is precisely this canonical transformation which ensures that a Hamiltonian system remains a Hamiltonian system in the course of its space-time evolution. The existence of this canonical transformation is thus crucial for the entire approach to be consistent. As canonical transformations establish mappings of one physical system into another, canonically equivalent system, it is remarkable that Higgs' mechanism of spontaneous symmetry breaking can be formulated in terms of a canonical transformation. This is shown in Sec. 5.8. We close our treatise with a discussion of the generating function of a non-Abelian gauge transformation. With Appendix B, we add an excursion to differential geometry by providing a geometrical representation of the canonical field equations.

2. Covariant Hamiltonian density

2.1. Covariant canonical field equations

The transition from particle dynamics to the dynamics of a continuous system is based on the assumption that a continuum limit exists for the given physical problem. This limit is defined by letting the number of particles involved in the system increase over all bounds while letting their masses and distances go to zero. In this limit, the information on the location of individual particles is replaced by the value of a smooth function φ(x^µ) that is given at a spatial location x^1, x^2, x^3 at time t ≡ x^0. The differentiable function φ(x^µ) is called a field.
In this notation, the index µ runs from 0 to 3, hence distinguishes the four independent variables of space-time,

x^µ ≡ (x^0, x^1, x^2, x^3) ≡ (ct, x, y, z),   x_µ ≡ (x_0, x_1, x_2, x_3) ≡ (ct, -x, -y, -z).

We furthermore assume that the given physical problem can be described in terms of I = 1, ..., N, possibly interacting, scalar fields φ_I(x^µ), with the index I enumerating the individual fields. In order to clearly distinguish scalar quantities from vector quantities, we denote the latter with boldface letters. Throughout our paper, the summation convention is used. This means that whenever a pair of the same upper and lower indices appears on one side of an equation, this index is to be summed over. If no confusion can arise, we omit the indices in the argument list of functions in order to avoid the number of indices to proliferate. The Lagrangian description of the dynamics of a continuous system is based on the Lagrangian density function L that is supposed to carry the complete information on the given physical system. In a first-order field theory, the Lagrangian density L is defined to depend on the φ_I, possibly on the vector of independent variables x^µ, and on the four first derivatives of the fields φ_I with respect to the independent variables, i.e., on the 1-forms

∂_µφ_I ≡ (∂_t φ_I, ∂_x φ_I, ∂_y φ_I, ∂_z φ_I).

The Euler-Lagrange field equations are then obtained as the zero of the variation δS of the action integral

S = ∫ L(φ_I, ∂_µφ_I, x^µ) d^4x    (1)

as

∂/∂x^α [∂L/∂(∂_αφ_I)] - ∂L/∂φ_I = 0.    (2)

To derive the equivalent covariant Hamiltonian description of continuum dynamics, we first define for each field φ_I(x^ν) a 4-vector of conjugate momentum fields π^{Iµ}(x^ν). Its components are given by

π^{Iµ} = ∂L/∂(∂_µφ_I) ≡ ∂L/∂(∂φ_I/∂x^µ).    (3)

The 4-vector π^{Iµ} is thus induced by the Lagrangian L as the dual counterpart of the 1-form ∂_µφ_I.
For the entire set of N scalar fields φ_I(x^ν), this establishes a set of N conjugate 4-vector fields. With this definition of the 4-vectors of canonical momenta π^{Iµ}(x^ν), we can now define the Hamiltonian density H(φ_I, π^{Iµ}, x^µ) as the covariant Legendre transform of the Lagrangian density L(φ_I, ∂_µφ_I, x^µ),

H(φ_I, π^{Iµ}, x^µ) = π^{Jα} ∂φ_J/∂x^α - L(φ_I, ∂_µφ_I, x^µ).    (4)

At this point we suppose that L is regular, hence that for each index I the Hesse matrices (∂²L/∂(∂_µφ_I)∂(∂_νφ_I)) are non-singular. This ensures that H takes over the complete information on the given dynamical system from L by means of the Legendre transformation. The definition of H by Eq. (4) is referred to in the literature as the "De Donder-Weyl" Hamiltonian density. Obviously, the dependencies of H and L on the φ_I and the x^µ only differ by a sign,

∂H/∂φ_I = -∂L/∂φ_I,   ∂H/∂x^µ|_expl = -∂L/∂x^µ|_expl.

These variables do not take part in the Legendre transformation of Eqs. (3), (4). With regard to this transformation, the Hamiltonian density H is, therefore, to be considered as a function of the π^{Iµ} only, and, correspondingly, the Lagrangian density L as a function of the ∂_µφ_I only. In order to derive the canonical field equations, we calculate from Eq. (4) the partial derivative of H with respect to π^{Iµ},

∂H/∂π^{Iµ} = δ^J_I δ^α_µ ∂φ_J/∂x^α + π^{Jα} ∂(∂_αφ_J)/∂π^{Iµ} - [∂L/∂(∂_αφ_J)] ∂(∂_αφ_J)/∂π^{Iµ} = ∂φ_I/∂x^µ.

According to the definition of π^{Iµ} in Eq. (3), the second and the third terms on the right-hand side cancel. In conjunction with the Euler-Lagrange equation, we obtain the set of covariant canonical field equations finally as

∂H/∂π^{Iµ} = ∂φ_I/∂x^µ,   ∂H/∂φ_I = -∂π^{Iα}/∂x^α.    (5)

This pair of first-order partial differential equations is equivalent to the set of second-order differential equations of Eq. (2).
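As a quick illustration of Eqs. (3)-(5), not taken from the paper, the following sympy sketch carries out the covariant Legendre transformation for the free scalar (Klein-Gordon) field, whose Lagrangian density is L = ½ ∂_µφ ∂^µφ - ½ m²φ². The resulting De Donder-Weyl Hamiltonian density is H = ½ π_µπ^µ + ½ m²φ², the field theory analog of the harmonic oscillator Hamiltonian mentioned in the outline above.

```python
import sympy as sp

X = sp.symbols('x0:4')                      # space-time coordinates x^mu
m = sp.symbols('m', positive=True)
phi = sp.Function('phi')(*X)
g = [1, -1, -1, -1]                         # diagonal metric, signature (+,-,-,-)

dphi = [sp.diff(phi, x) for x in X]         # the 1-form d_mu phi

# Lagrangian density  L = 1/2 g^{mu mu} (d_mu phi)^2 - 1/2 m^2 phi^2
L = sp.Rational(1, 2) * sum(g[mu] * dphi[mu]**2 for mu in range(4)) \
    - sp.Rational(1, 2) * m**2 * phi**2

# Eq. (3):  pi^mu = dL/d(d_mu phi) = g^{mu mu} d_mu phi
pi = [sp.diff(L, dphi[mu]) for mu in range(4)]

# Eq. (4):  H = pi^alpha d_alpha phi - L
H = sp.expand(sum(pi[mu] * dphi[mu] for mu in range(4)) - L)

# H equals 1/2 pi_mu pi^mu + 1/2 m^2 phi^2  (with pi_mu = g_{mu mu} pi^mu)
H_expected = sp.Rational(1, 2) * sum(g[mu] * pi[mu]**2 for mu in range(4)) \
    + sp.Rational(1, 2) * m**2 * phi**2
assert sp.simplify(H - sp.expand(H_expected)) == 0
```

The second canonical equation (5) then reads m²φ = -∂_απ^α, i.e. the Klein-Gordon equation □φ + m²φ = 0.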
We observe that in this formulation of the canonical field equations all coordinates of space-time appear symmetrically, similar to the Lagrangian formulation of Eq. (2). Provided that the Lagrangian density L is a Lorentz scalar, the dynamics of the fields is invariant with respect to Lorentz transformations. The covariant Legendre transformation (4) passes this property to the Hamiltonian density H. It thus ensures a priori the relativistic invariance of the fields that emerge as integrals of the canonical field equations if L, and hence H, represents a Lorentz scalar.

2.2. Energy-momentum tensor

In the Lagrangian description, the energy-momentum tensor T^ν_µ is defined as the following mixed second-rank tensor

T^ν_µ = [∂L/∂(∂_νφ_I)] ∂φ_I/∂x^µ - L δ^ν_µ.    (6)

With the definition (3) of the conjugate momentum fields π^{Iµ}, and the Hamiltonian density of Eq. (4), the energy-momentum tensor (6) is equivalently expressed as

T^ν_µ = H δ^ν_µ + π^{Iν} ∂φ_I/∂x^µ - δ^ν_µ π^{Iα} ∂φ_I/∂x^α.    (7)

The inner product of the mixed tensor T^ν_µ of Eq. (7) with the 1-form ∂_νφ_I yields

T^ν_µ ∂φ_I/∂x^ν = H δ^ν_µ ∂φ_I/∂x^ν + π^{Iν} ∂φ_I/∂x^ν ∂φ_I/∂x^µ - π^{Iα} ∂φ_I/∂x^α δ^ν_µ ∂φ_I/∂x^ν,

hence

(T^ν_µ - H δ^ν_µ) ∂φ_I/∂x^ν = 0.

We can similarly set up the inner product of T^ν_µ with the vector π^{Iµ},

T^ν_µ π^{Iµ} = H δ^ν_µ π^{Iµ} + π^{Iν} π^{Iµ} ∂φ_I/∂x^µ - δ^ν_µ π^{Iµ} π^{Iα} ∂φ_I/∂x^α,

hence

(T^ν_µ - H δ^ν_µ) π^{Iµ} = 0.

This shows that the De Donder-Weyl Hamiltonian density H constitutes the eigenvalue of the energy-momentum tensor T^ν_µ with eigenvectors ∂_νφ_I and π^{Iµ}. By identifying H as the eigenvalue of the energy-momentum tensor, we obtain a clear interpretation of this quantity. An important property of the energy-momentum tensor is revealed by calculating the divergence ∂T^α_µ/∂x^α. From the definition (7), we find

∂T^α_µ/∂x^α = δ^α_µ [∂H/∂φ_I ∂φ_I/∂x^α + ∂H/∂π^{Iβ} ∂π^{Iβ}/∂x^α + ∂H/∂x^α|_expl] + ∂π^{Iα}/∂x^α ∂φ_I/∂x^µ + π^{Iα} ∂²φ_I/∂x^µ∂x^α - δ^α_µ [∂π^{Iβ}/∂x^α ∂φ_I/∂x^β + π^{Iβ} ∂²φ_I/∂x^α∂x^β].

Inserting the canonical field equations (5), this becomes

∂T^α_µ/∂x^α = ∂H/∂x^µ|_expl.    (8)

If the Hamiltonian density H does not explicitly depend on the independent variable x^µ, then H is obviously invariant with respect to a shift of the reference system along the x^µ axis. Then, the components of the µ-th column of the energy-momentum tensor satisfy the continuity equation

∂T^α_µ/∂x^α = 0   ⟺   ∂H/∂x^µ|_expl = 0.

Using the definition (7) of the energy-momentum tensor, we infer from Eq. (8)

∂T^α_µ/∂x^α = ∂T^α_µ/∂x^α|_expl.

Based on the four independent variables x^µ of space-time, this divergence relation for the energy-momentum tensor constitutes the counterpart to the relation dH/dt = ∂H/∂t for the Hamiltonian function of point mechanics. Yet, such a relation does not exist in general for the Hamiltonian density H of field theory. As we easily convince ourselves, the derivative of H with respect to x^µ is not uniquely determined by its explicit dependence on x^µ:

∂H/∂x^µ = ∂H/∂x^µ|_expl + ∂H/∂π^{Iα} ∂π^{Iα}/∂x^µ + ∂H/∂φ_I ∂φ_I/∂x^µ
        = ∂H/∂x^µ|_expl + ∂φ_I/∂x^α ∂π^{Iα}/∂x^µ - ∂φ_I/∂x^µ ∂π^{Iβ}/∂x^β
        = ∂H/∂x^µ|_expl + [∂π^{Iα}/∂x^µ - δ^α_µ ∂π^{Iβ}/∂x^β] ∂φ_I/∂x^α.    (9)

Owing to the fact that the number of independent variables is greater than one, the two rightmost terms of Eq. (9) constitute a sum. In contrast to the case of point mechanics, these terms generally do not cancel by virtue of the canonical field equations.

2.3. Non-uniqueness of the conjugate vector fields π^{Iµ}

From the right-hand side of the second canonical field equation (5) we observe that the dependence of the Hamiltonian density H on φ_I only determines the divergence of the conjugate vector field π^{Iµ}. Vice versa, the canonical field equations are invariant with regard to all transformations of the mixed tensor (∂π^{Iµ}/∂x^ν) that preserve its trace.
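The eigenvalue property of Sec. 2.2 is an algebraic identity in the field values at each point, so it can be spot-checked numerically. The NumPy sketch below (random point values, a single field, not from the paper) builds T^ν_µ from Eq. (7) and verifies that ∂_νφ and π^µ are both eigenvectors with eigenvalue H.

```python
import numpy as np

rng = np.random.default_rng(42)
H = rng.normal()                 # value of the Hamiltonian density at a point
dphi = rng.normal(size=4)        # the 1-form  d_mu phi
pi = rng.normal(size=4)          # the conjugate 4-vector  pi^mu

# Eq. (7):  T[nu, mu] = H delta^nu_mu + pi^nu d_mu phi - delta^nu_mu (pi^alpha d_alpha phi)
T = H * np.eye(4) + np.outer(pi, dphi) - (pi @ dphi) * np.eye(4)

# contraction over nu:  T^nu_mu d_nu phi = H d_mu phi
assert np.allclose(dphi @ T, H * dphi)
# contraction over mu:  T^nu_mu pi^mu = H pi^nu
assert np.allclose(T @ pi, H * pi)
```

Both checks hold identically: the two rank-one terms of Eq. (7) cancel under either contraction, leaving only the H δ^ν_µ part.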
The expression ∂H/∂φ_I thus only quantifies the change of the flux of π^{Iµ} through an infinitesimal space-time volume around a space-time location x^µ. The vector field π^{Iµ} itself is, therefore, only determined up to a vector field η^{Iµ}(x^ν) that leaves its divergence invariant,

π^{Iµ} → π_e^{Iµ} = π^{Iµ} - η^{Iµ}.    (10)

This is obviously the case if

∂η^{Iα}/∂x^α = 0.    (11)

With this condition fulfilled, we are allowed to subtract a field η^{Iµ}(x^ν) from π^{Iµ}(x^ν) without changing the canonical field equations (5), hence the description of the dynamics of the given system. We will show in Sec. 5.2 that the transition (10) can be conceived as a canonical transformation of the given Hamiltonian system. In the Lagrangian formalism, the transition (10) corresponds to the transformation

L → L' = L - η^{Iα}(x) ∂φ_I(x)/∂x^α,

which leaves, under the condition (11), the Euler-Lagrange equations (2) invariant. The Hamiltonian density H', expressed as a function of π_e, is obtained from the Legendre transformation

π_e^{Iµ} = ∂L'/∂(∂_µφ_I) = π^{Iµ} - η^{Iµ},

H'(φ, π_e, x) = π_e^{Iα} ∂φ_I/∂x^α - L'(φ, ∂φ, x)
             = π^{Iα} ∂φ_I/∂x^α - η^{Iα} ∂φ_I/∂x^α - L + η^{Iα} ∂φ_I/∂x^α
             = H(φ, π, x).

The value of the Hamiltonian density H thus remains invariant under the action of the shifting transformation (10), (11). This means for the canonical field equations (5)

∂H'/∂π_e^{Iµ} = ∂H/∂π^{Iµ} = ∂φ_I/∂x^µ,   ∂H'/∂φ_I = ∂H/∂φ_I = -∂π^{Iα}/∂x^α,   ∂H'/∂x^µ|_expl = ∂H/∂x^µ|_expl.

Thus, both momentum fields π_e^{Iµ} and π^{Iµ} equivalently describe the same physical system. In other words, we can switch from π^{Iµ} to π_e^{Iµ} = π^{Iµ} - η^{Iµ} with ∂η^{Iα}/∂x^α = 0 without changing the physical description of the given system.
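Condition (11) is easy to satisfy systematically: any η^µ obtained as the divergence of an antisymmetric tensor is automatically divergence-free, because mixed partial derivatives commute. The sympy sketch below (an illustration of this construction, not an example from the paper) verifies this for arbitrary smooth generating functions.

```python
import sympy as sp

X = sp.symbols('x0:4')

# arbitrary smooth functions f^{nu mu}(x)
f = [[sp.Function('f_%d%d' % (n, m))(*X) for m in range(4)] for n in range(4)]

# antisymmetric combination  A^{nu mu} = f^{nu mu} - f^{mu nu} = -A^{mu nu}
A = [[f[n][m] - f[m][n] for m in range(4)] for n in range(4)]

# eta^mu = d_nu A^{nu mu}
eta = [sum(sp.diff(A[n][m], X[n]) for n in range(4)) for m in range(4)]

# condition (11):  d_mu eta^mu = 0  holds identically
div = sp.simplify(sum(sp.diff(eta[m], X[m]) for m in range(4)))
assert div == 0
```

Each term ∂_µ∂_ν f^{νµ} is cancelled by the corresponding term -∂_µ∂_ν f^{µν} of the antisymmetrized sum, so the divergence vanishes for any choice of the f^{νµ}.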
F F F 1 (φ, φ ′ , x x x) Similar to the canonical formalism of point mechanics, we call a transformation of the fields (φ, π π π) → (φ ′ , π π π ′ ) canonical if the form of the variational principle that is based on the action integral (1) is maintained, δ R π Iα ∂φ I ∂x α − H(φ, π π π, x x x) d 4 x = δ R π Iα ′ ∂φ I ′ ∂x α − H ′ (φ ′ , π π π ′ , x x x) d 4 x.(12) Equation (12) tells us that the integrands may differ by the divergence of a vector field F µ 1 , whose variation vanishes on the boundary ∂R of the integration region R within space-time δ R ∂F α 1 ∂x α d 4 x = δ ∂R F α 1 dS α ! = 0. The immediate consequence of the form invariance of the variational principle is the form invariance of the covariant canonical field equations (5) ∂H ′ ∂π Iµ ′ = ∂φ I ′ ∂x µ , ∂H ′ ∂φ I ′ = − ∂π Iα ′ ∂x α .(13) For the integrands of Eq. (12) -hence for the Lagrangian densities L and L ′we thus obtain the condition L = L ′ + ∂F α 1 ∂x α π Iα ∂φ I ∂x α − H(φ, π π π, x x x) = π Iα ′ ∂φ I ′ ∂x α − H ′ (φ ′ , π π π ′ , x x x) + ∂F α 1 ∂x α .(14) With the definition F µ 1 ≡ F µ 1 (φ, φ ′ , x x x), we restrict ourselves to a function of exactly those arguments that now enter into transformation rules for the transition from the original to the new fields. The divergence of F µ 1 writes, explicitly, ∂F α 1 ∂x α = ∂F α 1 ∂φ I ∂φ I ∂x α + ∂F α 1 ∂φ I ′ ∂φ I ′ ∂x α + ∂F α 1 ∂x α expl .(15) The rightmost term denotes the sum over the explicit dependence of the generating function F µ 1 on the x ν . Comparing the coefficients of Eqs. (14) and (15), we find the local coordinate representation of the field transformation rules that are induced by the generating function F µ 1 π Iµ = ∂F µ 1 ∂φ I , π Iµ ′ = − ∂F µ 1 ∂φ I ′ , H ′ = H + ∂F α 1 ∂x α expl .(16) The transformation rule for the Hamiltonian density implies that summation over α is to be performed. In contrast to the transformation rule for the Lagrangian density L of Eq. 
(14), the rule for the Hamiltonian density is determined only by the explicit dependence of the generating function F µ 1 on the x µ . Differentiating the transformation rule for π Iµ with respect to φ J ′ , and the rule for π Iµ ′ with respect to φ I , we obtain a symmetry relation between original and transformed fields ∂π Iµ ∂φ J ′ = ∂ 2 F µ 1 ∂φ I ∂φ J ′ = − ∂π Jµ ′ ∂φ I .(17) The emergence of symmetry relations is a characteristic feature of canonical transformations. As the symmetry relation directly follows from the second derivatives of the generating function, it does not apply for arbitrary transformations of the fields that do not follow from generating functions. 3.2. Generating functions of type F F F 2 (φ, π π π ′ , x x x) The generating function of a canonical transformation can alternatively be expressed in terms of a function of the original fields φ I and of the new conjugate fields π Iµ ′ . To derive the pertaining transformation rules, we perform the covariant Legendre transformation F µ 2 (φ, π π π ′ , x x x) = F µ 1 (φ, φ ′ , x x x) + φ I ′ π Iµ ′ , π Iµ ′ = − ∂F µ 1 ∂φ I ′ .(18) By definition, the functions F µ 1 and F µ 2 agree with respect to their φ I and x µ dependencies ∂F µ 2 ∂φ I = ∂F µ 1 ∂φ I = π Iµ , ∂F α 2 ∂x α expl = ∂F α 1 ∂x α expl = H ′ − H. These two F µ 2 -related transformation rules thus coincide with the respective rules derived previously from F µ 1 . In other words, the variables φ I and x µ do not take part in the Legendre transformation from Eq. (18). We must, therefore, conceive F µ 1 as a function of the φ I ′ only, and, correspondingly, F µ 2 as a function of π Iµ ′ only. The new transformation rule thus follows from the derivative of F µ 2 with respect to π Iµ ′ , ∂F µ 2 ∂π Jν ′ = ∂F µ 1 ∂φ I ′ ∂φ I ′ ∂π Jν ′ + φ I ′ ∂π Iµ ′ ∂π Jν ′ + π Iµ ′ ∂φ I ′ ∂π Jν ′ = −π Iµ ′ ∂φ I ′ ∂π Jν ′ + φ I ′ δ I J δ µ ν + π Iµ ′ ∂φ I ′ ∂π Jν ′ = φ J ′ δ µ ν .
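A concrete generating function makes the rules (16) and the symmetry relation (17) tangible. The following SymPy sketch (our addition; the particular F µ 1 with constant 4-vectors c µ , d µ is only an example choice) derives π µ and π ′µ for a single field and confirms Eq. (17) component by component:

```python
import sympy as sp

phi, phip = sp.symbols('phi phi_p')           # original and new field
c = sp.symbols('c0:4')                        # constant example 4-vector
d = sp.symbols('d0:4')

# Example generating function F1^mu = c^mu phi phi' + d^mu (phi^2 + phi'^2)/2
F1 = [c[m]*phi*phip + d[m]*(phi**2 + phip**2)/2 for m in range(4)]

pi  = [sp.diff(F1[m], phi)   for m in range(4)]   # pi^mu  =  dF1^mu/dphi
pip = [-sp.diff(F1[m], phip) for m in range(4)]   # pi'^mu = -dF1^mu/dphi'

# Symmetry relation (17): d pi^mu / d phi' = - d pi'^mu / d phi
ok = all(sp.simplify(sp.diff(pi[m], phip) + sp.diff(pip[m], phi)) == 0
         for m in range(4))
```

Both sides of Eq. (17) evaluate to the mixed second derivative c µ of the generating function.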
We thus end up with the set of transformation rules π Iµ = ∂F µ 2 ∂φ I , φ I ′ δ µ ν = ∂F µ 2 ∂π Iν ′ , H ′ = H + ∂F α 2 ∂x α expl ,(19) which is equivalent to the set (16) by virtue of the Legendre transformation (18) if ∂ 2 F µ 1 /∂φ I ∂φ J ′ = 0 for all indices "µ", "I", and "J". From the second partial derivatives of F µ 2 one immediately derives the symmetry relation ∂π Iµ ∂π Jν ′ = ∂ 2 F µ 2 ∂φ I ∂π Jν ′ = ∂φ J ′ ∂φ I δ µ ν .(20) 3.3. Generating functions of type F F F 3 (φ ′ , π π π, x x x) By means of the Legendre transformation F µ 3 (φ ′ , π π π, x x x) = F µ 1 (φ, φ ′ , x x x) − φ I π Iµ , π Iµ = ∂F µ 1 ∂φ I ,(21) the generating function of a canonical transformation can be converted into a function of the new fields φ I ′ and the original conjugate fields π Iµ . The functions F µ 1 and F µ 3 agree in their dependencies on φ I ′ and x µ , ∂F µ 3 ∂φ I ′ = ∂F µ 1 ∂φ I ′ = −π Iµ ′ , ∂F α 3 ∂x α expl = ∂F α 1 ∂x α expl = H ′ − H. Consequently, the pertaining transformation rules agree with those of Eq. (16). The new rule follows from the dependence of F µ 3 on the π Jν : ∂F µ 3 ∂π Jν = ∂F µ 1 ∂φ I ∂φ I ∂π Jν − φ I ∂π Iµ ∂π Jν − π Iµ ∂φ I ∂π Jν = π Iµ ∂φ I ∂π Jν − φ I δ I J δ µ ν − π Iµ ∂φ I ∂π Jν = −φ J δ µ ν . For ∂ 2 F µ 1 /∂φ I ∂φ J ′ = 0, we thus get a third set of equivalent transformation rules, π Iµ ′ = − ∂F µ 3 ∂φ I ′ , φ I δ µ ν = − ∂F µ 3 ∂π Iν , H ′ = H + ∂F α 3 ∂x α expl .(22) The pertaining symmetry relation between original and transformed fields emerging from F µ 3 writes ∂π Iµ ′ ∂π Jν = − ∂ 2 F µ 3 ∂φ I ′ ∂π Jν = ∂φ J ∂φ I ′ δ µ ν .(23) 3.4. Generating functions of type F F F 4 (π π π, π π π ′ , x x x) Finally, by means of the Legendre transformation F µ 4 (π π π, π π π ′ , x x x) = F µ 3 (φ ′ , π π π, x x x) + φ I ′ π Iµ ′ , π Iµ ′ = − ∂F µ 3 ∂φ I ′(24) we may express the generating function of a canonical transformation as a function of both the original and the transformed conjugate fields π Iµ , π Iµ ′ .
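As an illustration of the rules (19), the sketch below (our example, not part of the text) uses F µ 2 = a φ π ′µ with a constant scale factor a, which generates the canonical scaling φ ′ = a φ, π µ = a π ′µ with H ′ = H:

```python
import sympy as sp

a = sp.symbols('a', nonzero=True)
phi = sp.symbols('phi')
pip = sp.symbols('pi_p0:4')                   # new momenta pi'^mu

# F2^mu = a phi pi'^mu generates the scaling phi' = a phi, pi^mu = a pi'^mu
F2 = [a*phi*pip[m] for m in range(4)]

# Rule (19): pi^mu = dF2^mu/dphi
pi = [sp.diff(F2[m], phi) for m in range(4)]
assert pi == [a*pip[m] for m in range(4)]

# Rule (19): phi' delta^mu_nu = dF2^mu/dpi'^nu, off-diagonal derivatives vanish
phi_new = sp.diff(F2[0], pip[0])
ok_diag = all(sp.diff(F2[m], pip[n]) == (phi_new if m == n else 0)
              for m in range(4) for n in range(4))
```

The diagonal derivative yields φ ′ = a φ, and the Kronecker-delta structure of the rule is reproduced exactly.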
The functions F µ 4 and F µ 3 agree in their dependencies on the π Iµ and x µ , ∂F µ 4 ∂π Iν = ∂F µ 3 ∂π Iν = −φ I δ µ ν , ∂F α 4 ∂x α expl = ∂F α 3 ∂x α expl = H ′ − H. The related pair of transformation rules thus corresponds to that of Eq. (22). The new rule follows from the dependence of F µ 4 on the π Jν ′ , ∂F µ 4 ∂π Jν ′ = ∂F µ 3 ∂φ I ′ ∂φ I ′ ∂π Jν ′ + φ I ′ ∂π Iµ ′ ∂π Jν ′ + π Iµ ′ ∂φ I ′ ∂π Jν ′ = −π Iµ ′ ∂φ I ′ ∂π Jν ′ + φ I ′ δ I J δ µ ν + π Iµ ′ ∂φ I ′ ∂π Jν ′ = φ J ′ δ µ ν . Under the condition that ∂ 2 F µ 3 /∂φ I ′ ∂π Jν = 0, we thus get a fourth set of equivalent transformation rules φ I ′ δ µ ν = ∂F µ 4 ∂π Iν ′ , φ I δ µ ν = − ∂F µ 4 ∂π Iν , H ′ = H + ∂F α 4 ∂x α expl .(25) The subsequent symmetry relation between original and transformed fields that is associated with F µ 4 follows as ∂φ I ∂π Jβ ′ δ µ α = − ∂ 2 F µ 4 ∂π Iα ∂π Jβ ′ = − ∂φ J ′ ∂π Iα δ µ β .(26) For the particular cases α = β = µ, this means ∂φ I ∂π Jµ ′ = − ∂φ J ′ ∂π Iµ .(27) With regard to Eq. (27), we observe that the symmetry relation (17) similarly depicts only the particular cases α = β = µ. Making use of the complete set of symmetry relations, we show in Appendix A that -in analogy to Eq. (26) -the general form of Eq. (17) is given by ∂π Jβ ′ ∂φ I δ α µ = − ∂π Iα ∂φ J ′ δ β µ .(28) Consistency check of the canonical transformation rules As a test of consistency of the canonical transformation rules derived in the preceding four sections, we now rederive the rules obtained from the generating function F µ 1 from a Legendre transformation of F µ 4 . Both generating functions are related by F µ 1 (φ, φ ′ , x x x) = F µ 4 (π π π, π π π ′ , x x x) + φ J π Jµ − φ J ′ π Jµ ′ , φ I ′ δ µ ν = ∂F µ 4 ∂π Iν ′ , φ I δ µ ν = − ∂F µ 4 ∂π Iν . In this case, the generating functions F µ 1 and F µ 4 only agree in their explicit dependence on x µ . This involves the common transformation rule ∂F α 1 ∂x α expl = ∂F α 4 ∂x α expl = H ′ − H. In the actual case, we thus transform at once two field variables φ I , φ I ′ and π Iµ , π Iµ ′ . The transformation rules associated with F µ 1 follow from its dependencies on both φ I and φ I ′ according to ∂F µ 1 ∂φ I + ∂F µ 1 ∂φ J ′ ∂φ J ′ ∂φ I = ∂F µ 4 ∂π Jα ∂π Jα ∂φ I + ∂F µ 4 ∂π Jα ′ ∂π Jα ′ ∂φ I + π Iµ + φ J ∂π Jµ ∂φ I − π Jµ ′ ∂φ J ′ ∂φ I − φ J ′ ∂π Jµ ′ ∂φ I = −φ J δ µ α ∂π Jα ∂φ I + φ J ′ δ µ α ∂π Jα ′ ∂φ I + π Iµ + φ J ∂π Jµ ∂φ I − π Jµ ′ ∂φ J ′ ∂φ I − φ J ′ ∂π Jµ ′ ∂φ I = π Iµ − π Jµ ′ ∂φ J ′ ∂φ I . Comparing the coefficients, we encounter the transformation rules π Iµ = ∂F µ 1 ∂φ I , π Iµ ′ = − ∂F µ 1 ∂φ I ′ . As expected, the rules obtained previously in Eq. (16) are recovered. The same result follows if we differentiate F µ 1 with respect to φ I ′ . Poisson brackets, Lagrange brackets For a system with given Hamiltonian density H(φ, π π π, x x x), and for two differentiable functions f (φ, π π π, x x x), g(φ, π π π, x x x) of the fields φ I , π Iµ and the independent variables x µ , we define the µ-th component of the Poisson bracket of f and g as follows f, g φ,π π π µ = ∂f ∂φ I ∂g ∂π Iµ − ∂f ∂π Iµ ∂g ∂φ I .(29) With this definition, the four Poisson brackets [f, g] φ,π π π µ constitute the components of a dual 4-vector, i.e., a 1-form. Obviously, the Poisson bracket (29) satisfies the following algebraic rules f, g φ,π π π µ = − g, f φ,π π π µ cf, g φ,π π π µ = c f, g φ,π π π µ , c ∈ ℝ f, g φ,π π π µ + h, g φ,π π π µ = f + h, g φ,π π π µ The Leibniz rule is obtained from (29) via f, gh φ,π π π µ = ∂f ∂φ I ∂ ∂π Iµ (gh) − ∂f ∂π Iµ ∂ ∂φ I (gh) = ∂f ∂φ I h ∂g ∂π Iµ + g ∂h ∂π Iµ − ∂f ∂π Iµ g ∂h ∂φ I + h ∂g ∂φ I = h ∂f ∂φ I ∂g ∂π Iµ − ∂f ∂π Iµ ∂g ∂φ I + g ∂f ∂φ I ∂h ∂π Iµ − ∂f ∂π Iµ ∂h ∂φ I = h f, g φ,π π π µ + g f, h φ,π π π µ For an arbitrary differentiable function f (φ I , π π π I , x x x) of the field variables, we can, in particular, set up the Poisson brackets with the canonical fields φ I , and π π π I .
As the individual field variables φ I and π Iµ are independent by assumption, we immediately get φ I , f φ,π π π µ = ∂φ I ∂φ J ∂f ∂π Jµ − ∂φ I ∂π Jµ ∂f ∂φ J = ∂f ∂π Jµ δ J I = ∂f ∂π Iµ π Iν , f φ,π π π µ = ∂π Iν ∂φ J ∂f ∂π Jµ − ∂π Iν ∂π Jµ ∂f ∂φ J = −δ I J δ ν µ ∂f ∂φ J = −δ ν µ ∂f ∂φ I . The Poisson bracket of a function f of the field variables with a particular field variable thus corresponds to the derivative of that function f with respect to the conjugate field variable. The fundamental Poisson brackets are constituted by pairing field variables φ I and π Iµ , φ I , φ J φ,π π π µ = ∂φ I ∂φ K ∂φ J ∂π Kµ − ∂φ I ∂π Kµ ∂φ J ∂φ K = 0, φ I , π Jν φ,π π π µ = ∂φ I ∂φ K ∂π Jν ∂π Kµ − ∂φ I ∂π Kµ ∂π Jν ∂φ K = ∂π Jν ∂π Iµ = δ ν µ δ J I ,(30)π Iα , π Jβ φ,π π π µ = ∂π Iα ∂φ K ∂π Jβ ∂π Kµ − ∂π Iα ∂π Kµ ∂π Jβ ∂φ K = 0. Similar to point mechanics, we can define the Lagrange brackets as the dual counterparts of the Poisson brackets. In local description, we define the components of a 4-vector of Lagrange brackets {f, g} φ,π π π µ of two differentiable functions f, g by {f, g} φ,π π π µ = ∂φ I ∂f ∂π Iµ ∂g − ∂π Iµ ∂f ∂φ I ∂g .(31) The fundamental Lagrange bracket then emerge as φ I , φ J φ,π π π µ = ∂φ K ∂φ I ∂π Kµ ∂φ J − ∂π Kµ ∂φ I ∂φ K ∂φ J = 0, φ I , π Jν φ,π π π µ = ∂φ K ∂φ I ∂π Kµ ∂π Jν − ∂π Kµ ∂φ I ∂φ K ∂π Jν = ∂π Iµ ∂π Jν = δ µ ν δ I J ,(32)π Iα , π Jβ φ,π π π µ = ∂φ K ∂π Iα ∂π Kµ ∂π Jβ − ∂π Kµ ∂π Iα ∂φ K ∂π Jβ = 0. In the next section, we shall prove that both the Poisson brackets as well as the Lagrange brackets are invariant under canonical transformations of the fields φ I , π π π I . Canonical invariance of Poisson and Lagrange brackets In the first instance, we will show that the fundamental Poisson brackets are invariant under canonical transformations, hence that the relations (30) equally apply for canonically transformed fields φ I ′ and π π π I ′ . 
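The definition (29) and the fundamental brackets (30) can be checked mechanically for a single field (index I suppressed). The following SymPy sketch (our addition) also verifies the Leibniz rule on sample polynomial functions:

```python
import sympy as sp

phi = sp.symbols('phi')                # single field, index I suppressed
pi = sp.symbols('pi0:4')               # conjugate momenta pi^mu

def pbracket(f, g, mu):
    """mu-th component of the Poisson bracket, Eq. (29), for one field."""
    return sp.diff(f, phi)*sp.diff(g, pi[mu]) - sp.diff(f, pi[mu])*sp.diff(g, phi)

# Fundamental brackets, Eq. (30)
assert all(pbracket(phi, phi, m) == 0 for m in range(4))
assert all(pbracket(pi[al], pi[be], m) == 0
           for al in range(4) for be in range(4) for m in range(4))
assert all(pbracket(phi, pi[nu], m) == (1 if nu == m else 0)
           for nu in range(4) for m in range(4))

# Leibniz rule on sample functions
f, g, h = phi**2*pi[0], pi[1]*phi, pi[2] + phi**3
leibniz_ok = all(sp.simplify(pbracket(f, g*h, m)
                             - h*pbracket(f, g, m) - g*pbracket(f, h, m)) == 0
                 for m in range(4))
```

The bracket of φ with π ν reproduces the Kronecker symbol δ ν µ of Eq. (30), as expected.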
Making use of the symmetry relations (20), (23), (27), and (28), we get φ I ′ , φ J ′ φ,π π π µ = ∂φ I ′ ∂φ K ∂φ J ′ ∂π Kµ − ∂φ I ′ ∂π Kµ ∂φ J ′ ∂φ K = ∂φ I ′ ∂φ K ∂φ J ′ ∂π Kµ − ∂φ I ′ ∂π Kν ∂φ J ′ ∂φ K δ ν µ = − ∂φ I ′ ∂φ K ∂φ K ∂π Jµ ′ − ∂φ I ′ ∂π Kν ∂π Kν ∂π Jµ ′ = − ∂φ I ′ ∂π Jµ ′ = 0 = φ I , φ J φ,π π π µ (33) φ I ′ , π Jν ′ φ,π π π µ = ∂φ I ′ ∂φ K ∂π Jν ′ ∂π Kµ − ∂φ I ′ ∂π Kµ ∂π Jν ′ ∂φ K = ∂φ I ′ ∂φ K δ α µ ∂π Jν ′ ∂π Kα − ∂φ I ′ ∂π Kµ ∂π Jν ′ ∂φ K = ∂π Kα ∂π Iµ ′ ∂π Jν ′ ∂π Kα + ∂φ K ∂π Iµ ′ ∂π Jν ′ ∂φ K = ∂π Jν ′ ∂π Iµ ′ = δ ν µ δ J I = φ I , π Jν φ,π π π µ (34) π Iα ′ , π Jβ ′ φ,π π π µ = ∂π Iα ′ ∂φ K ∂π Jβ ′ ∂π Kµ − ∂π Iα ′ ∂π Kµ ∂π Jβ ′ ∂φ K = ∂π Iα ′ ∂φ K ∂φ K ∂φ J ′ δ β µ − ∂π Iα ′ ∂π Kγ ∂π Jβ ′ ∂φ K δ γ µ = ∂π Iα ′ ∂φ K ∂φ K ∂φ J ′ + ∂π Iα ′ ∂π Kγ ∂π Kγ ∂φ J ′ δ β µ = ∂π Iα ′ ∂φ J ′ δ β µ = 0 = π Iα , π Jβ φ,π π π µ(35) The Poisson bracket of two arbitrary differentiable functions f (φ, π π π, x x x) and g(φ, π π π, x x x), as defined by Eq. (29), can now be expanded in terms of transformed fields φ I ′ and π π π I ′ . For a general transformation (φ, π π π) → (φ ′ , π π π ′ ), we have f, g φ,π π π µ = ∂f ∂φ K ∂g ∂π Kµ − ∂f ∂π Kµ ∂g ∂φ K = ∂f ∂φ I ′ ∂φ I ′ ∂φ K + ∂f ∂π Iα ′ ∂π Iα ′ ∂φ K ∂g ∂φ J ′ ∂φ J ′ ∂π Kµ + ∂g ∂π Jβ ′ ∂π Jβ ′ ∂π Kµ − ∂f ∂φ I ′ ∂φ I ′ ∂π Kµ + ∂f ∂π Iα ′ ∂π Iα ′ ∂π Kµ ∂g ∂φ J ′ ∂φ J ′ ∂φ K + ∂g ∂π Jβ ′ ∂π Jβ ′ ∂φ K After working out the multiplications, we can recollect all products so as to form fundamental Poisson brackets f, g φ,π π π µ = ∂f ∂φ I ′ ∂g ∂φ J ′ φ I ′ , φ J ′ φ,π π π µ + ∂f ∂π Iα ′ ∂g ∂π Jβ ′ π Iα ′ , π Jβ ′ φ,π π π µ + ∂f ∂φ I ′ ∂g ∂π Jα ′ − ∂f ∂π Jα ′ ∂g ∂φ I ′ φ I ′ , π Jα ′ φ,π π π µ . For the special case that the transformation is canonical, the equations (33), (34), and (35) for the fundamental Poisson brackets apply. 
We then get f, g φ,π π π µ = ∂f ∂φ I ′ ∂g ∂π Jα ′ − ∂f ∂π Jα ′ ∂g ∂φ I ′ δ α µ δ J I = f, g φ ′ ,π π π µ ′ .(36) We thus abbreviate in the following the index notation of the Poisson bracket by writing f, g µ ≡ f, g φ,π π π µ , as the brackets do not depend on the underlying set of canonical field variables φ I , π Iµ . The proof of the canonical invariance of the fundamental Lagrange brackets is based on the symmetry relations (17), (20), (23), and (26). Explicitly, we have The Lagrange bracket (31) of two arbitrary differentiable functions f (φ, π π π, x x x) and g(φ, π π π, x x x) can now be expressed in terms of transformed fields φ I ′ , π π π I ′ f, g φ I ′ , φ J ′ φ,π π π µ = ∂φ K ∂φ I ′ ∂π Kµ ∂φ J ′ − ∂π Kµ ∂φ I ′ ∂φ K ∂φ J ′ = − ∂φ K ∂φ I ′ ∂π Jµ ′ ∂φ K − ∂π Kν ∂φ I ′ ∂φ K ∂φ J ′ δ µ ν = − ∂φ K ∂φ I ′ ∂π Jµ ′ ∂φ K − ∂π Kν ∂φ I ′ ∂π Jµ ′ ∂π Kν = − ∂π Jµ ′ ∂φ I ′ = 0 = φ I , φ J φ,π π π µ (37) φ I ′ , π Jν ′ φ,π π π µ = ∂φ K ∂φ I ′ ∂π Kµ ∂π Jν ′ − ∂π Kµ ∂φ I ′ ∂φ K ∂π Jν ′ = ∂φ K ∂φ I ′ δ µ α ∂π Kα ∂π Jν ′ + ∂π Iµ ′ ∂φ K ∂φ K ∂π Jν ′ = ∂π Iµ ′ ∂π Kα ∂π Kα ∂π Jν ′ + ∂π Iµ ′ ∂φ K ∂φ K ∂π Jν ′ = ∂π Iµ ′ ∂π Jν ′ = δ µ ν δ I J = φ I , π Jν φ,π π π µ (38) π Iα ′ , π Jβ ′ φ,π π π µ = ∂φ K ∂π Iα ′ ∂π Kµ ∂π Jβ ′ − ∂π Kµ ∂π Iα ′ ∂φ K ∂π Jβ ′ = ∂φ K ∂π Iα ′ ∂φ J ′ ∂φ K δ µ β − ∂π Kγ ∂π Iα ′ ∂φ K ∂π Jβ ′ δ µ γ = ∂φ K ∂π Iα ′ ∂φ J ′ ∂φ K + ∂π Kγ ∂π Iα ′ ∂φ J ′ ∂π Kγ δ µ β = ∂φ J ′ ∂π Iα ′ δ µ β = 0 = π Iα , π Jβ φ,π π π µ .(39)φ,π π π µ = ∂φ K ∂f ∂π Kµ ∂g − ∂π Kµ ∂f ∂φ K ∂g = ∂φ K ∂φ I ′ ∂φ I ′ ∂f + ∂φ K ∂π Iα ′ ∂π Iα ′ ∂f ∂π Kµ ∂φ J ′ ∂φ J ′ ∂g + ∂π Kµ ∂π Jβ ′ ∂π Jβ ′ ∂g − ∂π Kµ ∂φ I ′ ∂φ I ′ ∂f + ∂π Kµ ∂π Iα ′ ∂π Iα ′ ∂f ∂φ K ∂φ J ′ ∂φ J ′ ∂g + ∂φ K ∂π Jβ ′ ∂π Jβ ′ ∂g . Multiplication and regathering the terms to form fundamental Lagrange brackets yields f, g φ,π π π µ = ∂φ I ′ ∂f ∂φ J ′ ∂g φ I ′ , φ J ′ φ,π π π µ + ∂π Iα ′ ∂f ∂π Jβ ′ ∂g π Iα ′ , π Jβ ′ φ,π π π µ + ∂φ I ′ ∂f ∂π Jβ ′ ∂g φ I ′ , π Jβ ′ φ,π π π µ − ∂π Iα ′ ∂f ∂φ J ′ ∂g φ J ′ , π Iα ′ φ,π π π µ . 
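Equation (36) can be illustrated for the canonical scaling φ ′ = a φ, π µ = a π ′µ (generated, for instance, by F µ 2 = a φ π ′µ ). The sketch below (our example functions, not part of the text) confirms that the Poisson bracket takes the same value whether it is evaluated in the original or in the transformed field variables:

```python
import sympy as sp

a = sp.symbols('a', nonzero=True)
phi, phip = sp.symbols('phi phi_p')
pi  = sp.symbols('pi0:4')
pip = sp.symbols('pi_p0:4')

def pb(f, g, q, ps, mu):
    """Poisson bracket (29) w.r.t. the field q and momenta ps."""
    return sp.diff(f, q)*sp.diff(g, ps[mu]) - sp.diff(f, ps[mu])*sp.diff(g, q)

# Canonical scaling: phi' = a phi, pi^mu = a pi'^mu
subs = {phi: phip/a, **{pi[m]: a*pip[m] for m in range(4)}}

f = phi**3 + phi*pi[0]*pi[1]
g = pi[2]**2 + phi**2*pi[3]

invariant = all(
    sp.simplify(pb(f, g, phi, pi, m).subs(subs)
                - pb(f.subs(subs), g.subs(subs), phip, pip, m)) == 0
    for m in range(4))
```

The factors 1/a and a from the chain rule cancel in each bracket component, in line with the general proof leading to Eq. (36).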
For canonical transformations, we can make use of the relations (37), (38), and (39) for the fundamental Lagrange brackets. We thus obtain f, g φ,π π π µ = ∂φ I ′ ∂f ∂π Jβ ′ ∂g δ I J δ µ β − ∂π Iα ′ ∂f ∂φ J ′ ∂g δ J I δ µ α = ∂φ I ′ ∂f ∂π Iµ ′ ∂g − ∂π Iµ ′ ∂f ∂φ I ′ ∂g = f, g φ ′ ,π π π µ ′ . The notation of the Lagrange brackets (31) can thus be simplified as well. In the following, we denote these brackets as f, g µ since their value does not depend on the particular set of canonical field variables φ I , π Iµ . Liouville's theorem of covariant Hamiltonian field theory For general transformations (φ, π π π) → (φ ′ , π π π ′ ) of the scalar fields φ I and the pertaining conjugate vector fields π π π I , the transformation of the 2-form ω µ = dφ I ∧ dπ Iµ is determined by dφ I ∧ dπ Iµ = ∂φ I ∂φ J ′ dφ J ′ + ∂φ I ∂π Jα ′ dπ Jα ′ ∧ ∂π Iµ ∂φ K ′ dφ K ′ + ∂π Iµ ∂π Kβ ′ dπ Kβ ′ = ∂φ I ∂φ J ′ ∂π Iµ ∂φ K ′ dφ J ′ ∧ dφ K ′ + ∂φ I ∂π Jα ′ ∂π Iµ ∂π Kβ ′ dπ Jα ′ ∧ dπ Kβ ′ + ∂φ I ∂φ J ′ ∂π Iµ ∂π Kα ′ − ∂φ I ∂π Kα ′ ∂π Iµ ∂φ J ′ dφ J ′ ∧ dπ Kα ′ = 1 2 ∂φ I ∂φ J ′ ∂π Iµ ∂φ K ′ − ∂φ I ∂φ K ′ ∂π Iµ ∂φ J ′ dφ J ′ ∧ dφ K ′ + 1 2 ∂φ I ∂π Jα ′ ∂π Iµ ∂π Kβ ′ − ∂φ I ∂π Kβ ′ ∂π Iµ ∂π Jα ′ dπ Jα ′ ∧ dπ Kβ ′ + ∂φ I ∂φ J ′ ∂π Iµ ∂π Kα ′ − ∂φ I ∂π Kα ′ ∂π Iµ ∂φ J ′ dφ J ′ ∧ dπ Kα ′ . The terms in parentheses can be expressed as Lagrange brackets dφ I ∧ dπ Iµ = 1 2 {φ J ′ , φ K ′ } µ dφ J ′ ∧ dφ K ′ + 1 2 {π Jα ′ , π Kβ ′ } µ dπ Jα ′ ∧ dπ Kβ ′ + {φ J ′ , π Kα ′ } µ dφ J ′ ∧ dπ Kα ′ . If the transformation of the fields is canonical, then we can apply the transformation rules for the fundamental Lagrange brackets of Eqs. (37), (38), and (39). Then, the transformation of the 2-form ω µ simplifies to ω µ ≡ dφ I ∧ dπ Iµ = δ J K δ µ α dφ J ′ ∧ dπ Kα ′ = dφ I ′ ∧ dπ Iµ ′ ≡ ω µ ′ .(40) The 2-forms ω µ are thus invariant under canonical transformations. We may thus refer to the ω µ as canonical 2-forms. 
Jacobi's identity and Poisson's theorem in canonical field theory In order to derive the canonical field theory analog of Jacobi's identity of point mechanics, we let f (φ, π π π, x x x), g(φ, π π π, x x x), and h(φ, π π π, x x x) denote arbitrary differentiable functions of the canonical fields. The sum of the three cyclically permuted nested Poisson brackets is denoted by a µν , a µν = f, g, h µ ν + h, f, g µ ν + g, h, f µ ν . (41) We will now show that the a µν are the components of an anti-symmetric (0, 2) tensor, hence that a µν + a νµ = 0.(42) Writing Eq. (41) explicitly, we get a sum of 24 terms, each of them consisting of a triple product of two first-order derivatives and one second-order derivative of the functions f , g, and h: a µν = ∂f ∂φ J ∂ ∂π Jν ∂g ∂φ I ∂h ∂π Iµ − ∂g ∂π Iµ ∂h ∂φ I − ∂f ∂π Jν ∂ ∂φ J ∂g ∂φ I ∂h ∂π Iµ − ∂g ∂π Iµ ∂h ∂φ I + ∂h ∂φ J ∂ ∂π Jν ∂f ∂φ I ∂g ∂π Iµ − ∂f ∂π Iµ ∂g ∂φ I − ∂h ∂π Jν ∂ ∂φ J ∂f ∂φ I ∂g ∂π Iµ − ∂f ∂π Iµ ∂g ∂φ I + ∂g ∂φ J ∂ ∂π Jν ∂h ∂φ I ∂f ∂π Iµ − ∂h ∂π Iµ ∂f ∂φ I − ∂g ∂π Jν ∂ ∂φ J ∂h ∂φ I ∂f ∂π Iµ − ∂h ∂π Iµ ∂f ∂φ I . The proof can be simplified making use of the fact that the terms of a µν from Eq. (41) emerge as cyclic permutations of the functions f , g, and h. With regard to the explicit form of Eq. (41) from above it suffices to show that Eq. (42) is fulfilled for all terms containing second derivatives of, for instance, f , a µν = ∂h ∂φ J ∂g ∂π Iµ ∂ 2 f ∂π Jν ∂φ I − ∂h ∂φ J ∂g ∂φ I ∂ 2 f ∂π Jν ∂π Iµ − ∂h ∂π Jν ∂g ∂π Iµ ∂ 2 f ∂φ J ∂φ I + ∂h ∂π Jν ∂g ∂φ I ∂ 2 f ∂φ J ∂π Iµ + ∂g ∂φ J ∂h ∂φ I ∂ 2 f ∂π Jν ∂π Iµ − ∂g ∂φ J ∂h ∂π Iµ ∂ 2 f ∂π Jν ∂φ I − ∂g ∂π Jν ∂h ∂φ I ∂ 2 f ∂φ J ∂π Iµ + ∂g ∂π Jν ∂h ∂π Iµ ∂ 2 f ∂φ J ∂φ I + . . .(43) Resorting and interchanging the sequence of differentiations yields a µν = − ∂h ∂φ I ∂g ∂π Jν ∂ 2 f ∂π Iµ ∂φ J + ∂h ∂φ I ∂g ∂φ J ∂ 2 f ∂π Iµ ∂π Jν + ∂h ∂π Iµ ∂g ∂π Jν ∂ 2 f ∂φ I ∂φ J − ∂h ∂π Iµ ∂g ∂φ J ∂ 2 f ∂φ I ∂π Jν − ∂g ∂φ I ∂h ∂φ J ∂ 2 f ∂π Iµ ∂π Jν + ∂g ∂φ I ∂h ∂π Jν ∂ 2 f ∂π Iµ ∂φ J + ∂g ∂π Iµ ∂h ∂φ J ∂ 2 f ∂φ I ∂π Jν − ∂g ∂π Iµ ∂h ∂π Jν ∂ 2 f ∂φ I ∂φ J + . .
.(44) Mutually renaming the formal summation indices I and J, the right hand sides of Eqs. (43) and (44) show that the terms containing second derivatives of f merely change their sign under an interchange of the indices µ and ν. As a µν is composed of cyclic permutations of f , g, and h, the same conclusion holds for the terms containing second derivatives of g and h, which proves Eq. (42). Poisson's theorem in the realm of canonical field theory is based on the identity ∂ ∂x ν f, g µ = ∂f ∂x ν , g µ + f, ∂g ∂x ν µ .(45) In contrast to point mechanics, this identity is most easily proved directly, i.e., without referring to the Jacobi identity (42). From the definition (29) of the Poisson brackets, we conclude for two arbitrary differentiable functions f (φ, π π π, x x x) and g(φ, π π π, x x x) ∂ ∂x ν f, g µ = ∂ ∂x ν ∂f ∂φ I ∂g ∂π Iµ − ∂f ∂π Iµ ∂g ∂φ I = ∂g ∂π Iµ ∂ ∂x ν ∂f ∂φ I + ∂f ∂φ I ∂ ∂x ν ∂g ∂π Iµ − ∂g ∂φ I ∂ ∂x ν ∂f ∂π Iµ − ∂f ∂π Iµ ∂ ∂x ν ∂g ∂φ I = ∂g ∂π Iµ ∂ 2 f ∂φ I ∂φ J ∂φ J ∂x ν + ∂ 2 f ∂φ I ∂π Jα ∂π Jα ∂x ν + ∂ 2 f ∂φ I ∂x ν expl + ∂f ∂φ I ∂ 2 g ∂π Iµ ∂φ J ∂φ J ∂x ν + ∂ 2 g ∂π Iµ ∂π Jα ∂π Jα ∂x ν + ∂ 2 g ∂π Iµ ∂x ν expl − ∂g ∂φ I ∂ 2 f ∂π Iµ ∂φ J ∂φ J ∂x ν + ∂ 2 f ∂π Iµ ∂π Jα ∂π Jα ∂x ν + ∂ 2 f ∂π Iµ ∂x ν expl − ∂f ∂π Iµ ∂ 2 g ∂φ I ∂φ J ∂φ J ∂x ν + ∂ 2 g ∂φ I ∂π Jα ∂π Jα ∂x ν + ∂ 2 g ∂φ I ∂x ν expl = ∂g ∂π Iµ ∂ ∂φ I ∂f ∂φ J ∂φ J ∂x ν + ∂f ∂π Jα ∂π Jα ∂x ν + ∂f ∂x ν expl − ∂g ∂φ I ∂ ∂π Iµ ∂f ∂φ J ∂φ J ∂x ν + ∂f ∂π Jα ∂π Jα ∂x ν + ∂f ∂x ν expl + ∂f ∂φ I ∂ ∂π Iµ ∂g ∂φ J ∂φ J ∂x ν + ∂g ∂π Jα ∂π Jα ∂x ν + ∂g ∂x ν expl − ∂f ∂π Iµ ∂ ∂φ I ∂g ∂φ J ∂φ J ∂x ν + ∂g ∂π Jα ∂π Jα ∂x ν + ∂g ∂x ν expl = ∂g ∂π Iµ ∂ ∂φ I ∂f ∂x ν − ∂g ∂φ I ∂ ∂π Iµ ∂f ∂x ν + ∂f ∂φ I ∂ ∂π Iµ ∂g ∂x ν − ∂f ∂π Iµ ∂ ∂φ I ∂g ∂x ν = ∂f ∂x ν , g µ + f, ∂g ∂x ν µ . Provided that both the first derivative ∂/∂x ν as well as the two second derivatives ∂ 2 /∂φ I ∂x ν and ∂ 2 /∂π Iµ ∂x ν vanish for both functions f and g, then the first derivative with respect to x ν of the Poisson bracket [f, g] µ also vanishes ∂f ∂x ν = 0, ∂g ∂x ν = 0, ∂ 2 f ∂φ I ∂x ν = 0, ∂ 2 f ∂π Iµ ∂x ν = 0 =⇒ ∂ ∂x ν f, g µ = 0. This establishes Poisson's theorem for canonical field theory.
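The anti-symmetry a µν + a νµ = 0 of the sum of cyclically permuted nested brackets, Eqs. (41), (42), can be spot-checked on explicit functions. The sketch below (our sample polynomials; one fixed convention for the inner and outer bracket indices) evaluates all components of a µν for a single field:

```python
import sympy as sp

phi = sp.symbols('phi')
pi = sp.symbols('pi0:4')

def pb(f, g, mu):
    """Poisson bracket component, Eq. (29), single field."""
    return sp.diff(f, phi)*sp.diff(g, pi[mu]) - sp.diff(f, pi[mu])*sp.diff(g, phi)

# Sample functions of the canonical fields
f = phi**2*pi[0] + pi[1]
g = pi[0]**2 + phi*pi[2]
h = pi[0]*pi[1] + phi**3

def a(mu, nu):
    """Sum of cyclically permuted nested brackets, Eq. (41)."""
    return (pb(f, pb(g, h, nu), mu) + pb(h, pb(f, g, nu), mu)
            + pb(g, pb(h, f, nu), mu))

antisym = all(sp.simplify(a(m, n) + a(n, m)) == 0
              for m in range(4) for n in range(4))
```

Note that the individual components a µν are in general non-zero; only the symmetric part vanishes, in contrast to the full Jacobi identity of point mechanics.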
Hamilton-Jacobi equation In the realm of canonical field theory, we can set up the Hamilton-Jacobi equation as follows: we look for a generating function F µ 1 (φ, φ ′ , x x x) of a canonical transformation that maps a given Hamiltonian density H into a transformed density that vanishes identically, H ′ ≡ 0. In the transformed system, all partial derivatives of H ′ thus vanish as well -and hence the derivatives of all fields φ I ′ (x x x), π π π I ′ (x x x) with respect to the system's independent variables x µ , ∂H ′ ∂φ I ′ = 0 = ∂π Iα ′ ∂x α , ∂H ′ ∂π Iµ ′ = 0 = ∂φ I ′ ∂x µ . According to the transformation rules (16) that arise from a generating function of type S S S ≡ F F F 1 (φ, φ ′ , x x x), this means for a given Hamiltonian density H H(φ, π π π, x x x) + ∂S α ∂x α expl = 0. In conjunction with the transformation rule π Iµ = ∂S µ /∂φ I , we may subsequently set up the Hamilton-Jacobi equation as a partial differential equation for the 4vector function S S S H φ, ∂S S S ∂φ , x x x + ∂S α ∂x α expl = 0. This equation illustrates that the generating function S S S defines exactly that particular canonical transformation which maps the space-time state of the system into its fixed initial state φ I ′ = φ I (0 0 0) = const., π Iµ ′ = π Iµ (0 0 0) = const. The inverse transformation then defines the mapping of the system's initial state into its actual state in space-time. As a result of the fact that H ′ as well as all ∂φ I ′ /∂x µ vanish, the divergence of S S S(φ, φ ′ , x x x) simplifies to We consider the scalar field φ(x, t) whose Lagrangian density L is given by ∂S α ∂x α = ∂S α ∂φ I ∂φ I ∂x α + ∂S α ∂x α expl = π Iα ∂φ I ∂x α − H = L.L(φ, ∂ t φ, ∂ x φ) = 1 2 (∂ t φ) 2 − v 2 (∂ x φ) 2 + λ φ 2 − 1 2 .(46) Herein, v and λ are supposed to denote constant quantities. The particular Euler-Lagrange equation for this Lagrangian density simplifies the general form of Eq. (2) to ∂ ∂t ∂L ∂(∂ t φ) + ∂ ∂x ∂L ∂(∂ x φ) − ∂L ∂φ = 0. 
The resulting field equation is ∂ 2 φ ∂t 2 − v 2 ∂ 2 φ ∂x 2 − 4λ φ φ 2 − 1 = 0.(47) In order to derive the equivalent Hamiltonian representation, we first define the conjugate momentum fields from L π t (x, t) = ∂L ∂(∂ t φ) = ∂φ ∂t , π x (x, t) = ∂L ∂(∂ x φ) = −v 2 ∂φ ∂x . The Hamiltonian density H now follows as the Legendre transform of the Lagrangian density L H(φ, π t , π x ) = π t ∂φ ∂t + π x ∂φ ∂x − L(φ, ∂ t φ, ∂ x φ). The Ginzburg-Landau Hamiltonian density H is thus given by H(φ, π t , π x ) = 1 2 π 2 t − 1 v 2 π 2 x − λ φ 2 − 1 2 .(48) The canonical field equations for the density H of Eq. (48) are ∂H ∂π t = ∂φ ∂t , ∂H ∂π x = ∂φ ∂x , ∂H ∂φ = − ∂π t ∂t − ∂π x ∂x , from which we derive the following set of coupled first order equations π t = ∂φ ∂t , π x = −v 2 ∂φ ∂x , ∂π t ∂t + ∂π x ∂x − 4λ φ φ 2 − 1 = 0. As usual, the canonical field equations for the scalar field φ(x, t) just reproduce the definition of the momentum fields π t and π x from the Lagrangian density L. By inserting π t and π x into the second field equation the coupled set of first order field equations is converted into a single second order equation for φ(x, t): ∂ 2 φ ∂t 2 − v 2 ∂ 2 φ ∂x 2 − 4λ φ φ 2 − 1 = 0, which coincides with the Euler-Lagrange equation (47). 4.2. "Natural" Hamiltonian density A general Hamiltonian system with a quadratic momentum dependence is often referred to as a "natural" system, H = 1 2 π α π α + W (φ, x x x) . We note that this Hamiltonian density H resembles the Hamiltonian function H of a conservative Hamiltonian system of classical particle mechanics, which is given by H = T + V as the sum of kinetic energy T and potential energy V . The first set of canonical field equations then follows as ∂φ ∂x µ = ∂H ∂π µ = 1 2 π µ + 1 2 π α ∂π α ∂π µ = 1 2 π µ + 1 2 π α ∂π α ∂π µ = 1 2 π µ + 1 2 π α δ α µ = π µ .(49) The second canonical field equation writes for the present Hamiltonian density ∂H ∂φ = ∂W ∂φ = − ∂π α ∂x α = − ∂π α ∂x α .
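The Legendre transformation leading from the Ginzburg-Landau Lagrangian density (46) to the Hamiltonian density (48) can be reproduced symbolically. In the following sketch (our addition) the field derivatives and momenta are represented by plain symbols:

```python
import sympy as sp

phi, ft, fx, pt, px, v, lam = sp.symbols('phi phi_t phi_x pi_t pi_x v lambda_')

# Ginzburg-Landau Lagrangian density, Eq. (46)
L = sp.Rational(1, 2)*(ft**2 - v**2*fx**2) + lam*(phi**2 - 1)**2

# Conjugate momenta
pt_def = sp.diff(L, ft)        # pi_t = d_t phi
px_def = sp.diff(L, fx)        # pi_x = -v^2 d_x phi

# Invert and Legendre-transform: H = pi_t d_t phi + pi_x d_x phi - L
sol = sp.solve([sp.Eq(pt, pt_def), sp.Eq(px, px_def)], [ft, fx], dict=True)[0]
H = sp.expand((pt*ft + px*fx - L).subs(sol))

# Hamiltonian density of Eq. (48)
H_expected = sp.Rational(1, 2)*(pt**2 - px**2/v**2) - lam*(phi**2 - 1)**2
```

The computed H agrees term by term with Eq. (48).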
Inserting the momentum fields π µ , π µ , we again end up with a second order equation for the scalar field φ(x x x) ∂ ∂x α ∂ ∂x α φ(x x x) + ∂W (φ, x x x) ∂φ = 0. For a "harmonic" potential W (φ, x x x) = 1 2 V (x x x) φ 2 , we immediately obtain the Klein-Gordon equation ∂ ∂x α ∂ ∂x α + V (x x x) φ(x x x) = 0.(50) Equation (50) is thus the field equation pertaining to the Klein-Gordon Hamiltonian density H KG = 1 2 π α π α + 1 2 V (x x x) φ 2 . In this regard, the Klein-Gordon equation is nothing else than the field theory analog of the equation of motion of the harmonic oscillator of point mechanics. For the constant potential factor V (x x x) = (mc/ℏ) 2 =⇒ H KG = 1 2 π α π α + 1 2 (mc/ℏ) 2 φ 2 ,(51) we obtain the particular Klein-Gordon equation which describes in relativistic quantum field theory the dynamics of a free particle of zero spin and mass m ∂ 2 ∂x α ∂x α + (mc/ℏ) 2 φ(x x x) = 0. Regarding the first canonical field equation (49) that follows from the "natural" Hamiltonian density, we observe that the momentum fields π µ (x x x), π µ (x x x) coincide with the partial derivatives of the scalar field φ(x x x). This is reminiscent of the method of "canonical quantization", where the transition from classical mechanics is made by replacing the canonical momenta p µ with corresponding operators p̂ µ that are supposed to act on a complex-valued "wave function" φ(x x x). In position representation, these operators are p̂ µ = iℏ ∂ ∂x µ , p̂ µ = iℏ ∂ ∂x µ .(52) Solved for the conjugate momentum fields π µ of covariant field theory, this yields the connection of the π µ to the operator notation of quantum mechanics for all "natural" Hamiltonian densities π µ (x x x) ≡ −(i/ℏ) p̂ µ φ(x x x), π µ (x x x) ≡ −(i/ℏ) p̂ µ φ(x x x). In the usual quantum mechanics' formulation, the Klein-Gordon equation is derived by replacing according to Eq.
(52) the physical quantities momentum and energy in the relativistic energy-momentum relation p µ p µ = m 2 c 2 by the corresponding operators p µ → p̂ µ and letting the resulting operator act on a wave function φ(x x x). Obviously, this yields exactly the same field equation that we obtain in covariant field theory for the "harmonic" Hamiltonian density from Eq. (51). Klein-Gordon Hamiltonian density for complex fields We first consider the Klein-Gordon Lagrangian density L KG for a complex scalar field φ (see, for instance, Ref. 1 ): L KG (φ, φ * , ∂ µ φ, ∂ µ φ * ) = ( ℏc ∂ α φ * )( ℏc ∂ α φ) − (mc 2 φ * )(mc 2 φ).(53) Herein φ * denotes the complex conjugate field of φ. Both quantities are to be treated as independent. The Euler-Lagrange equations (2) for φ and φ * follow from this Lagrangian density as ∂ 2 ∂x α ∂x α φ * = − (mc/ℏ) 2 φ * , ∂ 2 ∂x α ∂x α φ = − (mc/ℏ) 2 φ.(54) As a prerequisite for deriving the corresponding Hamiltonian density H KG we must first define from L KG the conjugate momentum fields, π µ = ∂L KG ∂(∂ µ φ) = ℏ 2 c 2 ∂φ * ∂x µ , π µ * = ∂L KG ∂(∂ µ φ * ) = ℏ 2 c 2 ∂φ ∂x µ . The Hamiltonian density H then follows again as the Legendre transform of the Lagrangian density H(π I µ , π Iµ * , φ I , φ * I ) = π I α ∂φ I ∂x α + π Iα * ∂φ * I ∂x α − L. The Klein-Gordon Hamiltonian density H KG is thus given by H KG (π µ , π µ * , φ, φ * ) = 1 ℏ 2 c 2 π α π α * + (mc 2 ) 2 φ * φ.(55) For the Hamiltonian density (55), the canonical field equations (5) provide the following set of coupled first order partial differential equations ∂H KG ∂π µ = ∂φ ∂x µ = 1 ℏ 2 c 2 π µ * , ∂H KG ∂π µ * = ∂φ * ∂x µ = 1 ℏ 2 c 2 π µ ∂H KG ∂φ = − ∂π α ∂x α = (mc 2 ) 2 φ * , ∂H KG ∂φ * = − ∂π α * ∂x α = (mc 2 ) 2 φ. Again, the canonical field equations for the scalar fields φ and φ * coincide with the definitions of the momentum fields π µ and π µ * from the Lagrangian density L KG . Eliminating the π µ , π µ * from the canonical field equations then yields the Euler-Lagrange equations of Eq. (54).
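As a minimal cross-check of the Klein-Gordon dynamics, the sketch below (our addition; units ℏ = c = 1 and one spatial dimension are our simplifying assumptions) verifies that a plane wave obeying the relativistic dispersion relation solves the free Klein-Gordon equation:

```python
import sympy as sp

t, x, k, m = sp.symbols('t x k m', real=True, positive=True)

# Free Klein-Gordon equation in 1+1 dimensions (hbar = c = 1):
#   d_t^2 phi - d_x^2 phi + m^2 phi = 0
omega = sp.sqrt(k**2 + m**2)          # relativistic dispersion relation
phi = sp.cos(k*x - omega*t)

kg = sp.simplify(sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2*phi)
```

The residual is (−ω 2 + k 2 + m 2 ) φ, which vanishes identically on the mass shell.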
For complex fields, the energy-momentum tensor in the Lagrangian formalism is defined analogously to the real case of Eq. (6) T ν µ = ∂L ∂(∂ ν φ I ) ∂φ I ∂x µ + ∂L ∂(∂ ν φ * I ) ∂φ * I ∂x µ − δ ν µ L. Expressed by means of the complex Hamiltonian density H, this means T ν µ = δ ν µ H + π Iν ∂φ I ∂x µ − δ ν µ π I α ∂φ I ∂x α + π Iν * ∂φ * I ∂x µ − δ ν µ π Iα * ∂φ * I ∂x α(56) For the Klein-Gordon Hamiltonian density H KG from Eq. (55), we thus get the particular energy-momentum tensor T ν µ,KG T ν µ,KG = 1 2 c 2 π ν π * µ + π ν * π µ − δ ν µ π α π α * + δ ν µ (mc 2 ) 2 φ * φ. Maxwell's equations as canonical field equations The Lagrangian density L M of the electromagnetic field is given by L M (A A A, ∂A A A, x x x) = − 1 4 f µν f µν − 4π c j µ (x x x) A µ , f µν = ∂A ν ∂x µ − ∂A µ ∂x ν .(57) Herein, the four components A µ of the 4-potential A A A µ now take the place of the fields φ I ≡ φ µ ≡ A µ in the notation used so far. The Lagrangian density (57) thus entails a set of four Euler-Lagrange equations, i.e., an equation for each component A µ . The source vector j j j µ = (cρ, j x , j y , j z ) denotes the 4-vector of electric currents combining the usual current density vector (j x , j y , j z ) of configuration space with the charge density ρ. In this notation, the Euler-Lagrange equations (2) take on the form, ∂ ∂x ν ∂L M ∂(∂ ν A µ ) − ∂L M ∂A µ = 0, µ = 0, . . . , 3.(58) With L M from Eq. (57), we obtain directly ∂f µν ∂x ν + 4π c j µ = 0.(59) This is the tensor form of the inhomogeneous Maxwell equation. In order to formulate the equivalent Hamiltonian description, we first define, according to Eq. 
(3), the tensor field components π µν as the conjugate objects of vector components A µ π µν (x x x) = ∂L M ∂(∂ ν A µ ) .(60) With the particular Lagrangian density (57), this means, in detail, π λα = − 1 4 ∂f µν ∂ (∂ α A λ ) f µν + ∂f µν ∂ (∂ α A λ ) f µν = − 1 4 ∂f µν ∂ (∂ α A λ ) f µν + ∂f µν ∂ (∂ α A λ ) f µν = − 1 2 f µν ∂f µν ∂ (∂ α A λ ) = − 1 2 f µν ∂ ∂ ∂A λ ∂x α ∂A ν ∂x µ − ∂A µ ∂x ν = − 1 2 f µν δ λ ν δ α µ − δ λ µ δ α ν = − 1 2 f αλ − f λα = f λα = ∂A α ∂x λ − ∂A λ ∂x α . The tensor π µν thus coincides with the electromagnetic field tensor f µν , defined in Eq. (57). Corresponding to Eq. (4), we obtain the Hamiltonian density H M as the Legendre-transformed Lagrangian density L M H M (A A A, π π π, x x x) = π µν ∂A µ ∂x ν − L M (A A A, ∂A A A, x x x). The double sum π µν ∂A µ /∂x ν can be expressed in terms of the Lagrangian expression f µν f µν , f µν f µν = ∂A ν ∂x µ − ∂A µ ∂x ν ∂A ν ∂x µ − ∂A µ ∂x ν = 2 ∂A µ ∂x ν ∂A µ ∂x ν − 2 ∂A µ ∂x ν ∂A ν ∂x µ = −2 ∂A µ ∂x ν ∂A ν ∂x µ − ∂A µ ∂x ν = −2π µν ∂A µ ∂x ν .(61) Because of π µν ≡ f µν , the Hamiltonian density H M of the electromagnetic field is then obtained as H M (A A A, π π π, x x x) = − 1 4 π µν π µν + 4π c j µ (x x x) A µ , π µν = ∂A ν ∂x µ − ∂A µ ∂x ν .(62)∂A λ ∂x α = ∂H M ∂π λα = − 1 4 π µν ∂π µν ∂π λα − 1 4 π µν ∂π µν ∂π λα = − 1 4 π µν ∂π µν ∂π λα − 1 4 π µν ∂π µν ∂π λα = − 1 2 π µν ∂π µν ∂π λα = − 1 2 π µν δ µ λ δ ν α = − 1 2 π λα . Interchanging the indices, we likewise get ∂A α ∂x λ = − 1 2 π αλ . Making use of the antisymmetry of the tensor π µν , the two preceding field equations can be combined to yield For the Maxwell Hamiltonian density (62), the second field equation is thus given by ∂A α ∂x λ − ∂A λ ∂x α = π λα .(63)∂π λα ∂x α + 4π c j λ = 0,(64) which agrees, as expected, with the corresponding Euler-Lagrange equation (59) because of π µν = f µν . 
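The contraction identity (61), f µν f µν = −2π µν ∂A µ /∂x ν with π µν = f µν , is a purely algebraic statement about the derivatives ∂A µ /∂x ν and can be verified by brute force. In the following sketch (our addition; symbol names are ours) the 16 derivatives are treated as independent symbols and the indices are raised with the Minkowski metric:

```python
import sympy as sp

# s[mu][nu] stands for dA_mu/dx^nu as 16 independent symbols
s = [[sp.Symbol(f'dA{m}{n}') for n in range(4)] for m in range(4)]
eta = sp.diag(1, -1, -1, -1)          # Minkowski metric

# Field tensor f_{mu nu} = d_mu A_nu - d_nu A_mu
f_low = [[s[n][m] - s[m][n] for n in range(4)] for m in range(4)]
f_up = [[sum(eta[m, a]*eta[n, b]*f_low[a][b]
             for a in range(4) for b in range(4))
         for n in range(4)] for m in range(4)]

lhs = sum(f_low[m][n]*f_up[m][n] for m in range(4) for n in range(4))
# pi^{mu nu} = f^{mu nu}; identity (61)
rhs = -2*sum(f_up[m][n]*s[m][n] for m in range(4) for n in range(4))

residual = sp.expand(lhs - rhs)
```

Expanding the difference yields zero, confirming that the antisymmetrization absorbs the factor −2 exactly as in Eq. (61).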
The Proca Hamiltonian density In relativistic quantum field theory, the dynamics of particles of spin 1 and mass m is derived from the Proca Lagrangian density L P , L P = − 1 4 f µν f µν + 1 2 Ω 2 A µ A µ , f µν = ∂A ν ∂x µ − ∂A µ ∂x ν , Ω = mc .(65) We observe that the kinetic term of L P agrees with that of the Lagrangian density L M of the electromagnetic field of Eq. (57). Therefore, the Euler-Lagrange equations read similar to those of Eq. (59) ∂f µν ∂x ν − Ω 2 A µ = 0.(66) The transition to the corresponding Hamilton description is performed by defining the momentum field tensors Π µν on the basis of the actual Lagrangian L P by Π µν = ∂L P ∂ (∂ ν A µ ) , Π µν = ∂L P ∂ (∂ ν A µ ) . Similar to the preceding section, we conclude Π µν = f µν , Π µν = f µν . With the Legendre transformation H P = Π µν ∂A µ ∂x ν − L P , we obtain the Proca Hamiltonian density by following the path of Eq. (61) H P = − 1 4 Π µν Π µν − 1 2 Ω 2 A µ A µ .(67) The canonical field equations emerge as ∂H P ∂Π µν = ∂A µ ∂x ν − ∂A ν ∂x µ = Π νµ ∂H P ∂A µ = − ∂Π µν ∂x ν = −Ω 2 A µ . By means of eliminating Π µν , this coupled set of pairs of first order equations can be converted into second order equations for the vector field A A A µ , ∂ ∂x ν ∂A µ ∂x ν − ∂A ν ∂x µ + Ω 2 A µ = 0. As expected, this equation coincides with the Euler-Lagrange equation (66). Canonical field equations of a coupled Klein-Gordon-Maxwell system The Lagrangian density L KGM of a complex Klein-Gordon field φ that couples minimally to an electromagnetic 4-vector potential A A A µ is given by L KGM = ∂φ ∂x µ + iqA µ φ ∂φ * ∂x µ − iqA µ φ * + Ω 2 φ * φ − 1 4 f µν f µν .(68) The components f µν of the electromagnetic field tensor are defined in Eq. (57). 
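As a consistency check of the second-order Proca equation (66), one can insert a plane-wave ansatz A µ = ε µ e −ik·x and verify in momentum space that it enforces the Lorenz condition k·ε = 0 and the mass-shell condition k² = Ω². A small symbolic sketch (symbol names are illustrative, not from the text):

```python
import sympy as sp

k0, k1, k2, k3 = sp.symbols('k0 k1 k2 k3', real=True)
e0, e1, e2, e3, Om2 = sp.symbols('e0 e1 e2 e3 Omega2')
g = sp.diag(1, -1, -1, -1)
k = sp.Matrix([k0, k1, k2, k3])        # wave vector k^mu
eps = sp.Matrix([e0, e1, e2, e3])      # polarization vector eps^mu

ksq = (k.T * g * k)[0]                 # k_mu k^mu
kdote = (k.T * g * eps)[0]             # k_mu eps^mu

# Proca equation (66) for A^mu = eps^mu exp(-i k.x):
#   (-k^2 + Omega^2) eps^mu + (k.eps) k^mu = 0
lhs = (-ksq + Om2) * eps + kdote * k

# contracting with k_mu leaves Omega^2 (k.eps): hence k.eps = 0 for Omega != 0
assert sp.simplify((k.T * g * lhs)[0] - Om2 * kdote) == 0

# with k.eps = 0 imposed, the remaining equation is the mass-shell condition k^2 = Omega^2
on_shell = lhs.subs(Om2, ksq).subs(e0, sp.solve(kdote, e0)[0])
assert sp.simplify(on_shell) == sp.zeros(4, 1)
```

This makes explicit how the mass term Ω²A µ removes the gauge freedom present in the Maxwell case.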
The conjugate fields of φ and A A A µ are defined from the Lagrangian L KGM by The corresponding Hamiltonian density H KGM is now obtained as the Legendre transform of L KGM , π ν = ∂L KGM ∂ (∂ ν φ) = ∂φ * ∂x ν − iqA ν φ * π * ν = ∂L KGM ∂ (∂ ν φ * ) = ∂φ ∂x ν + iqA ν φ Π µν = ∂L KGM ∂ (∂ ν A µ ) = f µν .H KGM = Π µν ∂A µ ∂x ν + π µ ∂φ ∂x µ + π * µ ∂φ * ∂x µ − L KGM . To obtain the canonical form of H KGM , all partial derivatives of the fields φ and A A A µ must be replaced by the conjugate fields π µ and Π µν , respectively, H KGM = π * µ π µ + iqA µ π * µ φ * − π µ φ − Ω 2 φ * φ − 1 4 Π µν Π µν .(69) As shown in Sect. 4.4, the derivative of the Hamiltonian density H KGM with respect to the Π µν yields the canonical equation Π νµ = ∂A µ ∂x ν − ∂A ν ∂x µ . From the derivatives of H KGM with respect to the π µ and π * µ , the following canonical field equations arise ∂H KGM ∂π µ = π * µ − iqA µ φ = ∂φ ∂x µ ∂H KGM ∂π * µ = π µ + iqA µ φ * = ∂φ * ∂x µ . The third group of canonical field equations emerges from the derivatives of H KGM with respect to the A µ , and with respect to the φ, φ * as ∂H KGM ∂A µ = iq φ * π µ * − φπ µ = − ∂Π µα ∂x α ∂H KGM ∂φ = −Ω 2 φ * − iqA α π α = − ∂π α ∂x α ∂H KGM ∂φ * = −Ω 2 φ + iqA α π * α = − ∂π α * ∂x α . By eliminating the conjugate fields Π µν and π µ , these field equations can be rewritten as second order partial differential equations, corresponding to those that follow from the Euler-Lagrange equations for the Lagrangian density L KGM ∂f µα ∂x α = J µ ∂ 2 φ * ∂x α ∂x α − Ω 2 + q 2 A α A α φ * − 2iqA α ∂φ * ∂x α − iqφ * ∂A α ∂x α = 0 ∂ 2 φ ∂x α ∂x α − Ω 2 + q 2 A α A α φ + 2iqA α ∂φ ∂x α + iqφ ∂A α ∂x α = 0, with the J µ being defined by J µ = −iq φ * ∂φ ∂x µ + iqA µ φ − φ ∂φ * ∂x µ − iqA µ φ * . The Dirac Hamiltonian density The dynamics of particles of spin 1 2 having mass m is described by the Dirac Lagrangian density L D . Introducing anticommutating 4× 4 matrices γ i , i = 1, . . . 
, 4 and spin 1 2 fields ψ, the Dirac Lagrangian density is given by L D = iψγ µ ∂ µ ψ − mψψ,(70) whereinψ ≡ ψ † γ 0 . In the following we show some fundamental relations among γ matrices: {γ µ , γ ν } ≡ γ µ γ ν + γ ν γ µ = 2g µν γ ν = γ 0 γ ν † γ 0 γ 0 † = γ 0 , γ 0 γ 0 = 1 γ ν γ ν = 4 [γ µ , γ ν ] ≡ γ µ γ ν − γ ν γ µ ≡ −2iσ µν(71) Note that in Eq. (70) the derivative acts on ψ on the right. The Dirac Lagrangian density L D can be symmetrized using the aforementioned relations of γ matrices and by combining the Lagrangian density Eq. (70) with its adjoint, which leads to L D = i 2 ψ γ µ ∂ µ ψ − ∂ µψ γ µ ψ − mψψ.(72) The resulting Euler-Lagrange equations are identical to those derived from Eq. (70), iγ µ ∂ µ ψ − mψ = 0 i∂ µψ γ µ + mψ = 0.(73) Since the Wronskian determinant vanishes, det ∂ 2 L D ∂ (∂ µ ψ) ∂ (∂ ν ψ) = 0,(74) the corresponding Legendre transformation of the Lagrangian density Eq. (72) is irregular. A term including the derivatives ∂ µ ψ and ∂ ν ψ enters the Lagrangian density with a prefactor of dimension mass −1 . As has been shown e.g. by Gasiorowicz 21 , one can construct a divergence-free term of this structure, leading to invariant equations of motion and a regular Legendre transformation. The additional term is given by ∂ µψ σ µν ∂ ν ψ, corresponding to the divergence ofψσ µν ∂ ν ψ: Note that (∂ µ ∂ ν ψ)ψσ µν vanishes, since this term is summed over a symmetric and antisymmetric part. One obtains the equivalent Lagrangian density d dx µ ψ σ µν ∂ ν ψ = ∂ µ + ∂ µψ ∂ ∂ψ + (∂ µ ψ) ∂ ∂ψ + ∂ µ ∂ νψ ∂ ∂ ∂ νψ + (∂ µ ∂ ν ψ) ∂ ∂ (∂ ν ψ) ψ σ µν ∂ ν ψ = ∂ µψ σ µν ∂ ν ψ + (∂ µ ∂ ν ψ)ψσ µν = ∂ µψ σ µν ∂ ν ψL ′ D = i 2 ψ γ µ ∂ µ ψ − ∂ µψ γ µ ψ − iλ∂ µψ σ µν ∂ ν ψ.(75) The canonical momenta follow as π µ = ∂L ′ D ∂ ∂ µψ = − i 2 γ µ ψ − iλσ µν ∂ ν ψ π µ = ∂L ′ D ∂ (∂ µ ψ) = i 2ψ γ µ − iλ∂ νψ σ νµ .(76) In order to Legendre transform the Lagrangian density, it is useful to express ∂ µ ψ and ∂ µψ in terms ofπ µ andπ µ . For the "inversion" of Eq. 
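The γ-matrix relations (71) quoted above are straightforward to verify numerically in an explicit representation. The following sketch uses the standard Dirac representation (my choice; the relations themselves are representation independent):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]        # Pauli matrices
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])              # gamma^0 in the Dirac representation
gam = [g0] + [np.block([[Z2, si], [-si, Z2]]) for si in s]   # gamma^0 .. gamma^3
g = np.diag([1, -1, -1, -1]).astype(complex)      # metric g^{mu nu}
I4 = np.eye(4, dtype=complex)

for m in range(4):
    for n in range(4):
        # {gamma^mu, gamma^nu} = 2 g^{mu nu}
        assert np.allclose(gam[m] @ gam[n] + gam[n] @ gam[m], 2 * g[m, n] * I4)
    # gamma^nu = gamma^0 (gamma^nu)^dagger gamma^0
    assert np.allclose(g0 @ gam[m].conj().T @ g0, gam[m])

# gamma_nu gamma^nu = 4 (index lowered with the metric)
assert np.allclose(sum(g[n, n] * gam[n] @ gam[n] for n in range(4)), 4 * I4)
```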
(76) the matrix τ µν = 2 3 ig µν − 1 3 σ µν is introduced which obeys the following relations: γ µ τ µν = τ νµ γ µ = − i 3 γ ν τ µν σ νλ = σ λν τ νµ = δ λ µ The terms τ νµ π µ andπ µ τ µν lead to ∂ ν ψ = i λ τ νµ π µ + i 6λ γ ν ψ ∂ νψ = i λπ µ τ µν − i 6λψ γ ν .(77) Therefore the Legendre transformation can be fulfilled using H D =π ν ∂ ν ψ + ∂ νψ π ν − L ′ D . The terms arising in the expansion of this equation can be simplified using Eqs. (71), leading to the Dirac Hamiltonian density of the form H D = 1 λ iπ ν τ νµ π µ + i 6π ν γ ν ψ − i 6ψ γ ν π ν + 1 3ψ ψ + mψψ.(78) The canonical equations of motion ∂ µ ψ = ∂H D ∂π µ = i λ τ µν π ν + 1 6 γ µ ψ ∂ µψ = ∂H D ∂π µ = i λ π ν τ νµ − 1 6ψ γ µ correspond to the definition of the canonical momenta, see also Eq. (76) and Eq. (77). The other canonical equations of motion are given by ∂ µ π µ = − ∂H D ∂ψ = 1 3λ i 2 γ ν π ν − ψ − mψ (79) ∂ µπ µ = − ∂H D ∂ψ = − 1 3λ i 2π ν γ ν +ψ − mψ.(80) In order to show the equivalence of these equations of motion to those derived in the formalism of Euler-Lagrange, we express the canonical momenta through ψ and ∂ µ ψ, thereby using Eq. (76). ∂ µ π µ = i 6λ γ ν − i 2 γ ν ψ − iλσ νµ ∂ µ ψ − 1 3λ ψ − mψ = 1 6 γ ν σ νµ ∂ µ ψ − mψ = i 12 [4γ µ − γ ν (2g νµ − γ ν γ µ )] ∂ µ ψ − mψ = i 2 γ µ ∂ µ ψ − mψ ∂ µ π µ = ∂ µ − i 2 γ µ ψ − iλσ µν ∂ ν ψ = − i 2 ∂ µ γ µ ψ − iλ ∂ µ σ µν ∂ ν =0 ψ ⇒ iγ µ ∂ µ ψ − mψ = 0. It should be mentioned that this section is similar to the derivation of the Dirac Hamiltonian density in Ref. 22 . However, the results of this section are worked out here in order to present a consistent and thorough study of covariant Hamiltonian field theory. Hamiltonian density for a SU(2) gauge theory The Lagrangian density L YM of a SU(2) Yang-Mills gauge theory consisting of a complex doublet φ of scalar fields, the coupling constant g and SU(2) gauge fields A µ a (a = 1, 2, 3) is given by (see e.g. Ref. 
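The algebraic properties of the matrix τ µν = (2/3)ig µν − (1/3)σ µν used for the "inversion" (77) can likewise be confirmed numerically. A sketch in the Dirac representation (my choice), checking γ µ τ µν = −(i/3)γ ν and τ µν σ νλ = δ λ µ with σ µν = (i/2)[γ µ , γ ν ]:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, si], [-si, Z2]]) for si in s]   # gamma^0 .. gamma^3
g = [1.0, -1.0, -1.0, -1.0]                       # diagonal Minkowski metric entries
I4 = np.eye(4, dtype=complex)

# sigma^{mu nu} = (i/2) [gamma^mu, gamma^nu], from [gamma^mu, gamma^nu] = -2i sigma^{mu nu}
sig = [[0.5j * (gam[m] @ gam[n] - gam[n] @ gam[m]) for n in range(4)] for m in range(4)]
# tau^{mu nu} = (2/3) i g^{mu nu} - (1/3) sigma^{mu nu}
tau = [[(2 / 3) * 1j * (g[m] if m == n else 0) * I4 - (1 / 3) * sig[m][n]
        for n in range(4)] for m in range(4)]

# gamma_mu tau^{mu nu} = -(i/3) gamma^nu
for n in range(4):
    lhs = sum(g[m] * gam[m] @ tau[m][n] for m in range(4))
    assert np.allclose(lhs, (-1j / 3) * gam[n])

# tau_{mu nu} sigma^{nu lam} = delta^lam_mu (both tau indices lowered with the metric)
for m in range(4):
    for lam in range(4):
        acc = sum(g[m] * g[n] * tau[m][n] @ sig[n][lam] for n in range(4))
        assert np.allclose(acc, (1.0 if m == lam else 0.0) * I4)
```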
23 ) L YM = ∂ µ − ig τ 2 A µ φ † ∂ µ − ig τ 2 A µ φ − V φ † φ − 1 4 f µν a f a µν . (81) This Lagrangian density is invariant under space-time-dependent SU(2) gauge transformations. In the following, the Hamiltonian density corresponding to Eq. (81) shall be derived, and in section 5.9 we will present the generating function of an infinitesimal local SU(2) gauge transformation. The Hamiltonian density will be obtained by a Legendre transformation without restriction to a particular gauge. The resulting Hamiltonian density is therefore different from those given in Ref. 22 , where the gauge A 0 a = 0, a = 1, 2, 3 has been chosen and only gauge fields have been taken into account. Making use of the Levi-Civita tensor ǫ abc , the following relations and definitions for the Hermitian Pauli matrices τ a and the covariant derivative D µ hold: τ = (τ 1 , τ 2 , τ 3 ) τ a 2 , τ b 2 = iǫ abc τ c 2 a, b, c = 1, 2, 3 φ = φ 1 φ 2 A µ = (A µ 1 , A µ 2 , A µ 3 ) D µ = ∂ µ − ig τ 2 A µ f µν a = ∂ µ A ν a − ∂ ν A µ a + gǫ abc A bµ A cν f µν a = −f νµ a (82) The momenta conjugate to φ are given by π µ = ∂L YM ∂ (∂ µ φ) = ∂ µ − ig τ 2 A µ φ † = ∂ µ φ † + igφ † τ 2 A µ π µ † = ∂L YM ∂ (∂ µ φ † ) = ∂ µ − ig τ 2 A µ φ = ∂ µ φ − ig τ 2 A µ φ. (83) Note that π µ has the form of a (1×2) matrix in SU(2) parameter space, while π µ † takes on the form of a 2 × 1 matrix. In analogy to quantum electrodynamics, an Abelian gauge theory, we obtain the conjugate momentum field tensors Π λα d = ∂L YM ∂ ∂ α A d λ = f λα d , (84) as can be shown in the following way: Π λα d = ∂ ∂ ∂ α A d λ − 1 4 (∂ µ A ν a − ∂ ν A µ a ) ∂ µ A a ν − ∂ ν A a µ − 1 4 gǫ abc A bµ A cν ∂ µ A a ν − ∂ ν A a µ − (∂ µ A ν a − ∂ ν A µ a ) gǫ abc A bµ A cν = ∂ ∂ ∂ α A d λ − 1 4 (∂ µ A ν a − ∂ ν A µ a ) ∂ µ A a ν − ∂ ν A a µ − 1 2 gǫ abc A bµ A cν ∂ µ A a ν − ∂ ν A a µ (85) Note that here the term without any ∂ µ A ν contribution has been omitted.
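The SU(2) commutation relation [τ a /2, τ b /2] = iǫ abc τ c /2 listed in (82) can be checked directly with the explicit Pauli matrices (a small sketch; the helper `eps` is my own):

```python
import numpy as np

tau = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]      # Pauli matrices tau_1 .. tau_3

def eps(a, b, c):
    """Levi-Civita symbol eps_abc for indices 0..2."""
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        comm = (tau[a] / 2) @ (tau[b] / 2) - (tau[b] / 2) @ (tau[a] / 2)
        rhs = sum(1j * eps(a, b, c) * tau[c] / 2 for c in range(3))
        assert np.allclose(comm, rhs)
```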
In analogy to the conjugate momentum tensor of electrodynamics, the first line in Eq. (85) can be written as ∂ λ A α d − ∂ α A λ d . Furthermore one obtains Π λα d = ∂ λ A α d − ∂ α A λ d − 1 2 gǫ abc A bµ A cν δ a d δ α µ δ λ ν − δ a d δ α ν δ λ µ = ∂ λ A α d − ∂ α A λ d − 1 2 gǫ dbc A bα A cλ − A bλ A cα = ∂ λ A α d − ∂ α A λ d + 1 2 gǫ dbc A bλ A cα + A cα A bλ = f λα d . The Hamiltonian density H YM follows as the Legendre transform of the Lagrangian density L YM : H YM = π ν ∂ ν φ + ∂ ν φ † π ν † + Π µν a ∂ ν A a µ − L YM Using Eq. (83) and Eq. (84) to express the Lagrangian density in terms of the conjugate momenta yields L YM = π ν π † ν − V φ † φ − 1 4 Π µν a Π a µν . Moreover we can use Eq. (83) to express ∂ µ φ and ∂ µ φ † in terms of fields and conjugate momenta, ∂ µ φ = π † µ + ig τ 2 A µ φ ∂ µ φ † = π µ − igφ † τ 2 A † µ . Because of Π µν a ∂ ν A a µ = ∂ µ A ν a − ∂ ν A µ a + gǫ abc A bµ A cν ∂ ν A a µ = ∂ µ A ν a ∂ ν A a µ − ∂ ν A µ a ∂ ν A a µ + gǫ abc A bµ A cν ∂ ν A a µ −Π µν a ∂ µ A a ν = − ∂ µ A ν a − ∂ ν A µ a + gǫ abc A bµ A cν ∂ µ A a ν = −∂ µ A ν a ∂ µ A a ν + ∂ ν A µ a ∂ µ A a ν − gǫ abc A bµ A cν ∂ µ A a ν = −∂ ν A µ a ∂ ν A a µ + ∂ µ A ν a ∂ ν A a µ − gǫ abc A bν A cµ ∂ ν A a µ = −∂ ν A µ a ∂ ν A a µ + ∂ µ A ν a ∂ ν A a µ + gǫ abc A bµ A cν ∂ ν A a µ Π µν a ∂ ν A a µ − ∂ µ A a ν = 2Π µν a ∂ ν A a µ Π µν a Π a νµ − gǫ abc A bν A cµ = 2Π µν a ∂ ν A a µ one obtains Π µν a ∂ ν A a µ = 1 2 Π µν a Π a νµ − gǫ abc A bν A cµ = − 1 2 Π µν a Π a µν − 1 2 gǫ abc Π µν a A bν A cµ . 
The Hamiltonian density follows as H YM = π µ π † µ + igπ µ τ 2 A µ φ − igφ † τ 2 A µ π µ † + V φ † φ − 1 4 Π µν a Π a µν − 1 2 gǫ abc Π µν a A bν A cµ . (86) The equations of motion for the scalar fields and the corresponding conjugate momenta are derived in the following way ∂H YM ∂π ν = π † ν + ig τ 2 A ν φ = ∂ ν φ ∂H YM ∂π † ν = π ν − igφ † τ 2 A ν = ∂ ν φ † ∂H YM ∂φ = igπ ν τ 2 A ν + ∂ ∂φ V φ † φ = − ∂π ν ∂x ν ∂H YM ∂φ † = −ig τ 2 A ν π ν † + ∂ ∂φ † V φ † φ = − ∂π ν † ∂x ν . As for the equations of motion for the gauge fields and the corresponding momentum tensors, we obtain ∂H YM ∂Π µν a = ∂ ν A a µ − ∂ µ A a ν = Π a νµ − gǫ abc A bν A cµ (87) ∂H YM ∂A aµ = igπ µ τ a 2 φ − igφ † τ a 2 π µ † − gǫ abc Π µν b A cν = − ∂Π aµν ∂x ν (88) It is worth noting that the equation of motion Eq. (87) follows from ∂H YM ∂Π κλ d = ∂Π µν a ∂Π κλ d ∂ ν A a µ + Π µν a ∂ ∂ ν A a µ ∂Π κλ d − ∂L YM ∂ ∂ ν A a µ ∂ ∂ ν A a µ ∂Π κλ d = ∂Π µν a ∂Π κλ d ∂ ν A a µ = δ d a (δ µ κ δ ν λ − δ µ λ δ ν κ ) ∂ ν A a µ = ∂ λ A d κ − ∂ κ A d λ = Π d λκ − gǫ dbc A bλ A cκ . Moreover, the term proportional to Π µν b in the equation of motion Eq. (88) can be derived as ∂ ∂A dλ 1 2 gǫ abc Π µν a A bν A cµ = 1 2 gΠ µν a ǫ abc δ d b δ λ ν A cµ + ǫ abc δ d c δ λ µ A bν = 1 2 g ǫ dca Π µλ a A cµ + ǫ dab Π λν a A bν = 1 2 g ǫ dab Π λµ a A bµ + ǫ dab Π λν a A bν = gǫ dab Π λµ a A bµ .
According to the general rules (19) for generating functions of type F F F 2 , the transformed field φ I ′ follows as φ I ′ δ µ ν = ∂F µ 2 ∂π Iν ′ = f J (φ, x x x) ∂π Jµ ′ ∂π Iν ′ = f J (φ, x x x) δ J I δ µ ν . The complete set of transformation rules is then π Iµ = π Jµ ′ ∂f J ∂φ I , φ I ′ = f I (φ, x x x), H ′ = H + π Jα ′ ∂f J ∂x α expl .(89) As a trivial example of a point transformation, we consider the generating function of the identical transformation F µ 2 (φ, π π π ′ ) = φ J π Jµ ′ .(90) The pertaining transformation rules (89) for the particular case f J (φ) = φ J are, viz, π Iµ = π Iµ ′ , φ I ′ = φ I , H ′ = H. The existence of a neutral element is a necessary condition for the set of canonical transformations to form a group. 5.2. Canonical shift of the conjugate momentum vector field π π π I The generator of a canonical transformation that shifts the conjugate 4-vector field π π π I (x x x) can be defined in terms of a function of type F F F 3 (φ ′ , π π π, x x x) as F µ 3 = −φ I ′ π Iµ + η Iµ .(91) Herein, the 4-vector fields η η η I = η η η I (x x x) are supposed to denote arbitrary functions of the x µ . The general transformation rules (22) simplify for this particular generating function to π Iµ ′ = − ∂F µ 3 ∂φ I ′ = π Iµ + η Iµ φ I δ µ ν = − ∂F µ 3 ∂π Iν = φ I ′ δ µ ν H ′ = H + ∂F α 3 ∂x α expl = H − φ I ′ ∂η Iα ∂x α = H − φ I ∂η Iα ∂x α , hence π Iµ ′ = π Iµ + η Iµ , φ I ′ = φ I , H ′ = H − φ I ∂η Iα ∂x α .(92) Provided that the divergence of the fields η η η I (x x x) vanishes, then the Hamiltonian density H is conserved ∂η Iα ∂x α = 0 =⇒ H ′ = H.(93) This means for the canonical field equations (5) that the vector field π π π I (x x x) conjugate to φ I (x x x) is only determined up to a "shifting" field η η η I (x x x) that conforms to the condition (93), ∂H ∂π Iµ ′ = ∂φ I ∂x µ , ∂H ∂φ I = − ∂π Iα ′ ∂x α . 
Local and global gauge transformation of the field φ I A phase transformation of the field φ I (x x x) of the form φ I (x x x) → φ I ′ (x x x) = φ I (x x x) e iθ(x x x)(94) is commonly called a "local gauge transformation". We can conceive this as a point transformation that is generated by a 4-vector function of type F F F 2 F µ 2 (φ, π π π ′ , x x x) = φ I π Iµ ′ e iθ(x x x) . The pertaining transformation rules follow directly from the general rules of Eqs. (19) φ I ′ = φ I e iθ(x x x) , π Iµ ′ = π Iµ e −iθ(x x x) , H ′ = H + i φ I π Iα ∂θ(x x x) ∂x α . In the particular case that θ does not depend on the x µ , hence if θ = const., then the gauge transformation is referred to as "global". In that case, the generating function (95) itself does no longer explicitly depend on the x µ . The Hamiltonian density is thus always conserved under global gauge transformations φ I (x x x) → φ I (x x x) e iθ , φ I ′ = φ I e iθ , π Iµ ′ = π Iµ e −iθ , H ′ = H. Infinitesimal canonical transformation, generalized Noether theorem The generating function F µ 2 of an infinitesimal canonical transformation differs from that of an identical transformation (90) by a small quantity δx α g µ α (φ, π π π, x x x) F µ 2 (φ, π π π ′ , x x x) = φ I π Iµ ′ + δx α g µ α (φ, π π π, x x x). To first order in the δx α , the subsequent transformation rules (19) are π Iµ ′ = π Iµ − δx α ∂g µ α ∂φ I , φ I ′ δ µ ν = φ I δ µ ν + δx α ∂g µ α ∂π Iν , H ′ = H + δx α ∂g β α ∂x β expl , hence δπ Iµ = −δx α ∂g µ α ∂φ I , δφ I δ µ ν = δx α ∂g µ α ∂π Iν , δH = δx α ∂g β α ∂x β expl .(97) In order to derive Noether's theorem, we additionally need the transformation rule for the partial derivative ∂φ I /∂x ν , which we derive from the rule for φ I from Eq. 
(97) by calculating the divergence ∂φ I ′ ∂x µ δ µ ν = ∂φ I ∂x µ δ µ ν + δx α ∂ ∂x µ ∂g µ α ∂π Iν , thus ∂φ I ′ ∂x ν = ∂φ I ∂x ν + δx α ∂ ∂x µ ∂g µ α ∂π Iν .(98) We furthermore need to calculate the divergence of the characteristic function g µ α of the generating function (96). With the transformation rules (97), the divergence reads δx α ∂g β α ∂x β = δx α ∂g β α ∂φ I ∂φ I ∂x β + ∂g β α ∂π Iγ ∂π Iγ ∂x β + ∂g β α ∂x β expl = −δπ Iβ ∂φ I ∂x β + δφ I δ β γ ∂π Iγ ∂x β + δH. As we are interested in symmetries that evolve in the course of the system's spacetime evolution, the canonical field equations (5) can be inserted to yield δx α ∂g β α ∂x β = − ∂H ∂π Iβ δπ Iβ − ∂H ∂φ I δφ I + ∂H ∂φ I δφ I + ∂H ∂π Iβ δπ Iβ + δx α ∂H ∂x α expl = δx α ∂H ∂x α expl , Therefore, ∂g β α ∂x β expl = ∂H ∂x α expl − ∂g β α ∂φ I ∂φ I ∂x β − ∂g β α ∂π Iγ ∂π Iγ ∂x β .(99) Noether's theorem can now be derived by calculating the change of the Lagrangian density L that is induced by the infinitesimal canonical transformation (97). As the transformation is supposed to be canonical, original and transformed Lagrangian densities, L and L ′ , can only differ be a divergence δx α ∂f β α /∂x β , δL ≡ L ′ − L = δx α ∂f β α ∂x β .(100) This means in the Hamiltonian description π Iβ ∂φ I ∂x β − H = π Iβ ′ ∂φ I ′ ∂x β − H ′ − δx α ∂f β α ∂x β . The primed quantities in the preceding equation is now expressed in terms of the unprimed ones according to the transformation rules (97) The terms not depending on δx α cancel. As the δx α are supposed to be independent, the first-order terms in δx α entail the set of equations π Iβ ∂φ I ∂x β = π Iβ − δx α ∂g β α ∂φ I ∂φ I ∂x β + δx α ∂ ∂x µ ∂g µ α ∂π Iβ − δx α ∂g β α ∂x β expl − δx α ∂f β α ∂x β .π Iβ ∂ ∂x µ ∂g µ α ∂π Iβ − ∂g β α ∂φ I ∂φ I ∂x β = ∂g β α ∂x β expl + ∂f β α ∂x β . Inserting Eq. (99), this writes, equivalently π Iµ ∂ ∂x β ∂g β α ∂π Iµ + ∂g β α ∂π Iµ ∂π Iµ ∂x β = ∂H ∂x α expl + ∂f β α ∂x β . 
The sum on the left hand side can now be written as a divergence, ∂ ∂x β π Iµ ∂g β α ∂π Iµ − f β α = ∂H ∂x α expl .(101) This is the generalized Noether theorem of classical field theory in the Hamiltonian formulation. The theorem thus consists of the continuity equation that emerges if we relate the characteristic function g µ α of (96) with the change of the Lagrangian density L that is induced by the infinitesimal canonical transformation (96). If we apply an infinitesimal canonical transformation with characteristic function g µ α to a given Hamiltonian system H, then the related change δL of the Lagrangian density L is determined by functions f β α . For the four vectors of the 4-current densities j j j α ("Noether currents") j β α (φ, π π π, x x x) = π Iγ ∂g β α ∂π Iγ − f β α (φ, π π π, x x x) we have a set of four equations ∂j β α ∂x β = ∂H ∂x α expl ,(103) which each represents a continuity equation for the Noether current j j j α if the Hamiltonian density H does not explicitly depend on the respective x α . It is, of course, not assured a priori that for a given function g β α in the generator (97) analytical functions f β α (φ, π π π, x x x) exist that satisfy Eq. (101). If, however, functions f β α of the canonical fields φ I , π Iµ exist, such that δL = δx α ∂f β α ∂x β , holds under transformation (97), then the infinitesimal canonical transformation generated by Eq. (96) represents a symmetry transformation of the given system. In that case, the continuity equation (103) holds for the 4-currents j j j α , defined by Eq. (102). We can express the Noether currents (102) alternatively in terms of the variation of the fields φ I . Inserting the transformation rule of Eq. (97) into Eq. (102), we get δx α j β α = π Iβ δφ I − δx α f β α .(104) Defining functions ψ Iα by δφ I = δx α ψ Iα (φ, π π π, x x x), then we can write Eq. (104) separately for each component α because of the independence of the δx α . 
We thus get an alternative formulation of Noether's theorem of Eqs. (102), (103), ∂j β α ∂x β = ∂H ∂x α expl , j β α = π Iβ ψ Iα (φ, π π π, x x x) − f β α (φ, π π π, x x x). In the particular case that the Lagrange density L remains invariant (δL = 0) under the infinitesimal canonical transformation (97), we have ∂f β α /∂x β = 0. We may then set f β α = 0, as otherwise the f β α would be trivial contributions to the current components j β α . Then, Noether's theorem takes on the simple form ∂j β α ∂x β = ∂H ∂x α expl , j β α (φ, π π π, x x x) = π Iγ ∂g β α ∂π Iγ .(105) The conventional form of Noether's is recovered for the special case of an infinitesimal point transformation. The latter is associated with a characteristic function g ν µ in the generator (96) that depends linearly on the π Iν g ν µ (φ, π π π, x x x) = π Iν ψ Iµ (φ, x x x), wherein each ψ Iµ denotes an arbitrary function of the φ I and the independent variables x x x. The ψ Iµ then determine the variation of the fields φ I according to the transformation rule from Eq. (97) δφ I ≡ φ I ′ − φ I = δx α ψ Iα (φ, x x x). The special Noether theorem -as it emerges similarly from the Lagrangian formalism -thus reads ∂j β α ∂x β = ∂H ∂x α expl , j β α (φ, π π π, x x x) = π Iβ ψ Iα (φ, x x x) =g β α −f β α (φ, x x x).(107) We note that due to the restriction to a generating function (96) with particular characteristic function g ν µ from Eq. (106), the Noether theorem of Eq. (107) cannot cover, in general, all symmetries of a given system. The reason is that the point transformation defined by Eq. As a simple example of an application of Noether's theorem, we now determine the continuity equation that emerges if a given system is invariant with respect to a shift δx x x in space-time x x x ′ = x x x + δx x x. The related change of the Lagrangian density L is expressed by Eq. (100), δL ≡ L ′ − L = ∂L ∂x α δx α = ∂f β α ∂x β δx α . 
As the δx α do not depend on each other, we get separately an equation for each component ∂L ∂x α = ∂f β α ∂x β . A shift of the reference system in space-time entails a variation of the fields φ I , δφ I ≡ φ I ′ − φ I = ∂φ I ∂x α δx α . As the transformed fields φ I ′ only depend on the original fields φ I , we are dealing with a point transformation. According to Eq. (106), the components g ν µ of the generating function (96) follow as ψ Iµ (φ, x x x) = ∂φ I ∂x µ , g ν µ (φ, π π π, x x x) = π Iν ∂φ I ∂x µ . For each index α, we can now set up Noether's theorem from Eq. (101), ∂ ∂x β π Iµ ∂g β α ∂x µ − ∂L ∂x α = ∂H ∂x α expl . Inserting the actual g β α and replacing the Lagrangian density L by a Hamiltonian density H according to Eq. (4) yields ∂ ∂x β π Iβ ∂φ I ∂x α − δ β α π Iγ ∂φ I ∂x γ + δ β α H = ∂H ∂x α expl . With the terms in parentheses on the left hand side, Noether's theorem obviously provides the components of the energy-momentum tensor T β α from Eq. (7) ∂T β α ∂x β = ∂H ∂x α expl . (108) If the Hamiltonian density H does not explicitly depend on x µ , then the system is invariant with respect to a shift of the independent variable x µ . We then get a continuity equation for the related Noether 4-current j j j µ , ∂H ∂x µ expl = 0 ⇐⇒ ∂T β µ ∂x β = 0, T β µ = π Iβ ∂φ I ∂x µ − δ β µ π Iγ ∂φ I ∂x γ + δ β µ H. For ∂H/∂x µ | expl = 0, the four components of the conserved Noether current are thus given by the µ-th column of the energy-momentum tensor T ν µ . Canonical transformation inducing an infinitesimal space-time step We consider the following generating function F µ 2 of an infinitesimal canonical transformation: F µ 2 (φ, π π π ′ , x x x) = φ I π Iµ ′ + δx α δ µ α H + π Iµ ∂φ I ∂x α − φ I ∂π Iµ ∂x α − δ µ α π Iβ ∂φ I ∂x β − φ I ∂π Iβ ∂x β + x β ∂π Iµ ∂x α ∂φ I ∂x β − ∂π Iµ ∂x β ∂φ I ∂x α .(109) In order to illustrate this generating function, we imagine for a moment a system with only one independent variable, t. 
As a consequence, only one conjugate field π I could exist for each φ I . In that system, the last six terms of Eq. (109) would obviously cancel, hence, the generating function F 2 would simplify to F 2 (φ I , π I ′ , t) = φ I π I ′ + H δt. We recognize this function from point mechanics as the generator of the infinitesimal canonical transformation that shifts an arbitrary Hamiltonian system along an infinitesimal time step δt. Alternatively, we can express the generating function (109) in terms of the energy-momentum tensor from Eq. (7) F µ 2 (φ, π π π ′ , x x x) = φ I π Iµ ′ + δx α T µ α − φ I ∂π Iµ ∂x α − δ µ α ∂π Iβ ∂x β + x β ∂π Iµ ∂x α ∂φ I ∂x β − ∂π Iµ ∂x β ∂φ I ∂x α . We observe that it is essentially the energy-momentum tensor (7) that determines the infinitesimal space-time step transformation. Applying the general transformation rules (19) for generating functions of type F F F 2 to the generator from Eq. (109), then -similar to the preceding exampleonly terms of first order in δx µ need to be taken into account. The derivative of F µ 2 with respect to φ I yields π Iµ = ∂F µ 2 ∂φ I = π Iµ ′ + δx α δ µ α ∂H ∂φ I − ∂π Iµ ∂x α + δ µ α ∂π Iβ ∂x β = π Iµ ′ + δx α −δ µ α ∂π Iβ ∂x β − ∂π Iµ ∂x α + δ µ α ∂π Iβ ∂x β = π Iµ ′ − ∂π Iµ ∂x α δx α . 
This means for δπ Iµ ≡ π Iµ ′ − π Iµ δπ Iµ = ∂π Iµ ∂x α δx α .(110) To first order, the general transformation rule (19) for the field φ I takes on the particular form for the actual generating function (109): φ I ′ δ µ ν = ∂F µ 2 ∂π Iν ′ = φ I δ µ ν + δx α δ µ α ∂H ∂π Iν + δ µ ν ∂φ I ∂x α − δ µ α δ β ν ∂φ I ∂x β = φ I δ µ ν + δx α δ µ α ∂φ I ∂x ν + δ µ ν ∂φ I ∂x α − δ µ α ∂φ I ∂x ν = φ I δ µ ν + δ µ ν ∂φ I ∂x α δx α , hence with δφ I ≡ φ I ′ − φ I δφ I = ∂φ I ∂x α δx α .(111) The transformation rule δH ≡ H ′ − H for the Hamiltonian density finally follows from the explicit dependence of the generating function on the x ν δH = ∂F µ 2 ∂x µ expl = δx α δ µ α ∂H ∂x µ expl + δ β µ ∂π Iµ ∂x α ∂φ I ∂x β − ∂π Iµ ∂x β ∂φ I ∂x α = δx α ∂H ∂x α expl + ∂π Iµ ∂x α ∂φ I ∂x µ − ∂π Iµ ∂x µ ∂φ I ∂x α = δx α ∂H ∂x α expl + ∂H ∂π Iµ ∂π Iµ ∂x α + ∂H ∂φ I ∂φ I ∂x α = ∂H ∂x α δx α .(112) Summarizing, we infer from the transformation rules (110), (111), and (112) that the generating function (109) defines the particular canonical transformation that infinitesimally shifts a given system in space-time in accordance with the canonical field equations (5). As such a canonical transformation can be repeated an arbitrary number of times, we can induce that a transformation along finite steps in space-time is also canonical. We thus have the important result the space-time evolution of a system that is governed by a Hamiltonian density itself constitutes a canonical transformation. As canonical transformations map Hamiltonian systems into Hamiltonian systems, it is ensured that each Hamiltonian system remains so in the course of its space-time evolution. Lorentz gauge as a canonical point transformation of the Maxwell Hamiltonian density The Hamiltonian density H M of the electromagnetic field was derived in Sec. 4.4. The correlation of the conjugate fields π µν with the 4-vector potential A A A is determined by the first field equation (63) as the generalized rotation of A A A. 
This means, on the other hand, that the correlation between A A A and the π µν is not unique. Defining a transformed vector potential A A A ′ according to A µ ′ = A µ + ∂χ(x x x) ∂x µ ,(113) with χ = χ(x x x) an arbitrary differentiable function, we find π µν ′ = ∂A ν ′ ∂x µ − ∂A µ ′ ∂x ν = ∂A ν ∂x µ + ∂ 2 χ(x x x) ∂x ν ∂x µ − ∂A µ ∂x ν − ∂ 2 χ(x x x) ∂x µ ∂x ν = ∂A ν ∂x µ − ∂A µ ∂x ν = π µν .(114) We will now show that the gauge transformation (113) can be regarded as a canonical point transformation, whose generating function F ν 2 is given by F ν 2 (A A A, π π π ′ , x x x) = A λ + ∂χ(x x x) ∂x λ π λν ′ .(115) In the notation of this example, the general transformation rules (19) are rewritten as π µν = ∂F ν 2 ∂A µ , A µ ′ δ ν α = ∂F ν 2 ∂π µα ′ , H ′ = H + ∂F α 2 ∂x α expl ,(116) which yield for the particular generating function of Eq. (115) the transformation prescriptions π µν = ∂A λ ∂A µ π λν ′ = δ µ λ π λν ′ = π µν ′ A µ ′ δ ν α = A λ + ∂χ(x x x) ∂x λ δ λ µ δ ν α = A µ + ∂χ(x x x) ∂x µ δ ν α ⇒ A µ ′ = A µ + ∂χ(x x x) ∂x µ H ′ = H + ∂ 2 χ(x x x) ∂x λ ∂x α π λα = H. The canonical transformation rules coincide with the correlations of Eqs. (113) and (114) defining the Lorentz gauge. The last equation holds because of the antisymmetry of the canonical momentum tensor π λα = −π αλ . Thus, the value of the Hamiltonian density (62) is invariant under the Lorentz gauge. In oder to determine the conserved "Noether current" that is associated with the canonical point transformation generated by F F F 2 from Eq. (115), we need the generator of the corresponding infinitesimal canonical point transformation, F ν 2 (A A A, π π π ′ , x x x) = A λ π λν ′ + ǫg ν (π π π, x x x), g ν = π λν ∂χ(x x x) ∂x λ .(117) From the pertaining transformation rules π µν ′ = π µν , A µ ′ = A µ + ǫ ∂χ(x x x) ∂x µ , H ′ = H, we directly find that L is also maintained, which means that ∂f β /∂x β = 0, hence f β = 0 according to Eq. (100). 
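That the gauge transformation (113) leaves the canonical momenta unchanged, Eq. (114), rests only on the symmetry of the second derivatives of χ. A short symbolic check (the function names are placeholders for arbitrary differentiable fields):

```python
import sympy as sp

x = sp.symbols('x0:4')
chi = sp.Function('chi')(*x)                        # arbitrary gauge function chi(x)
A = [sp.Function(f'A{m}')(*x) for m in range(4)]    # 4-potential components A_mu
Ap = [A[m] + sp.diff(chi, x[m]) for m in range(4)]  # A'_mu = A_mu + d chi/dx^mu, Eq. (113)

# field tensor f_{mu nu} = dA_nu/dx^mu - dA_mu/dx^nu
f = lambda V, m, n: sp.diff(V[n], x[m]) - sp.diff(V[m], x[n])

# pi'_{mu nu} = pi_{mu nu}: the field tensor is unchanged, Eq. (114)
assert all(sp.simplify(f(Ap, m, n) - f(A, m, n)) == 0
           for m in range(4) for n in range(4))
```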
In a source-free region, we have j j j(x x x) = 0, hence ∂H M /∂x µ | expl = 0 for all µ. The Noether theorem for point transformations from Eq. (107) then directly yields the conserved 4-current j j j N ∂j α N ∂x α = 0, j µ N (π π π, x x x) = π λµ ∂χ(x x x) ∂x λ = ∂A µ ∂x λ − ∂A λ ∂x µ ∂χ(x x x) ∂x λ . We verify that j j j N is indeed the conserved Noether current by calculating its divergence ∂j α N ∂x α = π λα ∂ 2 χ ∂x λ ∂x α + ∂π λα ∂x α ∂χ ∂x λ .(118) For the Hamiltonian density H M of the electromagnetic field from Eq. (62), the tensor π µν of canonical momenta is antisymmetric. The first term on the right hand side of Eq. (118) thus vanishes from symmetry considerations. In source-free regions, the canonical field equation from Eq. (59) is ∂π λα ∂x α = 0. Therefore, the second term on the right hand side of Eq. (118) also vanishes. Extended gauge transformation of the coupled Klein-Gordon-Maxwell field, local gauge invariance The Hamiltonian density H KGM of a complex Klein-Gordon field that couples minimally to an electromagnetic 4-vector potential A A A was introduced in Sec. 4.6 by Eq. (69). We now define for this Hamiltonian density an "extended gauge transformation" by means of the generating function F µ 2 = φπ µ ′ e −iqχ(x x x) + φ * π µ ′ * e iqχ(x x x) + A ν + ∂χ(x x x) ∂x ν Π νµ ′ .(119) Because of the explicit dependence of χ on x x x, this generator (119) defines a local gauge transformation. The general transformation rules (19), (116) read for the present generating function: Π λµ ′ = Π λµ , A µ ′ = A µ + ∂χ ∂x µ , π µ ′ = π µ e iqχ(x x x) , φ ′ = φe −iqχ(x x x) , π µ ′ * = π µ * e −iqχ(x x x) , φ ′ * = φ * e iqχ(x x x) , H ′ = H + iq φ * π α * − φπ α ∂χ(x x x) ∂x α . In the transformation rule for the Hamiltonian density, the term Π να ′ ∂ 2 χ/∂x ν ∂x α vanishes because as Π να is antisymmetric. The gauge-transformed Hamiltonian density H ′ KGM is now obtained by inserting the transformation rules into the Hamiltonian density H KGM of Eq. 
(69), H ′ KGM = π * µ ′ π µ ′ + iqA µ ′ π * µ ′ φ ′ * − π µ ′ φ ′ − Ω 2 φ ′ * φ ′ − 1 4 Π µν ′ Π µν ′ . We observe that the Hamiltonian density (69) is form invariant under the canonical transformation generated by F F F 2 from Eq. (119). In order to derive the conserved Noether current that is associated with this symmetry transformation, we first set up the generating function of the infinitesimal canonical transformation corresponding to (119) F µ 2 = φπ µ ′ + φ * π µ ′ * + A ν Π νµ ′ + ǫ (g µ 1 + g µ 2 + g µ 3 ) , with the characteristic functions g µ 1,2,3 given by: g µ 1 = −iqφπ µ χ(x x x), g µ 2 = iqφ * π µ * χ(x x x), g µ 3 = ∂χ(x x x) ∂x ν Π νµ . The subsequent transformation rules are Π λµ ′ = Π λµ , A µ ′ = A µ + ǫ ∂χ ∂x µ , π µ ′ = π µ (1 + ǫiqχ(x x x)), φ ′ = φ(1 − ǫiqχ(x x x)), π µ ′ * = π µ * (1 − ǫiqχ(x x x)), φ ′ * = φ * (1 + ǫiqχ(x x x)) , H ′ = H + iq φ * π α * − φπ α ∂χ(x x x) ∂x α L ′ = L. For a conserved Lagrangian density, we have f β = 0. The Noether theorem from Eq. (107) now directly yields the conserved Noether current j j j N , j µ N = g µ 1 + g µ 2 + g µ 3 hence for the present case j µ N = iqχ(x x x) φ * π µ * − φπ µ + Π νµ ∂χ(x x x) ∂x ν . By direct calculation, we again verify that ∂j α N /∂x α = 0. Spontaneous breaking of gauge symmetry, Higgs mechanism In order to present the Higgs mechanism in the context of the canonical transformation approach, we consider the Hamiltonian density H H = π * µ π µ + iqA µ π * µ φ * − π µ φ − Ω 2 φ * φ + 1 2 λ 2 (φ * φ) 2 − 1 4 Π µν Π µν .(120) This Hamiltonian density differs from the density (69) of Sec. 4.6 by an additional fourth order potential term. As all terms of the form φ * φ are invariant with respect to local gauge transformations generated by Eq. (119), the Hamiltonian density (120) is also form invariant. The potential U (φ) of the Hamiltonian density (120) U (φ) = −Ω 2 φ * φ + 1 2 λ 2 (φ * φ) has a minimum (φ * φ) min = Ω 2 λ 2 . 
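The location and depth of the potential minimum can be confirmed symbolically in the variable ρ = φ*φ (a sketch; the symbol names are mine). It also reproduces the constant −Ω⁴/(2λ²) that appears in the transformed density (122):

```python
import sympy as sp

rho, Om, lam = sp.symbols('rho Omega lambda', positive=True)   # rho = phi* phi
U = -Om**2 * rho + sp.Rational(1, 2) * lam**2 * rho**2         # potential of Eq. (120)

rho_min = sp.solve(sp.diff(U, rho), rho)[0]
assert sp.simplify(rho_min - Om**2 / lam**2) == 0              # (phi* phi)_min = Omega^2/lambda^2
assert sp.diff(U, rho, 2).subs(rho, rho_min) > 0               # it is indeed a minimum
assert sp.simplify(U.subs(rho, rho_min) + Om**4 / (2 * lam**2)) == 0   # U_min = -Omega^4/(2 lambda^2)
```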
Thus, φ min lies on a circle in the complex plane φ min = Ω λ e iω , 0 ≤ ω ≤ 2π. We now want to express the Hamiltonian density from Eq. (120) in terms of the shifted potential φ ′ φ ′ = φ − φ min , ∂φ ′ ∂x µ = ∂φ ∂x µ . Because of φ min = const., the derivatives of φ with respect to the x µ must be unchanged under this transformation. Being defined by the generating function F µ 2 = π µ ′ − i Ω λ qA µ e −iω φ − Ω λ e iω ,(121) this transformation is actually canonical. As the transformation only affects the fields φ and π µ , the other fields, A µ and Π µν that are contained in the Hamiltonian density (120) must be treated as constant parameters. The transformation rules following from Eq. (121) are π µ ′ = π µ + i Ω λ qA µ e −iω , π * µ ′ = π * µ − i Ω λ qA µ e iω φ ′ = φ − Ω λ e iω , φ ′ * = φ * − Ω λ e −iω . As the generating function (121) H ′ H = π * µ ′ π µ ′ + 1 2 Ω φ ′ * e iω + φ ′ e −iω + λφ ′ * φ ′ 2 − 1 4 Π µν Π µν − Ω 2 λ 2 q 2 A µ A µ +iqA µ π * µ ′ φ ′ * − π µ ′ φ ′ − Ω λ q 2 A µ A µ φ ′ * e iω + φ ′ e −iω − Ω 4 2λ 2 .(122) We verify that the transformation does not change the derivatives of φ, as requested, ∂φ ∂x µ = ∂H H ∂π µ = π * µ − iqA µ φ = π * µ ′ + i Ω λ qA µ e iω − iqA µ φ ′ + Ω λ e iω = π * µ ′ − iqA µ φ ′ = ∂H ′ H ∂π µ ′ = ∂φ ′ ∂x µ . We express the complex fields φ ′ = φ ′ 1 +iφ ′ 2 and π µ ′ = π µ ′ 1 +iπ µ ′ 2 in the Hamiltonian density (122) in terms of real field components. We thus encounter the equivalent representation of H ′ H , H ′ H = π 1,µ ′ π µ ′ 1 + π 2,µ ′ π µ ′ 2 + 1 2 2Ω (φ ′ 1 cos ω + φ ′ 2 sin ω) + λ φ ′ 2 1 + φ ′ 2 2 2 − 1 4 Π µν Π µν − Ω 2 λ 2 q 2 A µ A µ (123) +2qA µ (π 2,µ ′ φ ′ 1 + π 1,µ ′ φ ′ 2 ) − 2Ω λ q 2 A µ A µ (φ ′ 1 cos ω + φ ′ 2 sin ω) − Ω 4 2λ 2 . We now observe that the massless gauge field A µ that is contained in the original Hamiltonian density from Eq. 
(120) now appears as massive field through the term (q Ω/λ) 2 A µ A µ = (φ 2 1 + φ 2 2 ) min q 2 A µ A µ in the transformed Hamiltonian density from Eq. (123) -independently from the gauge of φ and the angle ω. This term thus originates in the shift of the reference system of φ. The amount of the emerging mass depends on the depth of the potential's minimum, hence from the underlying potential model. Because of the gauge freedom of the original Hamiltonian density (120), we may gauge it to yield φ 2 ≡ 0, ∂φ 2 /∂x µ ≡ 0. Under this gauge, we have π 2,µ ′ = −qA µ φ ′ 1 , which eliminates the unphysical coupling terms A µ ∂φ ′ 1,2 /∂x µ that are contained in the Hamiltonian density (123). The transformed Hamiltonian density from Eq. (123) then simplifies to, setting ω = 0 H ′ H = π 1,µ ′ π µ ′ 1 + 2Ω 2 φ ′ 2 1 + 2λΩφ ′ 3 1 + 1 2 λ 2 φ ′ 4 1 − 1 4 Π µν Π µν − Ω 2 λ 2 q 2 A µ A µ −2q 2 A µ A µ φ ′ 2 1 − 2Ω λ q 2 A µ A µ φ ′ 1 − Ω 4 2λ 2 .(124) The physical system that is described by the Hamiltonian density (124) emerged by means of a canonical transformation from the original density (120). Therefore, both systems are canonically equivalent. In the transformed system, we only consider the real part φ 1 of the complex field φ. The corresponding degree of freedom now finds itself in the mass of the massive vector field A µ . This transformation of two massive scalar fields φ 1,2 that interact with a massless vector field A µ into a single massive scalar field φ ′ 1 plus one now massive vector field A µ is commonly referred to as Higgs mechanism. SU(2) gauge transformation as a canonical transformation Provided that the parameters for a local SU(2) transformation are given by θ(x) = (θ 1 , θ 2 , θ 3 ), the infinitesimal generating function that defines the transformation of the scalar fields has the form Yet, the converse is not true. There exist canonical transformations of the fields that cannot be expressed as point transformations in the Lagrangian formalism. 
We provided two examples of canonical transformations like that, namely the general infinitesimal canonical transformation yielding the generalized Noether theorem, and the canonical transformation inducing an infinitesimal space-time step that conforms to the canonical field equations. Compared to the Lagrangian description, the canonical transformation formalism of covariant Hamiltonian field theory thus allows to define more general classes of gauge transformations in relativistic quantum field theories. We refer to ∆ L µ as the Lagrangian vector field. Acting on the function f , this is identical to the Lie derivative of f with respect to the vector field ∆ L µ ∆ L µ f ≡ L L L ∆ L µ f. Herein L L L ∆ L µ , µ = 0, . . . , 3 denotes the four Lie operators that act of the function f . We now define the Liouville 1-forms θ µ in local coordinates by θ µ = ∂L ∂(∂ µ φ I ) dφ I . The Lie derivative of this 1-form along the vector field ∆ L µ , followed by a summation over µ yields another 1-form, L L L ∆ L µ θ µ = ∆ L µ dθ µ + d ∆ L µ θ µ = ∆ L µ ∂ 2 L ∂(∂ µ φ I )∂(∂ ν φ J ) d ∂ ν φ J ∧ dφ I + ∂ 2 L ∂(∂ µ φ I )∂φ J dφ J ∧ dφ I + d ∂L ∂(∂ µ φ I ) ∂φ I ∂x µ = ∂ 2 L ∂(∂ µ φ I )∂(∂ ν φ J ) ∂ 2 φ J ∂x µ ∂x ν + ∂ 2 L ∂(∂ µ φ I )∂φ J ∂φ J ∂x µ dφ I + ∂L ∂(∂ µ φ I ) d ∂ µ φ I = ∂ ∂x µ ∂L ∂(∂ µ φ I ) dφ I + ∂L ∂(∂ µ φ I ) d ∂ µ φ I Inserting the Euler-Lagrange field equation (2), we finally get L L L ∆ L µ θ µ = ∂L ∂φ I dφ I + ∂L ∂(∂ µ φ I ) d ∂φ I ∂x µ = dL. The 1-form equation L L L ∆ L µ θ µ = dL (B.1) is thus the geometric representation of the Euler-Lagrange field equation (2). The Lie derivative of the 1-Form θ µ along the µ component ∆ L µ of the Lagrangian vector field, summed over all µ = 0, . . . , 3, equals the exterior derivative dL of the Lagrangian density L. 
All three constituents of this equation, namely, the operators L L L ∆ L µ of the Lie derivative along the Lagrangian vector field ∆ L µ , the 1-forms θ µ , and the exterior derivative of the Lagrangian density L show up as geometric quantities, hence without any reference to a local coordinate system. In order to formulate the corresponding geometric representation of the covariant canonical field equations (5) In terms of the conjugate fields π Iµ , we rewrite the 1-form θ µ as θ µ = π Iµ dφ I . (B. 3) The 2-form ω µ can now be defined as the negative exterior derivative of the 1-form θ µ ω µ = −dθ µ = −d π Iµ dφ I , hence as the wedge product ω µ = −dπ Iµ ∧ dφ I = dφ I ∧ dπ Iµ . We calculate the inner product ∆ H µ ω µ , i.e., the 1-form that emerges from contracting the 2-forms ω µ with the vector fields ∆ thus embodies the geometric representation of the covariant canonical field equations (5). The contraction of the Hamiltonian vector fields ∆ H µ with the 2-forms ω µ is equal to the exterior derivative dH of the Hamiltonian density H. We can now furthermore calculate the sum of the Lie derivatives L L L ∆ H µ of the 1-forms θ µ along the pertaining Hamiltonian vector fields ∆ H µ , L L L ∆ H µ θ µ = d ∆ H µ θ µ + ∆ H µ dθ µ = d π Iµ ∂φ I ∂x µ − H = dL. We immediately conclude that the sum of the Lie derivatives of the 2-forms ω µ must vanish L L L ∆ H µ ω µ = −L L L ∆ H µ (dθ µ ) = −dL L L ∆ H µ θ µ = −ddL = 0. The sum of the 2-forms ω µ along the fluxes that are generated by the ∆ H µ is thus invariant. Vice versa, if L L L ∆ H µ ω µ = 0 holds, then we have, because of dω µ = 0 L L L ∆ H µ ω µ = d ∆ H µ ω µ . The 2-form L L L ∆ H µ ω µ = 0 is thus closed and also exact, according to Poincaré's Lemma. Therefore, we can always find a function H with dH = ∆ H µ ω µ . the physical meaning of the De Donder-Weyl Hamiltonian density H. Generating functions of type differ only by the sign and the interchange of the indices µ and ν. Thereby, Eq. (42) is proved. 
This equation coincides with the transformation rule (14) of the Lagrangians for the particular case L ′ ≡ 0. The 4-vector function S S S thus embodies the field theory analogue of Hamilton's principal function S, dS/dt = L of point mechanics. 4. Examples for Hamiltonian densities in covariant field theory 4.1. Ginzburg-Landau Hamiltonian density Similar to the previous examples, the first field equation reproduces the definition of the conjugate tensor field π µν from the Lagrangian density L M . The second canonical field equation of Eqs. (5) is obtained calculating the derivative of the Hamiltonian density (62) with respect to A λ − ∂π λα ∂x α = ∂H M ∂A λ = 4π c j λ . (106) is not the most general transformation that conserves the variational principle of Eq. (12). Symmetries of Hamiltonian (Lagrangian) systems that cannot be represented by point transformations, are referred to in literature as "non-Noether symmetries". Yet, such symmetries can always be expressed in terms of the generalized form of Noether's theorem from Eqs. (102), (103). kfte Covariant Hamiltonian Field Theory 41 5.4.1. Example: shift of reference system in space-time F µ 2 2= 1 − i 2 τ θ π Iµ ′ φ I , with the Pauli matrices from Eqs. (82). The general transformation rules from Eqs. (19) then yield the following particular equations for the transformation of functions of Hamiltonian point dynamics, the generating functions in the realm of Hamiltonian field theory are now 4-vectors. Similarly, Poisson brackets, Lagrange brackets, as well as the canonical 2-forms now equally emerge as components of vector quantities. This emerging of 4-vector quantities in place of the scalar quantities in Hamiltonian point dynamics reflects the transition to four independent variables in Lorentz-invariant field theories. 
In the usual Lagrangian description that is based on a covariant Lagrangian density L, mappings of the fields φ_I and their four partial derivatives ∂_µφ_I are formulated in terms of point transformations, which constitute a subgroup of the superordinated group of canonical transformations. Thus, all point transformations in the Lagrangian formalism can be reformulated as canonical point transformations in the Hamiltonian description. This was demonstrated in our example section, where several well-known symmetry transformations of Lagrangian densities L were reformulated as canonical point transformations of corresponding Hamiltonian densities H.

which coincides with Eq. (47), as expected.

does not explicitly depend on the x^µ, we conclude that H'_H = H_H. The transformed Hamiltonian density H'_H is thus obtained by expressing the unprimed fields in terms of the primed ones.

We define the Hamiltonian vector field ∆^H_µ as the Legendre-transformed vector field ∆^L_µ,

$$\Delta^H_\mu = \frac{\partial\phi_I}{\partial x^\mu}\frac{\partial}{\partial\phi_I} + \frac{\partial\pi^{I\alpha}}{\partial x^\mu}\frac{\partial}{\partial\pi^{I\alpha}}. \tag{B.2}$$

$$\Delta^H_\mu\lrcorner\,\omega^\mu = \Delta^H_\mu\lrcorner\left(\mathrm{d}\phi_I\wedge\mathrm{d}\pi^{I\mu}\right) = \left(\Delta^H_\mu\lrcorner\,\mathrm{d}\phi_I\right)\mathrm{d}\pi^{I\mu} - \left(\Delta^H_\mu\lrcorner\,\mathrm{d}\pi^{I\mu}\right)\mathrm{d}\phi_I = \frac{\partial\phi_I}{\partial x^\mu}\,\mathrm{d}\pi^{I\mu} - \frac{\partial\pi^{I\alpha}}{\partial x^\alpha}\,\mathrm{d}\phi_I.$$

Inserting the canonical field equations (5) yields

$$\Delta^H_\mu\lrcorner\,\omega^\mu = \frac{\partial H}{\partial\pi^{I\mu}}\,\mathrm{d}\pi^{I\mu} + \frac{\partial H}{\partial\phi_I}\,\mathrm{d}\phi_I = \mathrm{d}H.$$

The 1-form equation

$$\Delta^H_\mu\lrcorner\,\omega^\mu = \mathrm{d}H \tag{B.4}$$

J. Struckmeier, A. Redelbach

Both the gauge fields as well as the momentum field tensors are thus mapped like a triplet under an SU(2) transformation. In the Abelian case of the electromagnetic field, all terms with ε contributions vanish. The non-Abelian gauge fields thus carry a charge under SU(2), so that self-interactions take place.
The corresponding self-coupling terms of the gauge fields can thus be deduced from the Lagrangian density (81), considering the structure of f^{µν}_a in (82), or, alternatively, from the Hamiltonian density (86) considering Eq. (84).

Conclusions

With the present paper, we have worked out a consistent local coordinate description of the foundations of covariant Hamiltonian field theory. The essential feature of this field theory is to define for each scalar field φ_I a 4-vector π_I of conjugate fields. Similar to classical Hamiltonian point dynamics, these fields, φ_I and the four π_{Iµ}, are treated as independent variables. All mappings (φ_I, π_I) → (φ_I', π_I') of these fields that preserve the Hamiltonian structure of the given dynamical system are referred to as canonical transformations. The local coordinate description enables us to explicitly formulate the field transformation rules for canonical transformations as derivatives of generating functions. In contrast to the scalar generating

Appendix A. Proof of the symmetry relation from Eq. (28)

From the transformation rules for the generating functions F_1, F_2, F_3, F_4, the symmetry relations (17), (20), (23), and (26) were derived, viz.,

The third symmetry relation represents the particular case µ = α = β of the general relation

To prove this, we first consider the following identities that hold for the derivatives of the fields that emerge from a general, not necessarily canonical, transformation, π′(φ, π), and π_{Iµ} = π_{Iµ}(φ′(φ, π), π′(φ, π)):

We express these identities in matrix form as

Both sides of this equation are now multiplied from the right by the matrix

We thus get

If the transformation is canonical, then we can insert the identities for the fundamental Poisson brackets from Eqs. (33) and (34),

The matrix product on the right-hand side is

By comparing the individual matrix components, we obtain the following four relations:

Comparing now Eq.
(A.1a) with the symmetry relation from Eq. (23), we conclude

Consequently, the inner product of this expression with ∂φ_J′/∂φ_I must also vanish,

Because of ∂φ_J′/∂π_{Kα}′ = 0, this equation is equivalent to

As this equation must hold for arbitrary coefficients ∂φ_J′/∂π_{Iµ}, it must hold separately for each index µ,

In conjunction with Eq. (A.1c), this proves the assertion.

Appendix B. Geometric representation of the field equations

Let f = f(φ_I, ∂_µφ) denote a function depending on the fields φ_I and on the 1-form ∂_µφ that is constituted by their first derivatives. Of course, the dynamics of the fields φ_I is supposed to follow from the Euler-Lagrange field equations (2). The change of f due to a change of the φ_I and the ∂_µφ_I is then
ASYMPTOTIC FREENESS OF JUCYS-MURPHY ELEMENTS

Lech Jankowski

arXiv:1202.0888

Abstract. We give a natural proof of the appearance of free convolution of transition measures in outer product of representations of symmetric groups by showing that some special Jucys-Murphy elements are asymptotically free.
Introduction

Almost everything seems to be known about representations of the symmetric groups S_n. The answers to basic questions are encoded in the combinatorial structure of Young diagrams; there is, for example, the Murnaghan-Nakayama formula for computing characters. In asymptotic representation theory, as n becomes large, this combinatorial description becomes very complicated and inefficient for concrete calculations, so one needs new methods. One way of dealing with this is to replace a Young diagram with its transition measure (introduced by Kerov [Ker99, Ker03]), which is related to the shape of the Young diagram. The theory is then more analytical, and it becomes much easier, for example, to describe the asymptotics of characters. In 1986 Voiculescu discovered a new type of independence, called free independence or just freeness. Just as in the case of classical independence of random variables, Voiculescu's freeness leads to a special type of convolution of probability measures on the real line, called free convolution. It turns out that it can be found in real-life situations, for example in random matrix theory [Spe93]. What is important for this article is that asymptotic representation theory described in terms of the transition measure is related to Voiculescu's free probability theory. This was first described in [Bia95] in the special case of the left regular representation of S_n. Then in [Bia98] Biane discovered more connections between asymptotic representation theory and free probability; e.g.
he proved that the typical irreducible component of the outer product of two irreducible representations of S_n can be asymptotically described by the free convolution of their transition measures. It is an important result, as computing the outer product of representations is a difficult matter related to Littlewood-Richardson coefficients. Biane's proof was quite difficult and somewhat unnatural, as no freely independent random variables appeared in it. In this paper we will show that the reason for the appearance of free convolution is that some elements of the group algebra C[S_n] called Jucys-Murphy elements are asymptotically freely independent.

Preliminaries

0.1. Partitions. A partition of A = {1, . . . , m} is any collection π = {B_1, . . . , B_k} of pairwise disjoint, nonempty subsets of A such that B_1 ∪ · · · ∪ B_k = A. We call the elements of π blocks and denote their number by |π|. If every block has two elements, we call π a pair partition. Assume we have a partition π = {(1, 5, 6, 8), (2, 4), (3), (7, 9)}. We can draw it in the following way: we mark the points 1, . . . , m on a line and connect the numbers of each block by lines drawn above them. A partition π is said to be non-crossing if there exist no 1 ≤ k_1 < l_1 < k_2 < l_2 ≤ m such that {k_1, k_2}, {l_1, l_2} are contained in two different blocks of π. This condition just means that when we draw π in the above-described way, the lines do not cross. A block B of π is an inner block of a block B′ if there exist two numbers in B′, one smaller than any number in B and one bigger than any number in B. We will distinguish between direct and indirect inner blocks: an inner block B of B′ is called direct if there is no block B″ such that B is an inner block of B″ and B″ is an inner block of B′.

In this article we will use the following notations:
(1) P(m) is the set of all partitions of an m-element set.
(2) NC(m) is the set of all non-crossing partitions.
(3) NC_{1,2}(m) is the set of those non-crossing partitions whose every block has cardinality one or two.
(4) NC_{1<2}(m) is the subset of NC_{1,2}(m) of those partitions whose every one-element block is an inner block of some two-element block.
(5) NC_{≥2}(m) is the set of all non-crossing partitions of an m-element set having blocks of cardinality at least two.

Let us define a function F: NC_{1<2}(m) → NC_{≥2}(m) by the following procedure: for every two-element block B of a partition π ∈ NC_{1<2}(m), take all its direct inner one-element blocks and merge them together with B. This procedure gives us a new non-crossing partition F(π) which does not have any one-element blocks. It is clear that F is a bijection, as there is a procedure to get π back from F(π): for every block B which has at least three elements, take all the elements except the biggest and the smallest one and put each of them separately into a new one-element block. We can now write

NC_{≥2}(m) ←F→ NC_{1<2}(m) ⊂ NC_{1,2}(m) ⊂ NC(m) ⊂ P(m).

Let p = (A_1, . . . , A_m) be a tuple of objects of any kind; some of them may appear many times in the tuple. We can think of (A_1, . . . , A_m) as a coloring of the set {1, 2, . . . , m}, namely we think of A_i as the color of the number i. We say that a partition π respects the coloring p if no block of π contains two numbers with different colors. Let NC^{(p)}(m) denote the set of all non-crossing partitions respecting the coloring p. If π ∈ NC^{(p)}(m), then p induces a coloring of the blocks of π. In this paper the coloring will have only two colors: X and Y.

0.2. Representation theory. A group representation is any homomorphism ρ from a group G to the automorphism group of some vector space V,

ρ: G → Aut(V).

A character of a representation ρ is a function χ_ρ: G → C given by χ_ρ(g) = tr ρ(g), where tr denotes the trace of a matrix divided by its dimension, i.e. tr(ρ(g)) = Tr(ρ(g))/Tr(ρ(e)). A representation is called irreducible if it is not a direct sum of other representations [Ser77]. Two representations ρ and ρ′ are equivalent if there are bases in the corresponding vector spaces V_ρ and V_ρ′ such that the matrices ρ(g) and ρ′(g) are equal for all g ∈ G.
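The partition notions of Section 0.1, the crossing condition and the bijection F, can be sketched in code. This is an illustration only; all helper names (crosses, set_partitions, F_inverse) are our own, not notation from the paper:

```python
from itertools import combinations

def crosses(B1, B2):
    """Two disjoint blocks cross iff there are k1 < l1 < k2 < l2 with
    k1, k2 in one block and l1, l2 in the other."""
    for a1, a2 in combinations(sorted(B1), 2):
        if any(a1 < c < a2 for c in B2) and any(c < a1 or a2 < c for c in B2):
            return True
    return False

def is_noncrossing(partition):
    return not any(crosses(B1, B2) for B1, B2 in combinations(partition, 2))

def set_partitions(elements):
    """Enumerate all partitions of a list of distinct elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

# |NC(m)| is the m-th Catalan number: 1, 2, 5, 14, 42, ...
counts = [sum(is_noncrossing(p) for p in set_partitions(list(range(1, m + 1))))
          for m in range(1, 6)]
print(counts)                                  # [1, 2, 5, 14, 42]

def F(partition):
    """Merge every one-element block into its direct (innermost enclosing)
    two-element block, following the procedure defining F in the text."""
    twos = [tuple(sorted(b)) for b in partition if len(b) == 2]
    merged = {t: list(t) for t in twos}
    for s in (b[0] for b in partition if len(b) == 1):
        a, b = min((t for t in twos if t[0] < s < t[1]), key=lambda t: t[1] - t[0])
        merged[(a, b)].append(s)
    return sorted(sorted(block) for block in merged.values())

def F_inverse(partition):
    """Keep the smallest and biggest element of each block together and put
    every middle element into its own one-element block."""
    result = []
    for block in map(sorted, partition):
        result.append([block[0], block[-1]])
        result.extend([x] for x in block[1:-1])
    return sorted(result)

pi = [[1, 8], [2], [3, 4], [5, 7], [6]]        # an element of NC_{1<2}(8)
print(F(pi))                                   # [[1, 2, 8], [3, 4], [5, 6, 7]]
print(F_inverse(F(pi)) == sorted(pi))          # True: the two procedures invert
```

Note that the example partition drawn in Section 0.1, {(1, 5, 6, 8), (2, 4), (3), (7, 9)}, is crossing, since the blocks {1, 5, 6, 8} and {7, 9} interleave (6 < 7 < 8 < 9); `is_noncrossing` returns False on it.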
It turns out that for a finite group G the maximal number of pairwise inequivalent irreducible representations is finite, and for symmetric groups irreducible representations can be enumerated by objects called Young diagrams. More information about representation theory can be found in [Ser77].

A Young diagram with n boxes can be defined as a descending sequence of nonnegative integers summing up to n or as a geometric object, namely a finite collection of boxes, or cells, arranged in left-justified rows with non-increasing row lengths as we move upwards. The equivalence is easy to see if we interpret the numbers as the lengths of the rows. In asymptotic representation theory the size of the diagram tends to infinity. In the great majority of results in this field we consider only so-called C-balanced diagrams. Let C be a positive real number. A Young diagram (y_1, . . . , y_j) with y_1 + · · · + y_j = n is said to be balanced or, more precisely, C-balanced if y_i ≤ C√n for all 1 ≤ i ≤ j and j ≤ C√n. Sometimes the assumption becomes very precise and we require that the sequence of Young diagrams (rescaled by √n to avoid growth of the size) tends to some prescribed shape. In this paper we will not use Young diagrams but we will refer to some dual objects, namely their transition measures. The asymptotic behaviour of characters of symmetric groups can be described by the following theorem of Biane ([Bia98], [Bia01a]).

Fact 1. Let λ_n be a sequence of C-balanced Young diagrams and ρ_n the corresponding representations of S_n. Fix a permutation σ ∈ S_k and note that σ can be treated as an element of S_n if we add n − k additional fixpoints. There exists a constant K such that

tr(ρ_n(σ)) ≤ K n^{−|σ|/2}.

0.3. Free probability theory. We will work in the frame of a non-commutative probability space [Voi86], i.e. a unital algebra A over C and a functional ϕ, unital in the sense that ϕ(1) = 1. We call a ∈ A a non-commutative random variable and ϕ(a^n) its moments.
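As a concrete illustration of a non-commutative random variable determined by its moments, consider a standard semicircular element: its distribution is the semicircle law with density √(4 − x²)/(2π) on [−2, 2], its only nonvanishing free cumulant (in the sense of Definition 1 below) is C₂ = 1, and its even moments are the Catalan numbers. The sketch below checks this numerically and, using the additivity of free cumulants over free summands (a standard consequence of Definition 2 below), also computes the moments of the free convolution of two such laws. None of this code is from the paper:

```python
from itertools import combinations
from math import pi as PI, sqrt, prod

def crosses(B1, B2):
    for a1, a2 in combinations(sorted(B1), 2):
        if any(a1 < c < a2 for c in B2) and any(c < a1 or a2 < c for c in B2):
            return True
    return False

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def noncrossing_partitions(m):
    for p in set_partitions(list(range(1, m + 1))):
        if not any(crosses(B1, B2) for B1, B2 in combinations(p, 2)):
            yield p

def moments_from_free_cumulants(cum, mmax):
    """Free moment-cumulant formula of Definition 1(4): the m-th moment is
    the sum over pi in NC(m) of the product of C_{|b|} over blocks b."""
    return [sum(prod(cum(len(b)) for b in p) for p in noncrossing_partitions(m))
            for m in range(1, mmax + 1)]

def semicircle_moment(k, steps=100000):
    """k-th moment of the density sqrt(4 - x^2) / (2 pi) on [-2, 2],
    by a midpoint Riemann sum."""
    h = 4.0 / steps
    xs = (-2.0 + (i + 0.5) * h for i in range(steps))
    return sum(x**k * sqrt(4.0 - x * x) for x in xs) * h / (2 * PI)

sc = lambda k: 1 if k == 2 else 0      # free cumulants of the semicircle law
mom = moments_from_free_cumulants(sc, 8)
print(mom)                             # [0, 1, 0, 2, 0, 5, 0, 14]
print(all(abs(semicircle_moment(k + 1) - mom[k]) < 1e-3 for k in range(8)))  # True

# Additivity of free cumulants realizes the free convolution: two free
# standard semicircles sum to a semicircle of variance 2.
print(moments_from_free_cumulants(lambda k: 2 * sc(k), 6))   # [0, 2, 0, 8, 0, 40]
```

The even moments m_{2k} of the variance-2 semicircle in the last line are Catalan(k)·2^k, matching the count of non-crossing pair partitions weighted by the doubled second cumulant.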
If there are many non-commutative random variables a_1, . . . , a_k, their mixed moments are the values of ϕ on words in a_1, . . . , a_k. The parallelism between classical and non-commutative probability is easy to see when we let A be the commutative algebra of all random variables having all moments and ϕ the classical expectation functional. Most of the classical results, like the law of large numbers or the central limit theorem, can be translated into this language. In many situations it turns out that even if a ∈ A is not a genuine random variable, the sequence ϕ(a^n) of its moments is the sequence of moments of a probability measure µ_a on the real line. We call this measure the distribution of a.

One of the fundamental tools in both classical and free probability theory are classical and free cumulants [NS99].

Definition 1. Assume we have a tuple (A_1, . . . , A_m) of non-commutative random variables. Free cumulants are a family of functions C_π(A_1, A_2, . . . , A_m) satisfying the following conditions:
(1) C_π(A_1, A_2, . . . , A_m) factorize according to the block structure of π, i.e.

C_π(A_1, A_2, . . . , A_m) = ∏_{b∈π} C_b(A_1, A_2, . . . , A_m),

(2) C_b(A_1, A_2, . . . , A_m) depends only on the arguments with indexes from b, so if b = {i_1, . . . , i_l} then

C_b(A_1, A_2, . . . , A_m) = C_l(A_{i_1}, . . . , A_{i_l}),

(3) C_l(A_{i_1}, . . . , A_{i_l}) are multilinear functionals,
(4) C_π satisfy the free moment-cumulant formula:

ϕ(A_1 A_2 · · · A_m) = ∑_{π∈NC(m)} C_π(A_1, A_2, . . . , A_m).

To get the definition of classical cumulants and the classical moment-cumulant formula we only have to replace the set NC(m) of non-crossing partitions in condition (4) by the set P(m) of all partitions.

Definition 2. We say that non-commutative random variables X_1, . . . , X_k are independent in the free way (or simply that they are free) if their mixed free cumulants vanish, i.e. C_l(X_{i_1}, . . .
, X_{i_l}) = 0 whenever at least two of the i_j are not equal. If A = A_1 = A_2 = · · · = A_m, we write simply C_m(A) for C_m(A, . . . , A).

The following definition of freeness is clearly equivalent to the one above and will be more convenient for our purpose:

Definition 3. We say that X_1, . . . , X_k are free if for every tuple (A_1, . . . , A_m) of these variables we have

ϕ(A_1 A_2 · · · A_m) = ∑_{π∈NC^{(p)}(m)} C_π(A_1, A_2, . . . , A_m).

To get the definition of asymptotic freeness we have to replace the equality by a limit:

Definition 4. Let (A_n, ϕ_n) be a sequence of non-commutative probability spaces and for every n let X^n_1, . . . , X^n_k ∈ A_n be a tuple of non-commutative random variables. The sequences (X^n_1), . . . , (X^n_k) are said to be asymptotically free if

ϕ_n(A^n_1 A^n_2 · · · A^n_m) → ∑_{π∈NC^{(p)}(m)} C_π(A_1, A_2, . . . , A_m)

for every tuple (A^n_1, . . . , A^n_m) of the sequences X^n_1, . . . , X^n_k.

Theorem 1 ([Voi86]). If X, Y are free random variables with respective distributions µ_X and µ_Y, the distribution of their sum depends only on µ_X and µ_Y and is called the free convolution of the measures µ_X and µ_Y. We denote it by µ_X ⊞ µ_Y.

Theorem 2 ([NS99]). If X_n and Y_n are asymptotically free sequences of random variables converging to some free random variables X and Y, the sequence of distributions of the sums X_n + Y_n converges in distribution to the distribution of X + Y, which is the free convolution µ_X ⊞ µ_Y. Convergence in distribution means that all mixed moments of X_n and Y_n converge to the respective mixed moments of X and Y.

The following fact is a simple consequence of combining Definition 2 with (1) in Definition 1 and will later be used to recognize free cumulants of free random variables.

Fact 2. Let (A_1, . . . , A_m) be a tuple of non-commutative random variables X and Y with respective distributions µ_X and µ_Y. As we described before, (A_1, . . . , A_m) defines a coloring of the set {1, 2, . . . , m} with the colors X and Y.
If X and Y are free, then for π ∈ NC^{(p)}(m) their free cumulants are given by

C_π(A_1, . . . , A_m) = ∏_{b∈π_X} C^{µ_X}_{|b|} ∏_{b∈π_Y} C^{µ_Y}_{|b|},

where π_X and π_Y are the sets of those blocks of π which have color X and Y respectively, and C^{µ}_{k} denotes the k-th free cumulant of the measure µ.

The only non-commutative probability space investigated in this paper will be the group algebra C[S_{2n+1}] of the symmetric group S_{2n+1} with a functional ϕ defined by

ϕ(σ) = tr (ρ_1 ⊗ ρ_2 ⊗ id)(σ ↓^{S_{2n+1}}_{S_n×S_n×{e}}).

The arrow denotes restriction to a subgroup, so if σ ∈ S_{2n+1} is not an element of S_n × S_n × {e}, the value of ϕ on it is by definition equal to zero. If σ is an element of the subgroup, it is a product of two permutations σ_1 and σ_2 such that the support of σ_1 is contained in {1, 2, . . . , n} and the support of σ_2 is contained in {n + 1, . . . , 2n}. The value of ϕ on σ is then the product of the two characters of the irreducible representations ρ_1 and ρ_2 of the symmetric group S_n evaluated respectively on σ_1 and σ_2.

0.4. Outer product. Let H be a subgroup of a finite group G and let ρ be a representation of H in a space V. The induced representation U_ρ of G is defined [FH91] as follows:
• The set {f: G → V : (∀h ∈ H)(∀g ∈ G) f(hg) = ρ(h)f(g)} is the space of U_ρ.
• G acts by the formula [U_ρ(g_0)f](g) = f(gg_0).

The outer product [FH91] of two representations ρ_1 and ρ_2 of the symmetric group S_n is the representation of S_{2n} induced from the representation ρ_1 ⊗ ρ_2 of the subgroup S_n × S_n defined by (ρ_1 ⊗ ρ_2)(g_1, g_2) = ρ_1(g_1) ⊗ ρ_2(g_2). In this paper what we need to know about the outer product is how to compute its character.

The Jucys-Murphy element J_n ∈ C[S_n] is the sum of transpositions J_n = (1, n) + (2, n) + · · · + (n − 1, n). The summands will be called Jucys-Murphy transpositions. There are a few equivalent definitions of the Kerov transition measure, but for our purposes the following one is the most convenient. Given a Young diagram λ or, equivalently, an irreducible representation of S_n, we define its transition measure µ_λ as the unique probability measure on the real line with the property that

∫_R x^k dµ_λ(x) = tr ρ_λ(J^k_{n+1} ↓^{S_{n+1}}_{S_n}).

In the following we will consider normalized Jucys-Murphy elements.
Given a Young diagram λ or, equivalently, an irreducible representation of S_n, we define its transition measure µ_λ as the unique probability measure on the real line with the property that

∫_ℝ x^k dµ_λ = tr ρ_λ (J^k_{n+1} ↓^{S_{n+1}}_{S_n}).

In the following we will consider normalized Jucys-Murphy elements X_n and Y_n. It follows from linearity that the value of ϕ on a product of a tuple of Jucys-Murphy elements X_n and Y_n is a sum of values of ϕ on many products of transpositions. Our aim is to group the summands to obtain a sum over non-crossing partitions. We will then get the formula from Definition 4. We will investigate Jucys-Murphy elements, Jucys-Murphy transpositions and their moments, so now we will try to understand these objects as well as we have to. Assume we have a tuple A_1, . . . , A_m where every A_i is equal either to X_n or to Y_n. What we need to compute is ϕ(A_1 · · · A_m); each A_i is a sum of Jucys-Murphy transpositions. We will need this notation to write the moment of a product of a tuple ϕ(A_1 · · · A_m). Define a permutation γ by putting γ(α'_i) = α_i for every i. For the other numbers we can put arbitrary values but we need to make sure that γ ∈ S_n × S_n × {e}. It is possible because when we put γ(α'_i) = α_i we use exactly the same number of elements from {1, 2, . . . , n} as arguments and as values. Any permutation defined in that way satisfies

a'_1 · · · a'_m = γ^{−1} a_1 · · · a_m γ.

Lemma 2. If two permutations σ and σ' are conjugate by an element of S_n × S_n × {e} then ϕ(σ) = ϕ(σ').

Proof. If σ is not an element of S_n × S_n × {e}, then neither is σ' and by definition ϕ(σ) = 0 = ϕ(σ'). If σ ∈ S_n × S_n × {e} then we can split it into commuting σ_1 and σ_2 such that σ = σ_1 σ_2 and supp(σ_1) ⊂ {1, 2, . . . , n}, supp(σ_2) ⊂ {n + 1, n + 2, . . . , 2n}. If σ' = γσγ^{−1} where γ ∈ S_n × S_n × {e} then we can analogously split γ into γ_1 and γ_2. One can then see that the conjugation is really taken in the subgroups S_n × {e_{S_n}} × {e} and {e_{S_n}} × S_n × {e}, namely σ'_i = γ_i σ_i γ_i^{−1}.
By definition the value of ϕ is a product of corresponding characters, i.e. traces of matrices: tr(σ i ) = tr(γ i σ i γ −1 i ) = tr(σ i ) and thus ϕ(σ) = ϕ(σ ) Lemma 3. Let a 1 , . . . , a m be Jucys Murphy transpositions. Assume that for some k we have a k = a i for all i = k. Let σ 1 = a 1 a 2 · · · a k−1 and σ 2 = a k+1 · · · a m . Then |σ 1 a k σ 2 | = |σ 1 σ 2 | + 1 where the length |σ| of a permutation |σ| is defined as the minimal number of factors needed to write σ as a product of transpositions. Proof. To prove the lemma it suffices to explain the following equalities: |σ 1 a k σ 2 | = |σ 2 σ 1 a k | = |σ 2 σ 1 | + 1 = |σ 1 σ 2 | + 1. The first two permutations are conjugate hence have the same length. The second equality comes from the fact that if a k = (j, 2n + 1), the numbers j and 2n + 1 appear in the permutation σ 2 σ 1 in two different cycles. Therefore if we multiply σ 2 σ 1 by a k , then a k glues these two cycles together so the length of their product is bigger by one that the length of σ 2 σ 1 . The last equality holds because again the two permutations are conjugate. The following Lemma is proved in [Bia95]. . . , m} joining in blocks those numbers j, k for which a j = a k and assume this is a pair partition. Then the product a 1 · · · a m is the identity permutation if and only if π ∈ N C 2 (m). Proof. As the if part of the proof is obvious we will only prove the only part. We know thus that a 1 ...a m is the identity permutation and that every factor has exactly one copy in the tuple. We prefer to other element α k . Knowing that a 1 · · · a m is the identity permutation we know that the copy of a k−1 can't be between a k and a m because it would make it impossible for 2n + 1 to come back from the place of α k . So the copy of a k−1 is some a j with j < k − 1. We iterate the same argument until we get to 1. Let us now see what happens to the number α m . 
First it goes to the place of 2n + 1, then the transposition a m−1 = (α m−1 , 2n+1) = a r sends it to the place of α m−1 . Knowing that a 1 ...a m is the identity permutation we know that α m has to be back on the place of the number 2n + 1 right before the action of a k because it is its only way back to the right place. It follows that a r is somewhere between a k+1 and a m−2 . We iterate the argument until we get to a k+1 . Applying the same argument inductive finishes the proof. Lemma 5. Let (a 1 , . . . , a m ) be a tuple of Jucys-Murphy transpositions, let π be the partition described in the previous Lemma and let σ = a 1 · · · a m be the product of the tuple. Then 2|π| ≥ |σ| + m if and only if π ∈ N C 1,2 (m). Proof. If we remove every one-element block and all the corresponding transpositions from the tuple (a 1 , . . . , a m ) , we get another partition π of the m -element set and a new permutation σ . First notice that every one-element block has the same contribution to both sides of the inequality. Indeed, it is clear that its contribution to m and π is equal to one and since it corresponds to some transposition which is not repeated in the tuple, Lemma 3 tells us that the contribution to |σ| is also one. Thus the new partition π satisfies the inequality if and only if π does. Assume now that the inequality is true. The maximal possible value of 2|π | is m because every block consists at least two elements. As we already have m on the right hand side of the inequality, it follows that m is also the minimal possible value of 2π so π is a pair partition and that |σ | has to be equal 0. Partitions of {1, 2, ..., m } for which the product of corresponding tuples has length 0 ( is equal to e) are by Lemma 4 exactly N C 2 (m ). Conversely if π ∈ N C 2 (m), then σ = e and 2|π | = m so the inequality is true. We will now show how to read the cycle structure of a product of Jucys-Murphy transpositions corresponding to a given partition π ∈ N C 1,2 (m). 
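Before turning to the cycle structure, the identity criterion of Lemma 4 can be brute-force checked for small m: assign one transposition (α_b, 2n+1) to each block of a pair partition, with pairwise distinct α_b, and test that the product a_1 · · · a_m is the identity exactly for the non-crossing pair partitions. The helper names below (pair_partitions, is_noncrossing, lemma4_holds) are illustrative, not taken from the text.

```python
from itertools import combinations

def transposition(a, b):
    return {a: b, b: a}

def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as dicts (identity elsewhere)
    keys = set(p) | set(q)
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in keys}

def pair_partitions(elems):
    # all partitions of elems (a sorted tuple) into two-element blocks
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for j in range(len(rest)):
        for tail in pair_partitions(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + tail

def is_noncrossing(blocks):
    return not any(a < c < b < d or c < a < d < b
                   for (a, b), (c, d) in combinations(blocks, 2))

def lemma4_holds(m, n):
    star = 2 * n + 1
    for blocks in pair_partitions(tuple(range(1, m + 1))):
        alpha = {}                        # position -> its block's alpha_b
        for idx, blk in enumerate(blocks, start=1):
            for pos in blk:
                alpha[pos] = idx          # alphas 1..m/2 are pairwise distinct
        prod = {}
        for pos in range(1, m + 1):       # left-to-right product a_1 ... a_m
            prod = compose(prod, transposition(alpha[pos], star))
        identity = all(prod[x] == x for x in prod)
        if identity != is_noncrossing(blocks):
            return False
    return True

print(lemma4_holds(6, n=10))
```

Running this for m = 6 confirms the "if and only if" of Lemma 4 over all fifteen pair partitions.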
Denote by α i the element exchanged with 2n + 1 by the transposition a i , i.e. a i = (α i , 2n + 1). Lemma 6. Suppose that a tuple a 1 , . . . , a m corresponds to a partition π ∈ N C 1,2 (m). Let i 1 ≤ · · · ≤ i k be the elements of those one-element blocks of π which are not inner blocks. Then the product a 1 · · · a m has a cycle (2n + 1, α i k , . . . , α i 1 ). If there are no such one-element inner blocks, then 2n + 1 is a fixed point of the product. The other nontrivial cycles are in one-to-one correspondence with those two-element blocks of π which have some direct inner one-element blocks. If such a two-element block corresponds to some a j and the direct inner oneelement blocks correspond to a j 1 , . . . , a j l with j 1 ≤ · · · ≤ j l , then the corresponding cycle is (α j , α j l , . . . , α j 1 ). It is strongly recommended to read the following example and do the included calculations before or instead of reading the proof. Example 1. Suppose we have a tuple (a 1 , . . . , a 9 ) with a 2 = a 9 and a 4 = a 7 and no other equalities. The corresponding partition is on the picture: The product of the tuple: a 1 · · · a 9 = = (α 1 , 2n + 1)(α 9 , 2n + 1)(α 3 , 2n + 1)(α 7 , 2n + 1) (α 5 , 2n + 1)(α 6 , 2n + 1)(α 7 , 2n + 1)(α 8 , 2n + 1)(α 9 , 2n + 1) has the following cycle decomposition: (2n + 1, α 1 )(α 9 , α 8 , α 3 )(α 7 , α 6 , α 5 ). Proof of Lemma 6. Let us first see what happens to the element 2n + 1 under the action of the product (a 1 · · · a m ). The first transposition a m sends it to the place of the element α m . If m is in a one-element block of π, then 2n + 1 will stay at the place of α m . If m is in a two-element block {k, m}, then 2n + 1 will return to its original place under the action of a k . Any a l with k < l < m will not affect 2n + 1. So the element 2n + 1 goes to the place of an element α i with {i} being the first non-inner one-element block from the right. If such an i does not exist, then 2n + 1 stays at its original place. 
Note that the same is true for any element that is at the place of the element 2n + 1. So if 2n + 1 goes to the place of some α i , then α i goes to the place of the first α j where {j} is the first one element block from the right if we restrict the partition π to the set {1, 2, . . . , i − 1}. Using the above-described method we can read the cycle consisting 2n + 1 out of the partition π. First we write 2n + 1 and then we choose all non-inner one-element blocks {s} of π from the right to the left and write the corresponding α s after 2n + 1 from the left to the right just like in Example 1. Let us now see how to construct the other cycles of a 1 · · · a m . Choose some two-element block {p, r} of π. Note that the elements α p , α p+1 , . . . , α r cannot appear in transpositions a 1 , . . . , a p−1 , a r+1 , . . . , a m as π is noncrossing. Consider α r . Under the action of the product a 1 · · · a m it is first moved by a r and it goes to the place of the element 2n + 1. Definition 5. Let (n) k = n(n − 1) · · · (n − k + 1) denote the product of descending integers. Lemma 7. Suppose we have a tuple A 1 , . . . , A m of Jucys-Murphy elements i.e. every A i is equal either to X n or to Y n and a partition π respecting the corresponding coloring. Then the number of tuples a 1 , . . . , a m such that π ≈ a 1 , . . . , a m ∼ A 1 , . . . , A m is equal to (n) k (n) l where k is the number of blocks of π with color X and l is the number of blocks of π with color Y . The following Lemma is a reformulation of Theorem 1.3 from [Bia98]. Lemma 8. Let balanced Young diagrams λ 1 and λ 2 corresponding to ρ 1 and ρ 2 in the definition of ϕ have, in the limit when n goes to infinity, some limit shapes Λ 1 and Λ 2 . Let σ be a product of some tuple of Jucys-Murphy transpositions satisfying π ≈ (a 1 , . . . , a m ) ∼ (A 1 , . . . , A m ) and assume that σ ∈ S n × S n × {e}. Let σ 1 , σ 2 be such that supp(σ 1 ) ⊂ {1, . . . , n}, supp(σ 2 ) ⊂ {n + 1, . . . 
, 2n} and σ = σ_1 σ_2, where supp denotes the support of a permutation. Then

n^{|σ|/2} ϕ(a_1 · · · a_m) → Π_{c|σ_1} C^{µ_{Λ_1}}_{|c|+2} · Π_{c|σ_2} C^{µ_{Λ_2}}_{|c|+2},

where C^{µ_Λ}_k denotes the k-th free cumulant of µ_Λ (the products run over the cycles c of σ_1 and σ_2).

The result

Now we are ready to compute the mixed moment of a tuple (A_1, A_2, . . . , A_m):

ϕ(A_1 A_2 · · · A_m) = n^{−m/2} Σ_{(a_1,...,a_m)∼(A_1,...,A_m)} ϕ(a_1 · · · a_m).

Every tuple (a_1, . . . , a_m) of transpositions defines a partition π ≈ (a_1, . . . , a_m), so we can write the above sum grouping the summands according to partitions:

ϕ(A_1 A_2 · · · A_m) = n^{−m/2} Σ_{π∈P(m)} Σ_{π≈(a_1,...,a_m)∼(A_1,...,A_m)} ϕ(a_1 · · · a_m).   (1)

From Lemmas 1 and 2 we know that the value of ϕ on each tuple (a_1, . . . , a_m) corresponding to a fixed partition π is the same. We can thus denote this value by ϕ_π(A_1, A_2, . . . , A_m) and write

ϕ(A_1 A_2 · · · A_m) = Σ_{π∈P(m)} n^{−m/2} #{(a_1, . . . , a_m) : π ≈ (a_1, . . . , a_m) ∼ (A_1, . . . , A_m)} ϕ_π(A_1, A_2, . . . , A_m).

Then from Lemma 7 we get

ϕ(A_1 A_2 · · · A_m) = Σ_{π∈P(m)} n^{−m/2} (n)_k (n)_l ϕ_π(A_1, A_2, . . . , A_m).

From Fact 1 we know that for a fixed π the absolute value of a summand is bounded:

|n^{−m/2} (n)_k (n)_l ϕ_π(A_1, A_2, . . . , A_m)| ≤ n^{−m/2} n^{|π|} C n^{−|σ|/2} = C n^{|π|−(m+|σ|)/2},

where σ is a product of any tuple which satisfies π ≈ (a_1, . . . , a_m) ∼ (A_1, . . . , A_m) (the value |σ| does not depend on the choice of the tuple). By Lemma 5, the only partitions whose contribution does not asymptotically vanish are the partitions from NC_{1,2}(m), and from Lemma 6 we know how to read permutations out of partitions. The functional ϕ is equal to zero on permutations which do not belong to S_n × S_n × {e}, so in order to get some nonzero value, every two-element block must have the same color as all its direct inner one-element blocks. This just means that F(π), where F is the function described in Section 0.1, must respect the coloring given by A_1, . . . , A_m, so it is better to sum over F(π) instead of π. We can thus write

ϕ(A_1 A_2 · · · A_m) = Σ_{F(π)∈NC^{(p)}_{>1}(m)} n^{−m/2} (n)_k (n)_l ϕ_π(A_1, A_2, . . . , A_m) + o(1).

From Lemma 8 we have:

ϕ(A_1 A_2 · · · A_m) = Σ_{F(π)∈NC^{(p)}_{>1}(m)} [(n)_k (n)_l n^{−m/2} n^{−|σ|/2}] · [n^{|σ|/2} ϕ_π(A_1, A_2, . . . , A_m)],

where the first bracket tends to 1 and the second tends to Π_{c|σ_1} C^{µ_{Λ_1}}_{|c|+2} Π_{c|σ_2} C^{µ_{Λ_2}}_{|c|+2}, so that

ϕ(A_1 A_2 · · · A_m) → Σ_{F(π)∈NC^{(p)}_{>1}(m)} Π_{c|σ_1} C^{µ_{Λ_1}}_{|c|+2} Π_{c|σ_2} C^{µ_{Λ_2}}_{|c|+2}.

LECH JANKOWSKI

From Fact 2 we know that Π_{c|σ_1} C^{µ_{Λ_1}}_{|c|+2} Π_{c|σ_2} C^{µ_{Λ_2}}_{|c|+2} are free cumulants of a tuple of some free non-commutative random variables with distributions µ_{Λ_1} and µ_{Λ_2}. By Definition 4 this means that A_1, . . . , A_m are asymptotically free.

Research was supported by the Polish Ministry of Higher Education research grant N N201 364436 for the years 2009-2012.

and the distribution of A is some measure µ, then we call C_m(A, A, . . . , A) the m-th free cumulant of A or the m-th free cumulant of µ and we write C_m(A) or C^µ_m.

where p denotes the coloring of the set {1, 2, . . . , m} given by p(i) = A_i and NC^{(p)}(m) is the set of those non-crossing partitions of {1, 2, . . . , m} which respect the coloring p.

Theorem 3 (Frobenius' Duality Theorem). [FH91, NaiS82] Let ρ_G be a representation of a group G induced from a representation ρ_H of its subgroup H. Let χ_G and χ_H be the corresponding characters and let A_G = {g h_0 g^{−1} : g ∈ G} ∈ C[G] and A_H = {h h_0 h^{−1} : h ∈ H} ∈ C[H] be the conjugacy classes of the same element h_0 of H but in the different groups. Then χ_G(A_G) = χ_H(A_H). Thus to compute a character of the outer product of representations it suffices to compute the character of their tensor product, i.e. the product of their characters.

0.5. Jucys-Murphy elements and transition measure. In this section we will investigate moments of Jucys-Murphy elements. A Jucys-Murphy element [J66, OV04] is the sum of transpositions J_n = (1, n) + (2, n) + · · · + (n − 1, n) ∈ C[S_n].

X_n := n^{−1/2} Σ_{i=1}^{n} (i, 2n+1),   Y_n := n^{−1/2} Σ_{i=n+1}^{2n} (i, 2n+1).
The genuine Jucys-Murphy element is their scaled sum √ n(X n + Y n ). Elements X n and Y n can be seen as Jucys-Murphy elements in subalgebras of C[S 2n+1 ] generated by sugroups {g ∈ S 2n+1 : g(i) = i, for i ∈ {n + 1, . . . 2n}} and {g ∈ S 2n+1 : g(i) = i, for i ∈ {1, . . . n}} respectively. We will call X n and Y n Jucys-Murphy elements as it makes sense in the context of outer product of representations. A i is a sum of Jucys-Murphy transpositions, all together normalized by a factor n − m 2 . For a tuple (a 1 , . . . , a m ) of Jucys-Murphy transpositions we will write (a 1 , . . . , a m ) ∼ (A 1 , . . . , A m ) if each a i is a summand of the corresponding A i . Each tuple (a 1 , . . . , a m ) defines a partition π of the set {1, . . . , m} by the following rule: two numbers are in the same block if and only if the corresponding transpositions are equal. We will denote it by π ≈ (a 1 , . . . , a m ). Of course, for every partition π there are many tuples (a 1 , . . . , a m ) defining π.Each tuple (A 1 , . . . , A m ) defines a coloring of the set {1, . . . , m}.The color of a number i is simply A i , i.e. either X n or Y n . So if π ≈ (a 1 , . . . , a m ) ∼ (A 1 , . . . , A m ) then the partition π must respect the coloring defined by (A 1 , . . . , A m ). Lemma 1 . 1If (a 1 , . . . , a m ) ≈ π ≈ (a 1 , . . . , a m ) and (a 1 , . . . , a m ) ∼ (A 1 , . . . , A m ) ∼ (a 1 , . . . , a m ), then the products a 1 · · · a m and a 1 · · · a m are permutations conjugate by an element of S n × S n × {e}.Proof. Let b 1 , . . . , b k be blocks of π. Every block b j corresponds to some Jucys-Murphy transposition a i = (α i , 2n+1) and some a i = (α i , 2n+1). From the assumption that (a 1 1, . . . , a m ) ∼ (A 1 , . . . , A m ) ∼ (a 1 , . . . , a m ), both α i and α i belong either to {1, 2, . . . , n} or {n + 1, n + 2, . . . , 2n}. Lemma 4 . 4Suppose a 1 , . . . , a m are Jucys-Murphy transpositions. Define a partition π of the set {1, . 
think of the product a_1 . . . a_m as of m transpositions acting 'separately' in the appropriate order. Let us see what happens with the number 2n + 1 under the action of a_1 . . . a_m. Let a_m = (α_m, 2n + 1) = a_k. The transposition a_m sends 2n + 1 to the place of α_m and it stays there until a_k sends it back. If k ≠ 1 then a_{k−1} sends 2n + 1 to the place of some

Now we can apply the same argument as before with the difference that – as we are inside the block {p, r} – instead of looking at non-inner one-element blocks we need to look at direct inner one-element blocks of {p, r}. So to read the cycle corresponding to the block {p, r} out of the partition π we need to choose all direct inner one-element blocks {s} of {p, r} from the right to the left and write the corresponding α_s after α_r from the left to the right.

second sum is taken over all tuples (a_1, . . . , a_m) such that π ≈ (a_1, . . . , a_m) and (a_1, . . . , a_m) ∼ (A_1, . . . , A_m). #{(a_1, . . . , a_m) : π ≈ (a_1, . . . , a_m) ∼ (A_1, . . . , A_m)} ϕ_π(A_1, A_2, . . . , A_m).

blocks. A block B' is a direct inner block of a block B if there is no block B'' with the property that B' is an inner block of B'' and B'' is an inner block of B. For example, the block {2, 4} is a direct inner block of {1, 5, 6, 8} because there are no blocks in between, whilst the block {3} is an indirect inner block of {1, 5, 6, 8} because there is {2, 4} in between.

References

[Bia95] Philippe Biane. Permutation model for semicircular systems and quantum random walks. Pacific J. Math., 171(2):373-387, 1995.
[Bia98] Philippe Biane. Representations of symmetric groups and free probability. Adv. Math., 138(1):126-181, 1998.
[Bia01] Philippe Biane. Approximate factorization and concentration for characters of symmetric groups. Internat. Math. Res. Notices, (4):179-192, 2001.
[FH91] William Fulton, Joe Harris. Representation Theory: A First Course. Springer, 1991.
[J66] Algimantas Adolfas Jucys. On the Young operators of the symmetric group. Lietuvos Fizikos Rinkinys, 6:163-180, 1966.
[Ker99] S. Kerov. A differential model for the growth of Young diagrams. In Proceedings of the St. Petersburg Mathematical Society, Vol. IV, volume 188 of Amer. Math. Soc. Transl. Ser. 2, pages 111-130, Providence, RI, 1999. Amer. Math. Soc.
[Ker03] S. V. Kerov. Asymptotic representation theory of the symmetric group and its applications in analysis, volume 219 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 2003. Translated from the Russian manuscript by N. V. Tsilevich, with a foreword by A. Vershik and comments by G. Olshanski.
[NaiS82] M. A. Naimark, A. I. Stern. Theory of Group Representations. Springer-Verlag, New York, 1982. Translated by Elizabeth and Edwin Hewitt. 568 pp.
[NS99] A. Nica, R. Speicher. Lectures on the Combinatorics of Free Probability Theory. Paris, 1999.
[OV04] Andrei Okounkov, Anatoly Vershik. A New Approach to the Representation Theory of the Symmetric Groups. 2. Zapiski Seminarov POMI, v. 307, 2004. (In Russian.)
[Ser77] Jean-Pierre Serre. Linear Representations of Finite Groups. Number 42 in Graduate Texts in Mathematics. Springer-Verlag, 1977.
[Spe93] R. Speicher. Free convolution and the random sum of matrices. RIMS, 29:731-744, 1993.
[Voi86] Dan Voiculescu. Addition of certain noncommuting random variables. J. Funct. Anal., 66(3):323-346, 1986.

E-mail address: [email protected]

Instytut Matematyczny, Uniwersytet Wrocławski, Pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland
[]
[ "Quantum Walk Topology and Spontaneous Parametric Down Conversion", "Quantum Walk Topology and Spontaneous Parametric Down Conversion" ]
[ "Graciana Puentes \nDepartamento de Fisica\nFacultad de Ciencias Exactas y Naturales\nPabellón 1, Ciudad Universitaria1428Buenos AiresArgentina\n" ]
[ "Departamento de Fisica\nFacultad de Ciencias Exactas y Naturales\nPabellón 1, Ciudad Universitaria1428Buenos AiresArgentina" ]
[]
In a recent detailed research program we proposed to study the complex physics of topological phases by an all-optical implementation of a discrete-time quantum walk. The main novel ingredient proposed for this study is the use of non-linear parametric amplifiers in the network, which could in turn be used to emulate intra-atomic interactions and thus analyze many-body effects in topological phases even when using light as the quantum walker. In this paper, and as a first step towards the implementation of our scheme, we analyze the interplay between quantum walk lattice topology and spatial correlations of bi-photons produced by spontaneous parametric down-conversion. We also describe different detection methods suitable for our proposed experimental scheme.
10.1364/josab.33.000461
[ "https://arxiv.org/pdf/1510.03089v2.pdf" ]
118,515,986
1510.03089
6ce4c7ffb6d783096987d76f3d01e07ab9c6eb89
Quantum Walk Topology and Spontaneous Parametric Down Conversion Graciana Puentes Departamento de Fisica Facultad de Ciencias Exactas y Naturales Pabellón 1, Ciudad Universitaria1428Buenos AiresArgentina Quantum Walk Topology and Spontaneous Parametric Down Conversion numbers: 4265Lm4250Dv4265Wi In a recent detailed research program we proposed to study the complex physics of topological phases by an all optical implementation of a discrete-time quantum walk. The main novel ingredient proposed for this study is the use of non-linear parametric amplifiers in the network which could in turn be used to emulate intra-atomic interactions and thus analyze many-body effects in topological phases even when using light as the quantum walker. In this paper, and as a first step towards the implementation of our scheme, we analize the interplay between quantum walk lattice topology and spatial correlations of bi-photons produced by spontaneous parametric down-conversion. We also describe different detection methods suitable for our proposed experimental scheme. 1.. INTRODUCTION Phase transitions play a fundamental role in physics. From melting ice to the creation of mass in the Universe, phase transitions are at the center of most dynamical processes which involve an abrupt change in the properties of a system. Phase transitions are usually driven by some form of fluctuation. While classical phase transitions are typically driven by thermal noise, quantum phase transitions are triggered by quantum fluctuations. Quantum phase transitions have been extensively studied in a large number of fields ranging from cosmology to condensed matter and have received much attention in the field of ultra-cold atoms since the observation of Bose-Einstein condensation [1], and the subsequent experimental realization of Superfluid-Mott Insulator phase transition in optical lattices [2]. 
A common feature of quantum phase transitions is that they involve some form of spontaneous symmetry breaking, such that the ground state of the system has less overall symmetry than the Hamiltonian and can be described by a local order parameter. A rather distinctive class of quantum phases is present in systems characterized by a Hilbert space which is split into different topological sectors, the so called topological phases. Topological phases have received much attention after the discovery of the quantum Hall effect [3] and the interest increased following the prediction [4] and experimental realization [5] of a new class of material called topological insulators. Topological insulators are band insulators with particular symmetry properties arising from spin-orbit interactions which are predicted to exhibit surface edge states which should reflect the non-trivial topological properties of the band structure, and which should be topologically protected by time reversal symmetry. Unlike most familiar phases of matter which break different kinds of symmetries, topological phases are not characterized by a broken symmetry, they have degenerate ground states which present more symmetry than the underlying Hamiltonian, and can not be described by a local order parameter. Rather, these partially unexplored type of phases are described by topological invariants, such as the Chern number which is intimately related to the adiabatic Berry phase, and are predicted to convey a variety of exotic phenomena, such as fractional charges and magnetic monopoles [6]. It has recently been theoretically demonstrated that it is possible to simulate a large "Zoo" of topological phases by means of discrete-time quantum walks (DTQWs) of a single particle hopping between adjacent sites of an optical lattice, through a sequence of unitary operations [7,8]. 
In this paper, we propose a detailed research program for the study of non-linear effects in photonic quantum walks and their interplay with topological phenomena [9]. This paper is based on an original proposal written in the year 2010 by G. Puentes [10,11]. More specifically, we analyze the interplay between a non-trivial topology determined by a linear quantum walk Hamiltonian (H QW ) and the phase-matching condition characterizing bi-photons produced by the non-linear process of spontaneous parametric down conversion (SPDC), characterized by a non-linear Hamiltonian (H SPDC ). By considering both linear and non-linear contributions in the overall bi-photon Hamiltonian, we analyze the coupling efficiency and emission probability in different topological scenarios. Random walks have been used to model a variety of dynamical physical processes containing some form of stochasticity, including phenomena such as Brownian motion and the transition from binomial to Gaussian distribution in the limit of large statistics. The quantum walk (QW) is the quantum analogue of the random walk, where the classical walker is replaced by a quantum particle, such as a photon or an electron, and the stochastic evolution is replaced by a unitary process. The stochastic ingredient is added by introducing some internal degrees of freedom which can be stochastically flipped during the evolution, which is usually referred to as a quantum coin. One of the main ingredients of quantum walks is that the different paths of the quantum walker can interfere, and therefore present a complicated (non-Gaussian) probability distribution for the final position of the walker after a number of discrete steps.
In recent years, quantum walks have been successfully implemented to simulate a number of quantum phenomena such as photosynthesis [12], quantum diffusion [13], vortex transport [14] and electrical breakdown [15], and they have provided a robust platform for the simulation of quantum algorithms and maps [16]. QWs have been experimentally implemented in the context of NMR [17], cavity QED [18], trapped ions [19], cold atoms [20] as well as optics, both in the spatial [21] and frequency domain [22]. In recent years, quantum walks with single and correlated photons have been successfully introduced using wave-guides [23] and bulk optics [25], and time-domain implementations [24]. It is relevant to point out that any implementation of a quantum walk so far [13-16, 18, 20, 22-25, 27, 28] has introduced passive linear elements only for the composing elements of the random network. A full class of topological insulators can be realized in a system of noninteracting particles, with a binary (pseudo) spin space for bosons (fermions), via a random walk of discrete time unitary steps as described in Ref. [7]. The particular type of phase is determined by the size of the system (1D or 2D) and by the underlying symmetries characterizing the Hamiltonian, such as particle-hole symmetry (PHS), time-reversal symmetry (TRS), or chiral symmetry (CS). The 1D discrete-time quantum walk (DTQW) can be specified by a series of unitary spin-dependent translations T and rotations R(θ), where θ specifies the rotation angle. Thus, the quantum evolution is determined by applying a series of unitary operations or steps:

U(θ) = T R(θ).   (1.1)

The generator of the unitary evolution operator (or map) in Eq. (1.1) is the time-independent Hamiltonian H(θ), for which the discrete-time evolution operator U(θ) can be defined as:

U(θ) = e^{−iH(θ)δt},   (1.2)

where we have chosen ℏ = 1, and the finite time evolution after N steps is given by U^N = e^{−iH(θ)Nδt}.
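As a numerical illustration of the step operator U(θ) = T R(θ), a minimal 1D DTQW simulation with the coin R_y(θ) of Eq. (2.5) shows the characteristic ballistic, non-Gaussian walker distribution. The initial coin state and θ = π/4 below are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

def dtqw_distribution(theta, steps):
    # psi[x, c]: amplitude at position x (index `steps` = origin), coin c in {H, V}
    psi = np.zeros((2 * steps + 1, 2), dtype=complex)
    psi[steps] = np.array([1.0, 1.0j]) / np.sqrt(2)      # symmetric initial coin
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    for _ in range(steps):
        psi = psi @ R.T                                   # coin rotation R_y(theta)
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                      # |H> component moves right
        shifted[:-1, 1] = psi[1:, 1]                      # |V> component moves left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)

steps = 50
prob = dtqw_distribution(np.pi / 4, steps)
x = np.arange(-steps, steps + 1)
print("norm:", prob.sum())                # unitarity: total probability stays 1
print("<x^2>:", (x ** 2 * prob).sum())    # ballistic spread, O(steps^2)
```

The second moment grows quadratically with the number of steps, in contrast to the linear growth of the classical random walk.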
The Hamiltonian H(θ), determined by the translation and rotation steps T and R(θ), possesses particle-hole symmetry (PHS) for some operator P (i.e. PHP^{−1} = −H) and it also contains chiral symmetry (CS). The presence of PHS and CS guarantees time-reversal symmetry (TRS). The presence of TRS and PHS implies that the system belongs to a topological class contained in the Su-Schrieffer-Heeger (SSH) model [31] and can thus be employed to simulate a class of SSH topological phase. An extension to a 2D topological insulator can be obtained by extending the lattice of sites to 2D. Different geometries such as the square lattice or triangular lattice are described in Ref. [7]. In this work we propose to study the dynamical evolution given a general overall Hamiltonian of the form:

H = H_QW + H_SPDC,   (1.3)

where the first term is the linear contribution given by the non-trivial topology of the quantum walk lattice, and the second term is the non-linear contribution due to the spontaneous parametric down conversion in non-linear media along the lattice.

2. SPLIT-STEP QUANTUM WALK HAMILTONIAN (H_QW)

The basic step in the standard DTQW is given by a unitary evolution operator U(θ) = T R_n(θ), where R_n(θ) is a rotation along an arbitrary direction n = (n_x, n_y, n_z), given by:

R_n(θ) = [[cos(θ) − i n_z sin(θ), (i n_x − n_y) sin(θ)],
          [(i n_x + n_y) sin(θ), cos(θ) + i n_z sin(θ)]],   (2.4)

in the Pauli basis [41]. In this basis, the y-rotation is defined by a coin operator of the form

R_y(θ) = [[cos(θ), − sin(θ)],
          [sin(θ),  cos(θ)]].   (2.5)

This is followed by a spin- or polarization-dependent translation T given by

T = Σ_x |x + 1⟩⟨x| ⊗ |H⟩⟨H| + |x − 1⟩⟨x| ⊗ |V⟩⟨V|,   (2.6)

where H = (1, 0)^T and V = (0, 1)^T. The evolution operator for a discrete-time step is equivalent to that generated by a Hamiltonian H(θ), such that U(θ) = e^{−iH(θ)} (ℏ = 1), with

H_QW(θ) = ∫_{−π}^{π} dk [E_θ(k) n(k) · σ] ⊗ |k⟩⟨k|

and σ the Pauli matrices, which readily reveals the spin-orbit coupling mechanism in the system.
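The coin of Eq. (2.4) can be checked directly to be a proper SU(2) rotation; the random axis and angle below are just test values:

```python
import numpy as np

def R_axis(theta, n):
    # coin operator of Eq. (2.4) for an axis n = (nx, ny, nz), normalized here
    nx, ny, nz = n / np.linalg.norm(n)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c - 1j * nz * s, (1j * nx - ny) * s],
                     [(1j * nx + ny) * s, c + 1j * nz * s]])

rng = np.random.default_rng(0)
R = R_axis(0.7, rng.normal(size=3))
assert np.allclose(R.conj().T @ R, np.eye(2))    # unitary
assert np.isclose(np.linalg.det(R), 1.0)         # unit determinant (SU(2))
# the axis n = (0, 1, 0) reproduces the real coin R_y of Eq. (2.5)
Ry = R_axis(0.7, np.array([0.0, 1.0, 0.0]))
assert np.allclose(Ry, [[np.cos(0.7), -np.sin(0.7)],
                        [np.sin(0.7),  np.cos(0.7)]])
print("Eq. (2.4) coin is unitary with unit determinant")
```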
The quantum walk described by U(θ) has been realized experimentally in a number of systems [19, 24-26], and has been shown to possess chiral symmetry and to display a Dirac-like dispersion relation given by cos(E_θ(k)) = cos(k) cos(θ). Here we analyze a DTQW protocol consisting of two consecutive spin-dependent translations T and rotations R, such that the unitary step becomes U(θ_1, θ_2) = T R(θ_1) T R(θ_2). This so-called "split-step" quantum walk has been shown to possess a non-trivial topological landscape, characterized by topological sectors with different topological numbers, such as the winding number W = 0, 1. The dispersion relation for the split-step quantum walk results in [7]:

cos(E_{θ_1,θ_2}(k)) = cos(k) cos(θ_1) cos(θ_2) − sin(θ_1) sin(θ_2).   (2.7)

The 3D unit vector for decomposing the quantum walk Hamiltonian of the system in terms of Pauli matrices, H_QW = E(k) n · σ, becomes [7]:

n^x_{θ_1,θ_2}(k) = [sin(k) sin(θ_1) cos(θ_2)] / sin(E_{θ_1,θ_2}(k)),
n^y_{θ_1,θ_2}(k) = [cos(k) sin(θ_1) cos(θ_2) + sin(θ_2) cos(θ_1)] / sin(E_{θ_1,θ_2}(k)),
n^z_{θ_1,θ_2}(k) = −[sin(k) cos(θ_2) cos(θ_1)] / sin(E_{θ_1,θ_2}(k)).   (2.8)

Diagonalization of H_QW gives the lattice Bloch eigenvectors characterizing the quantum walk Hamiltonian:

u_±(k) = (1/N) (1, [n_x(k) + i n_y(k)] / [n_z(k) ± λ(k)])^T,   (2.9)

with λ² = n_x² + n_y² + n_z², and N a normalization factor. We note that the relation between the two components of u_± will eventually determine the phase-matching condition for down-converted photons, and for this reason it is of relevance for our analysis. For the particular case that n_z(k) = 0, the eigenvectors take the simple form:

u_±(k) = (1/√2) (1, e^{−iφ(k)})^T,   (2.10)

with φ(k) = atan(n_y/n_x). For the split-step quantum walk this relative phase results in:

φ(k) = atan([cos(k) sin(θ_1) cos(θ_2) + sin(θ_2) cos(θ_1)] / [sin(k) sin(θ_1) cos(θ_2)]).
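Eqs. (2.7)-(2.8) can be verified numerically from a momentum-space form of the split-step walk. The half-step translation phases and the operator ordering below are one particular convention, chosen here so that the Bloch vector extracted from U(k) = cos E · 1 − i sin E (n · σ) reproduces Eq. (2.8); equivalent conventions differ by k → −k or θ_1 ↔ θ_2:

```python
import numpy as np

def R(theta):
    # coin rotation R_y(theta) of Eq. (2.5)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def U_split_step(k, t1, t2):
    # one split step in quasi-momentum space (half-step phase convention)
    T = np.diag([np.exp(1j * k / 2), np.exp(-1j * k / 2)])
    return T @ R(t2) @ T @ R(t1)

t1, t2 = 0.3, 0.5
for k in np.linspace(-np.pi, np.pi, 7)[1:-1]:
    U = U_split_step(k, t1, t2)
    cosE = np.trace(U).real / 2
    assert np.isclose(cosE, np.cos(k) * np.cos(t1) * np.cos(t2)
                            - np.sin(t1) * np.sin(t2))      # Eq. (2.7)
    E = np.arccos(cosE)
    # U = cos(E) 1 - i sin(E) n.sigma  =>  recover the Bloch vector n(k)
    M = (cosE * np.eye(2) - U) / (1j * np.sin(E))
    n_num = np.array([M[1, 0].real, M[1, 0].imag, M[0, 0].real])
    n_ref = np.array([np.sin(k) * np.sin(t1) * np.cos(t2),
                      np.cos(k) * np.sin(t1) * np.cos(t2)
                      + np.sin(t2) * np.cos(t1),
                      -np.sin(k) * np.cos(t1) * np.cos(t2)]) / np.sin(E)
    assert np.allclose(n_num, n_ref)                         # Eq. (2.8)
    assert np.isclose(np.linalg.norm(n_num), 1.0)            # unit Bloch vector
print("split-step dispersion and Bloch vector verified")
```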
(2.11)

Zak Phase

We can gain further insight by calculating the Zak phase of this system, which is analogous to the Berry phase on the torus (i.e., the Brillouin zone). Consider the general Hamiltonian:

H ∼ n_x σ_x + n_y σ_y + n_z σ_z.  (2.12)

Since the eigenvalues are the only quantities of interest for the present problem, the overall constants of this Hamiltonian can be safely ignored. The explicit expression for this Hamiltonian is

H = \begin{pmatrix} n_z & n_x - i n_y \\ n_x + i n_y & -n_z \end{pmatrix},  (2.13)

with eigenvalues

\lambda = \pm\sqrt{n_x^2 + n_y^2 + n_z^2}.  (2.14)

The normalized eigenvectors then result in

|V_\pm\rangle = \begin{pmatrix} \dfrac{n_x + i n_y}{\sqrt{2n_x^2 + 2n_y^2 + 2n_z^2 \mp 2 n_z\sqrt{n_x^2 + n_y^2 + n_z^2}}} \\[2ex] \dfrac{n_z \mp \sqrt{n_x^2 + n_y^2 + n_z^2}}{\sqrt{2n_x^2 + 2n_y^2 + 2n_z^2 \mp 2 n_z\sqrt{n_x^2 + n_y^2 + n_z^2}}} \end{pmatrix}.  (2.15)

Note that the scaling n_i → λ n_i does not affect the result, as it should be. The overall Zak phase for the problem is:

Z = i \oint \left(\langle V_+|\partial_k V_+\rangle + \langle V_-|\partial_k V_-\rangle\right) dk.  (2.16)

For the split-step quantum walk, the Zak phase results in an analytic expression of the form [11]:

Z = \varphi(-\pi/2) - \varphi(\pi/2) = \frac{\tan(θ_2)}{\tan(θ_1)}.  (2.17)

3. SPONTANEOUS PARAMETRIC DOWN-CONVERSION (SPDC) HAMILTONIAN (H_SPDC)

We can decompose H_SPDC in terms of the Bloch eigenvectors u_±(k) by defining Bloch waves of the form Â_p(k) = \sum_n â_n u_\pm(k) e^{ikn}, with â_n = (â_{n,1}, â_{n,2}, ..., â_{N,m}) and â_{N,m} the annihilation operator of the m-th sublattice. The SPDC Hamiltonian results in:

H_{SPDC} = \sum_{p_s,p_i} \int dk_s\, dk_i\, \Gamma_{p_{s,i}}(k_s, k_i)\, \hat{A}^\dagger_{p_s}(k_s)\, \hat{A}^\dagger_{p_i}(k_i),  (3.18)

where the coupling efficiency Γ_{p_{s,i}}(k_s, k_i) to the Bloch wave results in the contribution of N sublattices of the form:

\Gamma_{p_{s,i}}(k_s, k_i) = \gamma \sum_{n=1}^{N} E^p_n(k_s + k_i)\, u_{p_s,j}(k_s)\, u_{p_i,j}(k_i).  (3.19)

For the case of a pump mode coupled to only two sublattices labeled by n = 1, 2, substituting the eigen-mode profile determined by the topology of the quantum walk Hamiltonian u_{s,i} (Eq. 8), we obtain an expression for Γ_{p_{s,i}}(k_s, k_i) of the form:

\Gamma_{p_{s,i}}(k_s, k_i) = \gamma\left(E^p_1(k_s + k_i) + E^p_2(k_s + k_i)\, \frac{n_x(k) + i n_y(k)}{n_z(k) \pm \lambda(k)}\right).  (3.20)

For the particular case that n_z = 0 we obtain the simplified expression:

\Gamma_{p_{s,i}}(k_s, k_i) = \gamma\left(E^p_1(k_s + k_i)\, e^{-i|\varphi_s(k) + \varphi_i(k)|} + E^p_2(k_s + k_i)\right),  (3.21)

where φ_{s,i}(k) are the phase-matching functions for signal and idler, which depend on the quantum walk topology. For the particular case of the split-step quantum walk, this results in Eq. (9), for each of the bi-photons independently.

4. NUMERICAL RESULTS

We first performed simulations in order to quantify the impact of the pump envelope E^p_1(k_s + k_i) on the coupling efficiency Γ_{p_{s,i}}(k_s, k_i). This in turn can provide information about the spatial correlations between the bi-photons produced by SPDC, since the efficiency is proportional to the probability amplitude of emission of bi-photons. In particular, a tilted coupling efficiency parameter in the k_{s,i}-plane will characterize spatial correlations or anti-correlations between the bi-photons. In all simulations we assumed a sufficiently large crystal length L, so that the sinc dependence of the phase-matching function is maximal and can be considered constant. This is illustrated in Fig. 1 for (θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 0.0001), for two different pump envelope widths σ = 500 and σ = 10, and for different relative phases between the signal and idler, φ_s(k) = ±φ_i(k); in particular, Fig. 1(a) shows φ_s(k) = -φ_i(k) with pump envelope width σ = 10. It is apparent that a small envelope restricts the coupling efficiency to only the largest values of momentum for signal and idler. On the other hand, as expected, as the width σ of the pump envelope increases so does the coupling efficiency.
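Coupling-efficiency maps of the kind shown in Fig. 1 can be sketched directly from Eq. (3.21). The script below is our own illustration, not the authors' code: it assumes a Gaussian pump envelope with E^p_1 = E^p_2 (an assumption for the demo) and uses the split-step relative phase φ(k) of Eq. (2.11). It also computes the winding of the planar Bloch vector (n_x, n_y) around the origin, which distinguishes the topological sectors discussed in the text; the loop encircles the origin iff |tan θ_1| > |tan θ_2|, which follows from the gap-closing conditions θ_1 = ±θ_2 (mod π).

```python
import numpy as np

def phi(k, t1, t2):
    # relative phase of the Bloch eigenvector, Eq. (2.11)
    return np.arctan2(np.cos(k)*np.sin(t1)*np.cos(t2) + np.sin(t2)*np.cos(t1),
                      np.sin(k)*np.sin(t1)*np.cos(t2))

def winding(t1, t2, n=4001):
    # winding of (n_x, n_y) around the origin over the Brillouin zone
    k = np.linspace(-np.pi, np.pi, n)
    ph = np.unwrap(phi(k, t1, t2))
    return round((ph[-1] - ph[0]) / (2*np.pi))

def gamma_map(t1, t2, sigma, same_sign=True, n=128, gamma0=1.0):
    # |Gamma(ks, ki)| from Eq. (3.21), Gaussian pump envelope (our assumption)
    k = np.linspace(-np.pi, np.pi, n)
    ks, ki = np.meshgrid(k, k)
    pump = np.exp(-(ks + ki)**2 / (2.0*sigma**2))
    rel = phi(ks, t1, t2) + (phi(ki, t1, t2) if same_sign else -phi(ki, t1, t2))
    return np.abs(gamma0 * (pump*np.exp(-1j*np.abs(rel)) + pump))

w_nontrivial = winding(np.pi/4, np.pi/8)   # |tan t1| > |tan t2|: winding
w_trivial = winding(np.pi/8, 3*np.pi/8)    # |tan t1| < |tan t2|: no winding
g = gamma_map(0.01, 0.001, sigma=10.0)     # a Fig. 1-like map
```

Here `w_nontrivial` is ±1 and `w_trivial` is 0, and `g` is bounded by 2γ_0 since |e^{-i|φ_s+φ_i|} + 1| ≤ 2.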
In order to further analyze the impact of the quantum walk lattice topology on the type of coupling efficiency that can be expected, we performed simulations considering a constant amplitude for the pump, E_p(k_s + k_i) = E_p, with no dependence on k in Fourier space. We consider two cases, corresponding to phase parameters θ^{s,i}_{1,2} defining different phase-matching conditions φ_{s,i}(k) for signal and idler photons. This is illustrated in Fig. 2; for instance, Fig. 2(a) shows θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 9π/20 and φ_s(k) = φ_i(k). Fig. 2 reveals the emergence of a non-trivial 2-dimensional grid in the coupling efficiency for quantum walk lattice parameters in distinct topological sectors (Fig. 2 (a) and (c)). The periodicity of the grid is a clear consequence of the periodicity in the lattice parameters and in k-space. On the other hand, for lattice parameters in the same topological sector (Fig. 2 (b) and (d)), we obtain the same type of coupling coefficient as expected for standard SPDC, of course with no k-dependence, since this was ignored in the approximation of a constant pump envelope E_p(k_s + k_i) = E_p.

5. EXPERIMENTAL METHODS

Fiber Network

In Ref. [25] the authors performed an optical implementation of the operator defined by Eq. (2), using polarization degrees of freedom of single photons and a sequence of half-wave plates and calcite beam-splitters. On the other hand, in Ref. [23] the authors implemented a quantum walk in a lattice of coupled wave-guides. In this work we propose to use a fiber network to implement a quantum walk to simulate 1D and 2D topological phases. One of the main ingredients is the implementation of an optical non-linearity which can introduce the production of bi-photons, for instance via the process of Spontaneous Parametric Down-Conversion (SPDC) as described in the previous sections. We argue that in this way one could simulate both attractive and repulsive interactions (for the case of correlated or anti-correlated down-converted bi-photons).
A similar idea was already proposed in [35], where attractive interactions were introduced in a planar AlGaAs waveguide characterized by a strong focusing Kerr non-linearity. Likewise, repulsive interactions can be simulated using defocusing non-linearities [35], though this would remain part of future efforts. A suitable alternative kind of waveguide for the simulation of attractive interactions are photonic band-gap fibers with a Raman-active gas, which are predicted to have a strong non-linearity. These fibers consist of a hollow-core photonic crystal fiber filled with an active Raman gas, and are capable of exceeding intrinsic Kerr non-linearities by orders of magnitude [36, 37].

Input state preparation

For the linear (non-interacting) case (QW with SU(2) symmetry) we plan to use single-mode states both with Poissonian and sub-Poissonian statistics, such as coherent states and squeezed coherent or single-photon states. The non-classical nature of the squeezed and single-photon states should be revealed in the intensity distribution of counts as well as in the standard deviation. On the other hand, for the non-linear (interacting) case (SPDC with SU(1,1) symmetry) quantum theory predicts that the probability amplitudes of the modes should interfere, leading to an enhancement/reduction of the initial correlations. One of the goals of the project is to analyze the sensitivity of the non-linear network to phase relations in the input state, dictated by the topology of the network, and to the amount of gain. We also plan to analyze the effect of correlations and entanglement in the input state on localized edge states, and to find some kind of non-local order parameter characterizing topological order [38]. Finally, one of the aims of this research plan is to demonstrate the feasibility of entanglement topological protection [9].
Detection Schemes

• Intensity probability distributions and standard deviation

The most direct form of measurement is to detect the statistics of counts by studying intensity histograms of photons and their standard deviation, as described in Refs. [25, 35]. In particular, by placing a photo-diode/APD at the output of each fiber, characterizing a given site in the network, it is possible to obtain a probability distribution of counts and its standard deviation along the N steps of the quantum walk. While in the case of input states with Poissonian statistics we expect to find a classical binomial distribution of counts at the output of the quantum walk, in the non-classical case we expect to find a localized edge state at the boundary between two different topological sectors. Furthermore, we plan to measure the normalized standard deviation σ_N for the classical and non-classical cases, where we expect to find a markedly different dependence on the number of steps N; namely, while for the classical walk (coherent states) we expect a diffusive dependence σ_N ∝ √N, for the non-classical case (squeezed states, single photons) we expect a ballistic dependence σ_N ∝ N on the number of discrete steps.

• HBT correlation measurements

When using non-linear fibers in the amplifying network, it would be interesting to analyze 1-mode (g_1(Δr)) and 2-mode (g_2(Δr)) spatial correlation functions by means of Hanbury Brown-Twiss (HBT) like interferometers between different output modes in the network, as described in Ref. [35]. In particular, while in the case of attractive interactions, as simulated by Kerr non-linearities in fibers, the correlations are expected to increase, for the repulsive case the correlations are expected to decrease. We also plan to analyze the dependence of spatial correlations on the amount of gain present in the medium.
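The diffusive versus ballistic scalings invoked for the intensity statistics can be illustrated with a toy simulation (our own sketch, not the proposed experiment): a coherent DTQW on a line with an R_y(θ) coin spreads ballistically, σ_N ∝ N, while an unbiased classical walk over the same lattice spreads diffusively, σ_N = √N exactly for ±1 steps.

```python
import numpy as np

def qw_std(n_steps, theta=np.pi/4):
    # coherent DTQW on a line with an R_y(theta) coin, symmetric initial state
    n = 2*n_steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[n_steps] = [1/np.sqrt(2), 1j/np.sqrt(2)]
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(n_steps):
        a, b = c*psi[:, 0] - s*psi[:, 1], s*psi[:, 0] + c*psi[:, 1]
        psi = np.zeros_like(psi)
        psi[1:, 0], psi[:-1, 1] = a[:-1], b[1:]      # H moves right, V moves left
    x = np.arange(n) - n_steps
    p = (np.abs(psi)**2).sum(axis=1)
    m = (p*x).sum()
    return np.sqrt((p*(x - m)**2).sum())

# ballistic: doubling N roughly doubles the width; diffusive: a factor sqrt(2) only
ratio_qw = qw_std(100) / qw_std(50)        # close to 2 (sigma_N ∝ N)
ratio_cl = np.sqrt(100) / np.sqrt(50)      # sqrt(2), exact for a classical ±1 walk
```

Running this gives `ratio_qw` close to 2 while `ratio_cl` is exactly √2 ≈ 1.41, reproducing the qualitative distinction described above.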
In particular, for some critical value of the overall gain G_c we expect to find a decay of the correlations, which in turn can be related to the classical-quantum transition in amplifying media [32, 37]. Finally, we will also investigate the response of the amplifying network to different phases in the input states, dictated in turn by the topology of the network, as well as the response to phase noise, by introducing phase-averaging mechanisms (see Fig. 3).

6. DISCUSSION AND OUTLOOK

In this work we propose an experimental implementation of topological phases by means of an optical implementation of a discrete-time quantum walk (DTQW) architecture. One of the main novel ingredients is the inclusion of non-linear media and non-linear effects in the DTQW via the possibility of spontaneous parametric down-conversion (SPDC) in the lattice. By means of numerical simulations, we have analyzed the interplay between quantum walk topology and the spatial properties of photon pairs produced by spontaneous parametric down-conversion. In particular, we have numerically described how the topology of the quantum walk lattice can play an important role in the phase-matching function of bi-photons produced by spontaneous parametric down-conversion. As future work, we expect to characterize the robustness of such topological phases and their characteristic bound states against amplitude and phase noise, as well as against decoherence, by tracing over spatial modes of the field. One of the main goals of the proposed work is to investigate the use of parametric amplifiers as a means of simulating many-body effects in topological phases.
In particular, we expect to link such phases with the classical or quantum statistics of the fields by means of intensity distribution and spatial correlation measurements, and we intend to find a link between some measure of entanglement and a non-local order parameter characterizing the topology of the phase [38], or the feasibility of entanglement topological protection approaches [9]. Some significant signatures of many-body dynamics in topological order are expected to be apparent, such as charge fractionalization and Hall quantization, which motivate the extension of the research to the non-linear (many-body) scenario. Furthermore, other more complex topological phases (such as the spin Hall phase) could be simulated in the future by all-optical means, by using 2D quantum walks and higher-dimensional internal degrees of freedom of the radiation field, such as the orbital angular momentum [39]. Furthermore, topological order has been considered as a useful ingredient for fault-tolerant quantum computation, as it can protect the system against local perturbations which would otherwise introduce decoherence and loss of quantum information [40].

FIG. 1: Maps of the coupling efficiency Γ_{p_{s,i}}(k_s, k_i) in the Fourier domain, for θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 0.001, the relative-phase combinations φ_s(k) = ±φ_i(k), and pump envelope widths σ = 10 and σ = 500 (panels (a)-(d)).
FIG. 2: Maps of the coupling efficiency Γ_{p_{s,i}}(k_s, k_i) in the Fourier domain: (a) θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 9π/20, φ_s(k) = φ_i(k); (b) θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 0.001, φ_s(k) = φ_i(k); (c) θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 9π/20, φ_s(k) = −φ_i(k); (d) θ^{s,i}_1 = 0.01, θ^{s,i}_2 = 0.001, φ_s(k) = −φ_i(k).

FIG. 3: Experimental setup for the measurement of spatial correlations via coincidence counts between different sub-lattice modes (n).

References

[1] M. H. Anderson et al., Science 269, 198; C. Bradley et al., Phys. Rev. Lett. 75, 1687; Davis et al., Phys. Rev. Lett. 75, 3969 (1995).
[2] M. Greiner, O. Mandel, T. Esslinger, T. Hänsch, and I. Bloch, Nature 415, 39 (2002).
[3] D. Thouless et al., Phys. Rev. Lett. 49, 405 (1982); K. von Klitzing, Phys. Rev. Lett. 45, 494 (1980).
[4] C. Kane and E. Mele, Phys. Rev. Lett. 95, 146802 (2005); B. Bernevig et al., Science 314, 1757 (2006).
[5] M. Koening et al., Science 318, 766 (2007); D. Hsieh et al., Nature 452, 970 (2008).
[6] X. Qi et al., Nature Phys. 4, 273 (2008).
[7] T. Kitagawa, M. Rudner, E. Berg, and E. Demler, Phys. Rev. A 82, 033429 (2010).
[8] T. Kitagawa et al., Nature Communications 3, 882 (2012).
[9] S. Moulieras, M. Lewenstein, and G. Puentes, Entanglement engineering and topological protection by discrete-time quantum walks, J. Phys. B 46, 104005 (2013).
[10] G. Puentes, Unraveling the physics of topological phases by quantum walks of light, arXiv:quant-ph/1409.1273 (2014).
[11] G. Puentes and O. Santillan, Zak Phase in Discrete-Time Quantum Walks, arXiv:quant-ph/1506.08100v2 (2015).
[12] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, J. Chem. Phys. 129, 174106 (2008).
[13] S. Godoy and S. Fujita, J. Chem. Phys. 97, 5148 (1992).
[14] M. Rudner, Phys. Rev. Lett. 102, 065703 (2009).
[15] T. Oka et al., Phys. Rev. Lett. 95, 137601 (2005).
[16] L. Ermann, J. P. Paz, and M. Saraceno, Phys. Rev. A 73, 012302 (2006).
[17] C. Ryan et al., Phys. Rev. A 72, 062317 (2005).
[18] G. Agarwal and P. Pathak, Phys. Rev. A 72, 033815 (2005).
[19] H. Schmitz et al., Phys. Rev. Lett. 103, 090504 (2009); F. Zahringer et al., Phys. Rev. Lett. 104, 100503 (2010).
[20] M. Karski et al., Science 325, 174 (2009).
[21] B. Do et al., J. Opt. Soc. Am. B 22, 499 (2005).
[22] D. Bouwmeester et al., Phys. Rev. A 61, 013410 (1999).
[23] Y. Bromberg et al., Phys. Rev. Lett. 102, 253904 (2009); A. Peruzzo et al., Science 329, 1500 (2010).
[24] A. Schreiber et al., Phys. Rev. Lett. 104, 050502 (2010).
[25] M. A. Broome et al., Phys. Rev. Lett. 104, 153602 (2010).
[26] H. Schmitz et al., Phys. Rev. Lett. 103, 090504 (2009).
[27] A. Schreiber et al., Phys. Rev. Lett. 106, 180403 (2011).
[28] A. Schreiber et al., Science 336, 55-58 (2012).
[29] J. E. Moore, Nature 464, 194 (2010).
[30] M. Hasan and C. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[31] W. Su, J. Schrieffer, and A. Heeger, Phys. Rev. Lett. 42, 1698 (1979).
[32] P. Torma, Phys. Rev. Lett. 81, 2185 (1998).
[33] A. Aiello, G. Puentes, D. Voigt, and J. P. Woerdman, Phys. Rev. A 75, 062118 (2007).
[34] G. Puentes, A. Aiello, D. Voigt, and J. P. Woerdman, Phys. Rev. A 75, 032319 (2007); A. Aiello, G. Puentes, and J. P. Woerdman, Phys. Rev. A 76, 032323 (2007).
[35] Y. Bromberg et al., Nature Photonics 4, 710 (2010).
[36] D. V. Skryabin, F. Biancalana, D. Bird, and F. Benabid, Phys. Rev. Lett. 93, 143907 (2004).
[37] A. Regensburger et al., Nature 488, 167 (2013).
[38] H. Li and F. Haldane, Phys. Rev. Lett. 101, 010504 (2008).
[39] G. Puentes, N. Hermosa, and J. P. Torres, Phys. Rev. Lett. 109, 040401 (2012).
[40] J. Preskill, Introduction to quantum computation and information, Eds. H.-K. Lo, S. Popescu, and T. Spiller (World Scientific, Singapore, hardcover 1998, paperback 2000), arXiv:quant-ph/9712048.
[41] M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000).
[]
[ "LonGSp: the Gornergrat Longslit Infrared Spectrometer", "LonGSp: the Gornergrat Longslit Infrared Spectrometer" ]
[ "L Vanzi \nDipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly\n", "M Sozzi \nCentro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly\n", "G Marcucci \nDipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly\n", "A Marconi \nDipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly\n", "F Mannucci \nCentro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly\n", "F Lisi \nOsservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly\n", "L Hunt \nCentro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly\n", "E Giani \nDipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly\n", "S Gennari \nOsservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly\n", "V Biliotti \nOsservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly\n", "C Baffa \nOsservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly\n" ]
[ "Dipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly", "Centro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly", "Dipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly", "Dipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly", "Centro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly", "Osservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly", "Centro per l'Astronomia Infrarossa\nMezzo Interstellare-CNR\nLargo E. Fermi 5I-50125FirenzeItaly", "Dipartimento di Astronomia e Scienza dello Spazio\nUniversità di Firenze\nLargo E. Fermi 5I-50125FirenzeItaly", "Osservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly", "Osservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly", "Osservatorio Astrofisico di Arcetri\nLargo E. Fermi 5I-50125FirenzeItaly" ]
[]
We present a near-infrared cooled grating spectrometer that has been developed at the Arcetri Astrophysical Observatory for the 1.5 m Infrared Telescope at Gornergrat (TIRGO).The spectrometer is equipped with cooled reflective optics and a grating in Littrow configuration. The detector is an engineering grade Rockwell NICMOS3 array (256×256 pixels of 40µm). The scale on the focal plane is 1.73 arcsec/pixel and the field of view along the slit is 70 arcsec. The accessible spectral range is 0.95 − 2.5µm with a dispersion, at first order, of about 11.5Å/pixel. This paper presents a complete description of the instrument, including its optics and cryo-mechanical system, along with astronomical results from test observations, started in 1994. Since January 1996, LonGSp is offered to TIRGO users and employed in several Galactic and extragalactic programs.
10.1051/aas:1997207
[ "https://arxiv.org/pdf/astro-ph/9703191v1.pdf" ]
15,876,371
astro-ph/9703191
3a14a8888e738878ce7be3ea7778fa873b061f58
LonGSp: the Gornergrat Longslit Infrared Spectrometer

28 Mar 1997

L. Vanzi(1), M. Sozzi(2), G. Marcucci(1), A. Marconi(1), F. Mannucci(2), F. Lisi(3), L. Hunt(2), E. Giani(1), S. Gennari(3), V. Biliotti(3), C. Baffa(3)

(1) Dipartimento di Astronomia e Scienza dello Spazio, Università di Firenze, Largo E. Fermi 5, I-50125 Firenze, Italy
(2) Centro per l'Astronomia Infrarossa, Mezzo Interstellare-CNR, Largo E. Fermi 5, I-50125 Firenze, Italy
(3) Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze, Italy

arXiv:astro-ph/9703191v1. A&A manuscript no. (will be inserted by hand later). Your thesaurus codes are: Instrumentation: Spectrometers, Near Infrared. ASTRONOMY AND ASTROPHYSICS. Received ...

We present a near-infrared cooled grating spectrometer that has been developed at the Arcetri Astrophysical Observatory for the 1.5 m Infrared Telescope at Gornergrat (TIRGO). The spectrometer is equipped with cooled reflective optics and a grating in Littrow configuration. The detector is an engineering grade Rockwell NICMOS3 array (256×256 pixels of 40 µm). The scale on the focal plane is 1.73 arcsec/pixel and the field of view along the slit is 70 arcsec. The accessible spectral range is 0.95-2.5 µm, with a dispersion, at first order, of about 11.5 Å/pixel.
This paper presents a complete description of the instrument, including its optics and cryo-mechanical system, along with astronomical results from test observations, started in 1994. Since January 1996, LonGSp has been offered to TIRGO users and employed in several Galactic and extragalactic programs.

Introduction

The development of the spectrometer LonGSp (Longslit Gornergrat Spectrometer) was part of a project aimed at providing the 1.5-m Infrared Telescope at Gornergrat (TIRGO) with a new series of instruments based on NICMOS3 detectors. The infrared (IR) camera ARNICA, developed in the context of this project, is described in Lisi et al. (1995). LonGSp is an upgrade of GoSpec (Lisi et al. 1990), the IR spectrometer operating at TIRGO since October 1988. Thanks to the NICMOS detector, the new spectrometer enables longslit spectroscopy with background-limited performance (BLP). The GoSpec characteristics of compactness and simplicity are maintained in the new instrument. Only a subsection (40×256 pixels) of the engineering-grade array is used. (Send offprint requests to: L. Vanzi.) A description of the optics, cryogenics, and mechanics is presented in Sections 2 and 3; the electronics, software, and the performance of the detector are presented in Sections 4 and 5. Finally, in Section 6 we present details regarding the observations and data reduction, and in Section 7 the results of the first tests at the telescope.

Optical Design

The optical scheme of the instrument is sketched in Fig. 1; it is designed to match the f/20 focal ratio of the TIRGO telescope. The numbered elements in Fig. 1 are: (1) the field lens, (2) the secondary mirror and (3) the primary mirror of the collimator (an inverted Cassegrain), (4) a plane mirror, (5) the grating, (6) a plane mirror, (7) the paraboloidal mirror of the camera, and (8) the detector.
Following the optical path from the telescope, the beam encounters the window of the dewar, the order-sorting filter, a field lens, and the slit; the latter resides at the focal plane of the telescope. The window and field lens are made of calcium fluoride. Filters and slits are mounted on two wheels, respectively, and can be quickly changed during the observations. The field lens images the pupil on the secondary mirror of an inverted Cassegrain (with a focal length of 1400 mm) that produces a parallel beam 70 mm in diameter. This beam is reflected onto the grating by a flat mirror tilted by 10°. The grating, arranged in Littrow configuration, has 150 grooves/mm and a blaze wavelength of 2 µm at first order; rotation around the 10°-tilted axis allows the selection of wavelengths and orders. A modified Pfund camera (with a focal length of 225 mm), following the grating, collects the dispersed beams on the detector. The sky-projected pixel size is 1.73 arcsec, and the total field covered along the slit direction is 70 arcsec. The back face of the grating is a flat mirror so that, when the grating is rotated by 180 degrees, the instrument functions as a camera, in the band defined by the filters, with a field of view of about 1.5 arcmin square. This facility can be useful for tests, maintenance, and for centering weak sources on the slit. All the mirrors are gold-coated to provide good efficiency over a wide spectral range, and the optics are achromatic at least up to 5 µm. The optical components are cooled to about 80 K by means of thermal contact with a cryogenic vessel filled with liquid nitrogen at atmospheric pressure, as described below. The mounting of the optical elements is designed to take into account the dimensional changes between mirrors (in Pyrex) and supports (in aluminium) generated by the cooling and by the differences in thermal expansion coefficients.
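As a quick arithmetic check of the numbers quoted in this paper (a first-order dispersion of about 11.5 Å/pixel and a two-pixel slit), the resolving power R = λ/Δλ at the K-band centre comes out close to the quoted value:

```python
dispersion_A_per_px = 11.5                 # first-order dispersion quoted in the text
slit_px = 2                                # two-pixel slit
dlam = dispersion_A_per_px * slit_px       # 23 Å resolution element
R_K = 22000.0 / dlam                       # K-band centre at 2.2 um = 22000 Å
print(round(R_K))                          # prints 957, close to the quoted ~950
```

The small differences from the quoted J- and K-band values reflect the fact that the dispersion is only approximately 11.5 Å/pixel across the orders.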
The resolving power is (at first order) about 600 in the center of the J band, and 950 in the center of the K band, using a slit of two pixels (3.46 arcsec).

Cryogenics and Mechanics

As can be seen in Fig. 2, where we show some parts of the mechanical structure of the instrument, the core of the instrument is the liquid nitrogen reservoir, which has a toroidal shape with rectangular cross section and a capacity of 3 liters. It provides support and cooling for two optical benches, which are located on opposite sides of the vessel. The central hole of the toroid allows the beam to pass from one optical bench to the other. The grating motion is driven by an external stepper motor via a ferrofluidic feedthrough, and the position is controlled by an encoder connected to the motor axis outside the dewar. Two springs acting on the worm gear guarantee good stability of the grating position. Two internal stepper motors, modified to operate at cryogenic temperatures, drive the filter and slit wheels. Mechanics and optics are enclosed in a radiation shield. The internal cold structure is supported by nine low-thermal-conductivity rods, which are fixed between the internal liquid nitrogen reservoir and the external vacuum shield, and are rigidly linked to the focal plane adaptor of the telescope. Externally, the instrument has the form
Because the charcoal is cooled by an independent cryogenic system, the heating of the optics and the main part of the mechanical structure is not necessary and the operation can be completed in about four hours. To cool and warm the entire instrument reasonably quickly, the dewar is filled with gaseous nitrogen at a pressure of about 200 mb during the cooling and heating phases: in this way the thermal transients prove to be shorter than seven hours. The rate of evaporation of the nitrogen from the main reservoir allows about 16 hours of operation in working conditions, more than a winter night of observation. Electronics and Software The electronics of LonGSp comprise two main parts: "upper" electronics, that are situated near the instrument, and the "lower" electronics in the control room. The connection between the two parts is assured via a fiber-optics link. Two boards, close to the cryostat, house part of the interface electronics, that is a set of four preamplifiers and level shifters and an array of drivers and filters that feed the clock signals and the bias to the array multiplexer. The "upper" electronics are composed of an intelligent multi-part sequence generator and a data acquisition section. The first is controlled by a microprocessor (a Rockwell 65C02), and the sequencer is capable of generating many different waveforms template (at 8 bits depth) stored in an array of 128 Kbytes of memory. The final waveform is generated by selecting, via software, the templates needed together with their repetitions. The data acquisition segment consists of a bank of four analog-to-digital converters at 16 bits, and the logic for converting them to serial format. A transceiver sends data to the telescope control room through a fiber optics link. Data are sent as groups of four, one for each quadrant, and are presented together with the quadrant identifier (two bits) to the frame grabber. 
The fiber-optics link is bidirectional, so that it is possible to send instructions to the control microprocessor in the "upper" electronics, and to communicate with the motor control through a serial connection (RS-232) encoded on the same fiber-optics link. The "upper" electronics are completed by the box which contains the power supply, the stepper motor controllers, and the temperature controller of the array. The "lower" electronics implement the logic to decode the serial data protocol, in order to correctly reconstruct the frames coming from the array detector. Data are collected by the custom frame grabber (known as the "Ping-Pong"), which is capable of acquiring up to four images in each of its two banks. While one bank is being written, the other can be read, enabling continuous fast acquisitions. Also of note is the ability to re-synchronize the data acquisition to the quadrant address, virtually eliminating mis-aligned frames. The instrument is controlled by an MS-DOS PC equipped with a 80486 CPU (33 MHz clock), a high-resolution monitor, and 600 Mbytes of hard disk space. At the end of a data acquisition sequence, each single frame, or the stack average of a group of frames, is stored on the PC hard disk; the data are later transferred to optical disk (WORM) storage. In the near future, the WORM cartridges will be superseded by more standard writable CD-ROM cartridges. A local Ethernet network connects the control computer to the TIRGO Sun workstation, so that it is possible to transfer the data for preliminary reduction using standard packages. The software developed for this instrument is "layer organized", that is to say, organized as a stack of many layers of subroutines of similar levels of complexity. To accomplish its task, each routine need rely only on the immediately adjacent level and on global utility packages. Such a structure greatly simplifies the development and maintenance of the software. Our efforts were directed towards several different requirements.
Our first priority was to have a flexible laboratory and telescope engine, capable of easily acquiring the large quantity of data a panoramic IR array can produce. The human interface is realized through a fast character-based menu: the operator is presented only with the options that are currently selectable, and the menu is rearranged on the basis of user choices or operations. We have also stressed the auto-documentation of data. After the decision to store data in standard FITS format, it was deemed useful to fully exploit the header facility to label each frame with all relevant information, such as telescope status, instrument status, and user acquisition choices. Data are also labelled with the observer name in order to facilitate data retrieval from our permanent archive. In particular, the form of the FITS file is completely compatible with the IRSPEC context of the ESO package MIDAS. Finally, one of our main goals was to produce easy-to-use software with the smallest possible learning curve. Our idea was that data acquisition must disappear from the observer's attention, allowing him/her to concentrate on the details of the observations; in this way, observing efficiency is much higher. As a result, we have implemented automatic procedures such as multi-position ("mosaic") and multi-exposure (stack of many frames) acquisitions.

Detector Performance

Although the spectrometer was initially designed to use a 40x100 subsection of a NICMOS3 detector, we later found that very good performance can be obtained on an even larger area. Using 256 pixels in the wavelength direction, we have a spectral coverage of almost 0.3 µm. This means that with a single grating setting we can measure a complete J spectrum and have good coverage in H and K. The best 40x256 subsection was selected on the basis of good cosmetics (low percentage of bad pixels), low dark current, and low readout noise.
We measured the percentage of bad pixels, the dark current, and the readout noise via laboratory tests based on sets of images taken at a series of exposure times, both of a spatially uniformly illuminated scene and without any illumination (substituting the filter with a cold stop). The readout noise is determined as the mean standard deviation of each pixel in a stack of frames with integration times short enough that the dark current is negligible. The dark current and gain measurements are based on two linear regressions: the values of the dark frames as a function of exposure time in the first case, and the spatial medians of the stack variance against the stack median in the second. Details of these tests are presented in Vanzi et al. (1995). In Table 1 we present the results of further tests carried out in April 1995.

Observations and Data Reduction

The procedures for LonGSp observations are those commonly used in NIR spectroscopy, optimized for the characteristics of the instrument. For compact sources, observations consist of several groups of frames with the object placed at different positions along the slit. In the case of extended sources, observations consist of several pairs of object and sky frames. The on-chip integration time is 60 sec or less, for a background level of roughly 6000 counts/pixel, because of ensuing problems with sky-line subtraction (see below). At a given position along the slit, several frames can be coadded. The main steps in the reduction of NIR spectroscopic data are flat-field correction, subtraction of the sky emission, wavelength calibration, correction for telluric absorption, and correction for the optical system + detector efficiency. Data reduction can be performed using the IRSPEC context in MIDAS, the ESO data analysis package, properly modified to take into account the LonGSp instrumental setup.
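The three stack-based detector tests described above can be sketched with synthetic frames. In the sketch below, the read-out noise and dark current defaults are the Table 1 values; the gain, flux, and all function/parameter names are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_tests(exp_times, dark_rate=0.9, ron=45.0, gain=5.0,
                   nframes=20, shape=(40, 64), flux=1000.0):
    """Generate synthetic frame stacks and recover read-out noise [e-],
    dark current [e-/s] and gain [e-/ADU] with the three measurements
    described in the text (numbers are illustrative, not LonGSp data)."""
    # Read-out noise: mean per-pixel std in a stack of very short darks.
    shorts = rng.normal(0.0, ron / gain, size=(nframes,) + shape)   # ADU
    ron_est = shorts.std(axis=0).mean() * gain                      # back to e-

    # Dark current: linear regression of dark level vs exposure time.
    dark_levels = [(dark_rate * t + rng.normal(0.0, 1.0)) / gain
                   for t in exp_times]                              # ADU
    dark_est = np.polyfit(exp_times, dark_levels, 1)[0] * gain      # e-/s

    # Gain: regression of stack variance (spatial median) vs stack median;
    # for Poisson photon noise, var[ADU] = level[ADU] / gain.
    levels, variances = [], []
    for t in exp_times:
        stack = rng.poisson(flux * t, size=(nframes,) + shape) / gain
        levels.append(np.median(stack))
        variances.append(np.median(stack.var(axis=0)))
    gain_est = 1.0 / np.polyfit(levels, variances, 1)[0]
    return ron_est, dark_est, gain_est
```

Running it on a handful of exposure times recovers the injected parameters to within a few percent, which is the same self-consistency check one applies to real stacks.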
We have found it useful to acquire dark and flat frames at the beginning and the end of the night; we obtain flat-field frames by illuminating the dome with a halogen lamp. Observations of a reference star are taken for a fixed grating position. An early-type, featureless star (preferably an O star) is needed to correct for telluric absorption and for the differential efficiency of the system, and a photometric standard star is needed if one wants to flux calibrate the final spectrum (only one grating position in each band is required). An alternative technique, proposed by Maiolino et al. (1996), consists of using a G star corrected through data of the solar spectrum. Both methods have been successfully tested. Flat-field frames are first corrected for bad pixels, then dark-current subtracted, and normalized. The dark current is subtracted from all raw frames, which are then divided by the normalized flat field. For compact sources (the frames taken at the different positions along the slit are denoted by A, B and C), the sky is subtracted by considering A − B, B − (A + C)/2, and C − B, and taking a median of the three differences. In the case of extended objects, if A and B denote object and sky frames, the sky is subtracted by considering (A1 + A2)/2 − (B1 + B2)/2 (the order of observations is A1 B1 B2 A2). However, a simple sky subtraction is almost never sufficient to properly eliminate the bright OH lines, whose intensity varies on time scales comparable with the object and sky observations. Moreover, mechanical instabilities can produce shifts of the spectra (usually a few hundredths of a pixel) which are nevertheless enough to produce residuals exceeding the detector noise. To correct for these two effects, the sky frames are multiplied by a correcting factor and shifted along the dispersion direction by a given amount. These factors and shifts are chosen automatically by minimizing the standard deviation in selected detector areas where only sky emission is present.
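The two sky-subtraction recipes above can be sketched as follows. This is a didactic simplification (names invented): the real correction uses sub-pixel shifts, while `np.roll` here moves the sky frame by whole pixels only.

```python
import numpy as np

def subtract_sky_compact(A, B, C):
    """Sky subtraction for a compact source observed at three slit
    positions: median of the differences A-B, B-(A+C)/2 and C-B."""
    return np.median(np.stack([A - B, B - 0.5 * (A + C), C - B]), axis=0)

def best_sky_correction(obj, sky, sky_mask, factors, shifts):
    """Grid search for the sky scale factor and shift (integer pixels
    here, for simplicity) that minimize the standard deviation of the
    residuals in detector areas containing only sky emission."""
    best = (np.inf, None, None)
    for f in factors:
        for s in shifts:
            resid = obj - f * np.roll(sky, s, axis=1)   # axis 1 = dispersion
            score = resid[sky_mask].std()
            if score < best[0]:
                best = (score, f, s)
    return best   # (residual std, best factor, best shift)
```

Minimizing the residual standard deviation in sky-only regions, as the text describes, makes the correction automatic: no line identification is needed, only a mask of pixels free of object flux.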
Because this effect increases with the integration time, it is advisable not to exceed 60 seconds for each single integration. Slit images at various wavelengths are tilted as a consequence of the off-axis mount of the grating. Sky-subtracted frames are corrected by computing the tilt angle analytically from the instrumental calibration parameters, or by measuring it directly from the data. Wavelength calibration of LonGSp data is performed using the OH sky emission lines. The wavelength dispersion on the array is linear to within a small fraction of the pixel size and is computed analytically once the central wavelength of the frame is known. At the beginning of the data reduction, the nominal central wavelength used in the observations is assigned to a properly chosen sky frame. Then the calibration is refined using the bright OH sky lines (precise wavelengths of the OH lines, as well as a discussion of their use as calibrators, are given in Oliva & Origlia 1992). The same procedures are applied to the reference star frames to obtain the calibration spectra, and the spectrum of a photometric standard star can be used to flux calibrate the final frames.

Astronomical Results

The first tests at the telescope took place successfully in early 1994. From these observations we measured the efficiency of the instrument (through the observation of photometric standard stars) and its sensitivity (1σ in 60 sec of integration time); these are reported in Table 2. Since January 1996, LonGSp has been offered to TIRGO users and employed in several galactic and extragalactic programs. To give an impression of the capabilities of the instrument, we show (in Figs. 3, 4 and 5) some acquired spectra of various types of sources: extended, compact and extragalactic, without comment as to their astrophysical significance.

Fig. 1. Optical diagram of the instrument. The optical components of the instrument include (enumeration follows the path of radiation): [...]

Fig. 2. Mechanical arrangement of the instrument.
The optical parts have the same enumeration as in Fig. 1. (A) liquid nitrogen vessel, (B) support of the mirrors on the back side of the instrument, (C) support of the detector, (D) support of the primary mirror of the collimator, (E) filter wheel, (F) slit wheel (each slit supports a field lens), (G) grating mounting and wheel for positioning in wavelength. [...] of a cylinder with a base of about 40 cm in diameter and a length of about 60 cm.

Fig. 3. Pk2 in Orion, three positions along the slit, with a total integration time of 60 sec.

Fig. 4. The spectrum of LkHa215 (Ae/Be star) in the K band, with a total integration time of 60 sec.

Table 1. Measured parameters of the detector
Bad pixels: 2.9%
Dark current: 0.9 e−/sec
Readout noise: 45 e−

Table 2. Astronomical performance (sensitivities are 1σ in 60 sec)
Band (order) | Efficiency | Line [erg cm−2 s−1] | Continuum [erg cm−2 s−1 Å−1]
J (I) | 0.045 | 4×10−14 | 2×10−15
H (I) | 0.10 | 2×10−14 | 8×10−16
K (I) | 0.08 | 2×10−14 | 8×10−16

References

Gennari S., Mannucci F., Vanzi L., 1993, Cryogenic stepper motors for infrared astronomical instrumentation. In: M. Fowler (ed.) Proc. SPIE 1946, International Symposium on Optical Engineering and Photonics in Aerospace and Remote Sensing, p. 610

Gennari S., Vanzi L., 1993, LonGSp: the new infrared spectrometer of TIRGO. In: R. Bandiera (ed.) Proc. XXXVII Annual Meeting of the S.A.It., p. 752

Gennari S., Vanzi L., 1994, LonGSp: a near infrared spectrometer. In: I. McLean (ed.) Infrared astronomy with arrays, Kluwer Academic Publishers, p. 351

Lisi F., Baffa C., Biliotti V., et al., 1995, PASP 108, 364

Maiolino R., Rieke G.H., Rieke M.J., 1996, AJ 111, 537

Oliva E., Origlia L., 1992, A&A 254, 466

Vanzi L., Gennari S., Marconi A., 1994, NICMOS3 detector for astronomical spectroscopy, IAU Symp. 167, Com. N. 167.Or.031

This article was processed by the author using Springer-Verlag LaTeX A&A style file L-AA version 3.
[]
[ "Weak first-order transition in the three-dimensional site-diluted Ising antiferromagnet in a magnetic field", "Weak first-order transition in the three-dimensional site-diluted Ising antiferromagnet in a magnetic field" ]
[ "A Maiorano \nDepartamento de Física\nFacultad de Ciencias\nUniversidad de Extremadura\n06071BadajozSPAIN\n\nInstituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN\n", "V Martín-Mayor \nInstituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN\n\nDepartamento de Física Teórica\nFacultad de Ciencias Físicas\nUniversidad Complutense de Madrid\n28040MadridSPAIN\n", "J J Ruiz-Lorenzo \nDepartamento de Física\nFacultad de Ciencias\nUniversidad de Extremadura\n06071BadajozSPAIN\n\nInstituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN\n", "A Tarancón \nInstituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN\n\nDepartamento de Física Teórica\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN\n" ]
[ "Departamento de Física\nFacultad de Ciencias\nUniversidad de Extremadura\n06071BadajozSPAIN", "Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN", "Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN", "Departamento de Física Teórica\nFacultad de Ciencias Físicas\nUniversidad Complutense de Madrid\n28040MadridSPAIN", "Departamento de Física\nFacultad de Ciencias\nUniversidad de Extremadura\n06071BadajozSPAIN", "Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN", "Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN", "Departamento de Física Teórica\nFacultad de Ciencias\nUniversidad de Zaragoza\n50009ZaragozaSPAIN" ]
[]
We perform intensive numerical simulations of the three-dimensional site-diluted Ising antiferromagnet in a magnetic field at high values of the external applied field. Even if data for small lattice sizes are compatible with second-order criticality, the critical behavior of the system shows a crossover from second-order to first-order behavior for large system sizes, where signals of latent heat appear. We propose "apparent" critical exponents for the dependence of some observables with the lattice size for a generic (disordered) first-order phase transition.
10.1103/physrevb.76.064435
[ "https://arxiv.org/pdf/0705.1517v3.pdf" ]
73,709,166
0705.1517
9268727aa0321bd33bf274476496eeea4ca9405b
Weak first-order transition in the three-dimensional site-diluted Ising antiferromagnet in a magnetic field

A. Maiorano (1,2), V. Martín-Mayor (2,3), J.J. Ruiz-Lorenzo (1,2), A. Tarancón (2,4)
(1) Departamento de Física, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz, Spain
(2) Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), Facultad de Ciencias, Universidad de Zaragoza, 50009 Zaragoza, Spain
(3) Departamento de Física Teórica, Facultad de Ciencias Físicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
(4) Departamento de Física Teórica, Facultad de Ciencias, Universidad de Zaragoza, 50009 Zaragoza, Spain

(Dated: February 14, 2008)
PACS numbers: 75.50.Lk, 64.70.Pf, 64.60.Cn, 75.10.Hk

We perform intensive numerical simulations of the three-dimensional site-diluted Ising antiferromagnet in a magnetic field at high values of the external applied field. Even if data for small lattice sizes are compatible with second-order criticality, the critical behavior of the system shows a crossover from second-order to first-order behavior for large system sizes, where signals of latent heat appear. We propose "apparent" critical exponents for the dependence of some observables with the lattice size for a generic (disordered) first-order phase transition.

I. INTRODUCTION

The study of systems with random fields is of paramount importance in the arena of disordered systems.
A paradigm in this field is the random field Ising model 1,2 (RFIM). In spite of much effort devoted to the investigation of the RFIM, 2 several important questions remain open. Some of these questions refer to the nature of the (replica symmetric?) low temperature phase, to universality issues (binary versus Gaussian external magnetic field 3,4 ), and to the order of the phase transitions. Here we study the diluted antiferromagnetic Ising model in an external magnetic field (DAFF) that is believed to belong to the same universality class of the RFIM (the DAFF is expected to behave as a Gaussian RFIM because of the short ranged correlations in the superexchange coupling). 5,6 As a matter of fact, DAFF systems are the most widely investigated experimental realization of the RFIM. One of the best examples of a diluted Ising antiferromagnet is Fe x Zn 1−x F 2 . Its large crystal field anisotropy persists even when the Fe ions are diluted (x < 1), thus providing a good (antiferromagnetic) Ising system for all ranges of the magnetic concentration. Other systems behaving as Ising (anti)ferromagnets are Fe x Mg 1−x Cl 2 , CoZn 1−x F 2 and Mn x Zn 1−x F 2 . 1 The experimental results on the order of the phase transition are somewhat inconclusive. On the one hand, these materials show a large critical slowing down around the critical temperature, as well as a symmetric logarithmic divergence of the specific heat. On the other hand, the order parameter (the staggered magnetization) behaves with a critical exponent β near zero, possibly marking the onset of a first-order phase transition. 1 Note that β should be exactly zero if the order parameter is discontinuous as a function of temperature or the magnetic field. The numerical investigations of the DAFF at T > 0 are scarce. It was investigated a long time ago by Ogielski and Huse. 7 They considered several values of the pair temperature-magnetic field on lattice sizes up to L = 32 but far from the critical region. 
8 They investigated the thermodynamics as well as the (equilibrium) dynamical critical behavior. Their thermodynamic results pointed to a second-order phase transition. However, they found activated dynamics (which could be interpreted as a signal of a first-order phase transition) rather than standard critical slowing down (for numerical studies of the DAFF at T = 0, see Refs. [3] and [9]). Numerical and analytical studies rather focused on the RFIM, which is expected to display the DAFF critical behavior. 5,6 Even if the RFIM is more amenable than the DAFF to analytical investigations, the situation is still confusing. Indeed, mean-field theory predicts a second-order phase transition for low magnetic field. If the probability distribution function of the random fields does not have a minimum at zero field, the transition is expected to remain of second order all the way down to zero temperature. However, if the probability distribution for the random field does show a minimum at zero field, a tricritical point and a first-order transition line at sufficiently high field values are predicted. 10 The numerical investigation of the RFIM has neither confirmed nor refuted this counterintuitive mean-field result. Rieger and Young studied binary distributed quenched magnetic fields, 11 for which mean-field theory predicts a tricritical point. After extrapolation to the thermodynamic limit, they interpreted their results as indicative of a first-order transition for all external field strengths (the tricritical point did not show up). Rieger studied the case with Gaussian fields, 12 where only second-order behavior should be found according to mean-field expectations. Actually, his results were consistent with a second-order phase transition with a vanishing (!) order parameter exponent. The simulation of Hernandez and Diep 13 of the binary RFIM supports the existence of a tricritical point at finite temperature and magnetic field.
Also the study by Machta et al. 14 of the Gaussian RFIM showed evidence of finite jumps in the magnetization at disorder-dependent transition points. A completely different numerical strategy is suggested by the expectation of a T = 0 renormalization-group fixed point. 15 Since the ground state of a RFIM on a sample of linear size L can be found in polynomial (in L) time, T = 0 physics can be directly addressed by studying the properties of the ground state for a large number of samples. Hartmann and Young 16 studied lattices up to L = 96 for the Gaussian RFIM. They concluded that their data supported a second-order phase transition scenario for the Gaussian RFIM. The same model was investigated by Middleton and Fisher 17 in lattices up to L = 256. Their data suggested as well a continuous phase transition with a very small order parameter exponent, β = 0.017(5). Using the same technique, Hartmann and Nowak 9 found non-universal behavior in the binary and Gaussian RFIM, not excluding that the former could undergo a first-order phase transition. They also studied the T = 0 critical properties of the DAFF for system sizes up to L = 120, and found critical exponents β = 0.02(1), ν = 1.14(10) and γ = 3.4(4), compatible with their results for the Gaussian RFIM. The aim of this work is to revisit the Ogielski-Huse investigation, which was carried out for T > 0, with modern computers, algorithms 18 and finite-size scaling methods (the quotient method, 19 which uses the finite-volume correlation length 20 to characterize the phase transition). The significant CPU investment allowed us to simulate large lattices (L = 24) and a large number of disorder realizations. We plan, in the future, to perform a more complete investigation of the critical surface of this model (this would require varying three variables: temperature, dilution and magnetic field). Our main finding is that the DAFF probably undergoes a very weak first-order phase transition.
This seems a natural explanation for the finding of activated dynamics (at equilibrium) in Ref. [7]. The discontinuity in the magnetization density is sizable, yet that of the internal energy is very small. Nevertheless, even if we perform a standard second-order analysis, the critical exponent for the staggered magnetization turns out to be ridiculously small. The outline of the rest of this paper is the following: in the next section we describe the model (Sect. II A) and observables (Sect. II B), as well as the theoretical expectations for the finite-size scaling behavior in a second-order phase transition (Sect. II C) and in a first-order one (Sect. II D). Details about our simulations are given in Sect. III. Our numerical results are presented in Sect. IV. We first perform a second-order analysis (Sect. IV A), then consider the possibility of a weak first-order transition (Sect. IV B). After summarizing our results in Sect. V, we discuss in the Appendix that the bound 21 ν ≥ 2/d of Chayes et al. holds as well for first-order phase transitions in the presence of disorder. In addition, we have found upper bounds for the divergence of the susceptibility and the specific heat with the lattice size.

II. MODEL AND OBSERVABLES

A. The model

The model is defined by the Hamiltonian

\mathcal{H} = \sum_{\langle i,j \rangle} \epsilon_i S_i \epsilon_j S_j - H \sum_i \epsilon_i S_i ,   (1)

where the first sum runs over pairs of nearest-neighbor sites, while H is the external uniform magnetic field. The \epsilon_i are quenched dilution variables, taking values 0 and 1 (with probability 1 − p and p, respectively) on empty and occupied sites. In this work we have fixed this probability to p = 0.7. In this way we are guaranteed to stay away both from the pure case (p = 1) and from the percolation threshold (p_c ≈ 0.31). 22 It is understood that for every choice of the \{\epsilon_i\}_{i=1}^V, called hereafter a sample or a disorder realization, we are to perform the Boltzmann (thermal) average. The mean over the disorder is only taken afterwards.
For low magnetic field and low temperatures, model (1) stays in an antiferromagnetic state that we will call the ordered phase. The staggered magnetization M_s, see Eq. (2) below, is an order parameter for this phase. Note that, for H = 0, the Z_2 transformation S_i → −S_i yields a degenerate antiferromagnetic state. Increasing the magnetic field or the temperature T weakens the antiferromagnetic correlations, and the system eventually enters a paramagnetic state. The paramagnetic and the ordered phases are separated by a critical line in the (T, H) plane [it would be a critical surface in the (T, H, p) phase diagram]. Note that the effect of disorder (quenched dilution), combined with the applied field H in a finite DAFF system, breaks the Z_2 symmetry even in the ordered phase. Consider the state of minimal energy at T = 0 and H low enough so that the staggered magnetization is maximal, M_s = p. Now let us change the sign of all spins in one hit: if p = 1, the two states are completely degenerate, but the random dilution introduces a subextensive shift in the total energy. In fact, the inversion does not change the nearest-neighbor energy, but changes the sign of the magnetic part. In the pure system the magnetic energy of the fully ordered antiferromagnetic state is zero but, in the presence of random dilution, the number of spins aligned or misaligned with the field H is a random variable. Thus the total magnetic contribution to the energy is of order [p(1 − p)N]^{1/2}. It follows that the M_s = −p state has an energy shift of order 2[p(1 − p)N]^{1/2} with respect to the M_s = p one, and its Boltzmann weight is depressed (an analogous effect of degeneracy removal is present in the RFIM). A "quasisymmetric" state may exist if there is a configuration of spins in which almost all the spins are reversed with respect to the M_s = p state, such that the sum of the energies of the unsatisfied bonds cancels the magnetic excess.
If this is the case, the two states are degenerate, but the probability distribution of the order parameter is peaked around asymmetric values. The magnetic energy excess is a subextensive effect and is expected to be suppressed as L increases. Nevertheless, the probability of transitions between states of opposite spontaneous staggered magnetization decreases for large systems, which is a major problem for simulations.

B. Observables

In the following, \langle\cdots\rangle denotes thermal averages (including averages over real replicas) and an overline, \overline{(\cdots)}, indicates a sample average (average over the disorder). Measurements focused on several observables. The order parameter is the average value of the staggered magnetization,

M_s = \frac{1}{V} \sum_j \epsilon_j S_j \, e^{i\pi \sum_{\mu=1}^d j_\mu} ,   (2)

(j_\mu is the \mu-th lattice coordinate of site j), whose values are limited to the interval −p ≤ M_s ≤ p (on average, for large lattices). The average energy densities are

\frac{1}{V} \overline{\langle \mathcal{H} \rangle} = E = E_K + H E_M ,   (3)
\frac{1}{V} \overline{\langle \mathcal{H}_K \rangle} = E_K = \frac{1}{V} \sum_{\langle i,j \rangle} \overline{\langle \epsilon_i S_i \epsilon_j S_j \rangle} ,   (4)
\frac{1}{V} \overline{\langle \mathcal{H}_M \rangle} = E_M = -\frac{1}{V} \sum_i \overline{\langle \epsilon_i S_i \rangle} ,   (5)

with \mathcal{H}_K and \mathcal{H}_M respectively the kinetic and magnetic contributions to the Hamiltonian. The definition of E_M coincides with the definition of the usual magnetization density. We also computed the average values of the squares and fourth powers of the above quantities, and some cumulants and susceptibilities: given an observable A = \mathcal{A}/V, we compute the Binder cumulant

g_4^A = \frac{1}{2} \left( 3 - \frac{\overline{\langle A^4 \rangle}}{\overline{\langle A^2 \rangle}^2} \right)   (6)

and the connected and disconnected susceptibilities

\chi_c^A = V \, \overline{\left( \langle A^2 \rangle - \langle A \rangle^2 \right)} ,   (7)
\chi_{dis}^A = V \, \overline{\langle A \rangle^2} .   (8)

These are the ordinary susceptibilities in the case A = M_s, while \chi_c^{\mathcal{H}} is proportional to the specific heat C_v. The lack of Z_2 symmetry, explained in Sec. II A, makes the use of connected correlation functions mandatory on finite lattices, especially in the case of the order parameter M_s.
Yet, the connected staggered susceptibility \chi_c^{M_s} = V \overline{(\langle M_s^2 \rangle - \langle M_s \rangle^2)} does not show a peak in the (T, H) ranges we considered, so we also study the behavior of the connected and disconnected staggered susceptibilities defined with the absolute value of the staggered magnetization:

\chi_c = V \, \overline{\left( \langle M_s^2 \rangle - \langle |M_s| \rangle^2 \right)} ,   (9)
\chi_{dis} = V \, \overline{\langle |M_s| \rangle^2} .   (10)

In the following, when no observable subscript is specified in the susceptibility symbol, we will be referring to Eqs. (9) and (10). It will turn out useful to define a correlation length on a finite lattice by the following analogy with a (lattice) Gaussian model: 20

\xi^2 = \frac{G(k_1) - G(k_2)}{\hat{k}_2^2 \, G(k_2) - \hat{k}_1^2 \, G(k_1)} ,   (11)

with \hat{k}^2 = 4 \sum_{\mu=1}^d \sin^2(k_\mu/2) on a discrete lattice, and G(k) the momentum-dependent propagator,

G(k) = V \, \overline{\left( \langle F(k) F(-k) \rangle - \langle F(k) \rangle \langle F(-k) \rangle \right)} = \frac{G_0}{\xi^{-2} + \hat{k}^2} \quad (\hat{k}^2 \ll \xi^{-2}) ,   (12)

F(k) = \frac{1}{V} \sum_j \epsilon_j S_j \, e^{i \sum_{\mu=1}^d (k_\mu + \pi) j_\mu} ,   (13)

(so that G(0) = \chi_c^{M_s} and F(0) = M_s), with F(k) the staggered Fourier transform of the spin field. Also in this case we use the connected part of G(k). Choosing k_1 = (0, 0, 0) and k_2 = (2\pi/L)\hat{e}_\mu as one of the minimal wave vectors (\hat{e}_\mu, \mu = 1, ..., d, being the d unit vectors of the reciprocal space), we have

\xi^2 = \frac{1}{4 \sin^2(\pi/L)} \left[ \frac{\chi_c^{M_s}}{\frac{1}{d} \sum_{\mu=1}^d G\big((2\pi/L)\hat{e}_\mu\big)} - 1 \right] .   (14)

Equation (14) is a good estimate of the correlation length only in the disordered phase, but it is useful to identify the critical region, where \xi/L should take a nontrivial universal value. Finally, mass storage not being a problem on modern equipment, it is easy to compute derivatives with respect to the inverse temperature \beta = 1/T and the applied field H through connected correlations. In particular, the specific heat is

C_v = \frac{1}{V} \frac{d\overline{\langle \mathcal{H} \rangle}}{dT} .   (15)

C. Finite-size scaling in second-order phase transitions

We made use of finite-size scaling, 23 both studying the behavior of the peaks of susceptibilities and applying the quotient method (QM) 19 to extract values for critical exponents. Let us briefly recall both.
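Before recalling the two methods, note that the estimators of Eqs. (13) and (14) translate directly into code. The sketch below is didactic (function names are invented) and keeps the −1 term and the average of G over the d minimal wave vectors of Eq. (14):

```python
import numpy as np

def staggered_fourier(spins, eps, k):
    """F(k) of Eq. (13): staggered Fourier transform of the diluted
    spin field; spins and eps are L^d arrays, k a d-vector."""
    j = np.indices(spins.shape)                   # j[mu] = mu-th coordinate
    phase = np.exp(1j * np.tensordot(np.asarray(k) + np.pi, j, axes=1))
    return (eps * spins * phase).sum() / spins.size

def xi_second_moment(chi, G_kmin, L):
    """Finite-lattice correlation length from Eq. (14):
    xi^2 = [chi / G(k_min) - 1] / (4 sin^2(pi/L)),
    with G(k_min) already averaged over the d minimal wave vectors."""
    return np.sqrt(max(chi / G_kmin - 1.0, 0.0)) / (2.0 * np.sin(np.pi / L))
```

For a perfectly staggered, undiluted configuration, F(0) returns the maximal staggered magnetization; and feeding the estimator a pair (χ, G) taken from the Gaussian propagator of Eq. (12) returns the ξ that was put in, which is the basic consistency check of the second-moment definition.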
Consider an observable A that, in the infinite-volume limit, behaves as A ∝ (T − T_c)^{−a} = t^{−a} near the critical region (t is the reduced temperature). Then, disregarding correction-to-scaling terms, we expect the following temperature dependence on a finite lattice of size L:

A(L, t) = L^{a/\nu} f_A(t L^{1/\nu}) ,   (16)

with \nu the correlation-length exponent, \xi \propto t^{-\nu}, and f_A(s) a smooth universal scaling function showing a peak at some value s_m = t_m(L) L^{1/\nu}. It follows that T_m(L) − T_c^\infty \propto L^{-1/\nu}. In addition, the scaling of the peak height gives the value of the critical exponent a. The QM is based on the same scaling ansatz:

A(L, t) = L^{a/\nu} g_A\big(\xi^{-1}(L, t)\, L\big) .   (17)

We compare data on two lattices L_1 and L_2 at the (unique) reduced temperature t^* where the correlation length in units of the lattice size coincides, \xi(L_1, t)/L_1 = \xi(L_2, t)/L_2. At this temperature we have, apart from corrections to scaling,

\frac{A(L_1, t^*)}{A(L_2, t^*)} = \left( \frac{L_1}{L_2} \right)^{a/\nu} .   (18)

Note that the crossing temperature T^*(L_1; L_2) approaches the critical temperature for large L much faster than the peak of any susceptibility: t^* = T^*(L_1; L_2) − T_c^\infty \propto L^{-\omega - 1/\nu} (\omega is the leading correction-to-scaling exponent). From the definition [Eq. (14)] of the correlation length, one sees that \xi/L \sim O(L^{cd}) in the "ordered" (low T, low H) phase and \xi/L \sim O(1/L) in the "disordered" phase. The constant c is 1/2 in the case where the ordered phase has a Z_2 global symmetry, for in finite lattices the disconnected susceptibility would vanish. Near a second-order transition, \xi/L does not depend on L, so there is a region in which the \xi/L curves for different lattice sizes cross each other. The method then consists in finding the value T^*(L_1; L_2) at which this crossing happens and extracting the exponent a/\nu by means of Eq. (18).
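The two steps of the QM just described can be sketched in a few lines (a minimal illustration with invented names: the crossing is located by linear interpolation of the difference of the two ξ/L curves, then Eq. (18) is inverted for a/ν):

```python
import numpy as np

def crossing_point(T, xiL_a, xiL_b):
    """T* where the xi/L curves of two lattice sizes cross: find the
    first sign change of their difference and interpolate linearly."""
    T = np.asarray(T, dtype=float)
    d = np.asarray(xiL_a, dtype=float) - np.asarray(xiL_b, dtype=float)
    i = int(np.where(np.diff(np.sign(d)) != 0)[0][0])
    return T[i] - d[i] * (T[i + 1] - T[i]) / (d[i + 1] - d[i])

def quotient_exponent(A1_star, A2_star, L1, L2):
    """a/nu from Eq. (18): A(L1, t*) / A(L2, t*) = (L1/L2)^(a/nu)."""
    return np.log(A1_star / A2_star) / np.log(L1 / L2)
```

In practice the observables at T* carry statistical errors, so the interpolation and the quotient are repeated over bootstrap or jackknife resamplings; the sketch only shows the central-value arithmetic.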
We apply the methods to several observables to extract exponents, in particular

\chi_c^{M_s}, \chi_c \longrightarrow a = \gamma = \nu(2 - \eta) ,   (19)
\chi_{dis}^{M_s}, \chi_{dis} \longrightarrow a = \bar{\gamma} = \nu(2 - \bar{\eta}) ,   (20)
C_v \longrightarrow a = \alpha ,   (21)
|M_s| \longrightarrow a = -\beta ,   (22)
\partial_\beta \xi \longrightarrow a = 1 + \nu .   (23)

Notice that we follow Ogielski and Huse 7 in defining \eta.

D. Finite-size scaling for first-order phase transitions

Finite-size effects in first-order phase transitions on pure systems are qualitatively similar to their second-order counterparts, provided that one considers effective critical exponents. 24,25,26 Under the assumption that the lattice size is much larger than the correlation length, simple scaling relations hold for the size of the broadened transition region, the height of the peak of the specific heat, and the extremal point of the Binder cumulant of the energy density. Denoting with subscripts + and − the values of observables in the two phases competing at a first-order transition in the infinite-volume limit (one of the phases can be degenerate), and with Q = E_+ − E_− the latent heat, one has in a finite system: 25

T^*(L) - T_c = a(Q) L^{-d} ,   (24)
C_v(T^*) = c_1(C_{v+}, C_{v-}) + c_2(Q) L^d ,   (25)
1 - g_4^E(T^*) = g_1(E_+, E_-) + g_2(E_+, E_-, C_{v+}, C_{v-}) L^{-d} ,   (26)

where, in particular, a(Q), c_2(Q) and g_1(E_+, E_−) vanish if the latent heat is zero, i.e., if E_+ = E_−. Here C_{v±} are the specific heats of the ± phases. Finally, the susceptibility also diverges with the volume of the system. However, in the presence of disorder, the scaling law for T^*(L) − T_c should be modified (see the Appendix) to

T^*(L) - T_c = b(Q) L^{-d/2} .   (27)

This follows, for instance, from a simple mean-field argument, 27 which yields a linear relation between the critical temperature and the number of spins in the sample. Since the average spin density fluctuates as L^{−d/2}, we expect this to be the width of the critical region on finite lattices.
Furthermore, the specific heat and the connected susceptibility may diverge only as fast as L^{d/2}. See the appendix for a detailed discussion of these bounds. Hence, assuming that the observables diverge as much as possible, we can write the following "apparent" critical exponents for a disordered first-order transition:

1/ν = d/2 , (28)
α/ν = d/2 , (29)
γ/ν = d/2 . (30)

From the last equation and using η = 2 − γ/ν = 2 − d/2, we find in d = 3 that η = 0.5 and ν = 2/3. If we assume that the averaged probability distribution of the energy P(E) is composed (in the thermodynamic limit and at the critical point) of the sum of Dirac deltas, we should obtain a divergence L^d for the normalized variance of this averaged probability, obtaining (e.g., for the energy)

L^d ( ⟨E²⟩_P − ⟨E⟩_P² ) = Q² L^d , (31)

where ⟨·⟩_P denotes the average with respect to P(E). In particular, we should recover Eq. (26) for g^E_4, which is computed with P(E) [see Eq. (6)]. Please note that the width of P(E) is not related to the specific heat, which is rather related to the disorder average of the per-sample thermal variance, L^d ( ⟨E²⟩ − ⟨E⟩² ).

III. SIMULATION DETAILS

We simulate the model using the usual Metropolis algorithm with a sequential spin-flip schedule and the parallel tempering technique [18]. We restricted our simulation to the

H = 1.5 T (32)

diagonal in the (T, H) plane in order to keep away from the crossover to the zero-field case. This should also avoid problems with an oblique crossing of the transition line and will help in the comparison with previous numerical simulations [7]. The critical temperature on this diagonal stays around T_c = 1. The use of optimized asynchronous multispin-coded update routines in our programs allowed us to thermalize systems on lattices with size up to L = 24. Simulating 1280 samples for the largest lattice (L = 24 and 24 × 10^6 MC steps) took about four weeks and 20 computation nodes on the Linux cluster at BIFI.
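The Hamiltonian of the model is cut off in this extraction, so the following is only an illustrative sketch of the "usual Metropolis algorithm with sequential spin-flip schedule" mentioned above. It ASSUMES a generic site-diluted Ising model H = −J Σ_<ij> ε_i ε_j s_i s_j − H Σ_i ε_i s_i with quenched occupation variables ε_i and J < 0 (antiferromagnetic); every parameter value here is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

L, J, H, T = 8, -1.0, 1.5, 1.0            # illustrative values only
p = 0.7                                    # assumed site-occupation probability
eps = (rng.random((L, L, L)) < p).astype(int)   # quenched dilution
s = rng.choice([-1, 1], size=(L, L, L)) * eps   # spins live on occupied sites

def metropolis_sweep(s, eps, beta):
    """One sequential full-lattice Metropolis sweep, periodic boundaries."""
    for i in range(L):
        for j in range(L):
            for k in range(L):
                if not eps[i, j, k]:
                    continue
                nb = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k]
                      + s[i, (j+1) % L, k] + s[i, (j-1) % L, k]
                      + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
                dE = 2.0 * s[i, j, k] * (J * nb + H)   # cost of flipping s_ijk
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j, k] = -s[i, j, k]

metropolis_sweep(s, eps, 1.0 / T)
print(abs(int(s.sum())) <= int(eps.sum()))  # → True
```

Empty sites carry s = 0, so they drop out of the neighbor sum automatically; a production code would of course be multispin-coded, as described in the text.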
By monitoring nonlocal observables (like the susceptibilities), we have checked that the runs are thermalized: we have reached a plateau in all the nonlocal observables we are measuring. In particular, for each lattice size, let t_sim be the total time in MC steps devoted to simulating a sample; the time needed to achieve equilibrium always turned out to be shorter than t_sim/2. Indeed, we discarded measures at all times t < t_sim/2. Simulation parameters are summarized in Table I. Two further thermalization tests were provided by the parallel tempering statistics: (1) we have checked that the temperature samples travel all the way from higher to lower temperatures and come back; (2) the temperature samples have stayed essentially the same Monte Carlo time at each of the simulated temperatures.

IV. NUMERICAL RESULTS

A. Second-order phase transition scenario

We will first use the old-fashioned peak method and turn later to the quotient method. In this way we will obtain complementary information. In Fig. 1 we show the staggered magnetization connected susceptibility χ_c data. Clear peaks are present, from which it is possible to extract information on the exponents γ, ν and η. There is a lot of noise in the signal for χ_c at low temperatures for large lattice sizes (L = 20 and L = 24). This is almost exclusively due to the disconnected part of the connected susceptibility, which is difficult to obtain because of metastability (see Sec. IV C). The exact peak position and height are located by means of cubic polynomial interpolation, and by using the standard second-order phase transition equations for the peak height and the position of the maximum of the susceptibility (χ_max ∝ L^{γ/ν} and T_c − T_max ∝ L^{−1/ν}), we obtain

γ/ν = 2 − η = 1.6(1) → η = 0.4(1) , (33)
ν = 1.0(3) , (34)
T_c = 1.58(8) , (35)

(data for L = 8 have been excluded in determining T_c and ν). These results are fully consistent with β = 0, even if we are using a second-order ansatz in the analysis.
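The cubic-interpolation step used above to locate the peak of χ can be sketched as follows (our own minimal version, with fabricated data): fit a cubic near the maximum and solve χ'(T) = 0 within the data window.

```python
import numpy as np

def cubic_peak(T, chi):
    """Peak position and height from a cubic polynomial fit of chi(T)."""
    c = np.polyfit(T, chi, 3)
    r = np.roots(np.polyder(c))          # stationary points of the fitted cubic
    r = r[np.isreal(r)].real
    r = r[(r > T.min()) & (r < T.max())] # keep only roots inside the data window
    Tmax = r[np.argmax(np.polyval(c, r))]
    return Tmax, np.polyval(c, Tmax)

T = np.array([1.4, 1.5, 1.6, 1.7, 1.8])
chi = 10.0 - 25.0 * (T - 1.62)**2        # synthetic susceptibility, peak at 1.62
Tmax, chimax = cubic_peak(T, chi)
print(round(Tmax, 3), round(chimax, 2))  # → 1.62 10.0
```

The peak heights and positions so obtained are then fed to the power laws χ_max ∝ L^{γ/ν} and T_c − T_max ∝ L^{−1/ν}.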
Figure 2 shows the dependence of the peak heights and positions on the size L. These estimates are compatible with previous ones by Ogielski and Huse [7]: T_c = 1.50(15), ν = 1.3(3), and η = 0.5(1) [28]. Ground-state calculations by Hartmann and Nowak [9] for the DAFF (but at dilution p = 0.55) gave ν = 1.14(10). The specific heat shows no tendency to diverge at all near the transition region. On the contrary, the peak of C_v tends to slightly decrease and broaden as the system size increases (see Fig. 3). This is probably an artifact due to the large slope of the path in the (T, H) plane that we simulated [see Eq. (32)]. In fact, the larger the system size, the lower the peak height. However, note that the specific heat has a contribution (from the magnetic energy) with an explicit linear dependence on the field strength (also the magnetization depends strongly on it). Anyhow, this supports a scenario of negative (maybe vanishing) α, as reported, for example, by Rieger and Young [11], Rieger [12], and Middleton and Fisher [17] in their simulations of the random field Ising model, and in experiments [1]. We shall discuss the specific heat further in the following. For the time being, note that the peak position of C_v may be fitted to the usual power law, and we find

T_c = 1.68(4) , (36)

which is compatible with the estimate given in Eq. (35). This fit provides no information on the ν exponent (ν = 2(2)). We can extract further information on several exponents by means of the QM. Figure 4 shows a clear crossing of the ratio ξ/L as a function of T for different values of L. Data for L = 20 and L = 24 are quite noisy, again due to difficulties in measuring the disconnected part of G(k) [Eq. (12)], but still allow for locating a crossing temperature with the other curves. As expected on general grounds, the crossing temperatures stay well away from the positions of the peaks of C_v and χ_c, but lie fairly close to the T_c^∞ value extrapolated from the peak positions.
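Locating the crossing temperature T*(L_1; L_2) of two ξ/L curves amounts to finding the zero of their difference. A sketch (ours, with invented straight-line "curves" standing in for the interpolating splines of Fig. 4):

```python
import numpy as np

def crossing_temperature(T, ratio1, ratio2):
    """T where the two xi/L curves cross, by linear interpolation of their difference."""
    d = np.asarray(ratio1) - np.asarray(ratio2)
    i = np.flatnonzero(np.sign(d[:-1]) != np.sign(d[1:]))[0]  # first sign change
    return T[i] - d[i] * (T[i+1] - T[i]) / (d[i+1] - d[i])

T = np.linspace(1.4, 1.9, 11)
r1 = 1.0 - 1.0 * (T - 1.64)   # fabricated steeper curve (smaller L)
r2 = 1.0 - 0.5 * (T - 1.64)   # fabricated flatter curve (larger L)
Tstar = crossing_temperature(T, r1, r2)
print(Tstar)  # → ≈ 1.64, where the two lines meet
```

In practice one would interpolate the measured ξ/L data (e.g. with splines) before intersecting them, and propagate the statistical errors of both curves into T*.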
Indeed, in the absence of scaling corrections, there should be no system-size dependence of the crossing temperature. Such a dependence, if any, carries important information on scaling corrections [19]. Unfortunately, in our case there is no clear systematic dependence of T_cross on the lattice size: for example, for small systems T_cross tends to shift to lower values as L_1 and L_2 increase, while it is sensibly shifted toward higher values when lattice size L = 20 or L = 24 is considered. This probably indicates that a crossover to first-order behavior is showing up. We show in Table II our results for the values of the exponents α, β, η and η̄, obtained from the QM.

[Displaced fragment of Table II. Columns: L_1, L_2; T_cross; η; η̄; α/ν; β/ν. First row, (L_1, L_2) = (8, 12): 1.6(2), 0.5(1), −1.0(1), 0.091(6).]

Unfortunately, we have not been able to measure ∂_β ξ with enough precision to give a direct estimate of the thermal exponent ν. Maybe the most striking result in Table II is the smallness of the β exponent, indicating that the order parameter could be discontinuous at the transition. A similar behavior has been found for the RFIM [9,12] and in experiments [1]. A very small (but definitely positive) value of β has been found also by Falicov et al. [29] by means of renormalization-group calculations for the binary RFIM. They also calculated magnetization curves as functions of the temperature and field strength, showing abrupt jumps at the transition point. The value of η̄ agrees with the one found in Ref. [7], η̄ = −1.0(3), and agrees with the smallness of the order-parameter exponent and the estimate [Eq. (34)] of ν, as the relation β = (1 + η̄)ν/2 holds. Note also that, given the value of η from Eq. (33), the Schwartz-Soffer [30] relations 2(η − 1) ≥ η̄ ≥ −1 are satisfied as equalities within errors. We see that, at larger sizes, the value of η decreases down even to negative values (showing a large error) but is always compatible with our previous estimate (at least at two standard deviations): η = 0.4(1).
Also the disconnected susceptibility diverges as L³ (since η̄ is very close to −1 and χ_dis ∝ L^{2−η̄}). As for the specific heat exponent, we know from the Harris criterion [31] that α should be negative or zero in a disordered second-order transition framework. Here we report values which are small but definitely positive. It is also true that if the specific heat had a cusp-like singularity, our data would not allow us to evaluate the asymptotic value of C_v, and the estimates for α would be meaningless.

B. First-order phase transition scenario

The analysis presented above, based on the hypothesis that a second-order transition is taking place, looks inconclusive. This is especially clear for the exponent η, which lies so near to our prediction for a first-order phase transition, η = 0.5. In addition, this exponent in the Schwartz-Soffer inequality fixes the value of η̄ to −1, and all our estimates of the η̄ exponent are compatible with this value. Of course, this could be due to finite-L corrections to scaling, but we think that the phase transition is truly first order. We now proceed to show that our data are compatible with a weak first-order transition with a very large, but not diverging, correlation length at the transition point. A good observable to test is the Binder cumulant for the total energy density [25]:

g^E_4 = (1/2) [ 3 − ⟨E⁴⟩ / ⟨E²⟩² ] , (37)

which is usually easy to measure in simulations because of the good noise-to-signal ratio of the energy density. Notice that this Binder cumulant works directly on the averaged probability distribution of the energy. In both the disordered and ordered phases, well away from the transition temperature, the probability distribution of the energy P(E) tends to a single delta function in the thermodynamic limit, so that g^E_4 → 1.
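Eq. (37) translates directly into code. A sketch (ours, with invented sample arrays) that also checks the two limiting behaviors just described, g^E_4 → 1 for a single delta and 1 − g^E_4 small but nonzero for a weak double peak:

```python
import numpy as np

def binder_energy(E):
    """Energy Binder cumulant of Eq. (37) from a sample of energy measurements."""
    E = np.asarray(E, dtype=float)
    return 0.5 * (3.0 - np.mean(E**4) / np.mean(E**2)**2)

# Single delta: g4 is exactly 1.
print(binder_energy(np.full(1000, -1.36)))

# Weak double peak, latent heat Q: 1 - g4 stays small but nonzero.
Q, E0 = 0.005, -1.36
two_peaks = np.concatenate([np.full(500, E0 - Q/2), np.full(500, E0 + Q/2)])
print(1.0 - binder_energy(two_peaks))  # small positive number
```

The nonzero limit of 1 − g^E_4 for the two-peak distribution is the finite-size signature of a (weakly) first-order transition exploited below.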
In the case of a second-order transition this is also true at T_c, while in the presence of a first-order transition we have an energy distribution with more than one sharp peak, so the infinite-volume limit of g^E_4 is nontrivial. Challa et al. [25] obtained the expression for the nontrivial limit and the finite-size correction to leading order in the framework of a double-Gaussian approximation for the multipeaked P(E):

1 − g^E_4(T*) = g_1(E_+, E_−) + g_2(E_+, E_−, C_v+, C_v−) L^{−d} , (38)

g_1(E_+, E_−) = (E_+⁴ + E_−⁴) / (E_+² + E_−²)² − 1/2 , (39)

where T* is the temperature at which the minimum (maximum of 1 − g^E_4) appears, and g_2 is a complicated combination of the specific heats and energies of the infinite-volume coexisting states (+, −). The term g_1 vanishes if the latent heat Q = E_+ − E_− is zero. Our data for 1 − g^E_4 indeed show broad peaks at temperatures T*(L) shifting toward lower temperature values as L increases (Fig. 5). We also expect from Eq. (27) that the critical-temperature shift should scale as T*(L) − T_c ∝ L^{−d/2}. The matching of the data with this model is impressive (see Fig. 6). A power-law fit of the form 1 − g^E_4(T*(L)) = g_1 + g_2 L^{−d_g} gives

g_1 = 0(2) × 10^{−4} , (40)
d_g = 3.0(1) , (41)

where we excluded the L = 8 data from the fit [fitting also the L = 8 data brings d_g = 2.8(1), but g_1, even if compatible with zero, has an unphysical negative value]. The extrapolated transition temperature is (assuming a power d_g/2 = 1.5 and L > 8)

T_c = 1.64(2) , (42)

with a reasonable χ²/DOF = 0.6. We also recall that the susceptibility data gave 1/ν = 1.0(3), which is acceptable: an exponent 3/2 is within two standard deviations. However, the infinite-volume limit of 1 − g^E_4[T*(L = ∞)] is very small, suggesting a zero latent heat for the transition. Actually, we will show below that one can estimate from g_1 the order of magnitude of the latent heat, which will turn out to be in agreement with metastability estimates.
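The three-parameter fit 1 − g^E_4(T*(L)) = g_1 + g_2 L^{−d_g} can be sketched without a nonlinear fitting library: with three sizes, the ratio of successive differences eliminates g_1 and g_2, leaving a one-parameter equation for d_g solvable by bisection. This is our own sketch, not the paper's fitting procedure, and the data points are fabricated:

```python
import numpy as np

def fit_decay_exponent(L, y, lo=0.5, hi=6.0):
    """d_g from y = g1 + g2 * L**(-d_g): difference ratios drop g1 and g2."""
    target = (y[0] - y[1]) / (y[1] - y[2])
    f = lambda d: (L[0]**-d - L[1]**-d) / (L[1]**-d - L[2]**-d) - target
    for _ in range(100):                       # simple bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

L = np.array([12.0, 16.0, 20.0])
y = 2e-4 + 0.8 * L**-3.0                 # fabricated minima: g1 = 2e-4, d_g = 3
dg = fit_decay_exponent(L, y)
g2 = (y[0] - y[1]) / (L[0]**-dg - L[1]**-dg)
g1 = y[0] - g2 * L[0]**-dg
print(round(dg, 4), round(g1, 6))        # recovers d_g = 3 and g1 = 2e-4
```

With more than three sizes one would instead do a least-squares fit, which is what the quoted χ²/DOF refers to.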
C. Metastability

A very small latent heat may be very hard to detect due to large thermal and sample-to-sample (disorder) fluctuations. If this is the case, it should be possible to detect the latent heat by exploring the behavior of single samples. Indeed, for our largest systems (L = 24), around 20% of the samples started to display metastability between a disordered, small-M_s state and a large-M_s state. This behavior was not detected on smaller systems. Furthermore, for a large fraction of the samples, the metastability in M_s was correlated with a metastability in the internal energy and in the magnetization density. This can be observed, for instance, in the Monte Carlo history at temperature T = 1.5 (H = 2.25) shown in Fig. 7 for an L = 24 sample. Note that the fluctuations of the internal energy were huge. However, if one bins 25 consecutive Monte Carlo measurements (white squares in the central plot), the metastability is very clear. We also learn from Fig. 7 that the probability distribution of the staggered magnetization shows three clear peaks, one for a disordered state and two for a quasisymmetric ordered phase. The transition time is of the order of 10⁶ MC steps (and tunneling is probably sped up by our use of parallel tempering). It is then clear that some of the samples may not have had enough time during the simulation (2.4 × 10⁷ MC steps) to perform enough transitions between metastable states to give a correct value for the mean staggered magnetization, and this explains the noise we found in observables involving connected functions (susceptibilities, correlation length and specific heat). One can estimate the latent heat and the mean energy from Fig. 7 (recall L = 24): Q ≃ 0.005 and E_+ ≃ E_− = E ≃ 1.36. We can introduce these values into the equation for g_1 [see Eq. (26)].
For a small latent heat (we write only the dominant term) it is possible to obtain

1 − g^E_4 = g_1 ≃ Q / (4E³) , (43)

obtaining g_1 = 5 × 10^{−4}, only two standard deviations away from the g_1 value computed by extrapolating the Binder cumulant.

V. CONCLUSIONS

We have studied the three-dimensional diluted antiferromagnetic Ising model in a magnetic field using equilibrium numerical techniques and analysis methods. We have found that the data can be described in the framework of a second-order phase transition, and the critical exponents obtained are compatible with those of Ogielski and Huse [7]. However, the critical exponent for the order parameter is very small, which points to a first-order transition. Note, however, that similarly small values of this critical exponent were found in ground-state investigations both for the DAFF [9] (at different dilutions) and for the Gaussian RFIM [9,17] (these authors claimed that the phase transition was continuous). Nevertheless, by studying the Binder cumulant of the energy, we obtained clear indications on our largest lattices of a weak first-order phase transition. Furthermore, on our largest systems, a large number of samples show flip-flops between the ordered (quasidegenerate) and disordered phases, both in the energy as well as in the order parameter, which again is strong evidence for the weakly first-order scenario. We remark that a complete theory of scaling in disordered first-order phase transitions (in line with that of Ref. [25] for ordered systems) is still lacking. However, we have proposed a set of effective exponents and have shown that this scaling accounts for our data. Let us also remark that in footnote 7 of Ref. [21] it is reported that ν = 2/d for first-order transitions in the presence of disorder, but without an explanation of this fact. Finally, we will show that the Schwartz-Soffer [30] inequality also holds in a first-order phase transition scenario.
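As a quick arithmetic check of the latent-heat estimate above (assuming Eq. (43) reads g_1 ≃ Q/(4E³), which reproduces the quoted number with the values Q ≃ 0.005 and E ≃ 1.36 from the metastability analysis):

```python
# Plug the metastability estimates into the small-latent-heat formula g1 = Q/(4 E^3).
Q, E = 0.005, 1.36
g1 = Q / (4 * E**3)
print(f"{g1:.1e}")  # → 5.0e-04, the order of magnitude quoted in the text
```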
Schwartz and Soffer show that

χ̂^Ms_c(q) ≤ (1/h) [ χ̂^Ms_dis(q) ]^{1/2} , (A4)

where q is the momentum, h is the standard deviation of the magnetic field, and χ̂^Ms_c(q) and χ̂^Ms_dis(q) are the Fourier transforms of the connected susceptibility and of its disconnected part, respectively [30]. In order to obtain the inequality, we introduce the minimum momentum available, which is of order 1/L, and use χ(L) = χ̂^Ms_c(q_min) and χ_dis(L) = χ̂^Ms_dis(q_min). On the other hand, in a first-order phase transition, we define the effective exponents η and η̄ by means of the scaling of the susceptibilities at the critical point: χ(L) ≃ L^{2−η} and χ_dis(L) ≃ L^{2−η̄}. All together, we obtain the Schwartz-Soffer inequality

(2 − η̄)/2 ≥ 2 − η . (A5)

Note that we have not used the criticality properties of the propagators.

[Displaced fragments from Sec. II and the figure captions follow.]

The model, phase diagram, symmetries: The model is defined in terms of Ising spin variables S_i = ±1, i = 1, . . . , V = L³, placed on the nodes of a cubic lattice of size L with periodic boundary conditions. The spins interact through the lattice Hamiltonian H = Σ_{<i,j>} [expression truncated in this extraction].

FIG. 1: The connected susceptibility computed with |M_s| as a function of T for lattice sizes L = 8, 12, 16, 20 and 24. Lines are interpolating splines as a guide to the eye.

[Fragment of Sec. III:] ... and we simulated N_T = 29 temperatures for every lattice size, at equally spaced (T, H) values along this diagonal. For the smaller lattices (L = 8, 12, and 16) the T values were in the range [1.3, 2.7], while for the larger sizes (L = 20 and 24) the temperature range was [1.3, 2.0]. For every lattice size, 1280 samples (different realizations of the disorder) were simulated. Statistics is also doubled, as our program processes two real replicas per sample, with the same disorder, at each (T, H) value.

FIG. 2: Top: the susceptibility peak height as a function of L^{γ/ν}. Bottom: peak position as a function of L^{−1/ν}. Solid lines are the fitting power-law functions. See Sec. IV A for more details.
FIG. 3: The specific heat as a function of T for various lattice sizes. The main feature is the decrease and flattening of the peaks as L increases.

TABLE II: Exponents and crossing temperatures extracted with the QM applied to the intersection of the cumulant ξ/L.

FIG. 4: The cumulant ξ/L as a function of the temperature for various system sizes. Lines are interpolating splines and serve only as a guide to the eye. Note the noise in the curve for L = 24 in all the crossing region.

FIG. 5: The energy Binder cumulant as a function of T for all lattice sizes.

FIG. 6: Scaling of the minima of the Binder cumulant, Eq. (37) (top), and of the minima positions (bottom). Fitting functions are also shown. In the bottom panel we have fixed d_g/2 to 3/2 in the fit. See Sec. IV B for more details.

FIG. 7: Monte Carlo histories of the magnetic energy density (top), total energy density (middle) and staggered magnetization (bottom) for a typical sample of lattice size L = 24, showing jumps between states at the transition temperature. The latent heat is clearer when binning 25 consecutive Monte Carlo measurements for the total energy (open symbols).

TABLE I: Parameters characterizing the simulation. See text for details.

The program can perform a Metropolis update at a 1.3 ns/spin rate on a conventional 64-bit Intel CPU at 3.4 GHz. Of course, the use of parallel tempering (PT) slows down the performance of the multispin-code simulation, but we can limit the loss of performance if we let the program perform PT swaps only every many Metropolis lattice sweeps. We verified that a PT swap trial every 20 Metropolis lattice sweeps still allows the hotter replicas to decorrelate before the exchange with the colder ones, at the cost of a factor of 1.5 in performance. Cluster update algorithms did not prove convenient due to a dramatic increase in the total computational load. In what follows, we consider simulation time units such that 20 Monte Carlo (MC) steps are 20 Metropolis full-lattice updates plus 1 PT step.
Acknowledgments

This work has been partially supported by MEC (contracts No. BFM2003-08532, FISES2004-01399, FIS2004-05073, and FIS2006-08533), by the European Commission (contract No. HPRN-CT-2002-00307), and by UCM-BCSH. We are grateful to T. Jörg and L. A. Fernández for interesting discussions.

APPENDIX A: SCALING IN FIRST-ORDER PHASE TRANSITIONS IN THE PRESENCE OF DISORDER

It is straightforward to use the Cauchy-Schwarz inequality to obtain a bound on the p derivative of an arbitrary observable. This bound holds both for first- and second-order phase transitions. Following the lines of reasoning of Ref. [21], we get [Eq. (A1), display lost in this extraction]. We recall that p is the dilution of the model, and [Eq. (A2), display lost]. Notice that this inequality holds for any temperature, dilution and lattice size. Assuming now that ⟨A²⟩ is of the same order of magnitude as ⟨A⟩² (which is certainly the case for the internal energy), we translate Eq. (A1) into a bound for the logarithmic derivative [Eq. (A3), display lost]. Now, the logarithmic derivative tells us about the width of the critical region in a finite system. For instance, at a given temperature and magnetic field, let p(L) be the spin dilution at a susceptibility peak and p_c the thermodynamic limit of any such quantity. We then expect p(L) − p_c ∝ L^{−d/2}. Notice that the notion of a critical region permits us to define an effective ν exponent as [display lost]. If the coexistence line has a finite slope in the (T, p) plane, it is clear that the critical width in dilution is proportional to the critical width in temperature. A similar argument holds for the derivative with respect to the magnetic field. Thus, the logarithmic derivative of A with respect to temperature or magnetic field may diverge (at most) as fast as L^{d/2}. So we have found an upper bound for the divergences of the specific heat and of the connected susceptibilities (both are derivatives, of the energy and of the magnetization, respectively): they cannot diverge, with the lattice size, with an exponent greater than d/2.
[1] D. P. Belanger, "Experiments on the Random Field Ising Model," in Spin Glasses and Random Fields, edited by A. P. Young (World Scientific, Singapore, 1997).
[2] T. Nattermann, "Theory of the Random Field Ising Model," in Spin Glasses and Random Fields, edited by A. P. Young (World Scientific, Singapore, 1997).
[3] N. Sourlas, Comp. Phys. Comm. 121, 183 (1999).
[4] G. Parisi and N. Sourlas, Phys. Rev. Lett. 89, 257204 (2002).
[5] S. Fishman and A. Aharony, J. Phys. C 12, L729 (1979).
[6] J. L. Cardy, Phys. Rev. B 29, 505 (1984).
[7] A. T. Ogielski and D. A. Huse, Phys. Rev. Lett. 56, 1298 (1986).
[8] A few samples in an L = 64 lattice were simulated.
[9] A. K. Hartmann and U. Nowak, Eur. Phys. J. B 7, 105 (1999).
[10] A. Aharony, Phys. Rev. B 18, 3318 (1978).
[11] H. Rieger and A. P. Young, J. Phys. A: Math. Gen. 26, 5279 (1993).
[12] H. Rieger, Phys. Rev. B 52, 6659 (1995).
[13] L. Hernandez and H. T. Diep, Phys. Rev. B 55, 14080 (1997).
[14] J. Machta, M. E. J. Newman and L. B. Chayes, Phys. Rev. E 62, 8782 (2000).
[15] A. J. Bray and M. A. Moore, J. Phys. C 18, L927 (1985).
[16] A. K. Hartmann and A. P. Young, Phys. Rev. B 64, 214419 (2001).
[17] A. A. Middleton and D. S. Fisher, Phys. Rev. B 65, 134411 (2002).
[18] M. Tesi, E. Janse van Rensburg, E. Orlandini and S. G. Whittington, J. Stat. Phys. 82, 155 (1996); K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604 (1996); E. Marinari, G. Parisi and J. J. Ruiz-Lorenzo, "Numerical Simulations of Spin Glass Systems," in Spin Glasses and Random Fields, edited by A. P. Young (World Scientific, Singapore, 1997).
[19] H. G. Ballesteros, L. A. Fernández, V. Martín-Mayor and A. Muñoz Sudupe, Phys. Lett. B 378, 207 (1996).
[20] F. Cooper, B. Freedman and D. Preston, Nucl. Phys. B 210, 210 (1989).
[21] J. T. Chayes, L. Chayes, D. S. Fisher and T. Spencer, Phys. Rev. Lett. 57, 2999 (1986).
[22] D. Stauffer and A. Aharony, Introduction to Percolation Theory (Taylor and Francis, London, 1984).
[23] M. N. Barber, "Finite Size Scaling," in Phase Transitions and Critical Phenomena, Vol. 8, edited by C. Domb and J. L. Lebowitz (Academic Press, 1983).
[24] K. Binder and D. P. Landau, Phys. Rev. B 30, 1477 (1984).
[25] M. S. S. Challa, D. P. Landau and K. Binder, Phys. Rev. B 34, 1841 (1986).
[26] K. Binder, Rep. Prog. Phys. 50, 783 (1987).
[27] C. Chatelain, B. Berche, W. Janke and P. E. Berche, Phys. Rev. E 64, 036120 (2001).
[28] We recall that they obtained their estimates of the critical temperature and exponents by simulating in the paramagnetic phase, without performing finite-size scaling, and by fitting the largest lattice simulated to the standard law, for example for the susceptibility: χ = A(T − T_c)^{−γ}.
[29] A. Falicov, A. N. Berker and S. R. McKay, Phys. Rev. B 51, 8266 (1995).
[30] M. Schwartz and A. Soffer, Phys. Rev. Lett. 55, 2499 (1985).
[31] A. B. Harris, J. Phys. C 7, 1671 (1974).
Title: From Stein Identities to Moderate Deviations
Authors: Louis H. Y. Chen, Xiao Fang, Qi-Man Shao ([email protected])
Affiliations: Institute for Mathematical Sciences, National University of Singapore, Prince George's Park, Singapore 118402, Republic of Singapore; Department of Statistics and Applied Probability, National University of Singapore, 6 Science Drive, Singapore 117546, Republic of Singapore; Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, P.R. China
Abstract: Stein's method is applied to obtain a general Cramér-type moderate deviation result for dependent random variables whose dependence is defined in terms of a Stein identity. A corollary for zero-bias coupling is deduced. The result is also applied to a combinatorial central limit theorem, a general system of binary codes, the anti-voter model on a complete graph, and the Curie-Weiss model. A general moderate deviation result for independent random variables is also proved.
DOI: 10.1214/12-aop746
arXiv: 0911.5373 (https://arxiv.org/pdf/0911.5373v4.pdf)
From Stein Identities to Moderate Deviations

27 May 2011

Louis H. Y. Chen, Xiao Fang and Qi-Man Shao ([email protected])

Institute for Mathematical Sciences, National University of Singapore, Prince George's Park, Singapore 118402, Republic of Singapore; Department of Statistics and Applied Probability, National University of Singapore, 6 Science Drive, Singapore 117546, Republic of Singapore; Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, P.R. China

AMS 2000 subject classifications: Primary 60F10; secondary 60F05.

Keywords and phrases: Stein's method, Stein identity, moderate deviations, Berry-Esseen bounds, zero-bias coupling, exchangeable pairs, dependent random variables, combinatorial central limit theorem, general system of binary codes, anti-voter model, Curie-Weiss model.

Abstract: Stein's method is applied to obtain a general Cramér-type moderate deviation result for dependent random variables whose dependence is defined in terms of a Stein identity. A corollary for zero-bias coupling is deduced. The result is also applied to a combinatorial central limit theorem, a general system of binary codes, the anti-voter model on a complete graph, and the Curie-Weiss model. A general moderate deviation result for independent random variables is also proved.

Introduction

Moderate deviations date back to Cramér (1938), who obtained expansions for tail probabilities for sums of independent random variables about the normal distribution. [...] expressed in terms of a Markov process. This generator approach to Stein's method is due to Barbour (1988 and 1990). By (2.2), bounding Eh(W) − Eh(Z) is equivalent to bounding E{Lf_h(W)}. To bound the latter, one finds another operator L̃ such that E{L̃f(W)} = 0 for a class of functions f including f_h, and writes L̃ = L − R for a suitable operator R. The error term E{Lf_h(W)} is then expressed as ERf_h(W).
The equation

E{L̃f(W)} = 0 (2.3)

for a class of functions f including f_h is called a Stein identity for L(W), the law of W. For normal approximation there are four methods for constructing a Stein identity: the direct method (Stein (1972)), zero-bias coupling (Goldstein and Reinert (1997) and Goldstein (2005)), exchangeable pairs (Stein (1986)), and Stein coupling (Chen and Röllin (2010)). We discuss below the construction of Stein identities using zero-bias coupling and exchangeable pairs. As proved in Goldstein and Reinert (1997), for W with EW = 0 and Var(W) = 1 there always exists W* such that

EWf(W) = Ef′(W*) (2.4)

for all bounded absolutely continuous f with bounded derivative f′. The distribution of W* is called W-zero-biased. If W and W* are defined on the same probability space (zero-bias coupling), we may write ∆ = W* − W. Then by (2.4) we obtain the Stein identity

EWf(W) = Ef′(W + ∆) = E ∫_{−∞}^{∞} f′(W + t) dµ(t|W) , (2.5)

where µ(·|W) is the conditional distribution of ∆ given W. Here L̃f(w) = ∫_{−∞}^{∞} f′(w + t) dµ(t|W = w) − wf(w). The method of exchangeable pairs (Stein (1986)) consists of constructing W′ such that (W, W′) is exchangeable. Then for any anti-symmetric function F(·,·), that is, F(w, w′) = −F(w′, w), we have EF(W, W′) = 0 if the expectation exists. Suppose that there exist a constant λ (0 < λ < 1) and a random variable R such that

E(W − W′ | W) = λ(W − E(R|W)) . (2.6)

Then for all f,

E{(W − W′)(f(W) + f(W′))} = 0

provided the expectation exists. This gives the Stein identity

EWf(W) = −(1/(2λ)) E{(W − W′)(f(W′) − f(W))} + E(Rf(W))
        = E ∫_{−∞}^{∞} f′(W + t) K̂(t) dt + E(Rf(W)) (2.7)

for all absolutely continuous functions f for which the expectations exist, where K̂(t) = (1/(2λ)) ∆ (I(0 ≤ t ≤ ∆) − I(∆ ≤ t < 0)) and ∆ = W′ − W. In this case, L̃f(w) = ∫_{−∞}^{∞} f′(w + t) E(K̂(t)|W = w) dt + E(R|W = w) f(w) − wf(w).

imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011
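A numerical sanity check (our own example, not from the paper) of the zero-bias identity (2.4) in the simplest case: if W is uniform on {−1, +1}, its zero-biased distribution W* is Uniform(−1, 1).

```python
import numpy as np

f  = lambda w: w**3          # a smooth test function
fp = lambda w: 3 * w**2      # its derivative

# Left side: E[W f(W)] with W uniform on {-1, +1}.
lhs = 0.5 * (1 * f(1)) + 0.5 * ((-1) * f(-1))

# Right side: E[f'(W*)] with W* ~ Uniform(-1, 1), via a fine grid average.
grid = np.linspace(-1.0, 1.0, 200001)
rhs = fp(grid).mean()

print(lhs, round(rhs, 4))  # → 1.0 1.0
```

Both sides equal E[W⁴] = E[3W*²] = 1, illustrating how the zero-bias coupling converts an expectation involving Wf(W) into one involving only f′.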
Both Stein identities (2.5) and (2.7) are special cases of

EWf(W) = E ∫_{−∞}^{∞} f′(W + t) dµ(t) + E(Rf(W)) , (2.8)

where µ is a random measure. We will prove a moderate deviation result by assuming that W satisfies the Stein identity (2.8).

A Cramér-type moderate deviation theorem

[...]

P(W > x) / (1 − Φ(x)) = 1 + O_α(1) θ³ (1 + x³)(δ + δ_1 + δ_2) (3.6)

for 0 ≤ x ≤ θ^{−1} min(δ^{−1/3}, δ_1^{−1/3}, δ_2^{−1/3}), where O_α(1) denotes a quantity whose absolute value is bounded by a universal constant which depends on α only under the second alternative of (3.4).

Remark 3.1. Theorem 3.1 is intended for bounded random variables but with very general dependence assumptions. For this reason, the support of the random measure µ is assumed to be within [−δ, δ], where δ is typically of the order of 1/√n due to standardization. In order for the normal approximation to work, E(D|W) should be close to 1 and E(R|W) small. This is reflected in δ_1 and δ_2, which are assumed to be small. For zero-bias coupling, D = 1 and R = 0, so conditions (3.3), (3.4) and (3.5) are satisfied with δ_1 = δ_2 = 0 and θ = 1. Therefore, we have

[...]

for 0 ≤ x ≤ δ^{−1/3}.

Remark 3.2. For an exchangeable pair (W, W′) satisfying (2.6) and |∆| ≤ δ, (3.1) is satisfied with D = ∆²/(2λ).

Remark 3.3. Although one cannot apply Theorem 3.1 directly to unbounded random variables, one can adapt the proof of Theorem 3.1 to give a proof of (1.1) for independent random variables, assuming the existence of the moment generating functions of |X_i|^{1/2}, thereby extending a result of Linnik (1961). This result is given in Proposition 4.6. The proof also suggests the possibility of extending Theorem 3.1 to the case where the support of µ may not be bounded.

Applications

In this section we apply Theorem 3.1 to four cases of dependent random variables, namely, a combinatorial central limit theorem, the anti-voter model on a complete graph, a general system of binary codes, and the Curie-Weiss model.
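The ratio P(W > x)/(1 − Φ(x)) controlled by (3.6) can be watched directly in the simplest bounded example (our own illustration): W a standardized sum of n independent ±1 signs, for which the tail is an exact binomial sum and δ is of order 1/√n.

```python
import math

def tail_ratio(n, x):
    """P(W >= x) / (1 - Phi(x)) for W = S/sqrt(n), S a sum of n Rademacher signs."""
    kmin = math.ceil((x * math.sqrt(n) + n) / 2)          # number of +1's needed
    p = sum(math.comb(n, k) for k in range(kmin, n + 1)) / 2**n
    phi_tail = 0.5 * math.erfc(x / math.sqrt(2))          # 1 - Phi(x)
    return p / phi_tail

r1 = tail_ratio(100, 1.0)
r2 = tail_ratio(2500, 1.0)
print(round(r1, 3), round(r2, 3))  # the ratio drifts toward 1 as n grows
```

The convergence of the ratio to 1, at fixed x, is exactly the kind of statement the moderate deviation bound quantifies, with the allowed range of x growing like a power of n.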
The proofs of the results for the third and the fourth example will be given in the last section. At the end of this section, we will present a moderate deviation result for sums of independent random variables and the proof will also be given in the last section. Combinatorial central limit theorem Let {a ij } n i,j=1 be an array of real numbers satisfying n j=1 a ij = 0 for all i and n i=1 a ij = 0 for all j. Set c 0 = max i,j |a ij | and W = n i=1 a iπ(i) /σ, where π is a uniform random permutation of {1, 2, · · · , n} and σ 2 = E( n i=1 a iπ(i) ) 2 . In Goldstein (2005) W is coupled with the zero-biased W * in such a way that |∆| = |W * − W | ≤ 8c 0 /σ. Therefore, by Corollary 3.1 with δ = 8c 0 /σ, we have P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )c 0 /σ (4.1) for 0 ≤ x ≤ (σ/c 0 ) 1/3 . Anti-voter model on a complete graph Consider the anti-voter model on a complete graph with n vertices, 1, · · · , n, and (n − 1)n/2 edges. Let X i be a random variable taking value 1 or −1 at the vertex i, i = 1, · · · , n. Let X = (X 1 , · · · , X n ), where X i takes values 1 or −1. The anti-voter model in discrete time is described as the following Markov chain: in each step, uniformly pick a vertex I and an edge connecting it to J, and then change . Therefore (see, for example, Rinott and Rotar (1997)) X I to −X J . Let U = n i=1 X i and W = U/σ, where σ 2 = V ar(U ). Let W ′ = (U − X I − X J )/σ, where I is uniformly distributed on {1,E[(W − W ′ ) 2 |X] = 1 σ 2 E[(U ′ − U ) 2 |X] = 4 σ 2 2a + 2b n(n − 1) = 1 σ 2 2U 2 + 2n 2 − 4n n(n − 1) = 2σ 2 W 2 + 2n 2 − 4n σ 2 n(n − 1) , (4.3) E(D|W ) − 1 = n 4 E((W ′ − W ) 2 |W ) − 1 = W 2 2(n − 1) − 2σ 2 (n − 1) − (n 2 − 2n) 2σ 2 (n − 1) . (4.4) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Noting that E(E(D|W ) − 1) = 0 and EW 2 = 1, we have σ 2 = n 2 −2n 2n−3 . Hence, E(D|W ) − 1 = W 2 2(n − 1) − 1 2(n − 1) (4.5) which means that (3.3) is satisfied with δ 1 = O(n −1/2 ). 
Thus, we have the following moderate deviation result. Proposition 4.1. We have P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n for 0 ≤ x ≤ n 1/6 . A general system of binary codes In Chen, Hwang and Zacharovas (2011), a general system of binary codes is where I is an independent Bernoulli(1/2) random variable. Chen, Hwang and Zacharovas (2011) proved the asymptotic normality ofS n . Here, we apply Theorem 3.1 to obtain the following Cramér moderate deviation result. For n ≥ 1, let integer k be such that 2 k−1 −1 < n ≤ 2 k −1, and letW n = (S n −k/2)/ k/4. P (W n ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 ) 1 √ k . (4.8) for 0 ≤ x ≤ k 1/6 . As an example of this system of binary codes, we consider the binary expansion of a random integer X uniformly distributed over {0, 1, . . . , n}. For 2 k−1 − 1 < n ≤ 2 k − 1, write X as X = k i=1 X i 2 k−i and let S n = X 1 + · · · X k . Set W n = (S n − k/2)/ k/4. It is easy to verify that S n satisfies (4.7). A Berry-Esseen bound for W n was first obtained by Diaconis (1977). Curie-Weiss model Consider the Curie-Weiss model for n spins Σ = (σ 1 , σ 2 , · · · , σ n ) ∈ {−1, 1} n . The joint distribution of Σ is given by Z −1 β,h exp( β n 1≤i<j≤n σ i σ j + βh n i=1 σ i ), where Z β,h is the normalizing constant, and β > 0, h ∈ R are called the inverse of temperature and the external field respectively. We are interested in the total magnetization S = n i=1 σ i . We divide the region β > 0, h ∈ R into three parts, and for each part, we list the concentration property and the limiting distribution of S under proper standardization. Consider the solution(s) to the equation m = tanh(β(m + h)). (4.9) Case 1. 0 < β < 1, h ∈ R or β ≥ 1, h = 0. There is a unique solution m 0 to (4.9) such that m 0 h ≥ 0. In this case, S/n is concentrated around m 0 and has a Gaussian limit under proper standardization. Case 2. β > 1, h = 0. There are two non-zero solutions to (4.9), m 1 < 0 < m 2 . 
Given condition on S < 0 (S > 0 respectively), S/n is concentrated around m 1 (m 2 respectively) and has a Gaussian limit under proper standardization. Case 3. β = 1, h = 0. S/n is concentrated around 0 but the limit distribution is not Gaussian. We refer to Ellis (1985) for the concentration of measure results, Ellis andNewman (1978a, 1978b) where σ 2 = n(1 − m 2 0 ) 1 − (1 − m 2 0 )β . (4.11) Then we have P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n, (4.12) for 0 ≤ x ≤ n 1/6 .W 1 = S − nm 1 σ 1 , W 2 = S − nm 2 σ 2 (4.13) where σ 2 1 = n(1 − m 2 1 ) 1 − (1 − m 2 1 )β , σ 2 2 = n(1 − m 2 2 ) 1 − (1 − m 2 2 )β . (4.14) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Then we have P (W 1 ≥ x|S < 0) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n. (4.15) and P (W 2 ≥ x|S > 0) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n (4.16) for 0 ≤ x ≤ n 1/6 . Independent random variables Moderate deviation for independent random variables has been extensively studied in literature (see, for example, Petrov (1975), Chapter 8) based on the conjugated method. Here, we will adapt the proof of Theorem 3.1 to prove the following moderate deviation result, which is a variant of those in the literature (see again Petrov (1975), Chapter 8). Proposition 4.5. Let ξ i , 1 ≤ i ≤ n be independent random variables with Eξ i = 0 and Ee tn|ξi| < ∞ for some t n and for each 1 ≤ i ≤ n. Assume that n i=1 Eξ 2 i = 1. (4.17) Then P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )γe 4x 3 γ (4.18) for 0 ≤ x ≤ t n , where γ = n i=1 E|ξ i | 3 e x|ξi| .Proposition 4.6. Let X i , 1 ≤ i ≤ n be a sequence of independent random variables with EX i = 0. Put S n = n i=1 X i and B 2 n = n i=1 EX 2 i . Assume that there exists positive constants c 1 , c 2 > 0, t 0 such that B 2 n ≥ c 2 1 n, Ee t0 √ |Xi| ≤ c 2 for 1 ≤ i ≤ n. (4.19) Then P (S n /B n ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n (4.20) for 0 ≤ x ≤ t 0 c 1/2 1 n 1/6 /4, where O(1) is an absolute constant depending on c 2 and t 0 . 
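As a numerical sanity check of the independent case (my illustration, not from the paper): for standardized sums of i.i.d. bounded continuous variables, the ratio in (4.20)/(4.21) should be close to 1 for moderate x. The sketch below estimates P(S_n/B_n ≥ x)/(1 − Φ(x)) by Monte Carlo for centered uniform summands; the function name and parameter choices are mine.

```python
import math
import random

def tail_ratio(n=100, x=1.0, reps=20_000, seed=1):
    """Estimate P(S_n / B_n >= x) / (1 - Phi(x)) where S_n is a sum of n
    i.i.d. Uniform(-sqrt(3), sqrt(3)) variables (mean 0, variance 1, so
    B_n = sqrt(n)).  By the moderate deviation result, the ratio should be
    close to 1 for x = o(n**(1/6)).
    """
    random.seed(seed)
    a = math.sqrt(3.0)
    threshold = x * math.sqrt(n)          # B_n = sqrt(n) here
    hits = 0
    for _ in range(reps):
        s = sum(random.uniform(-a, a) for _ in range(n))
        if s >= threshold:
            hits += 1
    normal_tail = 0.5 * math.erfc(x / math.sqrt(2.0))   # 1 - Phi(x)
    return (hits / reps) / normal_tail
```

With these defaults the estimated ratio is near 1, up to Monte Carlo error of a few percent.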
In particular, we have P (S n /B n ≥ x) 1 − Φ(x) → 1 (4.21) uniformly in 0 ≤ x ≤ o(n 1/6 ). Proof of Proposition 4.6. The main idea is first truncating X i and then applying Proposition 4.5 to the truncated sequence. W.l.o.g., assume c 1 = 1. LetX i = X i 1(|X i | ≤ n 2/3 ),S n = n i=1X i . Observe that |P (S n /B n ≥ x) − P (S n /B n ≥ x)| ≤ n i=1 P (|X i | ≥ n 2/3 ) ≤ n i=1 e −t0n 1/3 Ee t0 √ |Xi| ≤ c 2 ne −t0n 1/3 = O(1)(1 − Φ(x))(1 + x 3 )/ √ n for 0 ≤ x ≤ c 1 t 0 n 1/6 /4. Now let ξ i = (X i −EX i )/B n , whereB 2 n = n i=1 Var(X i ). It is easy to see that n i=1 |EX i | ≤ n i=1 E|X i |1(|X i | ≥ n 2/3 ) ≤ n i=1 ne −t0n 1/3 Ee t0 √ |X1| = o(n −2 ) (4.22) and similarly,B n = B n (1 + o(n −2 )). Thus, for 0 ≤ x ≤ t 0 n 1/6 /4 x|ξ i | ≤ 2x c 1 n |X i |1(|X i | ≤ n 2/3 ) + o(1) ≤ 2x c 1 n 1/6 |X i | + o(1) ≤ t 0 |X i |/2 + o(1) and hence γ = O(n −1/2 ). Applying Proposition 4.5 to {ξ i , 1 ≤ i ≤ n} gives Preliminary Lemmas To prove Theorem 3.1, we first need to develop two preliminary lemmas. Our first lemma gives a bound for the moment generating function of W . Lemma 5.1. Let W be a random variable with E|W | ≤ C. Assume that there exist δ > 0, δ 1 ≥ 0, 0 ≤ δ 2 ≤ 1/4 and θ ≥ 1 such that (3.1), (3.3) -(3.5) are satisfied. Then for all 0 < t ≤ 1/(2δ) satisfying tδ 1 + C α tθδ 2 ≤ 1/2 (5.2) where C α = 12 under the first alternative of (3.4) 2(3+α) 1−α under the second alternative of (3.4) (5.3) we have Ee tW ≤ exp(t 2 /2 + c 0 (t)) (5.4) where c 0 (t) = c 1 (C, C α )θ δ 2 t + δ 1 t 2 + (δ + δ 1 + δ 2 )t 3 (5.5) where c 1 (C, C α ) is a constant depending only on C and C α . imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Proof. Fix a > 0, t ∈ (0, 1/(2δ)] and s ∈ (0, t] and let f (w) = e s(w∧a) . Letting h(s) = Ee s(W ∧a) , firstly we prove that h ′ (s) can be bounded by sh(s) and EW 2 f (W ). 
By (3.1), h ′ (s) = E(W ∧ a)e s(W ∧a) ≤ E(W f (W )) = E f ′ (W + t)dμ(t) + E(Rf (W )) = sE e s(W +t) I(W + t ≤ a)dμ(t) + E(e s(W ∧a) E(R|W )) ≤ sE e s[(W +t)∧a] dμ(t) + E(e s(W ∧a) E(R|W )) ≤ sE e s(W ∧a+δ) dμ(t) + E(e s(W ∧a) E(R|W )) = sE e s(W ∧a) dμ(t) + sE e s(W ∧a) (e sδ − 1)dμ(t) + E(e s(W ∧a) E(R|W )) ≤ sEe s(W ∧a) D + sEe s(W ∧a) |e sδ − 1|D + 2δ 2 E((1 + W 2 )e s(W ∧a) ), where we have applied (3.2) and (3.4) to obtain the last inequality. Now, applying the simple inequality |e x − 1| ≤ 2|x| for |x| ≤ 1 , and then (3.3), we find that Collecting terms we obtain E(W f (W )) ≤ sEe s(W ∧a) D + sEe s(W ∧a) 2sδD + 2δ 2 E((1 + W 2 )e s(W ∧a) ) ≤ sEe s(W ∧a) E(D|W ) + 2s 2 θδEe s(W ∧a) + 2δ 2 E((1 + W 2 )e s(W ∧a) ) = sEe s(W ∧a) + sEe s(W ∧a) [E(D|W ) − 1] +2s 2 θδEe s(W ∧a) + 2δ 2 E((1 + W 2 )e s(W ∧a) ) ≤ sEe s(W ∧a) + sδ 1 Ee s(W ∧a) (1 + |W |) + 2s 2 θδEe s(W ∧a) +2δ 2 E((1 + W 2 )e s(W ∧a) ). Note that E|W |e s(W ∧a) = EW e s(W ∧a) + 2EW − e s(W ∧a) ≤ E(W f (W )) + 2E|W | ≤ 2C + E(W f (W )h ′ (s) ≤ E(W f (W )) (5.7) ≤ (s(1 + δ 1 + 2tθδ) + 2δ 2 )h(s) + 2δ 2 EW 2 f (W ) + 2Csδ 1 /(1 − sδ 1 ). Secondly, we show that EW 2 f (W ) can be bounded by a function of h(s) and h ′ (s). Letting g(w) = we s(w∧a) , and then arguing as for (5.7), EW 2 f (W ) = EW g(W ) (5.8) = E (e s[(W +t)∧a] + s(W + t)e s[(W +t)∧a] I(W + t ≤ a))dμ(t) + E(RW f (W )) ≤ E (e s(W ∧a) e sδ + s[(W + t) ∧ a]e s(W ∧a) e sδ )dμ(t) + E(RW f (W )) = e sδ E(f (W ) + sf (W )((W ∧ a) + δ))D + E(RW f (W )) ≤ θe 0.5 (1 + 0.5)Ef (W ) + sθe sδ E(W ∧ a)f (W ) + E(RW f (W )) ≤ 1.5e 0.5 θh(s) + 2sθh ′ (s) + E(RW f (W )). Note that under the first alternative of (3.4), |E(RW f (W ))| ≤ δ 2 Ef (W ) + 2δ 2 EW 2 f (W ),(5.9) and under the second alternative of (3.4), |E(RW f (W ))| ≤ αEf (W ) + αEW 2 f (W ). (5.10) Thus, recalling δ 2 ≤ 1/4 and α < 1, we have EW 2 f (W ) ≤ C α 2 (θh(s) + sθh ′ (s)) (5.11) where C α is defined in (5.3). imsart-generic ver. 
2006/10/13 file: 11-5-16.tex date: May 30, 2011 We are now ready to prove (5.4). Substituting (5.11) into (5.7) yields (1 − sδ 1 )h ′ (s) ≤ (s(1 + δ 1 + 2tθδ) + 2δ 2 )h(s) +δ 2 C α (θh(s) + sθh ′ (s)) + 2Csδ 1 = s(1 + δ 1 + 2tθδ) + 2δ 2 (1 + C α θ) h(s) +C α sθδ 2 h ′ (s) + 2Csδ 1 ≤ s(1 + δ 1 + 2tθδ) + 2δ 2 (1 + C α θ) h(s) +C α tθδ 2 h ′ (s) + 2Csδ 1 . (5.12) Solving for h ′ (s), we obtain h ′ (s) ≤ (sc 1 (t) + c 2 (t))h(s) + 2Csδ 1 1 − c 3 (t) ,(5.13) where c 1 (t) = 1 + δ 1 + 2tθδ 1 − c 3 (t) , c 2 (t) = 2δ 2 (1 + C α θ) 1 − c 3 (t) , c 3 (t) = tδ 1 + C α tθδ 2 . Now taking t to satisfy (5.2) yields c 3 (t) ≤ 1/2, so in particular c i (t) is nonnegative for i = 1, 2, and 1/(1 − c 3 (t)) ≤ 1 + 2c 3 (t). Solving (5.13), we have h(s) ≤ exp( t 2 2 c 1 (t) + tc 2 (t) + 2Cδ 1 t 2 ) (5.14) Note that c 3 (t) ≤ 1/2, δ 2 ≤ 1/4 and θ ≥ 1. Elementary calculations now give t 2 2 (c 1 (t) − 1) + tc 2 (t) + 2Cδ 1 t 2 = t 2 2 δ 1 + 2tθδ + c 3 (t) 1 − c 3 (t) + 2tδ 2 (1 + C α θ) 1 − c 3 (t) + 2Cδ 1 t 2 ≤ t 2 (δ 1 + 2tθδ + tδ 1 + C α tθδ 2 ) + 4tδ 2 (1 + C α ) + 2Cδ 1 t 2 ≤ c 0 (t) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 and hence t 2 c 1 (t)/2 + tc 2 (t) ≤ t 2 /2 + c 0 (t), thus proving (5.4) by letting a → ∞. Lemma 5.2. Suppose that for some nonnegative δ, δ 1 and δ 2 satisfying max(δ, δ 1 , δ 2 ) ≤ 1 and θ ≥ 1 that (5.4) is satisfied, with c 0 (t) as in (5.5), for all t ∈ [0, θ −1 min(δ −1/3 , δ −1/3 1 , δ −1/3 2 )]. (5.15) Then for integers k ≥ 1, t 0 u k e u 2 /2 P (W ≥ u)du ≤ c 2 (C, C α ) t k (5.16) where c 2 (C, C α ) is a constant depending only on C and C α defined in Lemma 5.1. Proof. For t satisfying (5.15) it is easy to see that c 0 (t) ≤ 5c 1 (C, C α ) where c 1 (C, C α ) is as in Lemma 5.1, and (5.2) is satisfied. 
Write t 0 u k e u 2 /2 P (W ≥ u)du = [t] 0 u k e u 2 /2 P (W ≥ u)du + t [t] u k e u 2 /2 P (W ≥ u)du,e (j−1) 2 /2−j(j−1) , we have [t] 0 u k e u 2 /2 P (W ≥ u)du ≤ [t] j=1 j k j j−1 e u 2 /2−ju e ju P (W ≥ u)du ≤ [t] j=1 j k e (j−1) 2 /2−j(j−1) j j−1 e ju P (W ≥ u)du ≤ 2 [t] j=1 j k e −j 2 /2 ∞ −∞ e ju P (W ≥ u)du = 2 [t] j=1 j k e −j 2 /2 (1/j)Ee jW ≤ 2 [t] j=1 j k−1 exp(−j 2 /2 + j 2 /2 + c 0 (j)) ≤ 2e c0(t) [t] j=1 j k−1 ≤ c 2 (C, C α ) t k . (5.17) Similarly, we have t [t] u k e u 2 /2 P (W ≥ u)du ≤ t k t [t] e u 2 /2−tu e tu P (W ≥ u)du ≤ t k e [t] 2 /2−t[t] t [t] e tu P (W ≥ u)du ≤ 2t k e −t 2 /2 ∞ −∞ e tu P (W ≥ u)du ≤ c 2 (C, C α ) t k . This completes the proof. imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Proofs of results In this section, let O α (1) denote universal constants which depend on α only under the second alternative of (3.4). Proof of Theorem 3.1 If θ −1 min(δ −1/3 , δ −1/3 1 , δ −1/3 2 ) ≤ O α (1), then 1/(1 − Φ(x)) ≤ 1/(1 − Φ(O α (1))) for 0 ≤ x ≤ O α (1). Moreover, θ 3 (δ + δ 1 + δ 2 ) ≥ O α (1). Therefore, (3.6) is trivial. Hence, we can assume θ −1 min(δ −1/3 , δ −1/3 1 , δ −1/3 2 ) ≥ O α (1) (6.1) so that δ ≤ 1, δ 2 ≤ 1/4, δ 1 + 2δ 2 < 1, and moreover, δ 1 + δ 2 + α < 1 under the second alternative of (3.4). Our proof is based on Stein's method. Let f = f x be the solution to the Stein equation wf (w) − f ′ (w) = I(w ≥ x) − (1 − Φ(x)) (6.2) It is known that f (w) = √ 2πe w 2 /2 (1 − Φ(w))Φ(x), w ≥ x √ 2πe w 2 /2 (1 − Φ(x))Φ(w), w < x ≤ 4 1 + w 1(w ≥ x) + 3(1 − Φ(x))e w 2 /2 1(0 < w < x) +4(1 − Φ(x)) 1 1 + |w| 1(w ≤ 0) (6.3) by using the following well-known inequality (1 − Φ(w))e w 2 /2 ≤ min 1 2 , 1 w √ 2π , w > 0. It is also known that wf (w) is an increasing function (see Lemma 2.2, Chen and Shao (2005)). By (3.1) we have EW f (W ) − ERf (W ) = E f ′ (W + t)dμ(t) (6.4) and monotonicity of wf (w) and equation (6.2) imply that f ′ (W + t) ≤ (W + δ)f (W + δ) + 1 − Φ(x) − 1(W ≥ x + δ) (6.5) Recall that dμ(t) = D. 
Thus using non-negativity ofμ and combining (6.4), (6.5) we have EW f (W ) − ERf (W ) ≤ E ((W + δ)f (W + δ) − W f (W ))dμ(t) + EW f (W )D +E 1 − Φ(x) − 1(W > x + δ) dμ(t),(6.6) Now, by (3.2), the expression above can be written E((W + δ)f (W + δ) − W f (W ))D +EW f (W )D + E 1 − Φ(x) − 1(W > x + δ) D = 1 − Φ(x) − P (W > x + δ) +E((W + δ)f (W + δ) − W f (W ))D + EW f (W )D +E 1 − Φ(x) − 1(W > x + δ) (D − 1). (6.7) Therefore, we have P (W > x + δ) − (1 − Φ(x)) ≤ E((W + δ)f (W + δ) − W f (W ))D + EW f (W )(D − 1) +E 1 − Φ(x) − 1(W > x + δ) (D − 1) + ERf (W ) ≤ θE((W + δ)f (W + δ) − W f (W )) + δ 1 E(|W |(1 + |W |)f (W )) +δ 1 E 1 − Φ(x) − 1(W > x + δ) (1 + |W |) + δ 2 E(2 + W 2 )f (W ), (6.8) where we have again applied the monotonicity of wf (w) as well as (3.5), (3.3) and (3.4). Hence we have that P (W > x + δ) − (1 − Φ(x)) ≤ θI 1 + δ 1 I 2 + δ 1 I 3 + δ 2 I 4 ,(6.9) where I 1 = E((W + δ)f (W + δ) − W f (W )), I 2 = E(|W |(1 + |W |)f (W ), I 3 = E 1 − Φ(x) − 1(W > x + δ) (1 + |W |) and I 4 = E(2 + W 2 )f (W ). By (6.3) we have Ef (W ) ≤ 4P (W > x) + 4(1 − Φ(x)) +3(1 − Φ(x))Ee W 2 /2 1(0 < W ≤ x). (6.10) Note that by (3.1) with f (w) = w, EW 2 = E dμ(t) + E(RW ) = ED + E(RW ) Therefore, under the first alternative of (3.4), EW 2 ≤ (1 + 2δ 1 + δ 2 ) + (δ 1 + 2δ 2 )EW 2 , and under the second alternative of (3.4), EW 2 ≤ (1 + 2δ 1 + δ 2 ) + (δ 1 + δ 2 + α)EW 2 . This shows EW 2 ≤ O α (1). Hence the hypotheses of Lemma 5.1 is satisfied with C = O α (1), and therefore also the conclusion of Lemma 5.2. In particular, Ee W 2 /2 1(0 < W ≤ x) ≤ P (0 < W ≤ x) + x 0 ye y 2 /2 P (W > y)dy ≤ O α (1)(1 + x). 
(6.11) Similarly, by (6.3) again EW 2 f (W ) ≤ 4E|W |1(W > x) + 4(1 − Φ(x))E|W | +3(1 − Φ(x))EW 2 e W 2 /2 1(0 < W ≤ x) and by Lemma 5.2 EW 2 e W 2 /2 1(0 < W ≤ x) ≤ x 0 (y 3 + 2y)e y 2 /2 P (W > y)dy ≤ O α (1)(1 + x 3 ) (6.12) As to E|W |1(W > x) ≤ P (W > x) + EW 2 I(W > x), it follows from Lemma 5.1 that P (W > x) ≤ e −x 2 Ee xW = O α (1)e −x 2 /2 (6.13) and ∞ x tP (W ≥ t)dt ≤ Ee xW ∞ x te −xt dt = Ee xW x −2 (1 + x 2 )e −x 2 ≤ O α (1)e −x 2 /2 x −2 (1 + x 2 ) ≤ O α (1)e −x 2 /2 (6.14) for x ≥ 1. Thus, we have for x > 1 EW 2 1(W > x) = x 2 P (W > x) + ∞ x 2yP (W > y)dy ≤ O α (1)(1 + x 2 )e −x 2 /2 ≤ O α (1)(1 + x 3 )(1 − Φ(x)). (6.15) Clearly, (6.15) remains valid for 0 ≤ x ≤ 1 by the fact that EW 2 1(W > x) ≤ EW 2 ≤ 2. Combining (6.11) -(6.15), we have I 2 ≤ O α (1)(1 + x 3 )(1 − Φ(x)). (6.16) Similarly, I 4 ≤ O α (1)(1 + x 3 )(1 − Φ(x)) (6.17) and I 3 ≤ (1 − Φ(x))E(2 + W 2 ) + E(2 + W 2 )1(W ≥ δ + x) ≤ O α (1)(1 + x 3 )(1 − Φ(x)). (6.18) Let g(w) = (wf (w)) ′ . Then I 1 = δ 0 Eg(W + t)dt. It is easy to see that (for example, Chen and Shao (2001)) g(w) =    √ 2π(1 + w 2 )e w 2 /2 (1 − Φ(w)) − w Φ(x), w ≥ x √ 2π(1 + w 2 )e w 2 /2 Φ(w) + w (1 − Φ(x)),and 0 ≤ √ 2π(1 + w 2 )e w 2 /2 (1 − Φ(w)) − w ≤ 2 1 + w 3 , (6.20) we have for 0 ≤ t ≤ δ Eg(W + t) (6.21) = Eg(W + t)1{W + t ≥ x} + Eg(W + t)1{W + t ≤ 0} +Eg(W + t)1{0 < W + t < x} ≤ 2 1 + x 3 P (W + t ≥ x) + 2(1 − Φ(x))P (W + t ≤ 0) + √ 2π(1 − Φ(x))E (1 + (W + t) 2 + (W + t))e (W +t) 2 /2 1{0 < W + t < x} = O α (1)(1 + x 3 )(1 − Φ(x)) and hence I 1 = O α (1)δ(1 + x 3 )(1 − Φ(x)). (6.22) Putting (6.9), (6.16), (6.17), (6.18) and (6.22) together gives P (W ≥ x + δ) − (1 − Φ(x)) ≤ O α (1)(1 − Φ(x))θ(1 + x 3 )(δ + δ 1 + δ 2 ) and therefore P (W ≥ x) − (1 − Φ(x)) ≤ O α (1)(1 − Φ(x))θ(1 + x 3 )(δ + δ 1 + δ 2 ). 
(6.23) As to the lower bound, similarly to (6.5) and (6.8), we have f ′ (W + t) ≥ (W − δ)f (W − δ) + 1 − Φ(x) − 1(W ≥ x − δ) and P (W > x − δ) − (1 − Φ(x)) ≥ θE((W − δ)f (W − δ) − W f (W )) − δ 1 E(|W |(1 + |W |)f (W )) −δ 1 E 1 − Φ(x) − 1(W > x − δ) (1 + |W |) − δ 2 E(2 + W 2 )f (W ) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Now follwoing the same proof of (6.23) leads to P (W ≥ x) − (1 − Φ(x)) ≥ −O α (1)(1 − Φ(x))θ(1 + x 3 )(δ + δ 1 + δ 2 ). This completes the proof of Theorem 3.1. For 2 k−1 − 1 < n ≤ 2 k − 1, represent 0, . . . , n by the nodes V k,0 , . . . , V k,n respectively. ThenS(X) is the sum of 1's in the shortest path from V k,X to the root of the tree. The condition C3 implies thatS(X) does not depend on k so that the representation is well defined. Proof of Proposition 4.2 We consider two extreme cases. Define a binary tree T by always assigning 0 to the left sibling and 1 to the right sibling. Then the number of 1's in the binary string of X is that in the binary expansion of X. Denote it by S n (= S(X)). Next, define a binary treeT by assigning V k,0 = 0, V k,1 = 1 for all k and assigning 1 imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 to the left sibling and 0 to the right sibling for all other nodes. Let the number of 1's in the binary string of X onT beS n (=S(X)). Both T andT are infinity binary trees satisfying C1, C2 and C3 and both S n andS n satisfy (4.7). It is easy to see that for all integers n ≥ 0, S n ≤ stSn ≤ stSn (6.25) where ≤ st denotes stochastic ordering. Therefore, it suffices to prove Cramér moderate deviation results for W n andW n where W n = (S n − k 2 )/ k 4 and W n = (S n − k 2 )/ k 4 . We suppress the subscript n in the following and follow Diaconis (1977) in constructing the exchangeable pair (W, W ′ ). 
Let I be a random variable uniformly distributed over the set {1, 2, · · · , k} and independent of X, and let the random variable X ′ be defined by X ′ = k i=1 X ′ i 2 k−i , where X ′ i =      X i if i =E(W − W ′ |W ) = λ(W − (− E(Q|W ) √ k )) (6.27) 1 2λ E((W − W ′ ) 2 |W ) − 1 = − E(Q|W ) k ,(6.28) where λ = 2/k and Q = k i=1 I(X i = 0, X + 2 k−i > n). From Lemma 6.1 and Theorem 3. 1 (with δ = O(k −1/2 ), δ 1 = O(k −1 ), δ 2 = O(k −1/2 )), P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 ) 1 √ k for 0 ≤ x ≤ k 1/6 . Repeat the above argument for −W , we have for 0 ≤ x ≤ k 1/6 . P (W ≤ −x) 1 − Φ(x) = 1 + O(1)(1 + x 3 ) 1 √ k imsart Next, we notice that S andS can be written as, with X ∼ U {0, 1, . . . , n}, S = I(0 ≤ X ≤ 2 k−1 − 1)S + I(2 k−1 ≤ X ≤ n)S andS = I(0 ≤ X ≤ 2 k−1 − 1)S + I(2 k−1 ≤ X ≤ n)S. Therefore, −W − 1 k/4 = − 1 2 + I(0 ≤ X ≤ 2 k−1 − 1)( k−1 2 − S) + I(2 k−1 ≤ X ≤ n)( k−1 2 − S) k/4 and W = − 1 2 + I(0 ≤ X ≤ 2 k−1 − 1)(S − k−1 2 ) + I(2 k−1 ≤ X ≤ n)(S − k−1 2 ) k/4 . Conditioning on 0 ≤ X ≤ 2 k−1 − 1, both the distributions of S(X) andS(X) are Binomial(k − 1, 1/2), which yields L( k − 1 2 − S|0 ≤ X ≤ 2 k−1 − 1) = L(S − k − 1 2 |0 ≤ X ≤ 2 k−1 − 1). On the other hand, when 2 k−1 ≤ X ≤ n,S(X) = k − 1 − S(X). Therefore, W has the same distribution as −W − 1/ k 4 , which implies Cramér moderate deviation results also holds forW . Thus finishes the proof of Proposition 4.2. Lemma 6.1. We have E(Q|S) = O(1)(1 + |W |). Proof. . Write n = i≥1 2 k−pi , with 1 = p 1 < p 2 < · · · ≤ p k1 the positions of the ones in the binary expansion of n, where k 1 ≤ k. Recall that X is uniformly distributed over {0, 1, · · · , n}, imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 and that X = k i=1 X i 2 k−i , with exactly S of the indicator variables X 1 , . . . , X k equal to 1. We say that X falls in category i, i = 1, · · · , k 1 , when X p1 = 1, X p2 = 1, · · · , X pi−1 = 1 and X pi = 0. (6.29) We say that X falls in category k 1 + 1 if X = n. 
This special category is nonempty only when S = k 1 and in this case, Q = k − k 1 , which gives the last term in (6.30). Note that if X is in category i for i ≤ k 1 , then, since X can be no greater than n, the digits of X and n match up to the p th i , except for the digit in place p i , where n has a one, and X a zero. Further, up to this digit, n has p i − i zeros, and so X has a i = p i − i + 1 zeros. Changing any of these a i zeros except the zero in position p i to ones results in a number n or greater, while changing any other zeros, since digit p i of n is one and of X zero, does not. Hence Q is at most a i when X falls in category i. Since X has S ones in its expansion, i − 1 of which are accounted for by (6.29), the remaining S − (i − 1) are uniformly distributed over the k − p i = k − (i − 1) − a i remaining digits {X pi+1 , · · · , X k }. Thus, we have the inequality E(Q|S) ≤ 1 A i≥1 k − (i − 1) − a i S − (i − 1) a i + I(S = k 1 ) A (k − k 1 ) (6.30) where A = i≥1 k − (i − 1) − a i S − (i − 1) + I(S = k 1 ), and 1 = a 1 ≤ a 2 ≤ a 3 ≤ · · · . Note that if k 1 = k, the last term of (6.30) equals 0. When k 1 < k, we have I(S = k 1 ) A (k − k 1 ) ≤ k − 1 k 1 −1 (k − k 1 ) ≤ 1 (6.31) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 so we omit this term in the following argument. We consider two cases. Case 1: S ≥ k/2. As a i ≥ 1 for all i, there are at most k + 1 nonzero terms in the sum (6.30). Divide the summands into two groups, those for which a i ≤ 2 log 2 k and those with a i > 2 log 2 k. The first group can sum to no more than 2 log 2 k. because the sum is like weighted average of a i . For the second group, note that k − (i − 1) − a i S − (i − 1) /A ≤ k − (i − 1) − a i S − (i − 1) / k − 1 S = ai−1 j=1 k − S − j k − j i−2 j=0 S − j k − (a i − 1) − 1 − j ≤ 1 2 ai−1 ≤ 1 k 2 , (6.32) where the second inequality follows from S ≥ k/2, and the last inequality from a i > 2 log 2 k. 
Therefore, the sum of the second group of terms is bounded by 1. Case 2: S < k/2. Divide the sum on the right hand side into two groups according as to whether i ≤ 2 log 2 k or i > 2 log 2 k. Clearly, k − (i − 1) − a i S − (i − 1) /A ≤ i−2 j=0 S − j k − 1 − j ai−1 j=1 k − S − j k − (i − 1) − j ≤ 1/2 i−1 using the assumption S < k/2 and the fact that S ≥ i − 1. The above inequality is true for all i, so the summation for the part where i > 2 log 2 k is bounded by 1. Next we consider i ≤ 2 log 2 k. When S ≥ k log ai ai−1 which is a result of the above assumption on S when i < 2 log 2 k. Now we have + 2 log 2 k, we have a i ( k−S−1 k−(i−1)−1 ) ai−1 ≤ 1. Solving S from the inequality a i ( k−S−1 k−(i−1)−1 ) ai−1 ≤ 1, we see that it is equivalent to the inequality S ≥ (1 − e − log a i a i −1 )k − 1 + e − loga i k − (i − 1) − a i S − (i − 1) /A ≤ a i k − (i − 1) − a i S − (i − 1) / k − 1 S = a i i−2 j=0 S − j k − 1 − j ai−1 j=1 k − S − j k − (i − 1) − j ≤ a i 1 2 i−1 k − S − 1 k − (i − 1) − 1 ai−1 ≤ 1 2 i−1 (6.33) using the fact that a i ( k−S−1 k−(i−1)−1 ) ai−1 ≤ 1. On the other hand, if S < k log ai ai−1 + 2 log 2 k then a i S/(k − 1) = O(1) log 2 k, which implies a i k − (i − 1) − a i S − (i − 1) /A ≤ a i S k − 1 i−2 j=1 S − j k − 1 − j ai−1 j=1 k − S − j k − (i − 1) − j = O(1) log 2 k/2 i−2 . This proves that the right hand side of (6.30) is bounded by O(1) log 2 k. To complete the proof of the lemma, i.e., to prove E(Q|W ) ≤ C(1 + |W |), we only need to show that E(Q|S) ≤ C for some universal constant C when |W | ≤ log 2 k, that is, when k/2 − k/4 log 2 k ≤ S ≤ k/2 + k/4 log 2 k. Following the argument in case 2 above, we only need to consider the summands where i ≤ 2 log 2 k because the other part where i > 2 log 2 k is bounded by 1 as proved in case 2. When a i , k are bigger than some universal constant, k/2 − k/4 log 2 k ≥ log ai ai−1 × k + 2 log 2 k, which implies ( k−S−1 k−(i−1)−1 ) ai−1 × a i ≤ 1 and k−(i−1)−ai S−(i−1) × a i /A ≤ 1/2 i−1 . 
Since both parts for i ≤ 2 log 2 k and i > 2 log 2 k are bounded by some constant, E(Q|S) ≤ C when |W | ≤ log 2 k and hence the lemma is proved. LetW have the conditional distribution of W (W 1 , W 2 respectively) given |W | ≤ c 1 √ n (|W 1 |, |W 2 | ≤ c 1 √ n respectively) where c 1 is to be determined. If we can prove that P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n (6.34) for 0 ≤ x ≤ n 1/6 , then from the fact that (Ellis (1985)) P (|W | > K √ n) ≤ e −nC(K) ,(6.35) and P (|W 1 | > K √ n|S < 0) ≤ e −nC(K) , P (|W 2 | > K √ n|S > 0) ≤ e −nC(K) for any positive number K where C(K) is a positive constant depending only on K, we have, with δ 2 = O(1/ √ n), P (W ≥ x) 1 − Φ(x) ≤ P (W ≥ x) + P (δ 2 |W | > 1/2) 1 − Φ(x) = 1 + O(1)(1 + x 3 )/ √ n for 0 ≤ x ≤ n 1/6 . Similarly, (4.15) and (4.16) are also true. Therefore, we prove Cramér moderate deviation forW (still denoted by W in the following) defined below. Assume the state space of the spins is Σ = (σ 1 , σ 2 , . . . , σ n ) ∈ {−1, 1} n such that n i=1 σ i /n ∈ [a, b] where [a, bZ −1 β,h exp( β 1≤i<j≤n σ i σ j n + βh n i=1 σ i ). imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 Let I be a random variable uniformly distributed over {1, · · · , n} independent of {σ i , 1 ≤ i ≤ n}. Let σ ′ i be a random sample from the conditional distribution of σ i given {σ j , j = i, 1 ≤ j ≤ n}. Define W ′ = W − (σ I − σ ′ I )/σ. Then (W, W ′ ) is an exchangeable pair. Let A(w) = exp(−β(m + h) − βσw/n + β/n) exp(−β(m + h) − βσw/n + β/n) + exp(β(m + h) + βσw/n − β/n) , and B(w) = exp(β(m + h) + βσw/n + β/n) exp(β(m + h) + βσw/n + β/n) + exp(−β(m + h) − βσw/n − β/n) . 
It is easy to see that e −β(m+h)−βσw/n e −β(m+h)−βσw/n + e β(m+h)+βσw/n ≤ A(w) = exp(−β(m + h) − βσw/n) exp(−β(m + h) − βσw/n) + exp(β(m + h) + βσw/n − 2β/n) ≤ e −β(m+h)−βσw/n e −β(m+h)−βσw/n + e β(m+h)+βσw/n e 2β/n and e β(m+h)+βσw/n e β(m+h)+βσw/n + e −β(m+h)−βσw/n ≤ B(w) = exp(β(m + h) + βσw/n) exp(β(m + h) + βσw/n) + exp(−β(m + h) − βσw/n − 2β/n) ≤ e β(m+h)+βσw/n e β(m+h)+βσw/n + e −β(m+h)−βσw/n e 2β/n . Note that E(W − W ′ |Σ) = 1 σ E(σ I − σ I |Σ) = 2 σ E(I(σ I = 1, σ ′ I = −1) − I(σ I = −1, σ ′ I = 1)|Σ) = 2 σ σW + nm + n 2n A(W )I(S − 2 ≥ an) − 2 σ n − σW − nm 2n B(W )I(S + 2 ≤ bn) = (A(W ) + B(W ))( W n + m σ ) + 1 σ (A(W ) − B(W )) − σW + nm + n σn A(W )I(S − 2 < an) + n − σW − nm σn B(W )I(S + 2 > bn) = ( W n + m σ )(1 + O( 1 n )) − 1 σ (tanh(β(m + h) + βσW n ) + O( 1 n )) − S + n σn A(W )I(S − 2 < an) + n − S σn B(W )I(S + 2 > bn) = λ(W − R) where λ = 1 − (1 − m 2 )β n > 0 and R = 1 λ tanh ′′ (β(m + h) + ξ)β 2 σ 2n 2 W 2 + 1 λ S + n σn A(W )I(S − 2 < an) − 1 λ n − S σn B(W )I(S + 2 > bn) + O(1)( W n + 1 σ ) where ξ is between 0 and βσW/n. Similarly, E((W − W ′ ) 2 |Σ) = 4 σ 2 E(I(σ I = 1, σ ′ I = −1) + I(σ I = −1, σ ′ I = 1)|Σ) = 2(1 − m 2 ) σ 2 + O(1) W nσ + O( 1 nσ 2 ) + O( I(S − 2 < an or S + 2 > bn) σ 2 ). Therefore, recall that σ 2 = n 1−m 2 1−(1−m 2 )β , |E(D|W ) − 1| ≤ O( 1 √ n )(1 + |W |). imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 For R, with δ 2 = O(1/ √ n), |E(R|W )| ≤ δ 2 (1 + W 2 ), and if c 1 is chosen such that δ 2 |W | ≤ 1/2, the second alternative of (3.4) is satisfied with α = 1/2. Thus from Theorem 3.1 we have the following moderate deviation result for W P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 ) 1 √ n for 0 ≤ x ≤ n 1/6 . This completes the proof of (4.12) and (4.15). Proof of Proposition 4.5 . Let f = f x be the Stein solution to equation (6.2). Let W (i) = W − ξ i and K i (t) = Eξ i (I{0 ≤ t ≤ ξ i } − I{ξ i ≤ t ≤ 0}). 
It is known that (see, for example, [(2.18) in Chen and Shao (2005)]) It suffices to show that Since (1 − Φ(x)) ≥ 1 2(1+x) e −x 2 /2 for x ≥ 0,EW f (W ) = n i=1 E ∞ −∞ f ′ (W (i) + t)K i (t)dt. Since ∞ −∞ K i (t)dt = Eξ 2 i , we have P (W ≥ x) − (1 − Φ(x)) = EW f (W ) − Ef ′ (W ) = n i=1 E ∞ −∞ (f ′ (W (i) + t) − f ′ (W ))K i (t)dt = n i=1 E ∞ −∞ ((W (i) + t)f (W (i) + t) − W f (W ))K i (t)dt + n i=1 E ∞ −∞ (I{W (i) + t ≥ x} − I{W ≥ x})K i (t)dt := R 1 + R 2 ,(6.|R 1 | ≤ C(1 + x 3 )γ(1 − Φ(x) )e x 3 γ (6.38) and |R 2 | ≤ C(1 + x 2 )γ(1 − Φ(x))e x 3 γ . (6.39) To estimate R 1 , let g(w) = (wf (w)) ′ . It is easy to see that Noting that (6.43) also implies that R 1 = n i=1P (W (i) ≥ x − s) ≤ e −x 2 Ee x(W (i) +s) ≤ exp(−x 2 /2 + x|s| + x 3 γ) ≤ (1 + x)(1 − Φ(x)) exp(x|s| + x 3 γ) we have Eg(W (i) + s) ≤ C(1 + x 3 )(1 − Φ(x))e x 3 γ+x|s| and therefore by (6.40) |R 1 | ≤ n i=1 E ∞ −∞ t ξi g(W (i) + s)ds K i (t)dt ≤ C(1 + x 3 )(1 − Φ(x))e x 3 γ n i=1 E ∞ −∞ (|t|e x|t| + |ξ i |e x|ξi| )K i (t)dt ≤ C(1 + x 3 )γ(1 − Φ(x))e x 3 γ (6.45) This proves (6.38). As to R 2 , we apply an exponential concentration inequality of Shao (2010) (see Theorem 2.7 in [30]): for a ≥ 0 and b ≥ 0 P (x − a ≤ W (i) ≤ x + b) ≤ Ce xγ+xa−x 2 (γ + b + a)E|W (i) |e xW (i) + (Ee 2xW (i) ) 1/2 exp(−γ −2 /32) ≤ Ce xγ+xa−x 2 (γ + b + a)(EW (i) e xW (i) + 1)(Ee 2xW (i) ) 1/2 exp(−γ −2 /32) ≤ Ce xγ+xa−x 2 (γ + b + a)((1 + x)e x 2 /2+x 3 γ + e x 2 +x 3 γ exp(−γ −2 /32) Here we use the fact that EW (i) e xW (i) ≤ xe x 2 /2+x 3 γ , by following the proof of (6.43). Therefore R 2 ≤ n i=1 E ∞ −∞ P (x − ξ i ≤ W (i) ≤ x − t | ξ i )K i (t)dt ≤ C(1 − Φ(x))e x 3 γ n i=1 ∞ −∞ (1 + x 2 )E(γ + |t| + |ξ i |)e x|ξi| + exp(x 2 − γ −2 /32) K i (t)dt ≤ C(1 − Φ(x))e x 3 γ (1 + x 2 )γ + exp(x 2 − γ −2 /32) ≤ C(1 − Φ(x) )e x 3 γ by (6.36). Similarly, the above bound holds for −R 2 . This proves (6.39). . 1 . 1Let W be a random variable of interest. 
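As a small computational aside (not part of the paper): the Stein-equation solution f_x used throughout the proofs, given explicitly in (6.3), can be verified to satisfy wf(w) − f'(w) = 1(w ≥ x) − (1 − Φ(x)) by finite differences away from the discontinuity at w = x. The function names below are mine.

```python
import math

def Phi(w):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-w / math.sqrt(2.0))

def f_x(w, x):
    """Solution (6.3) of the Stein equation (6.2):
    f(w) = sqrt(2*pi) * exp(w**2/2) * (1 - Phi(w)) * Phi(x)  for w >= x,
           sqrt(2*pi) * exp(w**2/2) * (1 - Phi(x)) * Phi(w)  for w <  x.
    """
    c = math.sqrt(2.0 * math.pi) * math.exp(w * w / 2.0)
    if w >= x:
        return c * (1.0 - Phi(w)) * Phi(x)
    return c * (1.0 - Phi(x)) * Phi(w)

def stein_residual(w, x, h=1e-6):
    """Central-difference check of w*f(w) - f'(w) = 1(w >= x) - (1 - Phi(x)),
    valid away from the jump at w = x (choose w with |w - x| > h)."""
    fp = (f_x(w + h, x) - f_x(w - h, x)) / (2.0 * h)
    rhs = (1.0 if w >= x else 0.0) - (1.0 - Phi(x))
    return w * f_x(w, x) - fp - rhs
```

The residual is numerically zero on either side of the jump, confirming the two branches of (6.3).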
Assume that there exist a deterministic positive constant δ, a random positive measureμ with support [−δ, δ] and a random variable R such thatEW f (W ) = E |t|≤δ f ′ (W + t)dμ(t) + E(Rf (W )) (3.1)imsart-generic ver. 2006/10/13 file: 11-5-16.tex date:May 30, 2011 for all absolutely continuous function f for which the expectation of Suppose that there exist constants δ 1 , δ 2 and θ ≥ 1 such that|E(D|W ) − 1| ≤ δ 1 (1 + |W |), (3.3) |E(R|W )| ≤ δ 2 (1 + |W |) or |E(R|W )| ≤ δ 2 (1 + W 2 ) & δ 2 |W | ≤ α < 1 (3.4)and E(D|W ) ≤ θ (3.5) 2, ..., n} independent of other random variables. Consider the case where the distribution of X is the stationary distribution. Then as shown in Rinott and Rotar (1997), (W, W ′ ) is an exchangeable pair and E(W − W ′ |W ) is satisfied with δ = 2/σ and R = 0. To check conditions (3.3) and (3.5), let T denote the number of 1's among X 1 , · · · , X n , a be the number of edges connecting two 1's, b be the number of edges connecting two −1's, and c be the number of edges connecting 1 and −1. Since it is a complete graph, a = T (T −1) 2 , b = (n−T )(n−T −1)2 defined as follows. Suppose each nonnegative integer x is coded by a binary string consisting of 0's and 1's. LetS(x) denote the number of 1's in the resulting coding string of x and letS = (S(0),S(1), . . .). nonnegative integer n, defineS n =S(X), where X is a random integer uniformly distributed over the set {0, 1, . . . , n}. The general system of binary codes introduced by Chen, Hwang and Zacharovas (2011) is one in which S 2m−1 =S m−1 + I in distribution for all m ≥ 1, (4.7) Proposition 4.2 provides a Cramér moderate deviation result for W n . Other examples of this system of binary codes include the binary reflected Gray code and a coding system using translation and complementation. Detailed descriptions of these codes are given in Chen, Hwang and Zacharovas (2011). Proposition 4. 4 . 4In Case 2, define optimal. 
By comparing with (1.1) the results in the four examples discussed above may be optimal. For n ≥ 2 , 2X ∼ U {0, 1, . . . , n}, letS n =S(X) be the number of 1's in the binary string of X generated in any system of binary codes satisfying ((4.7) allowsS(X) to be represented in terms of the labels of the nodes in a binary tree described as follows. LetT be an infinite binary tree. For k ≥ 0, the nodes ofT in the kth generation are denoted by (from left to right) (V k,0 , . . . , V k,2 k −1 ). Each node is labeled by 0 or 1. AssumeT satisfies C1. The root is labeled by 0 C2. The labels of two siblings are different. C3. Infinite binary subtrees ofT with roots {V k,0 : k ≥ 0} are the same asT . ′ = S − X I + X ′ I , W ′ = (S ′ − k/2)/ k/4. As proved inDiaconis (1977), (W, W ′ ) is an exchangeable pair and generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 imsart-generic ver. 2006/10/13 file: 11-5-16.tex date:May 30, 2011 ] is any interval within which there is only one solution m to(4.9).Let S = n i=1 σ i , W = S−nm σ and σ 2 = n 1−m 2 1−(1−m 2 )β . Note that in Case 1 and Case 2, 1 − (1 − m 2 )β > 0, thus σ 2 is well defined. Moreover, [a, b] is chosen such that |W | ≤ c 1 √ n.The joint distribution of the spins is W ) − B(W ) = − tanh(β(m + h) + βσW/n) + O(1) 1 n . = Eg(W (i) + s)I{W (i) + s ≥ x}dt + Eg(W (i) + s)I{W (i) + s ≤ 0} +Eg(W (i) + s)I{0 < W (i) + s < x} ≤ 2 1 + x 3 P (W (i) + s ≥ x) + 2(1 − Φ(x))P (W (i) + s ≤ 0) + √ 2π(1 − Φ(x))E (1 + (W (i) + s) 2 )e (W (i) +s) 2 /2 I{0 < W (i) + s < x} ≤ 2 1 + x 3 P (W (i) ≥ x − s) + 2(1 − Φ(x))P (W (i) + s ≤ 0) y 2 )e y 2 /2 dP (W (i) + s > y) = 2 1 + x 3 P (W (i) ≥ x − s) + 2(1 − Φ(x))P (W (i) + s ≤ 0) + √ 2π(1 − Φ(x))P (W (i) + s > 0) + √ 2π(1 − Φ(x))J(s) ≤ 2 1 + x 3 P (W (i) ≥ x − s) + y 3 )e y 2 /2 P (W (i) + s > y)dy.(6.42)Clearly, for 0 < t ≤ x ≤ Ce x 3 γ+xa−x 2 /2 (γ + b + a)(1 + x) + exp(x 2 /2 − γ −2 /32) ≤ C(1 − Φ(x))e x 3 γ+xa (γ + b + a)(1 + x 2 ) + exp(x 2 − γ −2 /32) ,imsart-generic ver. 
2006/10/13 file: 11-5-16.tex date:May 30, 2011 Corollary 3.1. Let W and W * be defined on the same probability space satisfying (2.4). Assume that EW = 0, EW 2 = 1 and |W − W * | ≤ δ for someconstant δ. Then P (W ≥ x) 1 − Φ(x) = 1 + O(1)(1 + x 3 )δ imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 for the results on limiting distributions. See also Chatterjee and Shao (2011) for a Berry-Esseen type bound when the limiting distribution is not Gaussian. Here we focus on the Gaussian case and prove the following two Cramér moderate deviation results for Case 1 and Case 2. where [t] denotes the integer part of t. For the first integral, noting that sup j−1≤u≤j e u 2 /2−ju =imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 (4.18) becomes trivial if xγ ≥ 1/8. Thus, we can assume xγ ≤ 1/8. (6.36) 37 ) 37imsart-generic ver. 2006/10/13 file: 11-5-16.tex date:May 30, 2011 Ee tξj = 1 + t 2 Eξ 2 j /2 + E|ξ j | 3 e x|ξj| imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011∞ k=3 (tξ j ) k k! ≤ 1 + t 2 Eξ 2 j /2 + t 3 6 E|ξ j | 3 e t|ξj| ≤ exp t 2 Eξ 2 j /2 + x 3 6 and hence Ee t(W (i) +s) ≤ exp(t 2 /2 + x|s| + x 3 6 γ for 0 ≤ t ≤ x. (6.43) By (6.43), following the proof of Lemma 5.2 yields J(s) ≤ C (1 + x 3 )e x 3 γ+x|s| (6.44) imsart-generic ver. 2006/10/13 file: 11-5-16.tex date: May 30, 2011 L.H.Y. Chen, X. Fang and Q.M. Shao/Moderate Deviations imsart-generic ver. 2006/10/13 file: 11-5-16.tex date:May 30, 2011 A normal approximations for the number of local maxima of a random function on a graph. P Baldi, Y Rinott, C Stein, Probability. Honor of Samuel Karlin. T.W. Anderson, K.B. Athreya and D.L.Academic PressIglehart eds.Baldi, P., Rinott, Y. and Stein, C. (1989). A normal approximations for the number of local maxima of a random function on a graph, In Proba- bility, Statistics and Mathematics, Papers in Honor of Samuel Karlin. T.W. Anderson, K.B. Athreya and D.L. Iglehart eds., Academic Press, 59-81. 
Barbour, A. (1988). Stein's method and Poisson process convergence. J. Appl. Prob. 25(A), 175-184.

Barbour, A. (1990). Stein's method for diffusion approximations. Probab. Th. Related Fields 84, 297-322.

Barbour, A.D. and Chen, L.H.Y. (2005). An Introduction to Stein's Method. Lecture Notes Series 4, Institute for Mathematical Sciences, National University of Singapore, Singapore University Press and World Scientific.

Chatterjee, S. (2007). Stein's method for concentration inequalities. Probab. Theory Related Fields 138, 305-321.

Chatterjee, S. (2008). A new method of normal approximation. Ann. Prob. 36, 1584-1610.

Chatterjee, S. and Dey, P. (2010). Applications of Stein's method for concentration inequalities. Ann. Prob. 38, 2443-2485.

Chatterjee, S. and Shao, Q.M. (2011). Non-normal approximation by Stein's method of exchangeable pairs with applications to the Curie-Weiss model. Ann. App. Probab. 21, 464-483.

Chen, L.H.Y. (1975). Poisson approximation for dependent trials. Ann. Prob. 3, 534-545.

Chen, L.H.Y., Hwang, H.K. and Zacharovas, V. (2011). Distribution of the sum-of-digits function of random integers: a survey.

Chen, L.H.Y. and Röllin, A. (2010). Stein couplings for normal approximation.

Chen, L.H.Y. and Shao, Q.M. (2001). A non-uniform Berry-Esseen bound via Stein's method. Probab. Th. Related Fields 120, 236-254.

Chen, L.H.Y. and Shao, Q.M. (2004). Normal approximation under local dependence. Ann. Probab. 32, 1985-2028.

Chen, L.H.Y. and Shao, Q.M. (2005). Stein's method for normal approximation. In An Introduction to Stein's Method (A.D. Barbour and L.H.Y. Chen, eds), Lecture Notes Series 4, Institute for Mathematical Sciences, National University of Singapore, Singapore University Press and World Scientific, 1-59.

Cramér, H. (1938). Sur un nouveau théorème-limite de la théorie des probabilités. Actualités Scientifiques et Industrielles no. 736. Paris: Hermann et Cie.

Dembo, A. and Rinott, Y. (1996). Some examples of normal approximations by Stein's method. In Random Discrete Structures, IMA Vol. Math. Appl. 76, 25-44.

Diaconis, P. (1977). The distribution of leading digits and uniform distribution mod 1. Ann. Probab. 5, 72-81.

Diaconis, P. and Holmes, S. (2004). Stein's Method: Expository Lectures and Applications. IMS Lecture Notes, Vol. 46, Hayward, CA.

Ellis, R.S. (1985). Entropy, Large Deviations, and Statistical Mechanics. Springer-Verlag.

Ellis, R.S. and Newman, C.M. (1978a). The statistics of Curie-Weiss models. J. Statist. Phys. 19, 149-161.

Ellis, R.S. and Newman, C.M. (1978b). Limit theorems for sums of dependent random variables occurring in statistical mechanics. Z. Wahrsch. Verw. Gebiete 44, 117-139.

Goldstein, L. (2005). Berry-Esseen bounds for combinatorial central limit theorems and pattern occurrences, using zero and size biasing. J. Appl. Probab. 42, 661-683.

Goldstein, L. and Reinert, G. (1997). Stein's method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7, 935-952.

Goldstein, L. and Rinott, Y. (1996). Multivariate normal approximations by Stein's method and size bias couplings. J. Appl. Probab. 33, 1-17.

Linnik, Yu.V. (1961). On the probability of large deviations for the sums of independent variables. Proc. Sixth Berkeley Symp. Math. Stat. Prob. 2, 289-306. Univ. California Press, Berkeley, Calif.

Nourdin, I. and Peccati, G. (2009). Stein's method on Wiener chaos. Probab. Th. Related Fields 145, 75-118.

Petrov, V. (1975). Sums of Independent Random Variables. Springer, New York.

Raic, M. (2007). CLT-related large deviation bounds based on Stein's method. Adv. Appl. Probab. 39, 731-752.

Rinott, Y. and Rotar, V. (1997). On coupling constructions and rates in the CLT for dependent summands with applications to the antivoter model and weighted U-statistics. Ann. Appl. Probab. 7, 1080-1105.

Shao, Q.M. (2010). Stein's method, self-normalized limit theory and applications. Proceedings of the International Congress of Mathematicians (IV), 2325-2350, Hyderabad, India.

Stein, C. (1972). A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Proc. Sixth Berkeley Symp. Math. Stat. Prob. 2, 583-602. Univ. California Press, Berkeley, Calif.

Stein, C. (1986). Approximate Computation of Expectations. Lecture Notes 7, Inst. Math. Statist., Hayward, Calif.
Abstract

This article is a short review of the concept of information. We show the strong relation between Information Theory and Physics, beginning with the concept of bit and its representation with classical physical systems, and then going to the concept of quantum bit (the so-called "qubit"), exposing some differences and similarities.

This paper is intended to be read by non-specialists and undergraduate students of Computer Science, Mathematics and Physics, with knowledge of Linear Algebra and Quantum Mechanics.
arXiv:physics/0404133v1 [physics.ed-ph] 28 Apr 2004

Considerations on Classical and Quantum Bits*

F.L. Marquezino and R.R. Mello Júnior
CBPF - Centro Brasileiro de Pesquisas Físicas
CCP - Coordenação de Campos e Partículas
Av. Dr. Xavier Sigaud, 150
22.290-180 Rio de Janeiro (RJ), Brazil
(CNPq Fellows/PIBIC)

April, 2004

* The authors are undergraduate students of Computer Science at the Catholic University of Petrópolis. They are also members of Grupo de Física Teórica José Leite Lopes.

Keywords: Information Theory, Quantum Information, Quantum Computation, Computer Science

Introduction

Physics is an important subject in the study of information processing. It could not be different, since information is always represented by a physical system. When we write, the information is encoded in ink particles on a paper surface. When we think or memorize something, our neurons are storing and processing information. Morse code uses a physical system, such as light or sound waves, to encode and transfer messages. As Rolf Landauer said, "information is physical". At least for the purposes of our study, this statement is very adequate.

Every day, we use classical systems to store or read information. This has been part of human life since the very beginning of history. But what happens if we use quantum systems instead of classical ones? This is an interesting subject at the intersection of Physics, Computer Science and Mathematics.
In this article, we show how information is represented, both in quantum and classical systems. The plan of our work is as follows: in Section 2 we argue about the physical character of information. In Section 3 we show the classical point of view of information, i.e., according to Newtonian Mechanics. In Section 4, the point of view of Quantum Mechanics will be shown. We also suggest some introductory references that explain most of the concepts discussed here [1,2,3]. The main goal of this paper is to review some mathematical and physical aspects of classical information and compare them with their quantum counterpart.

Information is physical

In its very beginning, Computer Science could be considered exclusively a branch of Mathematics. However, for a few decades some scientists have been giving special attention to the correlation between Computer Science and Physics. One of the first physical aspects that we can raise in classical computation is thermodynamics. How much energy is spent when a computer does a certain calculation, and how much heat is dissipated? Is it possible to create a computer that does not spend any energy at all? To answer these questions we will begin by examining Landauer's principle.

According to Landauer's principle, when a computer erases a bit, the amount of energy dissipated is at least k_B T ln 2, where k_B is Boltzmann's constant and T is the temperature of the environment. The entropy of the environment increases by at least k_B ln 2. This means that any irreversible operation performed by a computer dissipates heat and spends energy. For instance, the AND logical operation¹ is irreversible, because given an output we cannot necessarily recover the inputs. If the output is 0, the inputs could have been 00, 01 or 10. This operation erases information from the input, so it dissipates energy, according to Landauer's principle. If one could create a computer using only reversible operations, this computer would not spend any energy.
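To get a sense of scale, Landauer's bound k_B T ln 2 can be evaluated numerically at room temperature. The short sketch below is ours, not part of the original paper; the function name is hypothetical.

```python
import math

def landauer_bound(temperature_kelvin):
    """Minimum energy (in joules) dissipated when erasing one bit,
    according to Landauer's principle: k_B * T * ln 2."""
    k_B = 1.380649e-23  # Boltzmann's constant, J/K
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (about 300 K) the bound is roughly 2.9e-21 J,
# far below what present-day hardware dissipates per logical operation.
print(landauer_bound(300.0))
```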
That would be a great achievement, given the fact that our modern society spends more and more on energy, and computers are responsible for a great part of the problem. Charles Bennett, in 1973, proved that building a reversible computer is possible [5]. The next step would be finding universal reversible gates, i.e., a gate or a small set of gates that allows the construction of circuits to calculate any computable function. E. Fredkin and T. Toffoli proved the existence of such a gate in 1982 [6]. The Toffoli gate is equivalent to the traditional NAND operation (which is universal in classical computation) and works as follows:

Toffoli(A, B, C) = (A, B, C ⊕ A ∧ B),   (1)

where ⊕ is sum modulo 2 and ∧ is the logical AND. In principle we could build a reversible computer by simply replacing NAND gates with Toffoli gates. That is not so simple to implement, though. Besides, one could question whether this gate is actually non-dissipative, since we generate a lot of junk bits that will need to be erased sometime. Bennett solved this problem by observing that we could perform the entire computation, print the answer (which is a reversible operation in Classical Mechanics) and then run the computer backwards, returning to the initial state. So, we do not need to erase the extra bits.

Another interesting subject that leads us to the intersection between Computer Science and Physics is Maxwell's demon. In 1871, J.C. Maxwell proposed a theoretical machine, operated by a little "demon", that could violate the second law of thermodynamics [7]. The machine would have two partitions, separated by a little door controlled by this demon. The modus operandi of this demon would be quite interesting. It would watch the movement of each molecule, opening the door whenever one approaches, allowing fast molecules to pass from the left to the right partition and slow ones to pass from the right to the left partition. By doing that, heat would flow from a cold place to a hot one at no cost.
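Both claims about the gate of eq. (1) — reversibility and NAND equivalence — can be verified by enumerating its eight possible inputs. The few lines of Python below are our own illustration; the function name `toffoli` is not from the paper.

```python
def toffoli(a, b, c):
    """Toffoli gate of eq. (1): (A, B, C) -> (A, B, C XOR (A AND B))."""
    return (a, b, c ^ (a & b))

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Reversibility: the gate is its own inverse, so applying it twice
# restores every one of the 8 possible inputs.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c) for (a, b, c) in inputs)

# NAND equivalence: fixing the third input to 1 makes the third output
# equal to NOT (A AND B).
assert all(toffoli(a, b, 1)[2] == 1 - (a & b) for a in (0, 1) for b in (0, 1))
```

Because the map is a bijection on the 8 input triples, no information is erased, which is exactly why the gate can be non-dissipative in principle.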
The solution for this apparent paradox resides in the fact that the demon must store information about the movement of the particles. Since the demon's memory is finite, it will have to erase information at some moment, dissipating energy and thereby increasing the entropy of the system.

The topics pointed out in this section show how close Computer Science and Physics are. In the next sections we will show how information is represented by Classical Mechanics, and what happens if we use Quantum Mechanics instead.

On classical bits

A classical computer performs logical and arithmetical operations with a certain (finite) alphabet². Each one of the symbols that compose this alphabet must be represented by a specific state of a classical system. Since we are used to performing calculations with decimal numbers, it is very natural to think that the computer's alphabet should be composed of ten different symbols. However, it would be very expensive and complex to build a computer with this characteristic. Instead, computers work with 2-state systems, the so-called bits, and represent binary numbers.

The concept of bit was anticipated by Leo Szilard [9] while analyzing the Maxwell's demon paradox. However, the word bit (binary digit) was first introduced by Tukey. The bit is the fundamental concept in Information Theory, and is the smallest piece of information that can be handled by a classical computer. Every piece of information stored in the computer is either a bit or a sequence of bits. If we join n bits, we can represent 2^n different characters. But how many bits are necessary to represent all the characters in the English alphabet, plus the numbers and some special characters? If we use 8 bits, we can represent 256 characters, which is enough. To these 8 bits we give the name byte³. Another interesting unit is the nibble, which is formed by 4 bits. With one nibble we can represent all the hexadecimal digits (2^4 = 16).
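The counting argument above (n bits encode 2^n distinct symbols) is easy to check directly; this small sketch is ours, with a hypothetical helper name.

```python
def representable_symbols(n_bits):
    """Number of distinct symbols that n classical bits can encode."""
    return 2 ** n_bits

assert representable_symbols(1) == 2    # a single bit: 0 or 1
assert representable_symbols(4) == 16   # a nibble: the 16 hexadecimal digits
assert representable_symbols(8) == 256  # a byte: enough for letters, digits
                                        # and special characters
```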
Since the hexadecimal base is largely used in assembly languages and low-level computing, some computer scientists work with nibbles quite often. The byte is a very small unit, so we normally use some of its multiples. The kilobyte (KB) corresponds to 1024 bytes, i.e., 8192 bits. One could think that 1 KB should be 1000 bytes but, as we are dealing with binary numbers, the power of 2 closest to 1000 is actually 2^10 = 1024. There are also some other useful units: the megabyte (MB), which corresponds to 1024 KB; the gigabyte (GB), equal to 1024 MB; the terabyte (TB), equivalent to 1024 GB; and the petabyte, which corresponds to 1024 TB.

At this point, the idea of Shannon entropy should be introduced [10]. Shannon entropy is an important concept of Information Theory, which quantifies the uncertainty about a physical system. We can also look at Shannon entropy from a different point of view, as a function that measures the amount of information we obtain, on average, when we learn the state of a physical system. We define Shannon entropy as a function of a probability distribution p_1, p_2, ..., p_n:

H(p_1, p_2, ..., p_n) ≡ − Σ_x p_x log p_x,   (2)

where 0 log 0 ≡ 0, in the context of distributions or generalized functions. Note that lim_{x→0} (x log x) = 0. This function will be explained in this paper through an exercise, which can also be found in [11], giving an intuitive justification for the function we defined above. Suppose we want to measure the amount of information associated to an event E, which occurs in a probabilistic experiment. We will use a function I(E), which fits the following requirements:

1. I(E) is a function only of the event E, so we may write I = I(p), where p is the probability of the event E;
2. I is a smooth function of probability;
3. I(pq) = I(p) + I(q) when p, q > 0, i.e., the information obtained when two independent events occur with probabilities p and q is the sum of the information obtained by each event alone.
We want to show that I = k log p, for some constant k. From the third condition of the problem,

I(pq) = I(p) + I(q),   (3)

we can let q = 1, verifying that I(1) = 0. Now, we can differentiate both sides of the above equation with respect to p:

∂I(pq)/∂p = dI(p)/dp + dI(q)/dp,   (4)

(dI(pq)/d(pq)) · (∂(pq)/∂p) = I′(p),   (5)

I′(pq) · q = I′(p).   (6)

When p = 1 we can easily note that

I′(q) · q = I′(1).   (7)

Based on the second condition of the problem, we know that I′(p) is well defined when p = 1, so I′(1) = k, with k constant. Hence

I′(q) = k/q,   (8)

I(q) = ∫ (k/q) dq,   (9)

I(q) = k log q.   (10)

The function I(p) appeared naturally and satisfies the three conditions specified by the problem. However, the function I(p) represents the amount of information gained by one event with probability p. We are interested in a function that gives us the mean information, that is, the entropy:

H = ⟨I⟩ = Σ_x p_x (k log p_x) / Σ_x p_x,   (11)

H = ⟨I⟩ = k Σ_x p_x log p_x,   (12)

where k = −1, and we have Shannon entropy's formula:

H = − Σ_x p_x log p_x.   (13)

If we apply (13) specifically to the case where we have a binary random variable (which is very common in Computer Science), this entropy receives the name binary entropy:

H_bin(p) = −p log p − (1 − p) log(1 − p),   (14)

where p is the probability for the variable to have the value v, and (1 − p) is the probability for the variable to assume the value ¬v.

Information Theory studies the amount of information contained in a certain message, and its transmission through a channel. Shannon's Information Theory was responsible for giving a precise and mathematical definition of information. Written languages can be analyzed with the help of Information Theory [12]. For a given language, we can define the rate of the language as

r = H(M)/N,   (15)

where H(M) is the Shannon entropy of a particular message and N is the length of this message. In English texts, r normally varies from 1.0 to 1.5 bits per letter. Cover found r = 1.3 bits/letter in [13].
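Equations (13) and (14) translate directly into code. The sketch below (ours, not from the paper) uses base-2 logarithms, so the entropy comes out in bits, and handles the 0 log 0 ≡ 0 convention by skipping zero probabilities.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy of eq. (13), in bits; 0*log(0) is taken as 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def binary_entropy(p):
    """Binary entropy of eq. (14), a special case of eq. (13)."""
    return shannon_entropy([p, 1 - p])

# A fair coin carries exactly one bit of uncertainty,
# while a certain event carries none.
assert abs(shannon_entropy([0.5, 0.5]) - 1.0) < 1e-12
assert shannon_entropy([1.0]) == 0.0

# The binary entropy is maximal at p = 1/2.
assert binary_entropy(0.25) < binary_entropy(0.5)
```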
Assuming that, in a certain language composed of L characters, the probability of occurrence of each letter is equal, one can easily find the amount of information contained in each character:

R = log L,   (16)

where R, the maximum entropy of each character in a language, is called the absolute rate. The English alphabet is composed of 26 letters, so its absolute rate is log 26 ≈ 4.7 bits/letter. The absolute rate is normally higher than the rate of the language. Hence, we can define the redundancy of a language as

D = R − r.   (17)

In the English language, if we consider r = 1.3 according to [13], and if we apply eq. (16) to find R, we find out that the redundancy is 3.4 bits/letter.

We cannot forget that we deal with information every day. At this exact moment, you are dealing with the information contained in this paper. So, it is natural to ask how much information our senses can deal with. Studies have shown that vision can receive 2.8 · 10^8 bits per second, while audition can deal with 3 · 10^4 bits per second. Our memory can store and organize information for a long time. The storage capacity of the human brain varies from 10^11 to 10^12 bits. Just as a comparison, we can mention that the knowledge of a foreign language requires about 4 · 10^6 bits [14].

Introducing qubits

The quantum bit is often called a "qubit". The key idea is that a quantum system will be used to store and handle data. When we use a classical system, such as a capacitor or a transistor, the properties of Classical Mechanics are still observed. On the other hand, if we use a quantum system to process information, we can take advantage of quantum-mechanical properties. Quantum Mechanics has a probabilistic character. While a classical system can be in one, and only one, state, a quantum system can be in a state of superposition, as if it were in different states simultaneously, each one associated to a probability.
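The redundancy figure quoted for English can be reproduced from eqs. (16) and (17) in two lines; this small check is ours, not part of the paper.

```python
import math

# Absolute rate of eq. (16) for the 26-letter English alphabet,
# and the redundancy of eq. (17) with Cover's estimate r = 1.3 bits/letter.
R = math.log2(26)   # about 4.7 bits per letter
r = 1.3
D = R - r           # about 3.4 bits per letter

assert abs(R - 4.7) < 0.01
assert abs(D - 3.4) < 0.01
```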
Mathematically, we will express an N-state quantum system as the linear combination

|ψ⟩ = Σ_{i=0}^{N−1} a_i |i⟩,   (18)

where the a_i are complex numbers called amplitudes. We know, from the postulates of Quantum Mechanics, that |a_k|² is the probability of obtaining |k⟩ when measuring the state |ψ⟩. Then,

Σ_{i=0}^{N−1} |a_i|² = 1.   (19)

In Quantum Computation we normally work with 2-state systems (otherwise we would not be referring to qubits, but qutrits, qu-nits or something similar). So, quantum bits can assume any value of the form

|ψ⟩ = α|0⟩ + β|1⟩,   (20)

with α, β complex numbers and |α|² + |β|² = 1. It is important to stress that the amplitudes are not simple probabilities. The state (1/√2)(|0⟩ + |1⟩) is different from (1/√2)(|0⟩ − |1⟩), for instance. In this case we say that the two states differ by a relative phase. However, the states |ψ⟩ and e^{iθ}|ψ⟩ (where θ is a real number) are considered equal, because they differ only by a global phase. The global phase factor has no influence on the measurement of the state.

Superposition is quite interesting because while classical bits can assume only one value, their quantum counterpart can assume a superposition of states. A single qubit can value both 0 and 1 simultaneously.

Figure 1: Bloch sphere.

Similarly, an n-qubit register can value, simultaneously, all the values from 0 to 2^n − 1. Consequently, we can do the same calculation on different values at the same time, simply by performing an operation on a quantum system.

Now, returning to the mathematical study of the quantum system, we can observe that a single qubit can be represented in a two-dimensional complex vector space. Of course, that does not help us much in terms of geometric visualization. However, note that we may rewrite eq. (20) as

|ψ⟩ = e^{iγ} (cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩),   (21)

where θ, ϕ and γ are real numbers. The global factor e^{iγ} can be ignored, since it has no observable effects.
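Dropping the global phase e^{iγ}, the parametrization of eq. (21) can be checked numerically: for any angles (θ, ϕ) the resulting amplitudes satisfy the normalization of eq. (19), and their squared moduli give the measurement probabilities. This sketch is ours; the function name is hypothetical.

```python
import cmath
import math

def bloch_amplitudes(theta, phi):
    """Amplitudes (alpha, beta) of a qubit at the point (theta, phi)
    on the Bloch sphere, with the global phase dropped."""
    alpha = math.cos(theta / 2)
    beta = cmath.exp(1j * phi) * math.sin(theta / 2)
    return alpha, beta

# Whatever the angles, eq. (19) holds: |alpha|^2 + |beta|^2 = 1.
a, b = bloch_amplitudes(1.234, 2.345)
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12

# theta = pi/2, phi = 0 gives the equal superposition (|0> + |1>)/sqrt(2):
# measuring yields 0 or 1 with probability 1/2 each.
a, b = bloch_amplitudes(math.pi / 2, 0.0)
assert abs(abs(a) ** 2 - 0.5) < 1e-12 and abs(abs(b) ** 2 - 0.5) < 1e-12
```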
|ψ⟩ = cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩.   (22)

Now, we can represent a qubit in a three-dimensional real vector space. According to eq. (19), the qubit norm must be equal to 1, so the numbers θ and ϕ define a sphere: the so-called Bloch sphere. As we can see, there are infinitely many points on the Bloch sphere. Nevertheless, it is important to emphasize that all we can learn from a measurement is 0 or 1, but not the values of θ or ϕ. Moreover, after performing a measurement the state will be irreversibly collapsed (projected) to either |0⟩ or |1⟩. Were it otherwise, we could write an entire encyclopedia in one qubit, by taking advantage of the infinitely many solutions of (19).

If we wished to represent a composite physical system (which could be a quantum register, for instance), we would use an operation called the tensor product, represented by the symbol ⊗. The state of a quantum register |φ⟩ composed of the qubits |ψ_i⟩, where i varies from 1 to n, is

|φ⟩ = |ψ_1⟩ ⊗ |ψ_2⟩ ⊗ ... ⊗ |ψ_n⟩.   (23)

We recommend that the reader refer to [11] to get more information on this postulate.

The no-cloning theorem

There is a remarkable difference between classical and quantum states, which is the impossibility of the latter to be perfectly cloned when it is not known a priori. This can be proved by the no-cloning theorem, published by W.K. Wootters and W.H. Zurek in 1982 [15]. Here, we will prove that a generic quantum state cannot be cloned. The authors recommend the reading of the article cited before for a more complete comprehension.

Let us suppose we wish to create a machine that receives two qubits as inputs, called qubit A and qubit B. Qubit A will receive an unknown quantum state, |ψ⟩, and qubit B a pure standard state, |s⟩ (such as a blank sheet of paper in a copy machine). We wish to copy the state |ψ⟩ to qubit B. The initial state of the machine is |ψ⟩ ⊗ |s⟩. If the copy were possible, there would be a unitary operator U such that

U(|ψ⟩ ⊗ |s⟩) = |ψ⟩ ⊗ |ψ⟩.   (24)
However, we wish our machine to be able to copy different states. So, the operator U must also be such that U(|φ⟩ ⊗ |s⟩) = |φ⟩ ⊗ |φ⟩. The inner product between these two equations is

⟨ψ|φ⟩ = (⟨ψ|φ⟩)².   (25)

It is easy to realize that the solutions of this equation are ⟨ψ|φ⟩ = 1 and ⟨ψ|φ⟩ = 0, i.e., when |φ⟩ = |ψ⟩, or when |φ⟩ ⊥ |ψ⟩. The first solution is useless, so we have proved that a perfect cloning machine is only able to clone orthogonal states.

The no-cloning theorem leads us to a very interesting application of Quantum Mechanics: a provably secure protocol for key distribution that can be used together with Vernam's cipher to provide an absolutely reliable cryptography. The reader can refer to [2] for a short introduction to this subject.

Von Neumann entropy

Up to this point, we have been using the vector language to express Quantum Mechanics. From now on, it will be interesting to introduce another formalism: the density operator (also called "density matrix"). This is absolutely equivalent to the language of state vectors, but it will make the calculations much easier in this case. Besides, the density operator is an excellent way to express quantum systems whose state is not completely known. If we have a quantum system with probability p_i to be in the state |ψ_i⟩, then we call {p_i, |ψ_i⟩} an ensemble of pure states. We define the density matrix for this system as

ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|.   (26)

Von Neumann entropy is very similar to Shannon entropy. It measures the uncertainty associated with a quantum state. A quantum state ρ has its Von Neumann entropy given by the formula

S(ρ) = −tr(ρ log ρ).   (27)

Let λ_i be the eigenvalues of ρ. It is not very difficult to realize that the Von Neumann entropy can be rewritten as

S(ρ) = − Σ_x λ_x log λ_x.   (28)

Another important concept is the relative entropy. We can define the relative entropy of ρ to σ as

S(ρ||σ) = tr(ρ log ρ) − tr(ρ log σ),   (29)

where ρ and σ are density operators.
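For a single qubit, eqs. (27)-(28) can be evaluated with nothing more than the trace/determinant formulas for the eigenvalues of a 2×2 Hermitian matrix. The sketch below is ours (the helper names are hypothetical); it uses base-2 logarithms so that the entropy comes out in bits, and it reproduces the two textbook extremes: a pure state has zero entropy, the maximally mixed qubit has one bit.

```python
import math

def eigenvalues_2x2_hermitian(rho):
    """Eigenvalues of a 2x2 Hermitian matrix ((a, c), (conj(c), b)),
    via lambda = (tr +/- sqrt(tr^2 - 4 det)) / 2."""
    (a, c), (cbar, b) = rho
    tr = (a + b).real
    det = (a * b - c * cbar).real
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr + disc) / 2, (tr - disc) / 2

def von_neumann_entropy(rho):
    """Entropy of eqs. (27)-(28), in bits: S = -sum(lambda * log lambda),
    with zero eigenvalues skipped (0*log 0 = 0)."""
    return -sum(l * math.log2(l)
                for l in eigenvalues_2x2_hermitian(rho) if l > 1e-15)

# A pure state has zero entropy ...
pure = ((1.0, 0.0), (0.0, 0.0))    # rho = |0><0|
assert von_neumann_entropy(pure) < 1e-12

# ... while the maximally mixed qubit carries one full bit of uncertainty.
mixed = ((0.5, 0.0), (0.0, 0.5))   # rho = I/2
assert abs(von_neumann_entropy(mixed) - 1.0) < 1e-12
```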
According to Klein's inequality, the quantum relative entropy is never negative:

S(ρ||σ) ≥ 0,   (30)

with equality holding if and only if ρ = σ. The proof of this theorem is not relevant here, but it can be found in [11, page 511].

Further comments on Quantum Information Theory

Quantum Information Theory is concerned with the information exchange between two or more parties, when a quantum mechanical channel is used to achieve this objective. Naturally, the purpose of this paper is not to give a deep comprehension of this subject. Quantum Information Theory, as well as its classical counterpart, is a vast area of knowledge, which would require much more than just a few pages to be fully explained. Instead, we give some basic elements, allowing the reader, independently of his area of knowledge, to have a better comprehension of Quantum Computation and Quantum Information Processing.

Quantum systems have a collection of astonishing properties. Some of them could, at least in principle, be used in Computer Science, allowing the production of new technology. One of these amazing properties we have already mentioned: it is superposition. If in the future mankind learns how to control a large number of qubits in a state of superposition for enough time, we will have the computer of our dreams. It would be a great step for science.

Another important property is entanglement [11,16]. Some states are so strongly connected that one cannot be written disregarding the other. In other words, they cannot be written separately, as a tensor product. This property brings interesting consequences. Imagine that Alice prepares the state below⁴ in her laboratory, in Brazil:

|β_00⟩ = (|0⟩_a |0⟩_b + |1⟩_a |1⟩_b) / √2.   (31)

After that, Alice keeps qubit a and gives qubit b to Bob, who will take it to another laboratory, let us say, in Australia.
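The claim that eq. (31) cannot be written as a tensor product can be checked with a standard criterion not derived in this paper: a two-qubit pure state c_00|00⟩ + c_01|01⟩ + c_10|10⟩ + c_11|11⟩ factors as |ψ_a⟩ ⊗ |ψ_b⟩ exactly when c_00·c_11 = c_01·c_10 (i.e., the 2×2 coefficient matrix has rank one). The sketch and its function name are ours.

```python
import math

def is_product_state(c00, c01, c10, c11, tol=1e-12):
    """A two-qubit pure state sum c_ij |i>|j> is a tensor product of
    one-qubit states exactly when the determinant of its 2x2
    coefficient matrix vanishes."""
    return abs(c00 * c11 - c01 * c10) < tol

s = 1 / math.sqrt(2)

# The Bell state of eq. (31) has determinant 1/2, so it is entangled:
# it cannot be written as |psi_a> (x) |psi_b>.
assert not is_product_state(s, 0.0, 0.0, s)

# By contrast, |0>(|0> + |1>)/sqrt(2) = (|00> + |01>)/sqrt(2)
# does factor, and the test confirms it.
assert is_product_state(s, s, 0.0, 0.0)
```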
Now, we know from the third postulate of Quantum Mechanics that if either of them measures the state, it will collapse either to |0⟩_a |0⟩_b or to |1⟩_a |1⟩_b. So, the state of the qubit in Australia can be modified by a measurement done in Brazil, and vice-versa! Reference [11] is strongly recommended as a starting point for those who want to study this topic more deeply.

Concluding remarks

In this paper, we have shown some of the main aspects of information. Information Theory normally considers that all information must have a physical representation. But Nature is much more than the classical world that we see every day. If we remember that the amazing quantum world can also represent information, we discover astonishing properties, leading us to a new field of study. Here, we briefly introduced this subject to students and researchers from different areas of knowledge. In Computer Science, we normally wish to represent some information, manipulate it in order to perform some calculation and, finally, measure it, obtaining the result. We began by showing how information is represented, in classical systems and in quantum systems. In a forthcoming work [4], we show how information can be manipulated in each case. Both classical and quantum information have similarities and differences, which were quickly exposed in this article. The technological differences are still enormous. While the technology to produce classical computers is highly developed, the experiments involving quantum computers are not so simple and progress slowly. However, as we saw in this article, the properties of quantum information are so interesting that the development of quantum computers in the future could become one of the greatest achievements in our history.

Footnotes: If the reader is not familiar with the concept of a logical gate, we recommend the reading of [4]. The Turing machine was proposed by Alan Turing in 1936 and became very important for the understanding of what computers can do [8].
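The perfect correlation of the Bell pair can be mimicked with a classical sampler of the measurement statistics (a sketch, not a quantum simulation; the outcome probabilities are hard-coded from the third postulate):

```python
import random

def measure_bell_pair(rng):
    """Computational-basis statistics of |b00> = (|00> + |11>)/sqrt(2):
    outcomes (0,0) and (1,1) each with probability 1/2; (0,1) and (1,0) never occur."""
    return (0, 0) if rng.random() < 0.5 else (1, 1)

rng = random.Random(42)
outcomes = [measure_bell_pair(rng) for _ in range(10_000)]
assert all(a == b for a, b in outcomes)   # Alice and Bob always agree
frac_00 = sum(1 for a, _ in outcomes if a == 0) / len(outcomes)
print(frac_00)                            # close to 0.5
```

What the sampler cannot capture, of course, is that no local classical mechanism fixes the outcomes in advance; it only reproduces the observed correlations.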
It is composed of a program, a finite state control, a tape and a read/write tape head. Some authors say that the group of 8 bits is special because of the 80x86 processor: this processor used 8 bits to give memory addresses, i.e., it had 256 different addresses in memory. This is one of the so-called Bell states.

Acknowledgements

The authors thank Prof. J.A. Helayël-Neto (CBPF) and Dr. J.L. Acebal (PUC-Minas) for reading the manuscripts and for providing helpful discussions. We thank the Group of Quantum Computation at LNCC, in particular Drs. R. Portugal and F. Haas, for the courses and stimulating discussions. We also thank the Brazilian institution CNPq and the PIBIC program for the financial support.

References

[1] Lavor, C., Manssur, L.R.U. and Portugal, R., "Grover's Algorithm: Quantum Database Search" (2003). Available at www.arxiv.org/quant-ph/0301079.
[2] Marquezino, F.L., "Estudo Introdutório do Protocolo Quântico BB84 para Troca Segura de Chaves". Centro Brasileiro de Pesquisas Físicas, Monography CBPF-MO-001/03 (2003).
[3] Maser, S., "Fundamentos da Teoria Geral da Comunicação". São Paulo, EPU-EDUSP (1975).
[4] Marquezino, F.L. and Mello Júnior, R.R., "An Introduction to Logical Operations on Classical and Quantum Bits". Work in progress.
[5] Bennett, C.H., "Logical reversibility of computation", IBM J. Res. Dev., 17(6):525-532 (1973).
[6] Fredkin, E. and Toffoli, T., "Conservative logic", Int. J. Theor. Phys., 21(3/4):219-253 (1982).
[7] Maxwell, J.C., "Theory of Heat", Longmans, Green, and Co., London (1871).
[8] Turing, A.M., "On computable numbers, with an application to the Entscheidungsproblem", Proc. Lond. Math. Soc. 2, 42:230 (1936).
[9] Szilard, L., "Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen", Z. Phys., 53:840-856 (1929).
[10] Shannon, C.E., "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379-423 and 623-656, July and October (1948).
[11] Nielsen, M.A. and Chuang, I.L., "Quantum Computation and Quantum Information", Cambridge University Press (2000).
[12] Schneier, B., "Applied Cryptography", 2nd edn, Wiley Computer Publishing, John Wiley & Sons, Inc. (1996).
[13] Cover, T.M. and King, R.C., "A Convergent Gambling Estimate of the Entropy of English", IEEE Transactions on Information Theory, IT-24(4):413-421, July (1978).
[14] Küpfmüller, K., "Nachrichtenverarbeitung im Menschen", University of Darmstadt (1975).
[15] Wooters, W.K. and Zurek, W.H., "A single quantum cannot be cloned", Nature, 299, 802-803 (1982).
[16] Preskill, J., "Lecture Notes for Physics 229: Quantum Information and Computation". California Institute of Technology (1998).
[]
[ "The mass-radius relation for neutron stars in f (R) = R + αR 2 gravity: a comparison between purely metric and torsion formulations", "The mass-radius relation for neutron stars in f (R) = R + αR 2 gravity: a comparison between purely metric and torsion formulations" ]
[ "P Feola \nDIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly\n", "Xisco Jiménez Forteza \nINFN Sez. di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly\n", "S Capozziello \nDipartimento di Fisica\n\"E. Pancini\" Università \"Federico II\" di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly\n\nINFN Sez. di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly\n\nSystems and Radioelectronics (TUSUR)\nLaboratory for Theoretical Cosmology\nTomsk State University of Control\n634050TomskRussia\n", "R Cianci \nDIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly\n", "S Vignolo \nDIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly\n" ]
[ "DIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly", "INFN Sez. di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly", "Dipartimento di Fisica\n\"E. Pancini\" Università \"Federico II\" di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly", "INFN Sez. di Napoli\nCompl. Univ. Monte S. Angelo Ed. G\nVia CinthiaI-80126NapoliItaly", "Systems and Radioelectronics (TUSUR)\nLaboratory for Theoretical Cosmology\nTomsk State University of Control\n634050TomskRussia", "DIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly", "DIME Sez. Metodi e Modelli Matematici\nUniversità di Genova\nVia All' Opera Pia 15a16100GenovaItaly" ]
[]
Within the framework of f (R) = R + αR 2 gravity, we study realistic models of neutron stars, using equations of state compatible with the LIGO constraints. i.e. APR4, MPA1, SLy, and WW1. By numerically solving modified Tolman-Oppenheimer-Volkoff equations, we investigate the Mass-Radius relation in both metric and torsional f (R) = R + αR 2 gravity models. In particular, we observe that torsion effects decrease the compactness and total mass of neutron star with respect to the General Relativity predictions, therefore mimicking the effects of a repulsive massive field. The opposite occurs in the metric theory, where mass and compactness increase with α, thus inducing an excess of mass that overtakes the standard General Relativity limit. We also find that the sign of α must be reversed whether one considers the metric theory (positive) or torsion (negative) to avoid blowing up solutions. This could draw an easy test to either confirm or discard one or the other theory by determining the sign of parameter α.
10.1103/physrevd.101.044037
[ "https://arxiv.org/pdf/1909.08847v2.pdf" ]
202,677,675
1909.08847
72c193d45970b773f1dd09b7be9ad0b717c83ba2
The mass-radius relation for neutron stars in f(R) = R + αR² gravity: a comparison between purely metric and torsion formulations

P. Feola
DIME Sez. Metodi e Modelli Matematici, Università di Genova, Via All'Opera Pia 15a, 16100 Genova, Italy

Xisco Jiménez Forteza
INFN Sez. di Napoli, Compl. Univ. Monte S. Angelo Ed. G, Via Cinthia, I-80126 Napoli, Italy

S. Capozziello
Dipartimento di Fisica "E. Pancini", Università "Federico II" di Napoli, Compl. Univ. Monte S. Angelo Ed. G, Via Cinthia, I-80126 Napoli, Italy
INFN Sez. di Napoli, Compl. Univ. Monte S. Angelo Ed. G, Via Cinthia, I-80126 Napoli, Italy
Laboratory for Theoretical Cosmology, Tomsk State University of Control Systems and Radioelectronics (TUSUR), 634050 Tomsk, Russia

R. Cianci
DIME Sez. Metodi e Modelli Matematici, Università di Genova, Via All'Opera Pia 15a, 16100 Genova, Italy

S. Vignolo
DIME Sez. Metodi e Modelli Matematici, Università di Genova, Via All'Opera Pia 15a, 16100 Genova, Italy

(Dated: September 20, 2019)

PACS numbers: 11.30.-j, 04.50.Kd, 97.60.Jd
Keywords: modified gravity; f(R) gravity with torsion; compact stars; cosmology; stellar structure

Within the framework of f(R) = R + αR² gravity, we study realistic models of neutron stars, using equations of state compatible with the LIGO constraints, i.e. APR4, MPA1, SLy, and WFF1. By numerically solving the modified Tolman-Oppenheimer-Volkoff equations, we investigate the Mass-Radius relation in both metric and torsional f(R) = R + αR² gravity models. In particular, we observe that torsion effects decrease the compactness and total mass of the neutron star with respect to the General Relativity predictions, therefore mimicking the effects of a repulsive massive field. The opposite occurs in the metric theory, where mass and compactness increase with α, thus inducing an excess of mass that overtakes the standard General Relativity limit.
We also find that the sign of α must be reversed depending on whether one considers the metric theory (positive) or the torsional one (negative), in order to avoid blowing-up solutions. This could provide an easy test to either confirm or discard one or the other theory by determining the sign of the parameter α.

I. INTRODUCTION

Compact objects, such as Neutron Stars (NS), are astrophysical objects that can be described by General Relativity (GR). These relativistic stars are natural laboratories for studying the behavior of high-density nuclear matter using an appropriate equation of state (EoS), which relates the pressure and density of degenerate matter. This allows one to obtain the Mass-Radius relation, M − R, and other macroscopic properties, such as the tidal deformability and the stellar moment of inertia [1]. Since the internal structure of a NS cannot be reproduced in the laboratory, because of the extreme conditions under which it operates, only theoretical models can be formulated, and there is a very large number of EoS candidates. The astrophysical measurements of the macroscopic properties of NS are very useful because they allow us to understand which EoS are realistic. In fact, they can provide information on whether the EoS is soft or stiff, and on the pressure at several times the nuclear saturation density [2-5]. Therefore, measuring the mass of a NS could help us to describe matter in extreme gravity regimes. Einstein's theory accurately describes the physical properties that govern the stability of NS: considering degenerate matter, Chandrasekhar fixed a theoretical upper limit of 1.44 M_⊙ below which the stability of a non-rotating degenerate star is preserved [6]. However, as confirmed by several astrophysical observations, there exist binary systems containing NS whose mass values violate this limit, allowing larger masses [7-12].
To study this observational evidence, Extended Theories of Gravity [19] can be used, as already done in some previous works developed in the metric formalism [13-18]. In particular, one can consider f(R) gravity, i.e. a class of theories whose Lagrangian is a generic function f of the Ricci curvature scalar R. The primary objective is to obtain the M − R relation for a NS, which, given an EoS, allows the maximum mass value to be derived. From a cosmological point of view, f(R) theories, besides addressing the inflationary paradigm in a straightforward way [20], could be useful in view of problems like the accelerated expansion of the universe (the dark energy issue), confirmed by several observations [21-26], and the problem of the formation of large-scale structures, related to dark matter. Unlike in the standard Concordance Lambda Cold Dark Matter (ΛCDM) Model [27-29], similar results can be obtained without considering dark components, but extending the gravitational sector at infrared scales [19,30-37]. Specifically, f(R) gravity is acquiring a growing interest because it allows a good description of gravitating structures without non-baryonic dark matter: the extra degrees of freedom of the gravitational field can be treated as effective scalar fields contributing to structure formation and stability [38,39]. In this perspective, it is possible to unify the cosmic acceleration [30,40] and the early-time inflation [20,41], thus leading to a complete picture of the evolution of the Universe [32,42-47] and of the large-scale structures therein [48-51]. However, the dark-side and the f(R) descriptions are, in some sense, equivalent at large scales, so one needs an experimentum crucis capable of discriminating between the two competing pictures. Discovering new particles beyond the Standard Model, or addressing gravitational phenomena that escape the GR description, could be approaches to fix this challenging issue.
Observing exotic stars modeled by some alternative theory of gravity could be a goal in this perspective. In this paper, we derive the M − R diagram for a f(R) = R + αR² Lagrangian, using realistic EoS compatible with the LIGO constraints [52], through two different approaches: the purely metric theory, and a theory with torsion, which allows the spin degrees of freedom to be introduced in GR [53]. In our specific model, the torsion field is due to the non-linearity of f(R). Here the mass-energy is the source of curvature and the spin is the source of torsion. In this way, torsion contributions could provide additional information for compact stars in extreme gravity regimes. The goal of this paper is to obtain realistic M − R relations by numerically solving a modified system of equations, derived from the Tolman-Oppenheimer-Volkoff (TOV) [54] equations, and to compare the results with the LIGO constraints. Specifically, we shall consider quadratic corrections to the Ricci scalar and discuss models with and without torsion, comparing them with GR. The paper is organized as follows. In Section II we derive the TOV equations for f(R) gravity in the metric and torsion formalisms. Section III is devoted to the problems related to the numerical aspects of the TOV equations in f(R) gravity. In Section IV we derive the numerical solutions of the stellar structure equations and compare the results for the M − R relations. Discussion and conclusions are given in Sec. V.

II. TOLMAN-OPPENHEIMER-VOLKOFF EQUATIONS IN f(R) GRAVITY

A. The metric theory

In the metric formulation, the action for f(R) gravity (in units G = c = 1) is given by

A = \frac{1}{16π} ∫ d⁴x √−g [f(R) + L_matter] ,  (1)

where f(R) is a function of the scalar curvature R, g is the determinant of the metric tensor g_{ij} and L_matter is the matter Lagrangian. Varying the action (1) with respect to the metric tensor g_{ij}, one gets the field equations:

f'(R) R_{ij} − \frac{1}{2} f(R) g_{ij} − (∇_i ∇_j − g_{ij} □) f'(R) = 8π Σ_{ij} .  (2)

In eqs.
(2), R_{ij} is the Ricci tensor, f'(R) denotes the derivative of f(R) with respect to the scalar curvature, Σ_{ij} = −\frac{2}{√−g} \frac{δ(√−g L_m)}{δg^{ij}} is the energy-momentum tensor of matter, and □ = \frac{1}{√−g} \frac{∂}{∂x^j}\left(√−g\, g^{ij} \frac{∂}{∂x^i}\right) indicates the covariant d'Alembert operator. Here we adopt the signature (+, −, −, −). In order to describe stellar objects, we assume that the metric is static and spherically symmetric, of the form

ds² = e^{2ψ} dt² − e^{2λ} dr² − r²(dθ² + sin²θ dφ²) ,  (3)

where ψ and λ are functions depending only on the radial coordinate r. We assume that the matter in the interior of the star is described by a perfect fluid, with energy-momentum tensor Σ_{ij} = diag(e^{2ψ}ρ, e^{2λ}p, r²p, r²p sin²θ), where ρ = ρ(r) and p = p(r) are the matter density and pressure, respectively. By a direct calculation, it is possible to show that the field eqs. (2), evaluated in the metric (3), are equivalent to a set of equations consisting of the Tolman-Oppenheimer-Volkoff (TOV) equations for f(R) gravity and a continuity equation given by the contracted Bianchi identity ∇_i Σ^{ij} = 0. Specifically, the TOV equations for f(R) gravity are

dλ/dr = \frac{e^{2λ}[r²(16πρ + f(R)) − f'(R)(r²R + 2)] + 2r² f'''(R) R_r² + 2r f''(R)(r R_{r,r} + 2R_r) + 2f'(R)}{2r[2f'(R) + r R_r f''(R)]} ,  (4)

dψ/dr = \frac{e^{2λ}[r²(16πp − f(R)) + f'(R)(r²R + 2)] − 2(2r f''(R) R_r + f'(R))}{2r[2f'(R) + r R_r f''(R)]} ,  (5)

while the continuity equation is

dp/dr = −(ρ + p) dψ/dr .  (6)

Here R_r and R_{r,r} denote, respectively, the first and second derivatives of R(r) with respect to the radial coordinate r. In order to solve equations (4), (5) and (6) numerically, we can consider the scalar curvature R as an independent dynamical field. In doing this, we need an additional equation, which is directly obtained from the definition of the scalar curvature:

R = 2e^{−2λ}\left[ψ_r² − ψ_r λ_r + ψ_{r,r} + \frac{2ψ_r}{r} − \frac{2λ_r}{r} + \frac{1}{r²} − \frac{e^{2λ}}{r²}\right] ,  (7)

Indeed, inserting the content of eqs.
(4), (5) and (6) into (7), we get the dynamical equation for R:

d²R/dr² = R_r\left(λ_r + \frac{1}{r}\right) + \frac{f'(R)}{f''(R)}\left[\frac{1}{r}\left(3ψ_r − λ_r + \frac{2}{r}\right) − e^{2λ}\left(\frac{R}{2} + \frac{2}{r²}\right)\right] − R_r² \frac{f'''(R)}{f''(R)} .  (8)

Finally, the numerical solution of the resulting dynamical equations relies on the assignment of a suitable EoS, p = p(ρ), relating pressure and density inside the star, as well as of initial data (i.e. values of the fields at the center of the star).

B. The theory with torsion

In f(R) gravity with torsion, the gravitational and dynamical fields are pairs (g, Γ) consisting of a pseudo-Riemannian metric g and a metric-compatible linear connection Γ with non-vanishing torsion. The corresponding field equations are obtained by varying the action functional (1) independently with respect to the metric and the connection. It is worth noticing that now R refers to the scalar curvature associated with the dynamical connection Γ. Moreover, we recall that any metric-compatible linear connection Γ may be decomposed as the sum

Γ^h_{ij} = \tilde{Γ}^h_{ij} − K^h_{ij} ,  (9)

where \tilde{Γ}^h_{ij} is the Levi-Civita connection associated with the given metric g and K^h_{ij} denotes the contorsion tensor, related to the torsion tensor T^h_{ij} = Γ^h_{ij} − Γ^h_{ji} by the relation [55]:

K^h_{ij} = \frac{1}{2}\left(−T^h_{ij} + T_j{}^h{}_i − T_{ij}{}^h\right) .  (10)

The contorsion tensor (10) verifies the antisymmetry property K_j{}^h{}_i = −K^h{}_{ji} and, together with the metric tensor g, identifies the actual degrees of freedom of the theory. Making use of eqs. (9) and (10), we can decompose the Ricci tensor and the scalar curvature of the dynamical connection, respectively, as

R_{ij} = \tilde{R}_{ij} + \tilde{∇}_j K^h_{hi} − \tilde{∇}_h K^h_{ji} + K^p_{ji} K^h_{hp} − K^p_{hi} K^h_{jp}  (11)

and

R = \tilde{R} + \tilde{∇}_j K^{jh}{}_h − \tilde{∇}_h K^{jh}{}_j + K^{jp}{}_j K^h_{hp} − K^{jp}{}_h K^h_{jp} ,  (12)

where \tilde{R}_{ij} and \tilde{R} are the Ricci tensor and the scalar curvature of the Levi-Civita connection induced by the metric g.
In the absence of matter spin density, variations of (1) yield the field equations [56-60]:

f'(R) R_{ij} − \frac{1}{2} f(R) g_{ij} = 8π Σ_{ij} ,  (13)

and

T^h_{ij} = \frac{1}{2f'(R)} \frac{∂f'(R)}{∂x^p} \left(δ^p_j δ^h_i − δ^p_i δ^h_j\right) ,  (14)

where Σ_{ij} denotes again the energy-momentum tensor of matter, and the non-linearity of the gravitational Lagrangian function f(R) becomes a source of torsion. Now, by inserting eqs. (11) and (14) into eqs. (13), it is possible to show that the whole set of field equations, evaluated in the metric (3), is equivalent to the system formed by the following two TOV equations

dλ/dr = \frac{e^{2λ}[r²(16πρ + f(R)) − f'(R)(r²R + 2)] + 2r² f'''(R) R_r² + 2r² f''(R)\left[R_{r,r} + \frac{2R_r}{r} − \frac{3f''(R) R_r²}{4f'(R)}\right] + 2f'(R)}{2r[2f'(R) + r R_r f''(R)]} ,  (15)

dψ/dr = \frac{e^{2λ}[r²(16πp − f(R)) + f'(R)(r²R + 2)] − 2r f''(R) R_r\left[2 + \frac{3f''(R) r R_r}{4f'(R)}\right] − 2f'(R)}{2r[2f'(R) + r R_r f''(R)]} ,  (16)

together with the continuity equation

dp/dr = −(ρ + p) dψ/dr ,  (17)

which also holds in the present case [61,62]. Also in the torsional case, we consider the scalar curvature R as an independent dynamical variable, introducing a consequent additional equation derived from the very definition of R itself. In fact, inserting eqs. (10) and (14) into (12), evaluating everything in the metric (3) and making use of eqs. (15) and (16), we obtain the evolution equation:

d²R/dr² = R_r\left(λ_r + \frac{1}{r}\right) − \frac{2f'(R)}{f''(R)}\left[\frac{1}{r}\left(3ψ_r − λ_r + \frac{2}{r}\right) − e^{2λ}\left(\frac{R}{2} + \frac{2}{r²}\right)\right] − R_r²\left[\frac{f'''(R)}{f''(R)} + \frac{3f''(R)}{2f'(R)}\right] + 3ψ_r R_r + \frac{9R_r}{r} ,  (18)

Again, in order to be solved, the set of dynamical equations (15), (16), (17) and (18) for the unknowns R, λ, ψ, p and ρ must be completed by an EoS and initial data.

C. The f(R) = R + αR² model

We consider here the specific form of f(R):

f(R) = R + αR² ,  (19)

where α is the coupling parameter of the quadratic curvature correction. This model is particularly suitable to account for cosmological inflation, where higher-order curvature terms naturally lead to cosmic accelerated expansion.
The quadratic term emerges in strong gravity regimes, while at Solar System scales and, more generally, in the weak-field regime, the linear term predominates. Since the interior of a NS could present energy conditions in some sense similar to those of the early universe [15], the model (19) is particularly suitable for our considerations. In this model, eqs. (4), (5) and (8) take the explicit form:

dλ/dr = \frac{e^{2λ}[16πr²ρ − 2 − αR(r²R + 4)] + 4α(r²R_{r,r} + 2rR_r + R) + 2}{4r[1 + α(2R + rR_r)]} ,  (20)

dψ/dr = \frac{e^{2λ}[16πr²p + 2 + αR(r²R + 4)] − 4α(2rR_r + R) − 2}{4r[1 + α(2R + rR_r)]} ,  (21)

d²R/dr² = R_r\left(λ_r + \frac{1}{r}\right) + \frac{1 + 2αR}{2α}\left[\frac{1}{r}\left(3ψ_r − λ_r + \frac{2}{r}\right) − e^{2λ}\left(\frac{R}{2} + \frac{2}{r²}\right)\right] ,  (22)

while eqs. (15), (16) and (18) become, respectively:

dλ/dr = \frac{e^{2λ}[16πr²ρ − 2 − αR(r²R + 4)] + 4α\left[r²R_{r,r} + 2rR_r + R − \frac{3αr²R_r²}{2(1 + 2αR)}\right] + 2}{4r[1 + α(2R + rR_r)]} ,  (23)

dψ/dr = \frac{e^{2λ}[16πr²p + 2 + αR(r²R + 4)] − 4α\left[2rR_r + R + \frac{3αr²R_r²}{2(1 + 2αR)}\right] − 2}{4r[1 + α(2R + rR_r)]} ,  (24)

d²R/dr² = R_r\left(λ_r + \frac{1}{r}\right) − \frac{1 + 2αR}{α}\left[\frac{1}{r}\left(3ψ_r − λ_r + \frac{2}{r}\right) − e^{2λ}\left(\frac{R}{2} + \frac{2}{r²}\right)\right] − \frac{3αR_r²}{1 + 2αR} + 3ψ_r R_r + \frac{9R_r}{r} .  (25)

Clearly, the torsion contributions emerge in the second system. In the next Section, we shall discuss numerical solutions for the interior space-time of spherically symmetric NS in both metric and torsional f(R) = R + αR² gravity. Our aim is to compare the solutions of the above two systems of differential equations, in order to point out the torsion contribution with respect to the purely metric one. In view of this, it is worth noticing that, in vacuo, f(R) = R + αR² gravity with torsion amounts to GR plus, possibly, a cosmological term [56,60]. Therefore, under the assumption of spherical symmetry, in the case with torsion, the space-time outside the star has to coincide with the Schwarzschild one.
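As a quick consistency check (not part of the paper), one can verify numerically that the general metric equation (4) reduces to the explicit quadratic-model equation (20) once f(R) = R + αR² is substituted, by evaluating both right-hand sides at random field values:

```python
import math
import random

def dlambda_general(r, lam, R, Rr, Rrr, rho, alpha):
    """RHS of the general metric TOV equation (4), specialized to
    f(R) = R + alpha R^2 (so f' = 1 + 2 alpha R, f'' = 2 alpha, f''' = 0)."""
    f, fp, fpp, fppp = R + alpha * R**2, 1 + 2 * alpha * R, 2 * alpha, 0.0
    num = (math.exp(2 * lam) * (r**2 * (16 * math.pi * rho + f) - fp * (r**2 * R + 2))
           + 2 * r**2 * fppp * Rr**2 + 2 * r * fpp * (r * Rrr + 2 * Rr) + 2 * fp)
    return num / (2 * r * (2 * fp + r * Rr * fpp))

def dlambda_quadratic(r, lam, R, Rr, Rrr, rho, alpha):
    """RHS of the explicit quadratic-model equation (20)."""
    num = (math.exp(2 * lam) * (16 * math.pi * r**2 * rho - 2 - alpha * R * (r**2 * R + 4))
           + 4 * alpha * (r**2 * Rrr + 2 * r * Rr + R) + 2)
    return num / (4 * r * (1 + alpha * (2 * R + r * Rr)))

rng = random.Random(0)
for _ in range(1000):
    args = (rng.uniform(0.5, 5.0), rng.uniform(0.0, 1.0), rng.uniform(-1.0, 1.0),
            rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(0.0, 1.0),
            rng.uniform(0.001, 0.1))
    a, b = dlambda_general(*args), dlambda_quadratic(*args)
    assert abs(a - b) < 1e-8 * max(1.0, abs(a))
print("eq. (4) reduces to eq. (20) for f(R) = R + alpha R^2")
```

The two expressions agree to floating-point precision, since for the quadratic model the f''' term vanishes and the numerators and denominators differ only by algebraic rearrangement.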
In order to compare the two models, it is then reasonable and consistent to assume the external Schwarzschild solution as the space-time outside the star also in the case of the purely metric theory. It is worth noticing that the external Schwarzschild solution is actually a vacuum solution of purely metric f(R) = R + αR² gravity, as demonstrated in [63,64]. Therefore, viable interior solutions have to match the external Schwarzschild solution at the boundary. In this regard, we recall that junction conditions for f(R) gravity have been studied in [65] for the purely metric formulation, and in [66] for the theory with torsion. Referring the reader to [65,66] for more details, we assume the following junction conditions at the stellar radius:

λ ∈ C⁰ , ψ ∈ C¹ , R ∈ C¹  in the purely metric case ,  (26)

λ ∈ C⁰ , ψ ∈ C¹ , dR/dr ∈ C⁰  in the torsional case .  (27)

Outside the star, λ, ψ and R refer to the corresponding Schwarzschild quantities. Eqs. (26) and (27) are the conditions at the stellar radius to be satisfied by the numerical solutions we shall investigate in the next Sections.

III. NUMERICAL ASPECTS OF THE TOV EQUATIONS IN f(R) = R + αR² GRAVITY

The TOV equations presented in Sec. II, together with an EoS, form a closed system of equations that can be solved numerically once a suitable set of initial conditions is provided. The EoS accounts for the behavior of the matter fields in the NS at the nuclear level. However, it also dominates the NS macroscopic properties, such as the total mass M, the radius R_S and the compactness C = M/R_S. The total mass M and the radius R_S may vary significantly depending on the state of matter in the NS interior, with C ≈ [0.02, 0.25] and C = 0.5 being the black-hole solution. On the other hand, the knowledge of the macroscopic properties provides a direct insight into the particle interactions, the energy transport and the state of the matter in the NS core.
Until recently, only vague constraints had been placed on the EoS of NSs from electromagnetic observations [67]. The recent LIGO-Virgo binary neutron star (BNS) observation has significantly clarified the state of the art concerning the EoS physics. The larger accuracy of the gravitational-wave (GW) channel in relation to the electromagnetic (EM) observations allowed stiffer (less compact) solutions to be ruled out, thus significantly reducing the number of astrophysically relevant EoS. In this section, we discuss some aspects of the numerical solution of the TOV equations, in the metric and torsional f(R) formulations described above, for four EoS compatible with the recent LIGO constraints: APR4, MPA1, SLy, WFF1 [68-71], accurately described by the piecewise polytropic fits provided in [72]. Then, to solve the TOV equations numerically, we use a dimensionless version of them by re-scaling our physical variables as

r → r/r_g ,  R → R/r_g^{−2} ,  p → p/P₀ ,  ρ → ρ/ρ₀ ,  (28)

where

r_g = GM_⊙/c² ,  P₀ = M_⊙c²/r_g³ ,  ρ₀ = M_⊙/r_g³ ,  (29)

and M_⊙ is the mass of the Sun, r_g is the gravitational radius (≈ 1.5 km), G is Newton's gravitational constant and c is the speed of light. The two systems of differential equations shown in Subsection II C take the following form:

p' = f₁(ρ, p, ψ', r) ,
λ' = f₂(λ, R, R', R'', ρ, r) ,
ψ' = f₃(λ, R, R', p, r) ,
R'' = f₄(λ, λ', ψ', R, R', ρ, r) ,
p = f₅(ρ) ,  (30)

where the primed variables denote radial derivatives. Therefore, we are left to set up five initial conditions (ICs) for the variables {p(0), λ(0), ψ(0), R(0), R'(0)} to complete the numerical scheme. The ICs are chosen at the center of the star, r = 0, in order to preserve regularity, thus preventing the generation of large gradients that may lead to numerical instabilities. Mathematically, this requires that any expansion around the NS center has zero first derivative.
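For orientation, the scales in eq. (29) can be evaluated numerically; the constants below are assumed rounded SI values, not taken from the paper:

```python
# Assumed rounded SI constants (illustrative, not from the source)
G     = 6.674e-11    # m^3 kg^-1 s^-2
c     = 2.998e8      # m / s
M_sun = 1.989e30     # kg

r_g   = G * M_sun / c**2        # gravitational radius of eq. (29)
P_0   = M_sun * c**2 / r_g**3   # pressure scale
rho_0 = M_sun / r_g**3          # density scale

print(f"r_g = {r_g / 1e3:.2f} km")   # ~1.48 km, the ~1.5 km quoted in the text
```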
In particular, the scalar curvature near the NS center may be expanded as

R(r → 0) ≈ R(0) + R'(0) r + \frac{1}{2} R''(0) r² ,  (31)

where regularity requires R'(0) = 0. Pressure and density at the center, ρ(0) = ρ_c and p(0) = p_c, are given by the EoS, so they depend only on the type of fluid under consideration. For the metric potential λ, it is natural to fix λ(0) = 0, analogously to what happens in Newtonian gravity, where the variables λ(r) and ψ(r) are matched to the mass m(r) of the system by

e^{2λ(r)} = \left(1 − \frac{2m(r)}{r}\right)^{−1} ,  e^{2ψ(r)} = 1 − \frac{2m(r)}{r} .  (32)

Notice that the variable ψ(r) does not enter directly into our system of differential equations, which implies that ψ(0) can be defined up to an arbitrary constant. Therefore we adjust ψ(0) conveniently to match (i) the internal solution with the external Schwarzschild solution at the stellar radius R_S, and (ii) the asymptotic O(r^{−1}) profiles

λ(r → ∞) ≈ M/r ,  ψ(r → ∞) ≈ −M/r ,  ρ(r → ∞) = 0 ,  p(r → ∞) = 0 .  (33)

The star radius is ideally defined where the pressure p(R_S) ≈ 0 although, in practice and for numerical reasons, it is sufficient to set a ground value p(R_S)/p_c ≲ 10^{−10}. The fulfillment of eqs. (33) requires finding an optimal choice for the central Ricci scalar R_c = R(0). In general, this is achieved by shooting the central value R_c within some sufficiently large range [R_c^{min}, R_c^{max}] containing the true value R_c. R_c is then found by applying bisection root-finding methods until eqs. (33) are satisfied up to numerical tolerance. Unfortunately, the existence of such an R_c strongly depends on the particular form of the f(R) model, giving rise to ghosts for ill-defined configurations of the model parameters. This is true for both the metric and the torsional (R + αR²) theories discussed in this work. We then choose the sign of α to be the one that better matches the junction conditions at the surface of the star, (26) and (27) for the metric and torsional theory respectively.
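Eq. (32) is also how one reads the enclosed mass off the metric potential; a minimal sketch (geometrized units, values chosen purely for illustration):

```python
import math

def mass_from_lambda(r, lam):
    """Invert e^{2 lambda} = (1 - 2m/r)^(-1) of eq. (32): m(r) = (r/2)(1 - e^{-2 lambda})."""
    return 0.5 * r * (1.0 - math.exp(-2.0 * lam))

# Round trip on Schwarzschild-like data (geometrized units; illustrative values)
M, R_S = 1.4, 10.0
lam = -0.5 * math.log(1.0 - 2.0 * M / R_S)
print(mass_from_lambda(R_S, lam))   # recovers M = 1.4 (up to rounding)
```

Evaluating this at the stellar radius R_S is exactly how the total mass M is extracted from a numerical solution once λ(r) is known.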
As we show in the following sections, the only choices that do not produce blowing-up solutions are α > 0 for the metric case and α < 0 for the torsional one. Unfortunately, these choices generate some typical tachyonic oscillations, due to a badly behaved f(R), that we could not remove numerically. This effect was also reported in [73]: it shows an oscillatory behavior, in the form of a damped sinusoid outside the star, even in the minimally perturbed scenario with α ≪ 1. These oscillations grow as the value of α increases, and they are also propagated to the metric potentials λ(r) and ψ(r). This introduces some ambiguity in defining the asymptotic conditions (33) at large r, since the oscillations have not totally vanished when the numerical noise begins to dominate the solution (for r ∼ 100). To overcome this issue and to reduce the amplitude of the oscillations, we restrict our analysis to small values of α ∈ [0.001, 0.1]. Since this is anyway consistent with current observational tests, in doing so we are not discarding any relevant astrophysical scenario, and it allows us to set R_c ≈ R_GR. This hypothesis is shown to have a minimal impact on the M − R diagrams, as we discuss throughout the next sections. Moreover, the assumption of a Schwarzschild-type solution outside the star allows us to smooth out these oscillations and to recover a good fulfillment of the junction conditions. According to the above arguments, we justify the choice of α > 0 for the metric theory and α < 0 for the torsional one. Finally, the two systems of ordinary differential equations (ODE) are solved using an 8th-order Runge-Kutta method with adaptive step-size and high-stiffness control, as implemented in the Wolfram Mathematica package [74]. These methods regulate the discretization step-size by estimating the error of the Runge-Kutta method point by point, ensuring the numerical convergence of the solution step by step.
The stiffness control methods use polynomial extrapolation in the short regimes where the gradients become too large. We have found these methods essential to ensure the accuracy of the solutions in the torsional formulation.

IV. NUMERICAL SOLUTIONS

We compute the M − R diagrams for the metric and torsional formulations of f(R) = R + αR² gravity. Due to the numerical limitations found throughout our analysis, we restrict |α| ∈ [0, 0.1], where α is required to be positive for the purely metric theory and negative in the theory with torsion to avoid diverging solutions [73]. These values are in any case consistent with solar system tests of GR [73,75]. Such tests place loose constraints on the form f(R) ≲ 10⁻⁶ rather than on the parameter α, thus being translated as R + |α|R² ≲ 10⁻⁶. Bearing in mind that curvatures themselves are expected to be small, this leaves the parameter α rather unconstrained. Other tests, such as the Eöt-Wash laboratory experiment, set α ≲ 10⁻¹⁰ m². On the contrary, there exist alternative space-based observational constraints, coming from the Gravity Probe B experiment [76] or the observation of the binary pulsar PSR J0737-3039 [77,78], that set α ∈ [5 × 10¹¹, 2.3 × 10¹⁵] m². Therefore, the discrepancies among the several experiments do not set tight bounds on the value of α, and our choice seems compatible with existing data.

A. Purely metric theory

The solutions of the TOV equations for the purely metric f(R) = R + αR² model are illustrated in Figure 1. The pressure at the center of the star, p_c, drops quickly until it eventually reaches zero, thus defining the radius of the star R_S. This radius is used as our reference point to compute the total mass M by means of eq. (32). The numerical system exhibits some dissipative oscillations in the Ricci scalar R and the metric potential λ.
These oscillations naturally arise from the harmonic form of the vacuum equation for the Ricci scalar R(r) [73] for a non-optimal choice of the central value R_c, where the optimal choice is defined here as the one matching the Schwarzschild junction conditions at the stellar radius. Unfortunately, such a choice becomes increasingly difficult as α tends to zero, since the system of equations also becomes stiffer [79]. Generally speaking, this may appear counterintuitive, since α → 0 should exactly recover the GR space-time. However, the α → 0 limit of the Ricci scalar equations (22)-(25) is ill-defined. This is clear if, for instance, one re-expresses (22) as

R″ = − e^{2λ}(8π(ρ − 3p) + R) / (6α) − R′(−λ′ + ψ′ + 2/r).  (34)

Notice that the numerator of the first term is exactly zero in GR and ideally approaches zero faster than linear order in α. However, this is no longer exact when dealing with numerical uncertainties, where the same factor may behave as a ∼ 0/0 ratio for α ≪ 1, thus requiring much more precision in the estimation of the central value R_c. To overcome this issue, we have set R(0) = R_GR = 8π(3p_c − ρ_c), the GR value. Though this may seem an arbitrary choice, we notice that, for α < 1, the solution must be close to that of GR, so the value cannot be far from the GR one. This is self-evident from Fig. 2, where, in the right plot, we illustrate the variations of the pressure p(r) and the Ricci scalar R(r) for different choices of the central value R_c = R_c^{GR}, 0.2 R_c^{GR}, 2 R_c^{GR}. The effect of varying R_c on the radius R_S for such small values of α is about ∼ 2%, considering the maximum and minimum choices of R_c. This variation is then compared with the uncertainty arising from defining the star radius R_S as the place where the pressure drops by a factor 10⁻¹⁰. In the left plot, we show that relaxing this value to ∼ 10⁻⁹ would generate an uncertainty of about 4%, thus larger than the one from varying R_c.

In Fig. 3, we show the behavior of the metric potentials λ(r) and ψ(r) and of the derivatives R′(r) and ψ′(r), paying special attention to (i) the junction conditions at the NS boundary and (ii) their profiles as r → ∞. We show the full numerical solution (blue line), its corresponding Schwarzschild solution (orange line), given by eqs. (32) with M = 1.43 M⊙, and the result of fitting the exterior data to the same Schwarzschild-like ansatz in order to quantify the agreement with the Schwarzschild solution outside the star, which results in a NS with total mass M = 1.40 M⊙. The good agreement between the three lines confirms that the solution is approximated by the Schwarzschild solution right outside the star radius to better than ∼ 2%. This good match extends to the derivatives, thus globally satisfying the necessary junction conditions of eqs. (26) once the oscillations are averaged out. On the other hand, since the oscillations do not appear in ψ(r), we consider this quantity more appropriate for defining the NS mass M. Finally, in Fig. 4, we show the M − R diagrams for the four EoS considered in this work. For each choice of the central density ρ_c, we get a different estimate of the radius R_S and the total mass M. We loop over ρ_c until dM/dR = 0, which defines the unstable branch, i.e. the point at which the NS is expected to collapse to a black hole and which provides the maximum allowed mass M_max for the given EoS.
Note that, for all the EoS considered, the total mass tends to increase with respect to GR, as in [18,79]. This is because gravity becomes stronger, thus allowing more massive systems. Indeed, in the f(R) = R + αR² scenario, Newton's gravitational constant G is replaced by

G → G_eff = G / f′(R) = G / (1 + 2αR).  (35)

The combined conditions α > 0 and R < 0 then imply G_eff > G, thus generating a more attractive gravity.

B. Theory with torsion

We repeat the analysis for the torsional f(R) = R + αR² theory. Although further models have also been considered in the literature, the numerical complexity of the torsional equations makes a full exploration of other kinds of f(R) functions difficult. This issue becomes more relevant when considering the torsional theory with spin [57], where spin gradients add higher-order derivatives to our system of equations and increase the stiffness of the numerical system. We plan to extend our study in the presence of spin matter in a forthcoming paper. In Figure 5, we show the results obtained for the theory with torsion, using the same range for |α| as in the metric case but choosing α < 0. In this scenario, we see that the general trend predicts a decrease of the total mass of the NS, independently of the EoS considered. This could be related to the fact that the stable branch of the solutions, given by the sign of α, is reversed with respect to the purely metric case to avoid ghosts. However, the estimates for the total mass and radius are still compatible with the astrophysical observations [4], thus not allowing us to rule out any of the models studied here. On the other hand, if we further increase |α|, the errors generated by eq. (25) and propagated to the total mass M and the total radius R_S become too large. Therefore, we restrict our analysis to |α| ≤ 0.1. In Fig. 6 we repeat the same Schwarzschild-based tests adopted for the metric formalism, for α = 0.05.
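The sign relation encoded in eq. (35) can be checked with a minimal numerical sketch (the values of α and R below are purely illustrative, not taken from the solutions of this paper):

```python
# Effective gravitational coupling in f(R) = R + alpha*R^2:
# G_eff = G / f'(R) = G / (1 + 2*alpha*R), cf. eq. (35).
def g_eff_ratio(alpha, ricci):
    """Return G_eff / G for a given alpha and Ricci scalar (negative inside the star)."""
    return 1.0 / (1.0 + 2.0 * alpha * ricci)

R_inside = -0.05                      # illustrative negative curvature value
print(g_eff_ratio(+0.1, R_inside))    # metric case (alpha > 0, R < 0): ratio > 1, stronger gravity
print(g_eff_ratio(-0.1, R_inside))    # torsional case (alpha < 0, R < 0): ratio < 1, weaker gravity
```

This is the mechanism behind the opposite trends of the two M − R diagrams: a larger effective coupling supports more massive configurations, a smaller one fewer.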
For α = 0.05, the total mass M = 1.37 M⊙ is slightly diminished with respect to the metric case. Notice that the Schwarzschild solution is again verified at the star radius R_S, where the metric function λ(r) is clearly C⁰ and ψ(r) still preserves the C¹ condition. Outside the star, and once the oscillations have vanished, the metric functions λ and ψ still preserve the 1/r decay. Finally, in Fig. 7 we compare the different predictions obtained in the purely metric and the torsional formulations, for α = 0.1. Note that in the theory with torsion, although the total mass of the NS decreases (while it increases in the metric case), the relative deviations with respect to GR, in absolute value, seem to be larger than in the metric case. This is caused by the effective repulsion generated by the extra torsional terms (see eq. (25)), which induce a partial screening of the gravitational field that prevents reaching NS masses as large as in standard GR. This is explicitly shown in Table I, where we report the variation of the maximum mass M_max, the radius R_max and the compactness C for the purely metric and torsional theories respectively, corresponding to the points in the M − R diagrams where dM/dR = 0. Note that whereas the purely metric formulation tends to more massive and compact NSs, the opposite occurs when considering torsion. Specifically, as the quadratic term in the curvature increases, the effects of torsion counterbalance the increase of total mass. This can be intuitively derived by using the same reasoning as in (35) with α < 0 and R < 0, implying G_eff < G and thus generating a less attractive gravity.

V. DISCUSSION AND CONCLUSIONS

In this paper, we have studied the existence of realistic NSs in the context of the f(R) = R + αR² theory, both in the purely metric and in the torsional formulation. The main results concern the computation of the M − R diagrams resulting from the two different theoretical frameworks considered.
Matter fields have been represented by static and spherically symmetric perfect fluids, whose EoS have been chosen to agree with the recent LIGO-Virgo constraints [52]. The parameter α has been restricted to |α| ≤ 0.1 to avoid unrealistically large oscillations (see e.g. [73]) in our metric potentials, therefore ensuring (i) the fulfillment of the junction conditions and (ii) the accurate recovery of the Schwarzschild solution far from the source. These two requirements single out four of the five initial conditions, p(0), λ(0), ψ(0) and R′(0), while R(0) remains free. R(0) is ideally fixed by choosing this parameter so as to match the junction conditions (26), (27). However, the oscillatory behavior of some solutions for r → ∞ prevents us from finding a unique value of R(0). To overcome this issue, we have set R(0) = R_GR, identical to the GR value. This assumption has been shown to be valid for small α, the estimates of the NS radius being only mildly dependent on the choice of R(0), but this is no longer true for α ≳ 1. In the purely metric theory, the results show a progressive increase of the total mass as |α| increases, for all four EoS considered. This allows for higher masses and more compact NSs than in GR. This absolute increase of the mass and compactness could also be reproduced by assuming softer EoS in GR, consistent with the recent observations [52]. In the case with torsion, the NS mass tends to decrease for all the EoS considered. This could be related to the fact that the stable branch of the solutions is flipped with respect to the purely metric case to ensure the stability of the numerical system. The physical existence of such solutions could help us to decide whether NSs are more or less compact, based on astrophysical observations, choosing the appropriate theory by simply constraining whether α is positive or negative.
In the torsional framework, the differences in the M − R predictions with respect to GR are larger than those obtained in the purely metric case. As a consequence, the allowed intervals of α are poles apart for the two theories. Moreover, the theory with torsion would seem to describe less compact NSs. This would allow one to obtain solutions that could be reproduced, in the GR limit, using EoS with stiff matter. Unfortunately, this is in disagreement with the recent LIGO-Virgo discoveries [52]. However, given the current accuracy of electromagnetic observations, these models cannot yet be ruled out by NS observations, because the differences with respect to GR are still too small. These issues could be addressed by next-generation gravitational wave detectors (3G) [80-82], where the opportunity to test the results presented in this work could become realistic.

FIG. 1: Solutions of the TOV equations for GR (blue) and the purely metric R + αR² theory with α = 0.05 (orange), using the SLy EoS. All the plotted quantities show small deviations with respect to GR. Note the asymptotic decay of the metric potentials λ and ψ as r → ∞. Our choice of α explains the oscillatory behavior, as reported in [73].

FIG. 2: Profiles of the pressure p (left) and the Ricci scalar R (right) corresponding to R_c = R_c^{GR}, 0.2 R_c^{GR}, 2 R_c^{GR} for the f(R) = R + αR² model with α = 0.1. In the zoomed-in plot for the pressure, the grid lines fix two possible values for the radius of the star R_S, depending on the accuracy chosen in defining its position, p(R_S)/p_c ≤ 10⁻⁹ or 10⁻¹⁰, which gives a relative difference of about 4%. Complementarily, on the right-hand-side plot we show the R = 0 point for different choices of the central value R_c. The effect of choosing one or another R_c contributes in total about ∼ 2% between the 0.2 R_c^{GR} and 2 R_c^{GR} choices, this error thus being smaller than our error estimate in defining R_S.

FIG. 3: Results of our analysis with α = 0.05, for λ and ψ (left plots) and the derivatives of R and ψ (right plots): the exact numerical solution (blue line), the Schwarzschild solution (orange line) with mass M = 1.43 M⊙, and a Schwarzschild fit (green line) to the numerical data outside the star, that is, with r > 11.6 km. For small α, and averaging out all the oscillations, all physical quantities reproduce the Schwarzschild solution rather well outside the star, while matching the junction conditions (26); the fitted results give M = 1.40 M⊙, very close to the theoretical one.

FIG. 4: M − R relations obtained within the purely metric formalism with α = {0, 0.001, 0.01, 0.05, 0.1} for the four EoS considered in this work. Note the general increase of the total mass as the quadratic term takes larger values, thus favoring the formation of more massive objects than in standard GR.

FIG. 5: Analogous M − R relations to those of Fig. 4, but obtained within the torsional formalism. The effect of the torsion tends to decrease the total mass of the NS, contrary to what occurs in the purely metric case. This is dominantly caused by the sign flip of the α-dependent part of eq. (25) with respect to eq. (22), which actually acts as a repulsive term.

FIG. 6: Results of the analysis in the torsional case with α = 0.05. We show the metric potentials λ and ψ (left plots) and the derivatives of ψ and R (right plots) for the exact numerical solution (blue line), the Schwarzschild solution (orange line) and a Schwarzschild fit (green line) to the numerical data outside the star, that is, with r > 11.6 km. Notice that once the oscillations are averaged out, all the distributions satisfy (up to numerical accuracy) the junction conditions.

FIG. 7: M − R relations for α = 0.1 in GR (blue), the metric theory (green) and the torsional theory (orange) for the four EoS considered in this work. The torsion contributions tend to decrease the total mass of the system.

Acknowledgements. We want to thank Alvaro de la Cruz-Dombriz, Miguel Bezares Figueroa and Carlos Palenzuela for useful discussions about this paper. SC acknowledges INFN Sez. di Napoli (Iniziative Specifiche QGSKY and MOONLIGHT2) for support. This article is also based upon work from COST action CA15117 (CANTATA), supported by COST (European Cooperation in Science and Technology).
[1] A. W. Steiner, S. Gandolfi, F. J. Fattoyev, and W. G. Newton, Phys. Rev. C 91, 015804 (2015).
[2] J. M. Lattimer and M. Prakash, Phys. Rept. 621, 127 (2016).
[3] K. Hebeler, J. M. Lattimer, C. J. Pethick, and A. Schwenk, Astrophys. J. 773, 11 (2013).
[4] F. Ozel and P. Freire, Ann. Rev. Astron. Astrophys. 54, 401 (2016).
[5] A. W. Steiner, C. O. Heinke, S. Bogdanov, C. Li, W. C. G. Ho, A. Bahramian, and S. Han, Mon. Not. Roy. Astron. Soc. 476, 421 (2018).
[6] S. Chandrasekhar, Astrophys. J. 74, 81 (1931).
[7] O. Barziv et al., Astron. Astrophys. 377, 925 (2001).
[8] M. L. Rawls et al., Astrophys. J. 730, 25 (2011).
[9] F. Mullally, C. Badenes, S. E. Thompson, and R. Lupton, Astrophys. J. 707, L51 (2009).
[10] D. Nice et al., Astrophys. J. 634, 1242 (2005).
[11] P. B. Demorest, T. Pennucci, S. M. Ransom, M. S. E. Roberts, and J. W. T. Hessels, Nature 467, 1081 (2010).
[12] N.-B. Zhang and B.-A. Li, Astrophys. J. 879, 99 (2019).
[13] A. V. Astashenok, S. Capozziello, and S. D. Odintsov, Phys. Lett. B 742, 160 (2015).
[14] A. V. Astashenok, S. Capozziello, and S. D. Odintsov, JCAP 1312, 040 (2013).
[15] A. V. Astashenok, S. Capozziello, and S. D. Odintsov, Phys. Rev. D 89, 103509 (2014).
[16] A. V. Astashenok, S. Capozziello, and S. D. Odintsov, Astrophys. Space Sci. 355, 333 (2015).
[17] A. V. Astashenok, S. Capozziello, and S. D. Odintsov, JCAP 1501, 001 (2015).
[18] S. Capozziello, M. De Laurentis, R. Farinelli, and S. D. Odintsov, Phys. Rev. D 93, 023501 (2016).
[19] S. Capozziello and M. De Laurentis, Phys. Rept. 509, 167 (2011).
[20] A. A. Starobinsky, Phys. Lett. B 91, 99 (1980).
[21] S. Perlmutter et al. [Supernova Cosmology Project], Astrophys. J. 517, 565 (1999).
[22] A. G. Riess et al. [Supernova Search Team], Astron. J. 116, 1009 (1998).
[23] A. G. Riess et al. [Supernova Search Team], Astrophys. J. 607, 665 (2004).
[24] D. N. Spergel et al. [WMAP Collaboration], Astrophys. J. Suppl. 148, 175 (2003).
[25] C. Schimdt et al., Astron. Astrophys. 463, 405 (2007).
[26] P. McDonald et al. [SDSS], Astrophys. J. Suppl. 163, 80 (2006).
[27] N. A. Bahcall et al., Science 284, 1481 (1999).
[28] K. Bamba, S. Capozziello, S. Nojiri, and S. D. Odintsov, Astrophys. Space Sci. 342, 155 (2012).
[29] A. Joyce, B. Jain, J. Khoury, and M. Trodden, Phys. Rept. 568, 98 (2015).
[30] S. Capozziello, Int. J. Mod. Phys. D 11, 483 (2002).
[31] S. Capozziello, S. Carloni, and A. Troisi, Recent Res. Dev. Astron. Astrophys. 1, 625 (2003).
[32] S. Nojiri and S. D. Odintsov, Phys. Rev. D 68, 123512 (2003).
[33] S. M. Carroll, V. Duvvuri, M. Trodden, and M. S. Turner, Phys. Rev. D 70, 043528 (2004).
[34] G. J. Olmo, Int. J. Mod. Phys. D 20, 413 (2011).
[35] S. Nojiri and S. D. Odintsov, Phys. Rept. 505, 59 (2011).
[36] S. Capozziello and V. Faraoni, Beyond Einstein Gravity: A Survey of Gravitational Theories for Cosmology and Astrophysics, Fundamental Theories of Physics 170, Springer (2010), ISBN 978-94-007-0164-9.
[37] A. de la Cruz-Dombriz and D. Saez-Gomez, Entropy 14, 1717 (2012).
[38] S. Capozziello and M. De Laurentis, Annalen Phys. 524, 545 (2012).
[39] J. A. R. Cembranos, Phys. Rev. Lett. 102, 141301 (2009).
[40] A. de la Cruz-Dombriz and A. Dobado, Phys. Rev. D 74, 087501 (2006).
[41] P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 571, A22 (2014).
[42] S. Ferrara, A. Kehagias, and A. Riotto, Fortsch. Phys. 62, 573 (2014).
[43] L. Sebastiani, G. Cognola, R. Myrzakulov, S. D. Odintsov, and S. Zerbini, Phys. Rev. D 89, 023518 (2014).
[44] K. Bamba, R. Myrzakulov, S. D. Odintsov, and L. Sebastiani, Phys. Rev. D 90, 043505 (2014).
[45] K. Bamba, S. Nojiri, S. D. Odintsov, and D. Saez-Gomez, Phys. Rev. D 90, 124061 (2014).
[46] S. Nojiri, S. D. Odintsov, and D. Saez-Gomez, Phys. Lett. B 681, 74 (2009).
[47] E. Elizalde and D. Saez-Gomez, Phys. Rev. D 80, 044030 (2009).
[48] A. de la Cruz-Dombriz, A. Dobado, and A. L. Maroto, Phys. Rev. D 77, 123515 (2008).
[49] A. Abebe, A. de la Cruz-Dombriz, and P. K. S. Dunsby, Phys. Rev. D 88, 044050 (2013).
[50] A. Abebe, M. Abdelwahab, A. de la Cruz-Dombriz, and P. K. S. Dunsby, Class. Quant. Grav. 29, 135011 (2012).
[51] S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, Phys. Rept. 692, 1 (2017).
[52] B. Abbott et al. [Virgo, LIGO Scientific], Phys. Rev. Lett. 119, 161101 (2017).
[53] F. W. Hehl, P. Von Der Heyde, G. D. Kerlick, and J. M. Nester, Rev. Mod. Phys. 48, 393 (1976).
[54] J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
[55] F. W. Hehl and B. K. Datta, J. Math. Phys. 12, 1334 (1971).
[56] S. Capozziello, R. Cianci, C. Stornaiolo, and S. Vignolo, Class. Quantum Grav. 24, 6417 (2007).
[57] S. Capozziello, R. Cianci, C. Stornaiolo, and S. Vignolo, Int. J. Geom. Meth. Mod. Phys. 5, 765 (2008).
[58] S. Capozziello, R. Cianci, C. Stornaiolo, and S. Vignolo, Phys. Scripta 78, 065010 (2008).
[59] S. Capozziello, R. Cianci, M. De Laurentis, and S. Vignolo, Eur. Phys. J. C 70, 341 (2010).
[60] S. Capozziello and S. Vignolo, Ann. Phys. (Berlin) 19, 238 (2010).
[61] S. Capozziello and S. Vignolo, Class. Quantum Grav. 26, 175013 (2009).
[62] S. Capozziello and S. Vignolo, Int. J. Geom. Meth. Mod. Phys. 9, 1250006 (2012).
[63] B. Whitt, Phys. Lett. B 145, 176 (1984).
[64] S. Mignemi and D. L. Wiltshire, Phys. Rev. D 46, 1475 (1992).
[65] N. Deruelle, M. Sasaki, and Y. Sendouda, Prog. Theor. Phys. 119, 237 (2008).
[66] S. Vignolo, R. Cianci, and S. Carloni, Class. Quantum Grav. 35, 095014 (2018).
[67] D. Radice, A. Perego, F. Zappa, and S. Bernuzzi, Astrophys. J. 852, L29 (2018).
[68] M. Alford, M. Braby, M. W. Paris, and S. Reddy, Astrophys. J. 629, 969 (2005).
[69] H. Mueller and B. D. Serot, Nucl. Phys. A 606, 508 (1996).
[70] F. Douchin and P. Haensel, Astron. Astrophys. 380, 151 (2001).
[71] R. B. Wiringa, V. Fiks, and A. Fabrocini, Phys. Rev. C 38, 1010 (1988).
[72] J. S. Read, C. Markakis, M. Shibata, K. Uryu, J. D. E. Creighton, and J. L. Friedman, Phys. Rev. D 79, 124033 (2009).
[73] M. Aparicio Resco, A. de la Cruz Dombriz, F. J. Llanes Estrada, and V. Zapatero Castrillo, Phys. Dark Univ. 13, 147 (2016).
[74] Wolfram Research, Inc., Mathematica, Version 11.3, Champaign, IL (2018).
[75] L. Lombriser et al., Phys. Rev. D 85, 102001 (2012).
[76] C. W. F. Everitt et al., Phys. Rev. Lett. 106, 221101 (2011).
[77] R. P. Breton, V. M. Kaspi, M. Kramer, M. A. McLaughlin, M. Lyutikov, S. M. Ransom, I. H. Stairs, R. D. Ferdman, F. Camilo, and A. Possenti, Science 321, 104 (2008).
[78] J. Näf and P. Jetzer, Phys. Rev. D 81, 104003 (2010).
[79] D. D. Doneva et al., Astrophys. J. 781, L6 (2013).
[80] B. Sathyaprakash et al., Class. Quant. Grav. 29, 124013 (2012).
[81] R. Essick, S. Vitale, and M. Evans, Phys. Rev. D 96, 084004 (2017).
[82] P. Amaro-Seoane et al., ArXiv e-prints 1702.00786 (2017).
Witnessing Bell violations through probabilistic negativity

Benjamin Morris (School of Physics and Astronomy and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom); Lukas J. Fiderer (Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria); Ben Lang (School of Physics and Astronomy and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom); Daniel Goldwater (School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom)

Abstract: Bell's theorem shows that no hidden-variable model can explain the measurement statistics of a quantum system shared between two parties, thus ruling out a classical (local) understanding of nature. In this work we demonstrate that by relaxing the positivity restriction in the hidden-variable probability distribution it is possible to derive quasiprobabilistic Bell inequalities whose sharp upper bound is written in terms of a negativity witness of said distribution. This provides an analytic solution for the amount of negativity necessary to violate the CHSH inequality by an arbitrary amount, therefore revealing the amount of negativity required to emulate the quantum statistics in a Bell test.

DOI: 10.1103/PhysRevA.105.032202 — arXiv: 2105.01685
INTRODUCTION

It has now been 60 years since John Stewart Bell wrote his famous paper on the Einstein-Podolsky-Rosen (EPR) paradox [1], and 50 years since the first experimental Bell test [2]. The majority of physicists are perfectly happy to concede that in the lab we see experimental results consistent with the postulates of quantum mechanics.
However, the implications of these mathematical postulates for the 'reality' of the wavefunction are still very much up for debate [3][4][5][6][7][8]. These Bell experiments remain among the most important demonstrations of the reality of the quantum state and the death of a 'local realism' picture of nature. In such an experiment a physical system is distributed between spatially separated observers, and these observers are allowed to perform measurements on their local system. The emerging statistics prove that physical systems are not bound to behave locally (in accordance with local hidden-variable models). Rather, the statistics are consistent with the postulates governing quantum mechanics. In this work we remove the postulates of quantum mechanics and instead allow a physical system to be distributed according to a quasiprobability (hidden-variable) distribution that is allowed to take negative values. Although we are perfectly content with real negative numbers in physics, negative quasiprobabilities, despite receiving support from individuals such as Dirac [9] and Feynman [10] and having a solid mathematical foundation [11,12], have been a long-debated issue in theoretical physics [13]. See, for example, the extensive discussion surrounding the interpretation of negative values in the Wigner distribution [14,15]. In the majority of considerations, quasiprobability distributions are used to describe states that are not directly observed; that is, all observable measurement statistics must be governed by ordinary probability distributions. As an example, a Wigner function may assign a negative quasiprobability to a particle having a particular position/momentum combination, but any physical measurement, constrained by Heisenberg uncertainty, will have an all-positive outcome distribution.
This feature ensures that no outcome is ever predicted to be seen occurring a negative number of times [10], and similarly protects the quasiprobability physicist from falling victim to 'Dutch book' arguments [16, Ch. 3]. An important motivator for this work is the result of Al-Safi and Short [17] (expanded upon by the authors of [18]), which showed that it is possible to simulate all non-signalling correlations (those which adhere to the principles of special relativity [19,20]) if one allows negative values in a probability distribution. However, physical reality does not explore this full set of correlations, but rather is restricted to those achievable by quantum correlations. Therefore the question that we pose in this work is: "What are the restrictions on the negativity in a hidden-variable probability distribution such that it can emulate the statistics seen in a physical Bell experiment?" In order to answer this question we construct CHSH inequalities for two parties [21] whose degree of violation is witnessed by the amount of negativity present in the hidden-variable probability distribution. Our witness yields a value of 0 for a quasiprobability distribution which is entirely positive, such as that which would describe an ordinary classical system. We start by describing the setup necessary for the construction of these nonlocal experiments, introducing the probability distributions admitted by classical, quantum, and non-signalling theories, and giving the famous Bell scores that these distributions respectively allow one to reach in such nonlocal experiments. We then give the definition of a quasiprobability distribution and motivate negativity witnesses as a quantitative method of detecting negativity in said distributions.
Our main result is that the violation of the CHSH inequality (and its m-measurement generalisations) can be exactly characterised by a negativity witness of the hidden-variable distribution defined over the local states, and that there exist quasiprobability distributions which can saturate (up to the no-signalling limit) any such violation, whilst still having well-defined local statistics. This shows that it is possible to recapture the nonlocal features of Bell experiments through having a finite amount of negativity allowed in a hidden-variable distribution over scenarios which are, in themselves, entirely local and classical.

SETUP

Let us consider the following experimental setup. A source distributes a system between 2 observers; the i-th observer can choose some measurement x_i ∈ {0, 1, …, m} and record some outcome a_i, the possible values of a_i being {1, 2, …, d}. A specific experimental setup is characterised by the conditional probability

p(a_A, a_B | x_A, x_B).    (1)

The physical theory governing the behaviour of the system and experiment determines the achievability of certain conditional probability distributions resulting from these experiments. We are interested in the following three physical theories:

• Classical theory admits probability distributions of the following form,

p(a_A, a_B | x_A, x_B) = ∑_{λ_A, λ_B} p(a_A | x_A, λ_A) p(a_B | x_B, λ_B) Λ(λ_A, λ_B),    (2)

where Λ(λ_A, λ_B) is a joint probability distribution defined over local hidden variables. With each choice of hidden variables we associate a local scenario governed by ordinary local probability distributions p(a_i | x_i, λ_i) for the observables x_A, x_B. The hidden-variable probability distribution Λ determines how such local scenarios are mixed, and the probability distributions p(a_i | x_i, λ_i) are called "λ-local" because they belong to the scenario associated with a particular value of λ_i, not to be confused with the (observable) marginal probability distributions that are obtained by marginalising the total probability distribution, equation (2).
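As a small illustration of the local model of equation (2), the sketch below mixes deterministic λ-local response functions with an ordinary (positive) hidden-variable weight; the names (`lam_weights`, `response_A`, `response_B`, `p`) are ours, not the paper's.

```python
import itertools

# Sketch of equation (2): a classical local hidden-variable model in which
# each hidden variable lam deterministically fixes both parties' outcomes.
lam_weights = {1: 0.5, 2: 0.5}              # Lambda(lam): an ordinary distribution
response_A = {1: (+1, +1), 2: (-1, +1)}     # outcome a for settings x = 0, 1 given lam
response_B = {1: (+1, -1), 2: (-1, -1)}     # outcome b for settings y = 0, 1 given lam

def p(a, b, x, y):
    """p(a, b | x, y) built by mixing lam-local deterministic responses, eq. (2)."""
    return sum(w for lam, w in lam_weights.items()
               if response_A[lam][x] == a and response_B[lam][y] == b)

# Every setting pair yields a normalised, non-negative outcome distribution.
for x, y in itertools.product((0, 1), repeat=2):
    outcomes = [p(a, b, x, y) for a, b in itertools.product((-1, +1), repeat=2)]
    assert abs(sum(outcomes) - 1) < 1e-12 and min(outcomes) >= 0
```

Because the mixing weights are positive, any statistics produced this way obey the ordinary (local) Bell bounds.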
• Quantum theory endows us with a Hilbert space structure for our quantum states that admits probability distributions beyond the classical set (2).

It can be seen that the collection of functions adhering to the above definition forms a convex set, which we will denote P̃, a superset of the convex set of positive probability distributions P ⊂ P̃. We must now determine how to quantify the presence of negativity in our quasiprobability distributions. To this end we will use a well-known method for quantitatively detecting properties of a quantum state, witnesses [25-28]. Let us therefore proceed by defining a negativity witness:

Definition 3 (Negativity witness). Given some properly normalised probability distribution q, a well-defined negativity witness N is one for which

N(q) = 0 ∀ q ∈ P.    (7)

We may additionally require such a witness to 'faithfully' detect negativity,

N(q) > 0 ∀ q ∈ P̃ \ P.    (8)

In the following, we consider classical local hidden-variable models as defined in equation (2), but we replace the hidden-variable probability distribution Λ with a quasiprobability distribution ˜Λ,

p(a_A, a_B | x_A, x_B) = ∑_{λ_A, λ_B} p_1(a_A | x_A, λ_A) p_2(a_B | x_B, λ_B) ˜Λ(λ_A, λ_B).    (9)

This corresponds to a scenario where different local statistics of observations, governed by λ-local (ordinary) probability distributions p(a_i | x_i, λ_i), are mixed according to a quasiprobability distribution ˜Λ. However, when ˜Λ takes negative values, we should no longer think of the model as an ignorance mixture of valid local scenarios but rather as a nonlocal model [17]. Furthermore, when compared with ordinary hidden-variable models not all combinations of hidden-variable and λ-local probability distributions are valid; only those combinations which lead to well-defined p(a_A, a_B | x_A, x_B) are valid, i.e., comprised of values between 0 and 1 (the normalisation condition is always fulfilled).
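Definition 3 can be illustrated with a minimal "total negativity" witness, N(q) = Σ_i (|q_i| − q_i), which vanishes exactly when q has no negative entries; the function name and this specific choice of witness are our illustration, not the paper's.

```python
# A minimal witness in the spirit of Definition 3: N(q) = sum_i (|q_i| - q_i).
# It is zero on every ordinary distribution (condition (7)) and strictly
# positive whenever q has a negative entry (the 'faithfulness' condition (8)).

def negativity_witness(q):
    return sum(abs(v) - v for v in q.values())

positive = {1: 0.3, 2: 0.7}            # ordinary distribution
quasi = {1: 0.6, 2: 0.6, 3: -0.2}      # normalised, but negative somewhere

assert negativity_witness(positive) == 0   # condition (7)
assert negativity_witness(quasi) > 0       # condition (8)
```

Each negative entry q_i contributes 2|q_i| to this witness, so it doubles the total negative mass of the distribution.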
In addition to the correlation function between two measurements x_A and x_B, E(x_A, x_B), it will also be useful to define λ-local expectation values corresponding to an imagined scenario where observer i is able to perform measurement x_i in the local scenario corresponding to λ_i,

⟨x_i⟩_{λ_i} := ∑_{a_i} a_i p(a_i | x_i, λ_i).    (10)

This λ-local expectation value will be useful to formulate our results, but does not correspond to the actual observations, which are themselves governed by equation (9). We are now in a position to state the main result of this work, the quasiprobabilistic Bell inequality.

Theorem 1 (Quasiprobabilistic Bell inequality). Given observers A and B, each with measurement choice x_i ∈ {0_i, 1_i} with outcomes a_i ∈ {−1, +1}, whose systems are distributed according to some quasiprobability distribution ˜Λ, then the quasiprobabilistic Bell inequality holds:

|E(0_A, 0_B) − E(0_A, 1_B) + E(1_A, 0_B) + E(1_A, 1_B)| ≤ 2 + N(˜Λ),    (11)

where

N(˜Λ) := N_+(˜Λ) if E(1_A, 0_B) + E(1_A, 1_B) < 0, and N_−(˜Λ) else,    (12)

is a negativity witness, and

N_±(˜Λ) := ∑_{λ_A, λ_B} [2 ± ⟨1_A⟩_{λ_A}(⟨1_B⟩_{λ_B} + ⟨0_B⟩_{λ_B})] (|˜Λ(λ_A, λ_B)| − ˜Λ(λ_A, λ_B)).

The proof of this theorem begins analogously with Bell's proof of the CHSH bound [29], but diverges when the assumption Λ ∈ P is made in Bell's proof. The above result shows that if an arbitrary amount of negativity is allowed in the hidden-variable probability distribution then the upper bound of equation (11) can be arbitrarily large. However, it should be noted that a natural limit of 4 in the relevant Bell tests (i.e., for the upper bound in the quasiprobabilistic Bell inequality) is imposed by the requirement that p(a_A, a_B | x_A, x_B) is a well-defined, valid probability distribution (see footnote 1). The previous result of Al-Safi and Short [17] showed that it was possible to violate said inequality up to this no-signalling bound of 4. Therefore, in order to emulate the physical results seen in Bell tests (Tsirelson bound) one needs a negative probability distribution whose witness equals N(˜Λ) = 2(√2 − 1).
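Theorem 1 can be checked numerically on the four deterministic λ-local strategies used in the example section later in the text, tuned to the Tsirelson point N = 2(√2 − 1); the variable names below are ours.

```python
import math

# Numerical check of Theorem 1 on the paper's example distribution at the
# Tsirelson point N = 2(sqrt(2) - 1). The tuples are the lam-local expectation
# values <x>_lam for settings x = 0, 1, taken from the deterministic strategies
# [(-,-)(-,+)], [(+,-)(-,-)], [(+,+)(+,+)], [(+,-)(-,+)] described in the text.
N = 2 * (math.sqrt(2) - 1)
A = {1: (-1, -1), 2: (+1, -1), 3: (+1, +1), 4: (+1, -1)}
B = {1: (-1, +1), 2: (-1, -1), 3: (+1, +1), 4: (-1, +1)}
Lam = {1: (4 + N) / 12, 2: (4 + N) / 12, 3: (4 + N) / 12, 4: -N / 4}

E = lambda x, y: sum(Lam[l] * A[l][x] * B[l][y] for l in Lam)
chsh = E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)

# Here E(1,0) + E(1,1) > 0, so equation (12) selects N_-; only lam = 4
# contributes, since |Lam| - Lam vanishes on the positive weights.
witness = sum((2 - A[l][1] * (B[l][1] + B[l][0])) * (abs(Lam[l]) - Lam[l]) for l in Lam)

assert abs(chsh - (2 + N)) < 1e-12           # inequality (11) is saturated
assert abs(chsh - 2 * math.sqrt(2)) < 1e-12  # i.e. the Tsirelson bound
assert abs(witness - N) < 1e-12
```

Varying N in this sketch traces out the whole family of violations, from the classical bound (N = 0) up to the no-signalling limit (N = 2).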
In section 4 we show that for any N(˜Λ) ≤ 2, there exist quasiprobabilistic hidden-variable models with valid local measurement statistics that saturate inequality (11). We would hope that if a physical mechanism were discovered that allowed a joint hidden-variable probability distribution to have the appearance of negativity, said physical mechanism would be limited in such a way that it resulted in the Tsirelson bound and, more generally, was able to reconstruct the limits on quantum correlations. It is also important to note that although said witness N(˜Λ) is a valid witness according to definition 3, it is not necessarily a 'faithful' one.

There are numerous generalisations of the famous CHSH inequalities, such as multiple parties [30], arbitrary outcomes [31], etc. These would no doubt be interesting to study but we leave it to future work to explore these other generalisations and instead focus on the scenario in which Alice and Bob have access to an arbitrary number of measurement settings [32].

Theorem 2. Given observers A and B, each with m ≥ 2 measurements x_i ∈ {0_i, 1_i, …, (m−1)_i} with outcomes a_i ∈ {−1, +1}, whose systems are distributed according to some quasiprobability distribution ˜Λ,

|∑_{i=0}^{m−1} E(i_A, i_B) + ∑_{i=1}^{m−1} E(i_A, (i−1)_B) − E(0_A, (m−1)_B)| ≤ 2m − 2 + N(˜Λ),    (13)

where N(˜Λ) = ∑_{i=1}^{m−1} N^(i)(˜Λ) is a negativity witness with

N^(i)(˜Λ) := N^(i)_+(˜Λ) if E(0_A, i_B) + E(0_A, (i−1)_B) < 0, and N^(i)_−(˜Λ) else,    (14)

where

N^(i)_±(˜Λ) := ∑_{λ_A, λ_B} [2 ± ⟨0_A⟩_{λ_A}(⟨i_B⟩_{λ_B} + ⟨(i−1)_B⟩_{λ_B})] (|˜Λ(λ_A, λ_B)| − ˜Λ(λ_A, λ_B)).

The proof of the above theorem can be found in appendix B; it utilises proof by induction by chaining together the inequalities from theorem 1. In section 4 we show that the bound in theorem 2 can be saturated. Namely, for any N(˜Λ) ≤ 2, there exist well-defined p(a_A, a_B | x_A, x_B), characterised by a quasiprobability hidden-variable distribution ˜Λ(λ_A, λ_B), that saturate inequality (13).
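The chained expression on the left-hand side of inequality (13) can be sketched as a small function of a correlator table; the names (`chained_score`, `E`) are ours.

```python
# Chained Bell expression from theorem 2 (left-hand side of inequality (13)),
# given an m-by-m correlator table E[x][y].

def chained_score(E, m):
    s = sum(E[i][i] for i in range(m))           # diagonal terms E(i_A, i_B)
    s += sum(E[i][i - 1] for i in range(1, m))   # chained terms E(i_A, (i-1)_B)
    s -= E[0][m - 1]                             # the single minus term
    return abs(s)

# A deterministic local strategy (all outcomes +1) reaches the classical
# bound 2m - 2: every correlator equals +1.
m = 4
E = [[1 for _ in range(m)] for _ in range(m)]
assert chained_score(E, m) == 2 * m - 2
```

For m = 2 the expression reduces to the ordinary CHSH combination of theorem 1.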
In addition, analogously to the two-measurement result, at the cost of loosening the bound we can ensure that the above witness is also 'faithful' by choosing for all i, N^(i)(˜Λ) = N(˜Λ).

EXAMPLE

In order to understand how to saturate the Bell inequality from theorem 1, we rewrite the left-hand side of equation (11) as

∑_λ M(λ) ˜Λ(λ).    (15)

We replaced the hidden variables λ_A and λ_B with a single hidden variable λ because our example only uses a single joint hidden variable. The source produces each of the distributions according to the following quasiprobability distribution,

˜Λ(λ) = (4 + N)/12 for λ = 1, 2, 3, and −N/4 for λ = 4,    (16)

where M(λ) = 2 if λ = 1, 2, 3 and M(λ) = −2 if λ = 4. We can use tables to represent λ-local probability distributions, with rows labelled by the setting pair x_A x_B and columns by the outcome pair a_A a_B; the total probability distribution is then given as the weighted sum of such tables:

(4+N)/12 ×            + (4+N)/12 ×           + (4+N)/12 ×           − N/4 ×
     −− −+ +− ++           −− −+ +− ++           −− −+ +− ++           −− −+ +− ++
00    1  0  0  0      00    0  0  1  0      00    0  0  0  1      00    0  0  1  0
01    0  1  0  0      01    0  0  1  0      01    0  0  0  1      01    0  0  0  1
10    1  0  0  0      10    1  0  0  0      10    0  0  0  1      10    1  0  0  0
11    0  1  0  0      11    1  0  0  0      11    0  0  0  1      11    0  1  0  0

= 1/12 ×
       −−      −+      +−      ++
00    4+N      0     4−2N    4+N
01     0     4+N     4+N    4−2N
10    8−N      0       0     4+N
11    4+N    4−2N      0     4+N    (17)

The requirement that the resulting total probability distribution must be valid implies N ≤ 2, which corresponds to the no-signalling limit. Furthermore, it is easy to check that said distribution indeed gives a value of N for the negativity witness. The quasiprobabilistic Bell inequality score for this experiment is 2 + N, which, upon substituting equation (16) into the negativity witness, can be seen to saturate the bound. In appendix C we discuss how one can generalise the above to the m-measurement scenario.
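The total distribution of equation (17) can be rebuilt from the four deterministic strategies and checked for validity as a function of N; as the text states, validity fails exactly beyond N = 2. The helper names are ours.

```python
# Rebuild the total distribution of equation (17) from the four deterministic
# lam-local tables and check that it remains a valid probability distribution
# exactly up to N = 2, the no-signalling limit quoted in the text.

A = {1: (-1, -1), 2: (+1, -1), 3: (+1, +1), 4: (+1, -1)}   # Alice's symbols per lam
B = {1: (-1, +1), 2: (-1, -1), 3: (+1, +1), 4: (-1, +1)}   # Bob's symbols per lam

def min_entry(N):
    """Smallest entry of p(a, b | x, y) for the distribution of eq. (16)."""
    Lam = {1: (4 + N) / 12, 2: (4 + N) / 12, 3: (4 + N) / 12, 4: -N / 4}
    entries = [sum(Lam[l] for l in Lam if A[l][x] == a and B[l][y] == b)
               for x in (0, 1) for y in (0, 1)
               for a in (-1, +1) for b in (-1, +1)]
    return min(entries)

assert min_entry(2.0) >= -1e-12   # N = 2: smallest entry is (4 - 2N)/12 = 0
assert min_entry(2.1) < 0         # beyond the no-signalling limit: invalid
```

The binding entries are the (4 − 2N)/12 cells of the matrix in equation (17), which hit zero precisely at N = 2.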
CONCLUSION

We have shown that there exists a relationship between the amount of negativity allowed in a joint hidden-variable distribution and the degree to which said distribution can demonstrate nonlocality in a Bell experiment. In particular, theorem 2 introduces a quasiprobabilistic Bell inequality, which gives us a sharp bound in the scenario of two parties with m inputs (corresponding to a choice between m measurements) and can be used straightforwardly to reconstruct quantum statistics using nothing more than local, separable classical probability distributions and a quasiprobability distribution over them, granted an appropriately well spent budget of negativity. Our work sits within the long-established tradition of trying to understand quantum theory through interpretative lenses which remove some particular aspect from a classical worldview. Such approaches are wide and varied, including superdeterminism [33,34]; retro-causality [35]; invoking an irreducible role for subjectivity in physics [7,36,37]; taking physical reality to consist of interacting, separate realms [38,39]; allowing the relativity of pre- and post-selection [40]; taking Hilbert space to be literal [41]; and so on. Here we add to this list, in that we present an additional way to re-capture the nonlocal features of quantum theory: through having a finite amount of negativity allowed in a hidden-variable distribution over scenarios which are, in themselves, entirely local and classical. We are not claiming that such quasi distributions are 'real'; only, more modestly, that such a perspective could not be ruled out at this stage. Pursuing this line of reasoning, we would hope that our results may help to determine the fundamental restrictions on quasiprobability hidden-variable distributions.

Appendix A: proof of theorem 1

Theorem.
Given observers A and B, each with measurement choice x_i ∈ {0_i, 1_i} with outcomes a_i ∈ {−1, +1}, whose systems are distributed according to some quasiprobability distribution ˜Λ, then the quasiprobabilistic Bell inequality holds:

|E(0_A, 0_B) − E(0_A, 1_B) + E(1_A, 0_B) + E(1_A, 1_B)| ≤ 2 + N(˜Λ),    (A1)

where

N(˜Λ) := N_+(˜Λ) if E(1_A, 0_B) + E(1_A, 1_B) < 0, and N_−(˜Λ) else,    (A2)

is a negativity witness, and

N_±(˜Λ) := ∑_{λ_A, λ_B} [2 ± ⟨1_A⟩_{λ_A}(⟨1_B⟩_{λ_B} + ⟨0_B⟩_{λ_B})] (|˜Λ(λ_A, λ_B)| − ˜Λ(λ_A, λ_B)).

Proof. The first part of the proof follows Bell's 1971 derivation of the CHSH inequality [29]. For brevity in the proof we will just write ˜Λ as q. We start by rewriting the correlation function,

E(x_A, x_B) := ∑_{a_A, a_B} ∑_{λ_A, λ_B} a_A a_B p(a_A | x_A, λ_A) p(a_B | x_B, λ_B) q(λ_A, λ_B) = ∑_{λ_A, λ_B} ⟨x_A⟩_{λ_A} ⟨x_B⟩_{λ_B} q(λ_A, λ_B),    (A3)

where ⟨x_i⟩_{λ_i}, defined in equation (10), is the λ-local expectation value for observer i performing measurement x_i. Starting with the following difference between correlation functions (λ subscripts on the expectation values are suppressed where unambiguous),

E(0_A, 0_B) − E(0_A, 1_B) = ∑_{λ_A, λ_B} [⟨0_A⟩⟨0_B⟩ − ⟨0_A⟩⟨1_B⟩ ± ⟨0_A⟩⟨0_B⟩⟨1_A⟩⟨1_B⟩ ∓ ⟨0_A⟩⟨1_B⟩⟨1_A⟩⟨0_B⟩] q(λ_A, λ_B)
= ∑_{λ_A, λ_B} ⟨0_A⟩⟨0_B⟩ [1 ± ⟨1_A⟩⟨1_B⟩] q(λ_A, λ_B) − ∑_{λ_A, λ_B} ⟨0_A⟩⟨1_B⟩ [1 ± ⟨1_A⟩⟨0_B⟩] q(λ_A, λ_B),    (A4)

where the "±" in equation (A4) is to be understood as either "+" in all terms or "−" in all terms. Taking the absolute value of both sides and using the triangle inequality,

|E(0_A, 0_B) − E(0_A, 1_B)| ≤ |∑_{λ_A, λ_B} ⟨0_A⟩⟨0_B⟩ [1 ± ⟨1_A⟩⟨1_B⟩] q(λ_A, λ_B)| + |∑_{λ_A, λ_B} ⟨0_A⟩⟨1_B⟩ [1 ± ⟨1_A⟩⟨0_B⟩] q(λ_A, λ_B)|.    (A5)

Starting with the first term on the right-hand side of inequality (A5), we again apply the triangle inequality,

|∑_{λ_A, λ_B} ⟨0_A⟩⟨0_B⟩ [1 ± ⟨1_A⟩⟨1_B⟩] q(λ_A, λ_B)| ≤ ∑_{λ_A, λ_B} |⟨0_A⟩⟨0_B⟩| [1 ± ⟨1_A⟩⟨1_B⟩] |q(λ_A, λ_B)|.    (A6)

As a_i ∈ {−1, +1} we can say |⟨x_i⟩_{λ_i}| ≤ 1 for all x_i, so we can write

∑_{λ_A, λ_B} |⟨0_A⟩⟨0_B⟩| [1 ± ⟨1_A⟩⟨1_B⟩] |q(λ_A, λ_B)| ≤ ∑_{λ_A, λ_B} [1 ± ⟨1_A⟩⟨1_B⟩] |q(λ_A, λ_B)|,    (A7)

where we have used the fact that 1 ± ⟨1_A⟩_{λ_A}⟨1_B⟩_{λ_B} is necessarily non-negative because of the choice of eigenvalues a_i ∈ {−1, +1}.
Similarly, we find for the second term on the right-hand side of inequality (A5)

|∑_{λ_A, λ_B} ⟨0_A⟩⟨1_B⟩ [1 ± ⟨1_A⟩⟨0_B⟩] q(λ_A, λ_B)| ≤ ∑_{λ_A, λ_B} [1 ± ⟨1_A⟩⟨0_B⟩] |q(λ_A, λ_B)|.    (A8)

By adding inequalities (A7) and (A8) we find the following upper bound for the left-hand side of inequality (A5),

|E(0_A, 0_B) − E(0_A, 1_B)| ≤ ∑_{λ_A, λ_B} [2 ± ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩)] |q(λ_A, λ_B)|.    (A9)

So far the proof followed Bell's 1971 derivation [29] of the CHSH inequality. In Bell's derivation, one assumes that the joint probability distribution is positive, q(λ_A, λ_B) ≥ 0, which, using the definition of the correlation function and the triangle inequality, leads to the well-known CHSH inequality, |E(0_A, 0_B) − E(0_A, 1_B) + E(1_A, 0_B) + E(1_A, 1_B)| ≤ 2. We have to take another approach because here q(λ_A, λ_B) can be a quasiprobability distribution and thus take negative values. For each of the two inequalities (A9) (corresponding to the choice for "±"), we define a negativity witness N_±(q) for some normalised distribution q ∈ P̃ as the difference obtained by replacing |q(λ_A, λ_B)| with q(λ_A, λ_B) in the right-hand side of inequality (A9),

N_±(q) := ∑_{λ_A, λ_B} [2 ± ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩)] [|q(λ_A, λ_B)| − q(λ_A, λ_B)].    (A10)

Note that although this negativity witness is perfectly valid according to definition 3, it is not faithful because 2 ± ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩) may be zero for q ∈ P̃ \ P, i.e., N_±(q) may be zero for a quasiprobability distribution. Nevertheless we can now write inequality (A9) as

|E(0_A, 0_B) − E(0_A, 1_B)| ≤ ∑_{λ_A, λ_B} [2 ± ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩)] q(λ_A, λ_B) + N_±(q).    (A11)

The first term on the right-hand side of inequality (A11) can then be simplified using the definition of the correlation function (A3) and that q(λ_A, λ_B) is normalised,

∑_{λ_A, λ_B} [2 ± ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩)] q(λ_A, λ_B) = 2 ∑_{λ_A, λ_B} q(λ_A, λ_B) ± ∑_{λ_A, λ_B} ⟨1_A⟩(⟨1_B⟩ + ⟨0_B⟩) q(λ_A, λ_B) = 2 ± [E(1_A, 0_B) + E(1_A, 1_B)].    (A12)

Thus, inequality (A11) becomes

|E(0_A, 0_B) − E(0_A, 1_B)| ≤ 2 ± [E(1_A, 0_B) + E(1_A, 1_B)] + N_±(q).    (A13)

Now, we choose the inequality corresponding to "+" if [E(1_A, 0_B) + E(1_A, 1_B)] is negative, and the inequality corresponding to "−" else.
This allows us to write

|E(0_A, 0_B) − E(0_A, 1_B)| ≤ 2 − |E(1_A, 0_B) + E(1_A, 1_B)| + N(q),    (A14)

where we defined

N(q) := N_+(q) if E(1_A, 0_B) + E(1_A, 1_B) < 0, and N_−(q) else.    (A15)

From inequality (A14), we obtain

|E(0_A, 0_B) − E(0_A, 1_B)| + |E(1_A, 0_B) + E(1_A, 1_B)| ≤ 2 + N(q),    (A16)

and with one final use of the triangle inequality we find a CHSH-type inequality for arbitrary q ∈ P̃,

|E(0_A, 0_B) − E(0_A, 1_B) + E(1_A, 0_B) + E(1_A, 1_B)| ≤ 2 + N(q),    (A17)

completing the proof.

Appendix B: proof of theorem 2

Theorem. Given observers A and B, each with m ≥ 2 measurements x_i ∈ {0_i, 1_i, …, (m−1)_i} with outcomes a_i ∈ {−1, +1}, whose systems are distributed according to some quasiprobability distribution ˜Λ,

|∑_{i=0}^{m−1} E(i_A, i_B) + ∑_{i=1}^{m−1} E(i_A, (i−1)_B) − E(0_A, (m−1)_B)| ≤ 2m − 2 + N(˜Λ),    (B1)

where N(˜Λ) = ∑_{i=1}^{m−1} N^(i)(˜Λ) is a negativity witness with

N^(i)(˜Λ) := N^(i)_+(˜Λ) if E(0_A, i_B) + E(0_A, (i−1)_B) < 0, and N^(i)_−(˜Λ) else,    (B2)

where

N^(i)_±(˜Λ) := ∑_{λ_A, λ_B} [2 ± ⟨0_A⟩_{λ_A}(⟨i_B⟩_{λ_B} + ⟨(i−1)_B⟩_{λ_B})] (|˜Λ(λ_A, λ_B)| − ˜Λ(λ_A, λ_B)).

Proof. The proof is similar to the creation of chained CHSH inequalities, see [42] for an intuitive description, and works by induction in m.

Anchor step m = 2: see theorem 1.

Inductive step: Suppose that theorem 2 holds for m = k. We will prove the theorem for m = k + 1. Starting from the left-hand side of inequality (B1) for m = k + 1, we find

|∑_{i=0}^{k} E(i_A, i_B) + ∑_{i=1}^{k} E(i_A, (i−1)_B) − E(0_A, k_B)|
= |∑_{i=0}^{k−1} E(i_A, i_B) + ∑_{i=1}^{k−1} E(i_A, (i−1)_B) − E(0_A, (k−1)_B) + E(k_A, k_B) + E(k_A, (k−1)_B) + E(0_A, (k−1)_B) − E(0_A, k_B)|    (B5)
≤ |∑_{i=0}^{k−1} E(i_A, i_B) + ∑_{i=1}^{k−1} E(i_A, (i−1)_B) − E(0_A, (k−1)_B)| + |E(k_A, k_B) + E(k_A, (k−1)_B) + E(0_A, (k−1)_B) − E(0_A, k_B)|    (B6)
≤ 2k − 2 + ∑_{i=1}^{k−1} N^(i)(˜Λ) + 2 + N^(k)(˜Λ)    (B7)
= 2(k + 1) − 2 + N_{k+1}(˜Λ),    (B8)

which concludes the induction. The inequality in line (B6) is the triangle inequality, and we proceed from that line by using the induction hypothesis and theorem 1 for measurements 0_A, k_A for Alice, and (k−1)_B, k_B for Bob.

Appendix C: Saturation of the m-measurement quasiprobabilistic Bell inequality

We can generalise the 2-measurement example from the main text to m measurements in the following way.
Using equation (A3), we rewrite the left-hand side of inequality (13) as

|∑_λ M(λ) ˜Λ(λ)|,    (C1)

where we use only a single hidden variable λ, and

M(λ) := ∑_{i=0}^{m−1} ⟨i_A⟩_λ ⟨i_B⟩_λ + ∑_{i=1}^{m−1} ⟨i_A⟩_λ ⟨(i−1)_B⟩_λ − ⟨0_A⟩_λ ⟨(m−1)_B⟩_λ    (C2)

are the scores of each of the λ-local distributions; that is, −(2m − 2) ≤ M(λ) ≤ 2m − 2 holds. We again consider 4 classical scenarios, 3 of which achieve a score of 2m − 2, but now with the last achieving a score of 2m − 6. The source produces each of the distributions according to the following quasiprobability distribution,

˜Λ(λ) = (4 + N)/12 for λ = 1, 2, 3, and −N/4 for λ = 4,    (C3)

where λ = 1, 2, 3 corresponds to classical distributions with score 2m − 2, and λ = 4 to 2m − 6. We can see that this distribution saturates the m-measurement quasiprobabilistic Bell inequality from theorem 2. It is easy to check that all such distributions achieve for λ = 1, 2, 3 a score 2m − 2, and for λ = 4, 2m − 6. Since the distribution for λ = 4 enters into the total probability distribution p(a_A, a_B | x_A, x_B) with negative weight, the other λ-local distributions (with λ = 1, 2, 3) must compensate for that negativity to ensure that the total probability distribution is valid. It is easy to see that this is indeed the case by observing that for each combination of Alice and Bob's signs for λ = 4, that same combination of symbols appears in the same places for at least one of the other distributions. We also find that requiring positivity of the total probability distribution also gives us the no-signalling condition, as required.

* [email protected]

FIG. 1. A source distributes a system between two spatially separated observers, Alice (A) and Bob (B). Alice and Bob choose to measure their local part of the system with measurements x_A, x_B, with possible outcomes a_A, a_B ∈ {−1, +1}. The statistics of their measurement outcomes depend upon the physical system being distributed by the source.

As noted above, the witness N(˜Λ) is not necessarily a 'faithful' one. However, this can be rectified, at the cost of loosening the bound, by redefining said witness.
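The saturation arithmetic of appendix C (three λ-local scenarios of score 2m − 2 weighted by (4 + N)/12 and one of score 2m − 6 weighted by −N/4) can be checked numerically; the function name is ours.

```python
# Check of the appendix C saturation computation: with three scenarios of
# score 2m - 2 and weight (4 + N)/12, and one of score 2m - 6 and weight
# -N/4, the chained score equals 2m - 2 + N for every m and N.

def chained_value(m, N):
    return 3 * ((4 + N) / 12) * (2 * m - 2) + (-N / 4) * (2 * m - 6)

for m in (2, 3, 5, 8):
    for N in (0.0, 0.5, 2.0):
        assert abs(chained_value(m, N) - (2 * m - 2 + N)) < 1e-12
```

The m-dependence cancels in the excess over the classical bound, leaving exactly the budget N of negativity.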
For example, the function

N(˜Λ) := ∑_{λ_A, λ_B} 4 (|˜Λ(λ_A, λ_B)| − ˜Λ(λ_A, λ_B)),

is defined to be both a valid and 'faithful' witness.

The quantities M(λ) appearing in equation (15) are the scores of each of the λ-local distributions; that is, −2 ≤ M(λ) ≤ 2 holds. For a given value of the negativity witness, we exceed the local bound maximally by the simple strategy of weighting classical distributions with M(λ) = +2 with positive quasiprobability, while simultaneously taking a classical distribution with M(λ) = −2 with negative weight. To ensure that the total probability distribution p(a_A, a_B | x_A, x_B) is well-defined we make a choice of three deterministic classical distributions with positive weight and a fourth with negative weight. Our four deterministic classical distributions can be denoted [(−, −)_A, (−, +)_B], [(+, −)_A, (−, −)_B], [(+, +)_A, (+, +)_B] and [(+, −)_A, (−, +)_B]. Here, our notation means that the distributions can be produced by assigning the first pair of symbols to Alice and the second to Bob. Each party chooses to read either the first or second of the symbols given to them (this choice reflects their measurement setting), while the outcome of their measurement is determined by the symbol itself; that is, a_i = +1 (a_i = −1) for a plus (minus) sign. This experimental description of distributing classical information makes clear that these distributions are local, with our hidden variable λ indicating which of these sets the source actually produces.

Such fundamental restrictions concern a system's quasiprobability hidden-variable distribution such that it captures the full character of physical correlations. Put another way: we know that zero negativity can capture the set of classical correlations, whilst unbounded negativity can capture the non-signalling set. Given that the set of quantum correlations lies between these two, what are the restrictions on the quasiprobability hidden-variable distribution which would suffice to identify the full set of quantum correlations?
We hope to explore this question in further work.

ACKNOWLEDGMENTS

We are grateful to Benjamin Yadin, Richard Moles and Thomas Veness for helpful discussions and the Physical Institute for Theoretical Hierarchy (PITH) for encouraging an investigation into this topic. BM acknowledges financial support from the Engineering and Physical Sciences Research Council (EPSRC) under the Doctoral Prize Grant (Grant No. EP/T517902/1). LF acknowledges financial support from the Austrian Science Fund (FWF) through SFB BeyondC (Grant No. F7102). BL is supported by Leverhulme Trust Research Project Grant (RPG-2018-213). DG acknowledges support from FQXI (Grant No. RFP-IPW-1907).

(2m − 2) ˜Λ(1) + (2m − 2) ˜Λ(2) + (2m − 2) ˜Λ(3) + (2m − 6) ˜Λ(4) = 2m − 2 + N.    (C4)

We now need to come up with the λ-local probability distributions which result in a well-defined p(a_A, a_B | x_A, x_B) and give the correct value for the witness N. To do this we can generalise the classical distributions from the main text to m measurements; using the same notation as previously, such classical distributions are

[(−, …, −)_A, (−, …, −)_B]_{λ=2}, [(+, …, +)_A, (+, …, +)_B]_{λ=3}, […, (…, −, +)_B]_{λ=4}.    (C5)

The final thing to check is that said distributions in equation (C5), coupled with the quasiprobability distribution in equation (C3), give the required value N for the negativity witness N(˜Λ). Firstly, we can see by going through the distributions in equation (C5) that for all measurement choices i, E(0_A, i_B) + E(0_A, (i−1)_B) > 0, meaning that the witness we calculate for all i in the sum of N(˜Λ) is N^(i)_−(˜Λ).
We then go through the expectation values in the definition of N^(i)_−(˜Λ) for all i for the λ = 4 distribution given in equation (C5), from which we can see, upon calculating N(˜Λ) = ∑_{i=1}^{m−1} N^(i)(˜Λ), that we get

N(˜Λ) = 2 (|˜Λ(4)| − ˜Λ(4)) = N,

as required.

Footnote 1. This can be easily seen as for well-defined experimental setups where a_i ∈ {−1, +1}, i.e. for valid p(a_A, a_B | x_A, x_B), each term on the left-hand side of equation (11) can be at most 1 and at least −1.

[1] J. S. Bell. On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1(3):195, 1964.
[2] S. J. Freedman and J. F. Clauser. Experimental test of local hidden-variable theories. Physical Review Letters, 28(14):938, 1972.
[3] C. M. Caves, C. A. Fuchs, and R. Schack. Quantum probabilities as Bayesian probabilities. Physical Review A, 65(2):022305, 2002.
[4] N. Harrigan and R. W. Spekkens. Einstein, incompleteness, and the epistemic view of quantum states. Foundations of Physics, 40(2):125-157, 2010.
[5] M. F. Pusey, J. Barrett, and T. Rudolph. On the reality of the quantum state. Nature Physics, 8(6):475-478, 2012.
[6] R. Colbeck and R. Renner. Is a system's wave function in one-to-one correspondence with its elements of reality? Physical Review Letters, 108(15):150402, 2012.
[7] N. D. Mermin. Physics: QBism puts the scientist back into science. Nature News, 507(7493):421, 2014.
[8] M. Ringbauer, B. Duffus, C. Branciard, E. G. Cavalcanti, A. G. White, and A. Fedrizzi. Measurements on the reality of the wavefunction. Nature Physics, 11(3):249-254, 2015.
[9] P. A. M. Dirac. Bakerian lecture: the physical interpretation of quantum mechanics. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 180(980):1-40, 1942.
[10] R. P. Feynman. Negative probability. Quantum Implications: Essays in Honour of David Bohm, pages 235-248, 1987.
[11] I. Z. Ruzsa and G. J. Székely. Algebraic Probability Theory, volume 213. John Wiley & Sons, 1988.
[12] A. Yu. Khrennikov. Generalized probabilities taking values in non-Archimedean fields and in topological groups. Russian Journal of Mathematical Physics, 14(2):142-159, 2007.
[13] W. Mückenheim, G. Ludwig, C. Dewdney, P. R. Holland, A. Kyprianidis, J. P. Vigier, N. Cufaro Petroni, M. S. Bartlett, and E. T. Jaynes. A review of extended probabilities. Physics Reports, 133(6):337-401, 1986.
[14] C. Ferrie. Quasi-probability representations of quantum theory with applications to quantum information science. Reports on Progress in Physics, 74(11):116001, 2011.
[15] V. Veitch, C. Ferrie, D. Gross, and J. Emerson. Negative quasi-probability as a resource for quantum computation. New Journal of Physics, 14(11):113011, 2012.
[16] B. de Finetti. Theory of Probability: A Critical Introductory Treatment, volume 6. John Wiley & Sons, 2017.
[17] S. W. Al-Safi and A. J. Short. Simulating all nonsignaling correlations via classical or quantum theory with negative probabilities. Physical Review Letters, 111(17):170403, 2013.
[18] G. Oas, J. Acacio de Barros, and C. Carvalhaes. Exploring non-signalling polytopes with negative probability. Physica Scripta, 2014(T163):014034, 2014.
[19] S. Popescu and D. Rohrlich. Quantum nonlocality as an axiom. Foundations of Physics, 24(3):379-385, 1994.
[20] A. Peres and D. R. Terno. Quantum information and relativity theory. Reviews of Modern Physics, 76(1):93, 2004.
[21] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt. Proposed experiment to test local hidden-variable theories. Physical Review Letters, 23(15):880, 1969.
[22] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge Series on Information and the Natural Sciences. Cambridge University Press, 2000.
[23] W. Slofstra. The set of quantum correlations is not closed. In Forum of Mathematics, Pi, volume 7. Cambridge University Press, 2019.
[24] B. S. Cirel'son. Quantum generalizations of Bell's inequality. Letters in Mathematical Physics, 4(2):93-100, 1980.
[25] B. M. Terhal. Bell inequalities and the separability criterion. Physics Letters A, 271(5-6):319-326, 2000.
[26] M. Lewenstein, B. Kraus, J. I. Cirac, and P. Horodecki. Optimization of entanglement witnesses. Physical Review A, 62(5):052310, 2000.
[27] J. Eisert, F. G. S. L. Brandão, and K. M. R. Audenaert. Quantitative entanglement witnesses. New Journal of Physics, 9(3):46, 2007.
[28] U. Chabaud, P.-E. Emeriau, and F. Grosshans. Witnessing Wigner negativity. arXiv preprint arXiv:2102.06193, 2021.
[29] J. S. Bell. Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. Cambridge University Press, 2004.
[30] D. Pérez-García, M. M. Wolf, C. Palazuelos, I. Villanueva, and M. Junge. Unbounded violation of tripartite Bell inequalities. Communications in Mathematical Physics, 279(2):455-486, 2008.
[31] T. Cope and R. Colbeck. Bell inequalities from no-signaling distributions. Physical Review A, 100(2):022114, 2019.
[32] S. Wehner. Tsirelson bounds for generalized Clauser-Horne-Shimony-Holt inequalities. Physical Review A, 73(2):022110, 2006.
[33] S. Hossenfelder and T. Palmer. Rethinking superdeterminism. Frontiers in Physics, 8:139, 2020.
[34] E. C. Adlam. Quantum mechanics and global determinism. Quanta, 7(1):40-53, 2018.
[35] H. Price. Does time-symmetry imply retrocausality? How the quantum world says "maybe". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 43(2):75-83, 2012.
[36] C. A. Fuchs, N. D. Mermin, and R. Schack. An introduction to QBism with an application to the locality of quantum mechanics. American Journal of Physics, 82(8):749-754, 2014.
[37] M. P. Mueller. Law without law: from observer states to physics via algorithmic information theory. Quantum, 4:301, 2020.
[38] J. T. Cushing, A. Fine, and S. Goldstein. Bohmian Mechanics and Quantum Theory: An Appraisal, volume 184. Springer Science & Business Media, 2013.
[39] M. Esfeld, M. Hubert, D. Lazarovici, and D. Dürr. The ontology of Bohmian mechanics. The British Journal for the Philosophy of Science, 65(4):773-796, 2014.
[40] G. Bacciagaluppi and R. Hermens. Reverse Bell's theorem and relativity of pre- and postselection. arXiv preprint arXiv:2002.03935, 2020.
[41] S. M. Carroll. Reality as a vector in Hilbert space. arXiv preprint arXiv:2103.09780, 2021.
[42] S. L. Braunstein and C. M. Caves. Wringing out better Bell inequalities. Annals of Physics, 202(1):22-56, 1990.
[]
[ "Phase estimation of spin-torque oscillator by nonlinear spin-torque diode effect", "Phase estimation of spin-torque oscillator by nonlinear spin-torque diode effect" ]
[ "Terufumi Yamaguchi \nNational Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan\n", "Sumito Tsunegi \nNational Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan\n", "Tomohiro Taniguchi \nNational Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan\n" ]
[ "National Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan", "National Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan", "National Institute of Advanced Industrial Science and Technology (AIST)\nSpintronics Research Center\n305-8568TsukubaIbarakiJapan" ]
[]
A theoretical analysis is developed of the spin-torque diode effect in the nonlinear region. An analytical solution is derived for the diode voltage generated by a spin-torque oscillator through the rectification of an alternating current. The diode voltage is shown to depend nonlinearly on the phase difference between the oscillator and the alternating current. The validity of the analytical prediction is confirmed by numerical simulation of the Landau-Lifshitz-Gilbert equation. The results indicate that the spin-torque diode effect is useful for evaluating the phase of a spin-torque oscillator in the forced synchronization state.
10.35848/1347-4065/ab6a28
[ "https://arxiv.org/pdf/2001.09247v1.pdf" ]
210,920,384
2001.09247
ea51cc3db04321c7885dd7ecbca484bbe51f70ff
Phase estimation of spin-torque oscillator by nonlinear spin-torque diode effect

Terufumi Yamaguchi, Sumito Tsunegi, and Tomohiro Taniguchi
Spintronics Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan
(Dated: January 28, 2020)

A theoretical analysis is developed of the spin-torque diode effect in the nonlinear region. An analytical solution is derived for the diode voltage generated by a spin-torque oscillator through the rectification of an alternating current. The diode voltage is shown to depend nonlinearly on the phase difference between the oscillator and the alternating current. The validity of the analytical prediction is confirmed by numerical simulation of the Landau-Lifshitz-Gilbert equation. The results indicate that the spin-torque diode effect is useful for evaluating the phase of a spin-torque oscillator in the forced synchronization state.

PACS numbers:

Generating microwave power by using a spin-torque oscillator (STO) 1-6 has been an exciting topic in the field of spintronics because of its applicability to practical devices such as magnetic recording heads 7-10. Previous works on STOs have focused on their frequency, linewidth, and/or power because these quantities determine the quality of an STO assembled in a microwave generator. Recent growth of interest in the applicability of STOs to other technologies, such as neuromorphic computing and phased-array radar 11-17, motivates us to investigate another physical quantity of the oscillator, namely the phase. For example, pattern recognition using an array of spin-Hall oscillators is based on the control of the phase differences among the oscillators 12. The performance of reservoir computing was improved by using the phase synchronization of an STO to a microwave magnetic field 16,17. A phased-array radar controls the propagating direction of the wave signal by tuning the phase difference between the oscillators and the signal. As can be seen in these examples, the phase plays a key role in next-generation spintronics devices. However, studies investigating the STO's phase are still few 18-20. In this work, we focus on the phase difference of an STO in an injection-locked (forced synchronization) state, where the oscillation frequency and phase of the STO are locked to those of an injected alternating current.

The spin-torque diode 21-23 is another spintronics device, generating a direct voltage by rectifying an injected alternating current. The spin-torque diode effect is caused by a linear (small-amplitude) oscillation of the magnetization. Recently, however, the spin-torque diode effect has been extended to the nonlinear region 24-27. It should be emphasized here that the spin-torque diode effect in the nonlinear region corresponds to the injection locking of an STO; see also the description below. Although the previous works partially implied that the diode voltage in the nonlinear region reflects the phase of the STO, their main focus was on experiments to enhance the diode sensitivity. A detailed analysis of the relation between the diode voltage and the phase of the STO has not yet been developed from the theoretical point of view.

In this work, we have developed a theoretical framework proposing an evaluation method of the STO's phase in an injection-locked state by focusing on the spin-torque diode signal from the STO. It is analytically shown that the spin-torque diode voltage of the STO in the frequency domain depends nonlinearly on the phase difference between the oscillator and the injected alternating current. Numerical simulation of the Landau-Lifshitz-Gilbert (LLG) equation is also performed to confirm the analytical prediction. The results indicate that the spin-torque diode measurement in the nonlinear region can be used as a convenient experimental tool to evaluate the phase of the STO.

Before showing our calculation details, let us first emphasize the difference of the spin-torque diode effect between the linear and nonlinear regions. The conventional spin-torque diode effect 21-23 is a linear effect. It is caused by an alternating current and is related to a linear oscillation of the magnetization called ferromagnetic resonance (FMR). The output is the direct voltage resulting from the rectification of the alternating current, and it has a peak at the FMR frequency. When a direct current is simultaneously injected into the diode, it results in a modulation of the spectrum linewidth. Note, however, that the direct current is, in principle, not essential in the linear spin-torque diode effect.

On the other hand, the spin-torque diode effect in the nonlinear region in this work corresponds to the injection locking of an STO. The auto-oscillation in the STO is a nonlinear oscillation caused by the injection of a direct current. The oscillation frequency of the STO can be tuned by changing the magnitude of the direct current. The output of the STO is an oscillating power. When an alternating current is simultaneously injected into the STO with certain conditions fulfilled, however, the oscillation frequency and phase of the STO are locked to those of the alternating current. This phenomenon is called injection locking or forced synchronization. Note that, because of the presence of the alternating current, the STO in the injection-locked state is also expected to output a direct (rectified) voltage, similar to the conventional spin-torque diode effect. This direct voltage is calculated in the following.

We consider an STO consisting of a perpendicularly magnetized free layer and an in-plane magnetized reference layer 6, schematically shown in Fig. 1.
The z axis is perpendicular to the film plane, whereas the x axis is parallel to the magnetization direction of the reference layer. An external field H_appl and an electric current I are applied along the z direction. A positive current corresponds to electrons flowing from the free to the reference layer. In the present work, the current consists of direct and alternating components,

$$ I = I_{\rm dc} + I_{\rm ac} \cos 2\pi f_{\rm ac} t, \qquad (1) $$

where I_dc and I_ac are the amplitudes of the direct and alternating currents, and f_ac is the frequency of the alternating current. It was clarified in previous works that the magnetization dynamics in the STO is well described by the LLG equation in the macrospin approximation 6,28, given by

$$ \frac{d\mathbf{m}}{dt} = -\gamma \mathbf{m}\times\mathbf{H} - \gamma H_{\rm s}\, \mathbf{m}\times(\mathbf{p}\times\mathbf{m}) + \alpha\, \mathbf{m}\times\frac{d\mathbf{m}}{dt}, \qquad (2) $$

where m and p are the unit vectors pointing in the magnetization directions of the free and reference layers, respectively. The gyromagnetic ratio and the Gilbert damping constant are denoted as γ and α, respectively. The magnetic field H consists of the perpendicular field H_appl, the interfacial anisotropy field H_K, and the demagnetization field −4πM:

$$ \mathbf{H} = \left[ H_{\rm appl} + (H_K - 4\pi M)\, m_z \right] \mathbf{e}_z. \qquad (3) $$

The spin-torque strength is given by

$$ H_{\rm s} = \frac{\hbar \eta I}{2e(1+\lambda\, \mathbf{m}\cdot\mathbf{p}) M V}, \qquad (4) $$

where η is the spin polarization of the current, whereas λ characterizes the angular dependence of the spin torque 29. The saturation magnetization and volume of the free layer are denoted as M and V, respectively. It is useful to introduce H_ac = ħηI_ac/(2eMV) for the later discussion, which represents the magnitude of the contribution of the alternating current to the spin torque. Let us first consider the auto-oscillation in the absence of the alternating current. We introduce the zenith and azimuth angles (θ, φ) through m = (sin θ cos φ, sin θ sin φ, cos θ). In the auto-oscillation state, the angle θ is almost constant, as clarified in our previous work 28.
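The LLG dynamics of Eq. (2) is straightforward to integrate numerically once the implicit damping term is solved for dm/dt (the equivalent Landau-Lifshitz form). The following Python sketch is ours, not the paper's code: it uses the field of Eq. (3), treats the spin-torque field H_s as a precomputed scalar (the full model would update it from Eq. (4) at every step), and takes the parameter values quoted later in the paper.

```python
import numpy as np

# Parameter values quoted later in the paper (Oe, rad/(Oe s)); helper names are ours.
gamma, alpha = 1.764e7, 0.005
H_appl, H_K, four_pi_M = 2.0e3, 1.8616e4, 4 * np.pi * 1448.3
p = np.array([1.0, 0.0, 0.0])  # reference-layer magnetization, along x

def llg_rhs(m, H_s):
    """dm/dt of Eq. (2), with the implicit alpha * m x dm/dt term solved exactly."""
    H = np.array([0.0, 0.0, H_appl + (H_K - four_pi_M) * m[2]])  # Eq. (3)
    tau = -gamma * np.cross(m, H) - gamma * H_s * np.cross(m, np.cross(p, m))
    # dm/dt = tau + alpha * m x dm/dt  =>  dm/dt = (tau + alpha * m x tau) / (1 + alpha^2)
    return (tau + alpha * np.cross(m, tau)) / (1.0 + alpha**2)

def rk4_step(m, H_s, dt):
    """One Runge-Kutta step; renormalize to keep |m| = 1."""
    k1 = llg_rhs(m, H_s)
    k2 = llg_rhs(m + 0.5 * dt * k1, H_s)
    k3 = llg_rhs(m + 0.5 * dt * k2, H_s)
    k4 = llg_rhs(m + dt * k3, H_s)
    m_new = m + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return m_new / np.linalg.norm(m_new)
```

By construction dm/dt is perpendicular to m, and with H_s = 0 the Gilbert damping relaxes the magnetization toward the effective field along +z.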
The averaged angle θ and the direct current injected into the STO are related by the following equation,

$$ I_{\rm dc} = \frac{2\alpha e \lambda M V}{\hbar \eta \cos\theta} \left( \frac{1}{\sqrt{1-\lambda^2\sin^2\theta}} - 1 \right)^{-1} \left[ H_{\rm appl} + (H_K - 4\pi M)\cos\theta \right] \sin^2\theta. \qquad (5) $$

The physical meaning of Eq. (5) is that, when a direct current I_dc is injected, an auto-oscillation with the cone angle θ satisfying Eq. (5) is excited, with the oscillation frequency f(θ) given by

$$ f(\theta) = \frac{\gamma}{2\pi} \left[ H_{\rm appl} + (H_K - 4\pi M)\cos\theta \right]. \qquad (6) $$

Note that the averaged value of θ can be regarded as the tilt angle of the magnetization from the z axis, whereas φ is the phase of the magnetization in the xy plane. On the other hand, in the presence of the alternating current, the spin torque due to the alternating current locks the frequency and phase of the STO when the condition

$$ 2\pi \left[ f(\theta) - f_{\rm ac} \right] = -\sqrt{A^2 + B^2}\, \frac{\gamma H_{\rm ac}}{2\sin\theta}\, \sin\left( \Phi - \phi' \right) \qquad (7) $$

is satisfied 30; see also Supplementary data. Here, we introduce the phase difference between the STO and the alternating current as

$$ \Phi \equiv \varphi - 2\pi f_{\rm ac} t, \qquad (8) $$

where 2πf_ac t is the phase of the alternating current, according to Eq. (1). Note that the phase difference Φ is constant in the synchronized state, because the phase φ oscillates with the frequency f_ac when synchronization is realized. The dimensionless quantities A and B are given by

$$ A = \frac{\delta\omega(\theta)\sin\theta\cos\theta}{F(\theta)}\, \frac{2}{\lambda^2\sin^2\theta} \left( \frac{1}{\sqrt{1-\lambda^2\sin^2\theta}} - 1 \right), \qquad (9) $$

$$ B = \frac{2\left( 1 - \sqrt{1-\lambda^2\sin^2\theta} \right)}{\lambda^2\sin^2\theta}, \qquad (10) $$

where

$$ \delta\omega(\theta) = \gamma (H_K - 4\pi M)\sin\theta, \qquad (11) $$

$$ F(\theta) = \gamma H_{\rm dc} \left[ \frac{\lambda^2\cos^2\theta}{(1-\lambda^2\sin^2\theta)^{3/2}} - \frac{1}{\sin^2\theta} \left( \frac{1}{\sqrt{1-\lambda^2\sin^2\theta}} - 1 \right) \right] - \alpha\gamma \left[ H_{\rm appl}\cos\theta + (H_K - 4\pi M)\cos 2\theta \right]. \qquad (12) $$

The angle φ′ satisfies sin φ′ = A/√(A² + B²) and cos φ′ = B/√(A² + B²). Since λ sin θ < 1, A and B are approximated as A ≃ δω(θ) sin θ cos θ / F(θ) and B ≃ 1. Note that |A| ≫ 1 for typical parameters 30. Equation (7) indicates that the phase difference Φ is a function of the frequency f_ac of the alternating current.
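Equations (5) and (6) can be checked against the numbers quoted in the simulation section (I_dc = 2.5 mA corresponding to θ ≈ 56.9° and f ≈ 6.24 GHz). The Python sketch below is ours; it assumes CGS-style units (Oe, emu/c.c., cm³) and a reduced Planck constant ħ in the prefactor of Eq. (5), which appears to have been lost in the extracted formulas.

```python
import numpy as np

# Parameter values from the simulation section of the paper (CGS-style units).
gamma = 1.764e7                                  # rad/(Oe s)
alpha, eta, lam = 0.005, 0.537, 0.288
H_appl = 2.0e3                                   # Oe
H_K, four_pi_M = 1.8616e4, 4 * np.pi * 1448.3    # Oe
V = np.pi * (60e-7)**2 * 2e-7                    # disk volume in cm^3 (radius 60 nm, thickness 2 nm)
MV = 1448.3 * V * 1e-7                           # magnetic moment in J/Oe (1 emu = 1e-7 J/Oe)
hbar, e = 1.0546e-34, 1.602e-19                  # J s, C

def f_osc(theta):
    """Auto-oscillation frequency of Eq. (6), in Hz."""
    return gamma / (2 * np.pi) * (H_appl + (H_K - four_pi_M) * np.cos(theta))

def I_dc_of_theta(theta):
    """Direct current (in A) sustaining the cone angle theta, from Eq. (5)."""
    g = 1 / np.sqrt(1 - lam**2 * np.sin(theta)**2) - 1
    return (2 * alpha * e * lam * MV / (hbar * eta * np.cos(theta)) / g
            * (H_appl + (H_K - four_pi_M) * np.cos(theta)) * np.sin(theta)**2)
```

With θ = 56.9° these give f ≈ 6.25 GHz and I_dc ≈ 2.5 mA, consistent with the values quoted for the LLG simulation.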
Accordingly, measuring the diode voltage as a function of f_ac enables us to identify the phase difference Φ, as shown below. Now let us investigate the role of the STO's phase in the spin-torque diode effect. The resistance of a magnetic tunnel junction is well described as

$$ R = \frac{R_0}{2} - \frac{\Delta R}{2}\, \mathbf{m}\cdot\mathbf{p}, \qquad (13) $$

where R_0 = R_P + R_AP and ΔR = R_AP − R_P, with R_P and R_AP the resistances in the parallel and antiparallel alignments of the magnetizations. The second term of Eq. (13) oscillates, reflecting the magnetization oscillation in the free layer. In the injection-locked state, the oscillation frequency is identical to that of the alternating current. Therefore, the rectified voltage of the spin-torque diode effect is defined as 21

$$ V_{\rm dc} = \frac{1}{T} \int_0^T dt\, I_{\rm ac}\cos(2\pi f_{\rm ac} t) \left( -\frac{\Delta R}{2}\, \mathbf{m}\cdot\mathbf{p} \right), \qquad (14) $$

where T = 1/f_ac. Note that m·p = m_x = sin θ cos φ in the present system. Since the tilt angle θ of the magnetization is almost constant in the auto-oscillation state, we find

$$ V_{\rm dc} = -\frac{I_{\rm ac}\Delta R}{4}\, \sin\theta\cos\Phi. \qquad (15) $$

Substituting Eq. (7) into Eq. (15) and using the facts that |A| ≫ 1 and B ≃ 1, Eq. (15) can be rewritten as

$$ V_{\rm dc} \simeq \frac{I_{\rm ac}\Delta R\, \pi\left[ f(\theta) - f_{\rm ac} \right] \sin^2\theta}{\gamma H_{\rm ac}\sqrt{1 + A^2}}. \qquad (16) $$

Equations (15) and (16) predict several interesting features of the rectified voltage generated by an STO. For example, Eq. (16) indicates that the dependence of the diode voltage on the frequency f_ac of the alternating current is linear, rather than a Lorentzian or anti-Lorentzian function as in the conventional spin-torque diode effect 21. The difference is due to the fact that the conventional spin-torque diode effect results from the FMR (linear oscillation) state, whereas the present study deals with a nonlinear oscillation. In addition, Eq. (15) indicates that the diode voltage reflects the phase difference Φ between the STO and the alternating current. This result implies that the spin-torque diode effect of the STO can be used to estimate the oscillator's phase experimentally.
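The step from Eq. (14) to Eq. (15) (insert m_x = sin θ cos(2πf_ac t + Φ) and average over one period) can be verified numerically on a synthetic injection-locked trajectory. The script below is an illustration of ours, not the paper's code; the chosen Φ = 60° matches the example discussed later.

```python
import numpy as np

I_ac, dR = 0.03e-3, 150.0             # A and Ohm, values used in the paper
f_ac = 6.26e9                         # Hz
theta, Phi = np.radians(56.9), np.radians(60.0)

# Midpoint sampling of one period T = 1/f_ac.
T = 1 / f_ac
N = 20000
t = (np.arange(N) + 0.5) * T / N

# Injection-locked trajectory: m_x oscillates at f_ac with a constant phase offset Phi.
m_x = np.sin(theta) * np.cos(2 * np.pi * f_ac * t + Phi)

# Eq. (14): rectified (time-averaged) voltage, evaluated numerically.
V_num = np.mean(I_ac * np.cos(2 * np.pi * f_ac * t) * (-dR / 2) * m_x)

# Eq. (15): closed form.
V_ana = -I_ac * dR / 4 * np.sin(theta) * np.cos(Phi)
```

The time average kills the component of m_x oscillating at 2f_ac, leaving only the cos Φ term, so the numerical and analytical values agree to machine precision.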
We perform numerical simulations of Eq. (2) to investigate the validity of Eqs. (15) and (16). The values of the parameters used in the following are obtained from the experiment 6 and its theoretical analysis 28: M = 1448.3 emu/c.c., H_K = 1.8616 × 10^4 Oe, H_appl = 2.0 kOe, V = π × 60² × 2 nm³, η = 0.537, λ = 0.288, γ = 1.764 × 10^7 rad/(Oe s), and α = 0.005. The resistance difference between the parallel and antiparallel alignments of the magnetizations is ΔR = 150 Ω. The magnitudes of the direct and alternating currents are fixed to I_dc = 2.5 mA and I_ac = 0.03 mA, respectively. The oscillation frequency excited by this direct current is estimated to be 6.24 GHz from the LLG simulation, corresponding to an averaged tilt angle of about θ = 56.9°.

Figure 2 shows an example of the definition of the phase difference Φ, where the time evolutions of m_x and the alternating current [cos(2πf_ac t)] are shown. The frequency of the alternating current is set to f_ac = 6.26 GHz. The figure indicates that the frequency of the STO is locked to that of the alternating current, and the phase difference in this case is nearly 60°.

Next, we examine the validity of Eqs. (15) and (16) by the following approach. First, we evaluate the diode voltage defined by Eq. (14) with the numerical solution of m_x obtained by the LLG simulation. The solid line in Fig. 3 shows the diode voltage obtained by this method. Note that a finite voltage appears when the injection locking is achieved. For the present parameters, the injection locking occurs for 6.21 GHz ≲ f_ac ≲ 6.28 GHz. Outside the locking range, the diode voltage becomes nearly zero. This is because the oscillation frequency of the magnetization differs from that of the alternating current, and thus a long-time average of Eq. (14) becomes zero, although the numerical simulation is performed over a finite time, and thus the voltage in Fig. 3 remains slightly nonzero there. It should be emphasized that the diode voltage in the locking region shows a linear dependence on f_ac, indicating the validity of Eq. (16). Second, we compare this diode voltage with the theoretical formula given by Eq. (15). The dots in Fig. 3 are obtained from Eq. (15) by inserting the value of the phase difference estimated by the LLG simulation, as done in Fig. 2. The inset of Fig. 3 shows the relation between the frequency of the alternating current and the phase Φ in the locked state. The results indicate that the diode voltage in the frequency domain reflects the phase of the STO. In other words, the spin-torque diode effect of the STO is useful to estimate its phase.

Although the numerical results are well explained by the analytical formulas, we need to validate the applicability of these formulas for completeness. An assumption used in their derivation is that the cone angle θ of the magnetization oscillation is solely determined by the direct current through Eq. (5). Strictly speaking, however, the cone angle in the presence of the alternating current depends not only on I_dc but also on I_ac and f_ac, so the dependence of the diode voltage on I_ac and f_ac is rather complex. For example, Eq. (15), under the assumption that θ is determined solely by the direct current, indicates that the dependence of the diode voltage on f_ac is linear; however, since θ in Eq. (15) in fact depends on f_ac, the diode voltage is not a simple linear function of f_ac. At the same time, we emphasize that the real value of the cone angle is close to that estimated by Eq. (5), and therefore our proposal to estimate the STO's phase from the spin-torque diode effect works well, as can be seen in Fig. 3. The details of these points are summarized in Supplementary data. It should also be noted that another direct voltage, I_dc R_0, will appear in experiment 26, in addition to Eq. (15).
However, this direct voltage can be experimentally separated from the rectified voltage because it is independent of the magnitude and frequency of the alternating current. Therefore, we consider that this contribution to the direct voltage does not affect the phase evaluation proposed in this work. In conclusion, the spin-torque diode effect of an STO was studied theoretically. An analytical formula of the diode voltage was derived, which indicates that the rectified voltage of the STO depends linearly on the frequency of the alternating current. The formula also reveals that the diode voltage depends nonlinearly on the phase difference between the magnetization and the alternating current injected into the STO. Numerical simulation of the LLG equation confirmed the validities of the analytical calculations. The result implied that measuring the spin-torque diode voltage of the STO is useful to evaluate the oscillator's phase. The authors are grateful to Yoshishige Suzuki and Minori Goto for valuable discussions. This paper was based on the results obtained from a project (Innovative AI Chips and Next-Generation Computing Technology Development/(2) Development of next-generation computing technologies/Exploration of Neuromorphic Dynamics towards Future Symbiotic Society) commissioned by NEDO. FIG. 1 : 1Schematic view of the spin torque oscillator with the perpendicularly magnetized free layer and in-plane magnetized reference layer. The unit vectors m and p point to the directions of the magnetizations of the free and reference layers. Direct and alternating currents are applied to the oscillator, in addition to a perpendicular magnetic field H appl in the z direction. FIG. 2 : 2Time evolutions of mx and cos 2πfact for Iac = 0.03 mA and fac = 6.26 GHz. The phase difference Φ in this case is 60 • . FIG. 3 : 3Dependence of the diode voltage on the frequency of the alternating current, where Iac = 0.03 mA. Solid line is obtained by evaluating Eq. 
(14) numerically with the numerical solution of mx, whereas dots are obtained by using Eq. (15) using the phase difference Φ estimated by the numerical simulation. The inset shows the phase Φ as a function of fac. . S I Kiselev, J C Sankey, I N Krivorotov, N C Emley, R J Schoelkopf, R A Buhrman, D C Ralph, Nature. 425380S. I. Kiselev, J. C. Sankey, I. N. Krivorotov, N. C. Emley, R. J. Schoelkopf, R. A. Buhrman, and D. C. Ralph, Nature 425, 380 (2003). . W H Rippard, M R Pufall, S Kaka, S E Russek, T J Silva, Phys. Rev. Lett. 9227201W. H. Rippard, M. R. Pufall, S. Kaka, S. E. Russek, and T. J. Silva, Phys. Rev. Lett. 92, 027201 (2004). . I N Krivorotov, N C Emley, J C Sankey, S I Kiselev, D C Ralph, R A Buhrman, Science. 307228I. N. Krivorotov, N. C. Emley, J. C. Sankey, S. I. Kiselev, D. C. Ralph, and R. A. Buhrman, Science 307, 228 (2005). . D Houssameddine, U Ebels, B Delaët, B Rodmacq, I Firastrau, F Ponthenier, M Brunet, C Thirion, J.-P Michel, L Prejbeanu-Buda, Nat. Mater. 6447D. Houssameddine, U. Ebels, B. Delaët, B. Rodmacq, I. Firastrau, F. Ponthenier, M. Brunet, C. Thirion, J.- P. Michel, L. Prejbeanu-Buda, et al., Nat. Mater. 6, 447 (2007). . S Urazhdin, P Tabor, V Tiberkevich, A Slavin, Phys. Rev. Lett. 105104101S. Urazhdin, P. Tabor, V. Tiberkevich, and A. Slavin, Phys. Rev. Lett. 105, 104101 (2010). . H Kubota, K Yakushiji, A Fukushima, S Tamaru, M Konoto, T Nozaki, S Ishibashi, T Saruya, S Yuasa, T Taniguchi, Appl. Phys. Express. 6103003H. Kubota, K. Yakushiji, A. Fukushima, S. Tamaru, M. Konoto, T. Nozaki, S. Ishibashi, T. Saruya, S. Yuasa, T. Taniguchi, et al., Appl. Phys. Express 6, 103003 (2013). . J.-G Zhu, X Zhu, Y Tang, IEEE Trans. Magn. 44125J.-G. Zhu, X. Zhu, and Y. Tang, IEEE Trans. Magn. 44, 125 (2008). . K Kudo, T Nagasawa, K Mizushima, H Suto, R Sato, Appl. Phys. Express. 343002K. Kudo, T. Nagasawa, K. Mizushima, H. Suto, and R. Sato, Appl. Phys. Express 3, 043002 (2010). . 
HR 6819 is a binary system with no black hole: Revisiting the source with infrared interferometry and optical integral field spectroscopy

A. J. Frost (1), J. Bodensteiner (2), Th. Rivinius (3), D. Baade (2), A. Merand (2), F. Selman (3), M. Abdul-Masih (3), G. Banyard (1), E. Bordier (1, 3), K. Dsilva (1), C. Hawcroft (1), L. Mahy (4), M. Reggiani (1), T. Shenar (5), M. Cabezas (6), P. Hadrava (6), M. Heida (2), R. Klement (7), H. Sana (1)

(1) Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
(2) European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Karl-Schwarzschild-Str. 2, 85748 Garching b. München, Germany
(3) European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Casilla 19001, Santiago 19, Chile
(4) Royal Observatory of Belgium, Avenue Circulaire 3, B-1180 Brussel, Belgium
(5) Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam, The Netherlands
(6) Astronomical Institute, Academy of Sciences of the Czech Republic, Boční II 1401, 141 31 Praha 4, Czech Republic
(7) The CHARA Array, Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA
Abstract

Context. Two scenarios have been proposed to match the existing observational constraints of the object HR 6819. The system could consist of a close inner B-type giant plus a black hole (BH) binary with an additional Be companion in a wide orbit. Alternatively, it could be a binary composed of a stripped B star and a Be star in a close orbit. Either scenario makes HR 6819 a cornerstone object as the stellar BH closest to Earth, or as an example of an important transitional, non-equilibrium phase for Be stars with solid evidence for its nature.

Aims. We aim to distinguish between the two scenarios for HR 6819. Both models predict two luminous stars but with very different angular separations and orbital motions. Therefore, the presence of bright sources in the 1-100 milliarcsec (mas) regime is a key diagnostic for determining the nature of the HR 6819 system.

Methods. We obtained new high-angular resolution data with VLT/MUSE and VLTI/GRAVITY of HR 6819. The MUSE data are sensitive to bright companions at large scales, whilst the interferometric GRAVITY data are sensitive down to separations on mas scales and large magnitude differences.

Results. The MUSE observations reveal no bright companion at large separations and the GRAVITY observations indicate the presence of a stellar companion at an angular separation of ∼ 1.2 mas that moves on the plane of the sky over a timescale compatible with the known spectroscopic 40-day period.

Conclusions. We conclude that HR 6819 is a binary system and that no BH is present in the system. The unique nature of HR 6819, and its proximity to Earth make it an ideal system for quantitatively characterising the immediate outcome of binary interaction and probing how Be stars form.
DOI: 10.1051/0004-6361/202143004
arXiv: 2203.01359 [astro-ph.SR]
PDF: https://arxiv.org/pdf/2203.01359v1.pdf
Semantic Scholar corpus ID: 247222660; PDF SHA-1: 321364b852c24826eb50f48cfb9b237dab0a83e7
Astronomy & Astrophysics manuscript no. aandacorr (Letter to the Editor), March 4, 2022. Received December 24, 2021; accepted February 1, 2022.

Keywords: Stars: individual: HR 6819 - Stars: massive - Stars: emission-line, Be - Binaries: close - Techniques: interferometric - Techniques: imaging spectroscopy

Introduction

The majority of main-sequence (MS) massive OB-type stars belong to binary or higher-order multiple systems, among which close binaries (P < 10 yr) are common (Mason et al. 1998; Sana et al. 2012; Sota et al. 2014; Moe & Di Stefano 2017; Almeida et al. 2017; Banyard et al. 2021; Villaseñor et al. 2021). Close companions will most likely interact and exchange mass and angular momentum, significantly impacting their subsequent evolutionary paths and final fates. Binary interactions are likely to leave the involved stars chemically and physically altered (Paczyński 1971; Pols et al. 1991; de Mink et al. 2013), although high-quality observations to which theoretical predictions might be compared are lacking (Mahy et al. 2020a,b). If the central core is massive enough and the system survives both the interaction and a potential supernova explosion, it may result in a MS star plus compact object binary (Langer et al. 2020). These are prime candidate progenitors of high- and low-mass X-ray binaries (e.g. Liu et al. 2006) and, for a small fraction of them, of double compact binaries and gravitational wave (GW) mergers (e.g. Abbott et al. 2019).

HR 6819 (also known as HD 167 128, ALS 15 056, and QV Tel) is a highly intriguing object that was recently proposed as a candidate multiple system containing a stellar-mass black hole (BH). Based on optical spectroscopic monitoring over
Based on optical spectroscopic monitoring over Article number, page 1 of 12 arXiv:2203.01359v1 [astro-ph.SR] 2 Mar 2022 A&A proofs: manuscript no. aandacorr ∼5yr, Rivinius et al. (2020) determined HR 6819 to be a narrow, singled-lined spectroscopic binary with a 40 d orbit and a negligible eccentricity. The authors proposed the HR 6819 system to be a hierarchical triple, with a distant classical Be star orbiting a short-period inner binary consisting of a B3 III star and a stellar, quiescent (non-accreting) BH in a close circular orbit. Classical Be stars are rapidly rotating B stars with decretion disks that produce strong emission lines (e.g. Rivinius et al. 2013), with studies suggesting that they may form as a result of mass and angular momentum transfer in binary systems (e.g. Wang et al. 2021;Bodensteiner et al. 2020b;Klement et al. 2019). This would make HR 6819 a lower mass counterpart of LB-1, another intriguing spectroscopic binary system (Irrgang et al. 2020). While early works implied that the detected RV signal was spurious and a more conservative mass BH could reproduce the observations of LB-1 El-Badry & Quataert 2020;Simón-Díaz et al. 2020), Shenar et al. (2020) showed that the spectral variations could instead be due to a rare Be binary system consisting of a stripped star and a Be star rotating near its critical velocity. Both models were additionally tested by Lennon et al. (2021). For HR 6819, Bodensteiner et al. (2020a) and El-Badry & Quataert (2021) independently proposed that the spectral observations could similarly be caused by a binary system consisting of a stripped B-type primary and a rapidly-rotating Be star that formed from a previous mass-transfer event. Additionally, Gies & Wang (2020) reported a small reflex motion detected in the Hα line arising in the Be star disk, invoked by a companion in a close orbit, therefore yielding the same conclusion. In the meantime, Safarzadeh et al. 
(2020) proposed, based on stability arguments, that the triple configuration is highly unlikely. On the other hand, Mazeh & Faigler (2020) argued that if HR 6819 is indeed a triple system, the putative BH could itself be an undetected binary system of two A0 stars, making HR 6819 a quadruple system. Klement et al. (2021) later reported on speckle observations (obtained with the Zorro imager on the Gemini South telescope in Chile) that indicated a possible optical source at 120 mas from the central source. Given that the brightness of the source could not be fully constrained (it can be up to five magnitudes fainter than the central object), it could either be the Be star, and thus an indication for the triple scenario, or an unrelated background or foreground source.

The main difference between the two main proposed scenarios is the separation of the two luminous sources. The triple scenario relies on a wide orbit for the two bright stars in the system with an angular separation of ∼100 mas, so that significant shifts are not generated in the RV measurements over five years. On the other hand, the binary model requires a much smaller separation of ∼1-2 mas. Both models have consequential implications. If HR 6819 were to contain a BH, the system would constitute a prime testing centre for investigating supernova kicks and GW progenitors. If HR 6819 were to contain a stripped star, it would provide a direct observation of a binary system briefly after mass transfer, as well as evidence that the binary channel is a way in which classical Be stars form.

In this letter we present new observations of HR 6819 designed to distinguish between these two scenarios. Specifically, observations from the GRAVITY and Multi Unit Spectroscopic Explorer (MUSE) instruments at the Very Large Telescope/Interferometer (VLT/I) were selected. The MUSE data are sensitive to a wide companion, with separations in the range of ∼100 mas to 7.5", whilst the GRAVITY data cover the range from a fraction of a mas to ∼100 mas. In Sections 2 and 3 we present our MUSE and GRAVITY observations, respectively. Our results are discussed and summarised in Section 4, where we offer our conclusion that HR 6819 is a close binary consisting of a stripped B star and a rapidly rotating Be star.

MUSE: Search for a wide companion

Observational setup and data reduction

MUSE is an integral field spectrograph operating over visible wavelengths (480-930 nm) (Bacon et al. 2010). The MUSE observations were designed to detect a possible wide companion. The spectroscopic observations implied that the B star and Be star are of similar brightness (Rivinius et al. 2020; Bodensteiner et al. 2020a), and the speckle interferometry suggested a source at most five magnitudes fainter than the central source at a separation of 120 mas (Klement et al. 2021). Therefore, the observations were set up to yield a signal-to-noise ratio (S/N) of 25 in Hα for a putative companion at 120 mas that is five magnitudes fainter than the central source. One epoch of VLT/MUSE observations of HR 6819 was obtained on July 22, 2021 (2107.D-5026, PI: Rivinius) in the narrow-field mode (NFM), supported by adaptive optics (AO), and in nominal wavelength mode (N). The NFM covers a 7.5" × 7.5" field of view (FoV) with a spatial sampling of 0.025" × 0.025". At a distance of 340 pc, the FoV corresponds to ∼2550 AU (or ∼0.012 pc). Given the brightness of the target (V = 5.4), the observations were not carried out in the normal NFM-AO mode but using a procedure that widens the laser configuration and thus allows the observation of brighter stars. The observation was split into several subsets with a small (∼1") offset and position angle 0° or 90°.
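The five-magnitude contrast budget quoted above translates into a flux ratio through the standard magnitude relation f = 10^(−0.4 Δm). A minimal sketch (an illustrative helper of our own, not part of the MUSE pipeline):

```python
def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference delta_mag."""
    return 10.0 ** (-0.4 * delta_mag)

# A companion five magnitudes fainter contributes 1% of the primary's flux.
print(flux_ratio(5.0))  # 0.01
```

This is why the observations were designed for S/N = 25 on a source 100 times fainter than the central object.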
Each set was then made up of 6 or 12 short (3-second) exposures, leading to a total of 36 exposures with an overall exposure time of 108 seconds. The individual MUSE NFM AO N observations were reduced using the standard MUSE pipeline (Weilbacher et al. 2020), which includes bias subtraction, flat fielding, wavelength calibration, and illumination correction for each individual integral field unit (IFU). After recombining the data subsets from all the IFUs, the data were flux-calibrated using a standard star and a sky subtraction was performed. Given the extremely good weather conditions during the observations (at the start of the first exposure, the seeing was 0.58", which varied between the observations and increased to around 0.86"), most of the exposures are saturated in the central source. This does not affect the purpose of the MUSE observations, however, which is to probe whether there is a wide companion at ∼120 mas.

Results

The individual (not saturated) MUSE exposures show one central source. Collapsing the wavelengths to a white-light image (see Figure A.1), we find no indication of a second bright source at 120 mas. Using the spectroscopic capabilities of MUSE, we find no indication of a second star farther out (which could, for example, be indicated by spatially localised absorption lines in the Ca triplet at around 8500 Å). Additionally, there seems to be no significant spatial change in the spectrum of the central source, so we extracted the spectrum by summing over the 3×3 most central pixels with QFitsView. We compared this extracted spectrum to an observed FEROS spectrum (described in Rivinius et al. 2020) at a similar orbital phase, after resampling the spectra to the MUSE resolution and binning. The comparison (see Fig. A.2) shows that there is no significant difference between the spectra taken by the two instruments, implying that the two sources that contribute to the composite FEROS spectrum (the B and the Be star, which are common to both models) are both located within the central source detected in the MUSE data. These two findings combined (i.e. the lack of a second bright source at 120 mas from the central source, and the fact that the B and the Be star both contribute to the spectrum of the central source) unambiguously show that the Be star in HR 6819 is not a wide companion located at 120 mas, as suggested by the speckle interferometry.

Fig. 1 (caption). In the appendices, we explain in detail how we determined this as our final fit to the data. While the different rows show different VLTI baselines (see inset labels), the panels show, from left to right, the normalised flux, the closure phases, the differential phases, and the visibilities. The blue dots represent the continuum.

GRAVITY: Probing the inner regions

Observations and fitting process

GRAVITY is the K-band four-telescope beam combiner at the Very Large Telescope Interferometer (Gravity Collaboration et al. 2017). Three GRAVITY observations of HR 6819 were obtained in August and September 2021 using the high spectral resolving power setting of GRAVITY (R = λ/∆λ = 4000). Observations were taken with the large configuration (A0-G1-J2-J3) of the 1.8 m Auxiliary Telescopes (ATs). The first observation (August 2021) was unsuccessful, as severe weather and technical issues only allowed the science source to be observed with no calibrator. The following two attempts, on September 6 and 19, 2021, were successful, however, with clear sky conditions, precipitable water vapour below 30 mm, and seeing of less than 1" for both the science source and the calibrator star (HD 161420). The data were reduced and calibrated using the standard GRAVITY pipeline (Lapeyrere et al. 2014). Geometric fitting was applied to the interferometric observables (i.e., visibilities, closure phases, differential phases, and normalised fluxes) using the Python3 module PMOIRED, which allows the display and modelling of interferometric data stored in the OIFITS format.
Notably, a strong Brγ line is visible across the observables. When fitting models with PMOIRED, the primary is described as the central source, fixed at position (0,0) in the FoV. The flux of the primary star is normalised to 1 so it can be used as a reference point to determine the relative fluxes of any companions. Additionally, any emission lines present in the fluxes and visible in the differential phases can be modelled by attributing Lorentzian or Gaussian line profiles to the model components. The fitting process was judged using the reduced chi-square statistic.

Results

A variety of models were run, including single star models, single disk models, binary disk models, and triple systems. We refer to Appendix B for an in-depth description of the fitting process. The best fit to the GRAVITY data, with an example shown in Fig. 1, comes from a model composed of two sources, implying that a binary system with two optically bright companions is present at GRAVITY scales at a separation of ∼1 mas across both epochs. Table B.1 displays the specific parameters resulting in the best-fitting model. We determined the errors on our derived measurements of the sources using bootstrapping. In the bootstrapping procedure, data are drawn randomly to create new datasets 200 times, and the final parameters and uncertainties are estimated as the average and standard deviation of all the fits that were performed. Figure 2 displays a model image and the synthetic spectra of the model components. The dimmer star has on average 56% of the flux of the brighter star, and in this best-fitting model the Brγ emission comes exclusively from the brightest star in the model.

Fig. 2 (caption). Model image (left) and synthetic spectra (right) corresponding to the best-fit model shown in Fig. 1. The green line corresponds to the primary star that is fixed at (0,0) and whose continuum flux is normalised to 1. The orange line corresponds to the secondary star. The black line is the total spectrum. The spectral line included in the model is at the Brγ wavelength.

Astrometric orbit

The interferometric data provide us with astrometric information about the luminous stars in the system. We fed these astrometric data into the SPectroscopic and INterferometric Orbital Solution finder (spinOS, Fabry et al. 2021) to compare the movement of the system as constrained by interferometry to the orbit found by Bodensteiner et al. (2020a). Given the limited astrometric data, we opted to fix the distance. Rivinius et al. (2020) suggested a distance of 310 ± 60 pc, while Gaia yielded values of 342 +25/−21 pc and 364 +17/−16 pc in DR2 and eDR3, respectively (Bailer-Jones et al. 2018). It is unclear, however, how the presence of two optical components in HR 6819 impacts the Gaia distance estimate. Astrometric excess noise can be caused by the high target brightness as well as by binarity, and thus the purely statistical errors given by the parallax error parameter of Gaia can be incomplete. For source 6649357561810851328 (HR 6819), the astrometric excess noise is marked as 0.857 mas for eDR3 at a significance level of 1180 and 0.731 mas at a significance level of 247 for DR2, and thus these two values overlap within their error ranges. We adopted a distance value of 340 pc, which sits comfortably within the estimated range of distances and is consistent with the distance value adopted in Bodensteiner et al. (2020a). Furthermore, we used the presence of the strong Brγ emission to identify the relative locations of the Be and B stars in the GRAVITY data, as the best interferometric fit determines that the emission originates from the brighter star only. We also used the orbital parameters (P, i, T0) of Bodensteiner et al.
(2020a) as initial guesses, as well as the measured RVs of the narrow-line star, to perform a first optimisation of the combined astrometric and RV data through the Levenberg-Marquardt optimiser in spinOS. The orbit was assumed to be circular. The obtained parameters then served as input for a full Markov chain Monte Carlo (MCMC) optimisation in which the period, inclination, time of periastron passage, ascending node, semi-amplitude, systemic velocity, and total mass (P, i, T0, Ω, K1, γ1, and Mtot, respectively) were varied simultaneously. The resulting best-fit astrometric orbit and RV curve are shown in Fig. 3. The new astrometric data are fit simultaneously with the RV measurements of the narrow-line star at a fixed distance of 340 pc. We recovered the orbital parameters of Bodensteiner et al. (2020a), except for a higher inclination of 49 ± 2° (compared to their estimated value of ∼32°) and a small adjustment of the ephemeris within the uncertainty of Bodensteiner et al. (2020a) (Table C.1). Our combined solution yields a total mass for the system of 6.5 ± 0.3 M⊙, with the errors estimated from the MCMC (Fig. C.1). This derived mass is in excellent agreement with the masses estimated from the atmospheric analysis of the Be and stripped B star scenario of Bodensteiner et al. (2020a). However, the estimated mass ratio would yield K2 ≈ 25 km s⁻¹, in stark contrast with the value of 4 ± 1 km s⁻¹ estimated from spectral disentangling. An inclination of 35° would be needed to reconcile the two K2 values. A solution with an inclination fixed at i = 37° is possible, but the residuals of the astrometric solution increase from 11 µas to 67 µas. In this case, the derived total mass is 5.8 ± 0.2 M⊙. Possible further avenues to reconcile the two datasets include a much closer distance (∼260 pc, Table C.1), larger statistical errors on the GRAVITY measurements, small systematic errors (∼70 µas) in the relative positions listed in Table B.1, or a combination of these.
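For the circular orbit assumed here (e = 0), the RV curve of the narrow-line star reduces to a pure sinusoid, RV(t) = γ + K1 sin(2π(t − T0)/P). A toy sketch of this model (the numerical values below are placeholders, not the fitted parameters; spinOS solves the full Keplerian problem):

```python
import math

def radial_velocity(t, period, t0, k1, gamma):
    """RV of the narrow-line star on a circular orbit:
    RV(t) = gamma + k1 * sin(2*pi*(t - t0)/period)."""
    return gamma + k1 * math.sin(2.0 * math.pi * (t - t0) / period)

# A quarter period after t0 the star reaches its RV extremum: gamma + k1.
print(radial_velocity(10.0, 40.0, 0.0, 60.0, 9.0))  # 69.0
```

Fitting such a curve jointly with the on-sky astrometry is what ties the 40-day spectroscopic period to the GRAVITY orbit.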
The assumed distance indeed has a strong impact on the estimated total mass (Table C.1): varying the distance by +25 pc (or −30 pc) translates into a ±1.5 M⊙ shift of the total mass. In addition, astrometric residuals of the fit of around 10 µas clearly indicate that we are over-fitting the data, that is, we lack a sufficient number of observational constraints with respect to the number of model parameters. Uncertainties in this situation are typically underestimated, and the goodness of fit cannot be used as an estimate of the quality of the model. It is therefore clear that more GRAVITY data are required if reliable estimates of the orbit orientation and the individual component masses of HR 6819 are to be obtained. More interferometric data would also help to lift the uncertainties surrounding the Gaia estimate. Despite these limitations, the fact that the MUSE data described in the previous section detect no wide, bright companion, and the fact that GRAVITY shows two bright objects in a 40-day orbit, lead us to conclude that the (B+BH)+Be triple-system scenario should be excluded at a high confidence level.

Conclusions

We have presented new MUSE and GRAVITY data for the exotic source HR 6819 in order to distinguish between the two proposed hypotheses for the nature of the system. The first scenario suggests that HR 6819 is a triple system with an inner B star plus BH binary and an outer Be star tertiary companion on a wide orbit (Rivinius et al. 2020); the second suggests that the system is a binary consisting of a Be star and a B star that have previously interacted (Bodensteiner et al. 2020a; El-Badry & Quataert 2021). If the first scenario were correct, a bright quasi-stationary companion should be present at ∼100 mas. If the second scenario were correct, no bright companion should be detected at this separation and two bright stars should be detected at small separation.
Our MUSE data show no bright companion at ∼100 mas separation. Our GRAVITY data resolve a close binary with ∼1 mas separation composed of two bright stars. The GRAVITY spectra show the characteristic Brγ emission associated with a Be star decretion disk (Rivinius et al. 2013). Therefore, we conclude that HR 6819 is a binary system and reject the presence of a BH on a short-period orbit in this system. HR 6819 thus constitutes a perfect source for investigating the origin of Be stars and their possible formation through a binary channel. In future work, further monitoring of the system with GRAVITY will be crucial. Not only can the orbit be better constrained, but these measurements will for the first time provide distance and precise mass estimates of what is likely a bloated stripped star that has only recently finished interacting, together with its associated Be star. Together with higher-resolution spectroscopy (e.g. from UVES), abundances of both stars could be derived. With this information, HR 6819 would constitute a cornerstone object for comparing binary evolution models.

While one spectrum is extracted by summing the central 5×5 pixels of the source (thereby increasing the S/N), the second spectrum is extracted from the central pixel alone. The FEROS spectrum is downgraded to MUSE resolution and binning in wavelength. While slight differences are apparent, the overall spectra are quite similar and therefore indicate that the two stars making up the composite FEROS spectrum are both within the central source that is visible in the MUSE data.

Appendix B: Modelling the GRAVITY data

Throughout the fitting process, geometrical models were fit to the normalised K-band flux, the closure phases, the differential phases, and the visibilities of the GRAVITY data using the code PMOIRED.
The interferometric phases can tell us about asymmetries and the orientation of a system, whilst the visibilities describe the spatial extent of a source, with lower visibilities corresponding to a more resolved object (Millour 2014). Two point sources generate a cosine wave pattern in the u−v plane: the deviation of the observations from this pattern is indicative of the position angle between the sources, the frequency of the cosine is related to their separation, and the contrast of the wave pattern is indicative of the flux ratio. Moreover, GRAVITY provides spectrally resolved data, and the pattern is wavelength dependent (with a known dependence), so that in the end the available measurements very strongly overconstrain the system parameters (separation, PA, and flux ratio). The GRAVITY data were taken in two polarisation planes, and the closure phase flips between these two datasets. Therefore, in order to reliably constrain the orientation of our system, the differential phase was of great importance. In our datasets a Brγ emission line is clearly visible in the normalised flux and in the differential phase. A double peak is present in this line, particularly in the second epoch of the data (see Fig. 1). This profile is often indicative of a rotating elongated structure such as a disk (e.g. Rivinius et al. 2013). Reproducing the Brγ line proved very useful during the fitting process in order to determine the nature of HR 6819, and we describe this process in this appendix. A single-star model provided a poor fit to the datasets, as did a triple-star model. Using the detectionLimit method of PMOIRED (the same approach as implemented in the CANDID software; Gallenne et al. 2015) on our GRAVITY data, we find that a third companion within 100 mas of the binary must have a K-band flux contrast of at most 2.0% (2.8%) compared to the primary, otherwise we would have detected it at the 3σ (5σ) level.
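The two-point-source pattern described above can be sketched directly: the complex visibility of a binary is the flux-weighted sum of two phasors, so |V| oscillates with baseline between 1 and (1−f)/(1+f). The snippet below is an illustrative geometric model rather than PMOIRED's internals, and the baseline and wavelength are assumed example values:

```python
import numpy as np

MAS = np.pi / 180 / 3600 / 1000  # one milliarcsecond in radians

def binary_visibility(u, v, sep_mas, pa_deg, f):
    """Complex visibility of two point sources.
    u, v   : spatial frequencies (baseline / wavelength, rad^-1)
    sep_mas: companion separation (mas)
    pa_deg : position angle east of north (deg)
    f      : companion-to-primary flux ratio
    """
    pa = np.deg2rad(pa_deg)
    d_ra = sep_mas * MAS * np.sin(pa)   # RA offset (rad)
    d_dec = sep_mas * MAS * np.cos(pa)  # Dec offset (rad)
    phase = -2 * np.pi * (u * d_ra + v * d_dec)
    return (1 + f * np.exp(1j * phase)) / (1 + f)

# Example: an assumed 100 m baseline at 2.166 um (near Brgamma), with the
# ~1.14 mas / 58 deg / ~0.6 flux-ratio geometry reported for HR 6819.
u = 100.0 / 2.166e-6  # rad^-1
V = binary_visibility(u, 0.0, 1.14, 58.1, 0.6)
print(abs(V))  # |V| lies between (1-f)/(1+f) = 0.25 and 1
```

The oscillation frequency grows with separation and the modulation depth with flux ratio, which is exactly why the u−v coverage so strongly constrains these parameters.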
Binary models proved much more successful. One model that presented a good fit was a binary model in which an emission line was associated with each star. The majority of the emission came from the brighter star in the system and was emitted at the central wavelength of Brγ. The remaining contribution to the line seen in the total flux came from the dimmer star; its emission in this model is slightly offset to shorter wavelengths, reproducing the double peak in the total flux. Brγ emission coming from both stars in a Be post-interaction binary might be plausible if some of the accreting material had failed to leave its Roche lobe and had created a small disk around the B star (as suggested in studies such as Shenar et al. 2020 and Hennicker et al. 2021). However, if this were the case, the emission of the two stars would be seen to switch throughout the orbit, that is, the wavelength of the weaker emission line should change. This is not observed between the two epochs. Additionally, the superposition of two emission lines from the two stars could only reproduce the peak in the normalised flux, not the shape of the differential phase. Further modelling of disks around the B stars in stripped B plus Be binaries, and the generation of synthetic interferometric observables, could help to better quantify the effects of this potential emission. Because a model with two emission lines did not produce consistent results, we ran a fit with two point sources of which only one has an emission line. The resulting fit shows that the Brγ emission comes from the brighter star in the binary and provides an excellent fit to the data, as quantified by the chi-square value. PMOIRED can also run a grid search to find the best model, based on the capabilities of an associated code, CANDID (Gallenne et al. 2015).
Using this grid search on our datasets, the code determines that a two-point-source model with an emission line in the brighter star is the best option. Figure B.1 shows the result of this grid search. However, this model fails to reproduce the small double-peak feature (see Fig. 1): while the main peak seen in the differential phase is reproduced, the slight absorption feature is not. Despite the ambiguity, this implies that the brighter star in the system is most likely the Be star, because Be star decretion disks are known to generate a surplus of Brγ emission and infrared excess (Rivinius et al. 2013). The shape of the differential phase can provide key information about a system, as it is a measure of the spatial offset of the line-emitting region with respect to the continuum emission (Millour 2014; Weigelt et al. 2007). For example, a V-shaped differential phase can imply asymmetry or a simple displacement of the photocentre, whilst an S-shaped phase can imply rotation. Such S-shapes are typical of disks, and Meilland et al. (2012) note their presence in their spectro-interferometric VLTI/AMBER survey of Be stars. In our differential phases we observe a combination of these two features. Thus, we decided to determine whether a disk model could fit the observables. Uniform disk models were attempted first, but the least-squares fitting was unable to converge. Be stars are expected to have Keplerian rotating disks (Rivinius et al. 2006), so we also tried such a model; while it was more successful, the code still struggled to converge to a physical solution. This could be because the spatial extent of the disk is too small to be resolved by GRAVITY (<1 mas) whilst its emission persists in the total spectrum. A toy model was used to determine whether the blue- and redshifted emission could be used to help constrain the Keplerian fit.
The model designed to represent the emission of a Be star consisted of one point source with an associated emission line at the redshifted wavelength, one point source with an emission line at the blueshifted wavelength, and one source with no emission line meant to represent the centre of the disk. The dimmer star was modelled as a point source without an emission line. Figures B.2 and B.3 show the best-fit dataset and a visualisation of the model. The model provides a good fit to the data, particularly the second epoch, despite its nonphysical nature. The only caveat is that the fit to the visibilities of the first dataset is worse, but this is expected because the separation of the blue- and redshifted emission is technically not resolvable with the VLTI. We used this model to provide the starting parameters of another Keplerian disk fit. With these starting parameters, the Keplerian disk model could converge, resulting in the fits shown in Figures B.4 and B.5. The Keplerian model further improves the shape of the model differential phase and normalised flux profiles and is the only model that additionally fits the small peak feature in the visibilities. Again, the main caveat is that the size of the disk as determined through the fitting process is too small to be detected with the VLTI. With only two epochs, the measure of the velocity structure of the disk is also likely unreliable. Whilst the exact physical nature of the disk cannot be determined with our limited datasets, we conclude from our GRAVITY data 1) that the system is indeed a binary, as all the best-fitting models include two sources, 2) that the separation of these two sources is ∼1 mas (across all models), and 3) that the brighter star in the system is likely to be the Be star, given the convergence on a model with the emission line in the brighter star and the successful (preliminary) detection of a disk around this brighter star.
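For intuition on the wavelength scales involved, the blue- and redshifted components of the toy model mimic the Doppler-shifted line peaks of a Keplerian disk, whose splitting is set by the orbital speed at the emitting radius. A minimal sketch, where the stellar mass and emitting radius are assumed illustrative values rather than fitted quantities:

```python
import math

G = 6.674e-11     # gravitational constant (m^3 kg^-1 s^-2)
MSUN = 1.989e30   # solar mass (kg)
RSUN = 6.957e8    # solar radius (m)
C = 2.998e8       # speed of light (m/s)
BRG = 2.1661e-6   # Brgamma rest wavelength (m)

def kepler_v(m_star_msun: float, r_rsun: float) -> float:
    """Keplerian orbital speed (m/s) at radius r around a star of mass m."""
    return math.sqrt(G * m_star_msun * MSUN / (r_rsun * RSUN))

def doppler(lam0: float, v: float) -> float:
    """Observed wavelength for line-of-sight velocity v (positive = receding)."""
    return lam0 * (1 + v / C)

# Assumed, illustrative disk: a 6 Msun Be star with Brgamma emission at 10 Rsun.
v = kepler_v(6.0, 10.0)
blue, red = doppler(BRG, -v), doppler(BRG, +v)
print(f"v_kep = {v / 1e3:.0f} km/s -> peaks near {blue * 1e9:.2f} and {red * 1e9:.2f} nm")
```

With these assumptions the peak splitting is a few nanometres, comparable to the Brγ line widths listed in Table B.1, which is why the two shifted point sources provide usable starting guesses for the Keplerian fit.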
In the main body of the text we present the simplest binary model (with Brγ emission in the brighter star) as our final model, given the uncertainty on the physical properties of this small disk that can be obtained with our dataset. Figures 1 and 2 shown in the main text correspond to this model. Table B.1 shows the details of the final model.

Table B.1. Parameters derived from the GRAVITY fits of the final model. ρ is the separation and PA is the position angle. The position of the primary was fixed at (0,0). In this best-fitting model both stars are modelled as uniform disks with 0.2 mas diameters, such that they appear as point sources. f_K is the flux ratio of the dimmer star in the K band relative to the flux of the brighter star (normalised to 1 during the fitting process). f_line is the flux of the Brγ line with respect to the continuum, w_line is the full width at half maximum (FWHM) of the line, and λ_line is its wavelength.

MJD | χ²_red | f_K | ρ (mas) | PA (°) | f_line | w_line (nm) | λ_line (µm)
59463.117 | 1.243 | 0.599±0.017 | 1.14±0.02 | 58.1±1 | 1.06±0.01 | 2.422±0.006 | 2.1661055±0.000002
59476.031 | 1.321 | 0.516±0.019 | 1.16±0.03 | −167±0.8 | 0.892±0.001 | 2.643±0.004 | 2.166367±0.000001

Fig. B.2. Fits to the second GRAVITY dataset formed by a toy model composed of two point sources and two wavelength components surrounding the primary star (meant to mimic the blue- and redshifted emission of a disk) that was run to obtain first guesses for the Keplerian fit. The inset labels are the baselines of the observations. NFLUX, T3PHI, DPHI and |V| refer to the normalised flux, closure phase, differential phase and visibilities, respectively.

Appendix C: Orbital details

Table C.1. Parameters of the orbit determined by Bodensteiner et al. (2020a) compared to the current derivations for different assumed distances D. Circular orbits have been assumed in both works. MCMC was used to estimate the uncertainties.

Parameter | Bodensteiner et al. (2020a) | This work
P_orb [d] | 40.335 ± 0.007 | 40.3315 ± 0.0003
(further rows, whose labels were lost in extraction: 9.13 ± 0.78; 6.44 ± 0.03; 60.4 ± 1.0; 62.13 ± 0.04; 4.0 ± 0.8; ∼25 (computed); ∼32 (computed); ∼4 (computed))
rms RV (km s⁻¹) | — | 4.7
rms as (µas) | n/a | 11

Fig. 1. Best fits to the GRAVITY dataset taken on September 19 2021 formed from a model composed of two point sources. The data are shown in black and the model in red, over a wavelength range highlighting the Brγ region as opposed to the whole GRAVITY wavelength range.

Fig. 2. Model image.

Fig. 3. Simultaneous fit to the FEROS RVs of the stripped B star (upper panel) and the GRAVITY relative astrometry (lower panel) obtained using an adopted distance of 340 pc (Table C.1). The corresponding MCMC grid is displayed in Fig. C.1. The dashed blue and red lines in the top panel show the best-fit RV of the stripped star and the predicted secondary RV curve for the adopted solution, respectively (see other solutions discussed in the text). The horizontal dotted line shows the systemic velocity of the system. In the bottom panel, we followed the definition of Bodensteiner et al. (2020a) for the identification of the primary and secondary components in the system and placed the suspected stripped B star at coordinate (0,0).

Appendix A: Investigating the MUSE data

Figure A.1 shows a cut-out around the central source in one of the MUSE exposures of HR 6819 that was created by collapsing the wavelength information in the MUSE data cube to obtain a white-light image. Figure A.2 compares two extracted MUSE spectra with one epoch of FEROS observations (obtained at MJD = 53248.02 d).

Fig. A.1. Cut-out around the central source in the white-light image created by collapsing one of the unsaturated MUSE exposures.

Fig. A.2. Comparison of two MUSE spectra of the central source, obtained by summing the 5×5 central pixels (black) or extracting only the central pixel (grey), with a downgraded FEROS spectrum (red) at a similar phase.

Fig. B.1. Grid-fit solution to the best-fitting model for a binary system with an emission line in one star. The figure shows the spatial positions that were mapped by the grid search and the associated reduced chi-square value of each of the tested positions (in colour). The central star (whose position is not fit) is shown in blue, and the axes depict the space on the sky in arcseconds.

A. J. Frost et al.: HR 6819 is a binary system with no black hole

Footnotes:
* Based on Director Discretionary Time (DDT) observations made with ESO Telescopes at the Paranal Observatory under programme IDs 2107.D-5026, 63.H-0080 and 073.D-0274.
1 https://www.mpe.mpg.de/~ott/QFitsView/
2 https://github.com/amerand/PMOIRED
3 https://github.com/amerand/CANDID

Acknowledgements. We thank Dr. Jesús Maíz Appellániz for his helpful and thoughtful comments as referee. This research has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 772225: MULTIPLES). JB and MH are supported by an ESO fellowship. TS acknowledges funding received from the European Union's Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 101024605. We also thank the team behind QFitsView for developing their tool, which allowed easy handling of our MUSE data.

References

Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2019, Physical Review X, 9, 031040
Abdul-Masih, M., Banyard, G., Bodensteiner, J., et al. 2020, Nature, 580, E11
Almeida, L. A., Sana, H., Taylor, W., et al. 2017, A&A, 598, A84
Bacon, R., Accardo, M., Adjali, L., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 773508
Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147
Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, AJ, 156, 58
Banyard, G., Sana, H., Mahy, L., et al. 2021, arXiv e-prints, arXiv:2108.07814
Bodensteiner, J., Shenar, T., Mahy, L., et al. 2020a, A&A, 641, A43
Bodensteiner, J., Shenar, T., & Sana, H. 2020b, A&A, 641, A42
de Mink, S. E., Langer, N., Izzard, R. G., Sana, H., & de Koter, A. 2013, ApJ, 764, 166
El-Badry, K. & Quataert, E. 2020, MNRAS, 493, L22
El-Badry, K. & Quataert, E. 2021, MNRAS, 502, 3436
Fabry, M., Hawcroft, C., Frost, A. J., et al. 2021, A&A, 651, A119
Gallenne, A., Mérand, A., Kervella, P., et al. 2015, A&A, 579, A68
Gies, D. R. & Wang, L. 2020, ApJ, 898, L44
Gravity Collaboration, Abuter, R., Accardo, M., et al. 2017, A&A, 602, A94
Hennicker, L., Kee, N. D., Shenar, T., et al. 2021, arXiv e-prints, arXiv:2111.15345
Irrgang, A., Geier, S., Kreuzer, S., Pelisoli, I., & Heber, U. 2020, A&A, 633, L5
Klement, R., Carciofi, A. C., Rivinius, T., et al. 2019, ApJ, 885, 147
Klement, R., Scott, N., Rivinius, T., Baade, D., & Hadrava, P. 2021, The Astronomer's Telegram, 14340, 1
Langer, N., Baade, D., Bodensteiner, J., et al. 2020, A&A, 633, A40
Lapeyrere, V., Kervella, P., Lacour, S., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9146, Proc. SPIE, 91462D
Lennon, D. J., Maíz Apellániz, J., Irrgang, A., et al. 2021, A&A, 649, A167
Liu, Q. Z., van Paradijs, J., & van den Heuvel, E. P. J. 2006, A&A, 455, 1165
Mahy, L., Almeida, L. A., Sana, H., et al. 2020a, A&A, 634, A119
Mahy, L., Sana, H., Abdul-Masih, M., et al. 2020b, A&A, 634, A118
Mason, B. D., Gies, D. R., Hartkopf, W. I., et al. 1998, AJ, 115, 821
Mazeh, T. & Faigler, S. 2020, MNRAS, 498, L58
Meilland, A., Millour, F., Kanaan, S., et al. 2012, A&A, 538, A110
Millour, F. 2014, in EAS Publications Series, Vol. 69-70, 17-52
Moe, M. & Di Stefano, R. 2017, ApJS, 230, 15
Paczyński, B. 1971, ARA&A, 9, 183
Pols, O. R., Cote, J., Waters, L. B. F. M., & Heise, J. 1991, A&A, 241, 419
Rivinius, T., Baade, D., Hadrava, P., Heida, M., & Klement, R. 2020, A&A, 637, L3
Rivinius, T., Carciofi, A. C., & Martayan, C. 2013, A&A Rev., 21, 69
Rivinius, T., Štefl, S., & Baade, D. 2006, A&A, 459, 137
Safarzadeh, M., Toonen, S., & Loeb, A. 2020, ApJ, 897, L29
Sana, H., de Mink, S. E., de Koter, A., et al. 2012, Science, 337, 444
Shenar, T., Bodensteiner, J., Abdul-Masih, M., et al. 2020, A&A, 639, L6
Simón-Díaz, S., Maíz Apellániz, J., Lennon, D. J., et al. 2020, A&A, 634, L7
Sota, A., Maíz Apellániz, J., Morrell, N. I., et al. 2014, ApJS, 211, 10
Villaseñor, J. I., Taylor, W. D., Evans, C. J., et al. 2021, MNRAS, 507, 5348
Wang, L., Fujii, M. S., & Tanikawa, A. 2021, MNRAS, 504, 5778
Weigelt, G., Kraus, S., Driebe, T., et al. 2007, A&A, 464, 87
Weilbacher, P. M., Palsa, R., Streicher, O., et al. 2020, A&A, 641, A28